Simon Willison (Django): hupper


Handy Python module for adding "live reload" development support to just about anything. I'm using it with Sanic - I run "hupper -m app" and it starts up my code and automatically reloads it any time any of the corresponding files changes on disk.

Via Live reload server in develop phase · Issue #168 · channelcat/sanic · GitHub

Jeremy Keith (Adactio): Pattern Libraries, Performance, and Progressive Web Apps

Ever since its founding in 2005, Clearleft has been laser-focused on user experience design.

But we’ve always maintained a strong front-end development arm. The front-end development work at Clearleft is always in service of design. Over the years we’ve built up a wealth of expertise on using HTML, CSS, and JavaScript to make better user experiences.

Recently we’ve been doing a lot of strategic design work—the really in-depth long-term engagements that begin with research and continue through to design consultancy and collaboration. That means we’ve got availability for front-end development work. Whether it’s consultancy or production work you’re looking for, this could be a good opportunity for us to work together.

There are three particular areas of front-end expertise we’re obsessed with…

Pattern Libraries

We caught the design systems bug years ago, way back when Natalie started pioneering pattern libraries as our primary deliverable (or pattern portfolios, as we called them then). This approach has proven effective time and time again. We’ve spent years now refining our workflow and thinking around modular design. Fractal is the natural expression of this obsession. Danielle and Mark have been working flat-out on version 2. They’re very eager to share everything they’ve learned along the way …and help others put together solid pattern libraries.



Performance

Thinking about it, it’s no surprise that we’re crazy about performance at Clearleft. Like I said, our focus is user experience, and when it comes to user experience on the web, nothing but nothing is more important than performance. The good news is that the majority of performance fixes can be done on the front end—images, scripts, fonts …it’s remarkable how much difference a good front-end overhaul can make to the bottom line. That’s what Graham has been obsessing over.


Progressive Web Apps

Over the years I’ve found myself getting swept up in exciting new technologies on the web. When Clearleft first formed, my head was deep into DOM Scripting and Ajax. Half a decade later it was HTML5. Now it’s service workers. I honestly think it’s a technology that could be as revolutionary as Ajax or HTML5 (maybe I should write a book to that effect).

I’ve been talking about service workers at conferences this year, and I can’t hide my excitement:

There’s endless possibilities of what you can do with this technology. It’s very powerful.

Combine a service worker with a web app manifest and you’ve got yourself a Progressive Web App. It’s not just a great marketing term—it’s an opportunity for the web to truly excel at delivering the kind of user experiences previously only associated with native apps.


I’m very very keen to work with companies and organisations that want to harness the power of service workers and Progressive Web Apps. If that’s you, get in touch.

Whether it’s pattern libraries, performance, or Progressive Web Apps, we’ve got the skills and expertise to share with you.

Simon Willison (Django): System Requirements For SQLite

System Requirements For SQLite

Document describing the high level goals and objectives of SQLite. Like everything to do with SQLite this exhibits some incredibly well thought out software engineering. I particularly like “S80000: SQLite shall exhibit ductile failure characteristics”, where ductile is described in opposition to brittle: a ductile system begins showing signs of trouble well in advance of failure.

ProgrammableWeb: What to Consider When Pricing Your API

It’s no secret that the API business is big business. Over $500m was pumped into API-driven firms last year alone, and that number is only likely to rise. But how do companies make money from an API, and how should you price API usage to get the best return? Lindsey Kirchoff over at Nordic APIs tells you everything you need to price your API for all possible users, be they hobbyists, businesses, or enterprise devs.

Simon Willison (Django): Parse shell one-liners with pyparsing

Parse shell one-liners with pyparsing

Neat introduction to the pyparsing library, both for parsing tokens into labeled sections and constructing an AST from them.

Simon Willison (Django): SurviveJS - Webpack

SurviveJS - Webpack

Free online book about Webpack. I’ve read the first couple of chapters and it looks like a concise, well constructed guide to a key component of the modern JavaScript stack.

Via Reddit

Simon Willison (Django): It’s Not a Feature Problem—Avoiding Startup Tarpits

It’s Not a Feature Problem—Avoiding Startup Tarpits

“When we turned on paid advertising for the first time we had a sizable increase in signups. We always feared that a new user would just churn because of what we perceived as deficiencies in the product. While there were users who churned for that reason, it was never the nightmare scenario that we imagined.”

Via Hacker News

Simon Willison (Django): Quoting Laura McPherson

I am currently documenting a language called Seenku, spoken by fewer than 15,000 people in the rolling hills of southwestern Burkina Faso in West Africa. Like Chinese, it is a tonal language, meaning the pitch on which a word is pronounced can radically alter its meaning. For instance, tsu can mean “thatch” when pronounced with an extra low pitch, but “hippopotamus” when pronounced with falling pitch. In fact, pitch plays such a huge role in Seenku that it can be “spoken” through music alone, most notably on the traditional xylophone.

Laura McPherson

Simon Willison (Django): Getting the Most out of Sqlite3 with Python

Getting the Most out of Sqlite3 with Python

A couple of neat tricks I didn’t know: you can skip cursors entirely by calling .execute and .executemany directly on the connection object, and you can use the connection object as a context manager to execute transactions using a “with” block.
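
Both tricks together in one minimal, self-contained sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY, title TEXT)")

# Trick 2: the connection doubles as a context manager - the block runs in a
# transaction that is committed on success and rolled back on an exception.
with conn:
    conn.executemany(
        "INSERT INTO entries (title) VALUES (?)",
        [("First post",), ("Second post",)],
    )

# Trick 1: calling .execute() on the connection creates the cursor for you.
rows = conn.execute("SELECT title FROM entries ORDER BY id").fetchall()
print(rows)  # [('First post',), ('Second post',)]
```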

Simon Willison (Django): Crossdressing, Compression, and Colliders: 'The First Photo on the Web'

Crossdressing, Compression, and Colliders: 'The First Photo on the Web'

TIL the first photo shared on the web was of Les Horribles Cernettes, an all-female comedy musical group at CERN who performed songs about particle physics. And Sir Tim Berners-Lee first met them when he played the dame in the CERN panto.

Simon Willison (Django): Porting my blog to Python 3

This blog is now running on Python 3! Admittedly this is nearly nine years after the first release of Python 3.0, but it’s the first Python 3 project I’ve deployed myself so I’m pretty excited about it.

Library authors like to use six to allow them to write code that supports both Python 2 and Python 3 at the same time… but my blog isn’t a library, so I used the 2to3 conversion tool that ships with Python instead.

And… it worked pretty well! I ran the following command from my project’s root directory:

2to3 -w -n blog/ config/ redirects/ feedstats/

The -w option causes the files to be over-written in place. Since everything is already in git, there was no reason to have 2to3 show me a diff without applying it. Likewise, the -n option tells 2to3 not to bother saving backups of the files it modifies.

Here’s the initial commit containing mostly the 2to3 changes.

Next step: run the tests! My test suite may be very thin, but it does at least check that the app can run its migrations, start up and serve a few basic pages without errors. One of my migrations was failing due to rogue bytestrings but that was an easy fix.

At this point I started to lean heavily on my continuous integration setup built on Travis CI. All of my Python 3 work took place in a branch, and all it took was a one line change to my .travis.yml for Travis to start running the tests for that branch using Python 3.

With the basic tests working, I made my first deploy to my Heroku staging instance - after first modifying my Heroku runtime.txt to tell it to use Python 3.6.2. My staging environment allowed me to sanity check that everything would work OK when deployed to Heroku.

At this point I got a bit lazy. The responsible thing to do would have been extensive manual testing plus systematic unit test coverage of core functionality. My blog is hardly a critical piece of infrastructure though, so I went with the faster option: put it all live and use Sentry to see if anything breaks.

This is where Heroku’s ability to deploy a specific branch came in handy: one click to deploy my python3 branch, keep an eye on Sentry (via push notifications from my private slack channel) and then one click to deploy my master branch again for an instant rollback in case of errors. Which I had to do instantly, because it turned out I had stored some data in Django’s cache using Python 2 pickle and was trying to read it back out again using Python 3.

I fixed that by bumping my cache VERSION setting and deployed again. This deploy lasted a few minutes longer before Sentry started to fill up with encoding errors and I rolled it back again.
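
The failure mode is easy to reproduce: Python 2 pickled its byte strings with the S opcode, and Python 3's pickle tries to decode those as ASCII by default. The bytes below are what Python 2's pickle.dumps('caf\xe9') would have produced:

```python
import pickle

# A protocol-0 pickle of the Python 2 byte string 'caf\xe9'
py2_cached_value = b"S'caf\\xe9'\np0\n."

try:
    pickle.loads(py2_cached_value)  # default decoding is ASCII
except UnicodeDecodeError as err:
    print("cache read failed:", err)

# An explicit encoding recovers the value...
print(pickle.loads(py2_cached_value, encoding="latin1"))  # café
```

...but for cached data it's simpler to bump the cache VERSION so the old keys are never read again.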

The single biggest difference between Python 2 and Python 3 is how strings are handled. Python 3 strings are unicode sequences. Learning to live in a world where strings are all unicode and byte strings are the rare, deliberate exceptions takes some getting used to.

The key challenge for my blog actually came from my custom markup handling template tags. 15 years ago I made the decision to store all of my blog entries as valid XHTML fragments. This meant I could use XML processors - back then in PHP, today Python’s ElementTree - to perform various transformations on my content.

ElementTree in Python 2 can only consume bytestrings. In Python 3 it expects unicode strings. Cleaning this up took a while, eventually inspiring me to refactor my custom template tags completely. In the process I realized that my blog templates were mostly written back before Django’s template language implemented autoescape (in Django 1.0), so my code was littered with unnecessary |escape and |safe filters. Those are all gone now.
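
The Python 3 version of that pipeline can stay in unicode end to end: ElementTree parses a str, and with encoding='unicode' it serialises back to one. A minimal sketch:

```python
import xml.etree.ElementTree as ET

fragment = "<p>I \u2665 XHTML</p>"   # a str, not a bytestring
element = ET.fromstring(fragment)     # Python 3 parses unicode directly
print(element.text)                   # I ♥ XHTML

# Serialise back to a str rather than to UTF-8 bytes
roundtripped = ET.tostring(element, encoding="unicode")
print(roundtripped)                   # <p>I ♥ XHTML</p>
```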

Sentry lets you mark an exception as “resolved” when you think you’ve fixed it - if it occurs again after that it will be re-reported to your Slack channel and added back to the Sentry list of unresolved issues. Once Sentry was clear (especially given Googlebot had crawled my older pages) I could be pretty confident there were no more critical 500-causing errors.

That left logic errors, of which only one has cropped up so far: the “zero years ago” bug. Entries on my homepage include a relative date representation, e.g. “three days ago”. Python 3 changed how the division operator works on integers - 3 / 2 == 1.5, where in Python 2 it gets truncated to 1. As a result, every entry on my homepage showed “zero years ago”. Thankfully this was a one-line fix.
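
The change, and the one-character fix, in miniature:

```python
days_old = 1100

# Python 2: 1100 / 365 == 3, because / truncates integer division.
# Python 3: / is always true division and returns a float.
print(days_old / 365)   # roughly 3.014, a float

# // is the truncating (floor) division operator in both versions,
# so it's the drop-in fix for "N years ago" arithmetic.
print(days_old // 365)  # 3
```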

All in all this process was much less painful than I expected. It took me longer to write this blog entry than it did to actually make the conversion (thanks to 2to3 doing most of the tedious work), and the combination of Travis CI, Sentry and Heroku allowed me to ship aggressively with the knowledge that I could promptly identify and resolve any issues that came up.

Next upgrade: Django 2.0!

Simon Willison (Django): github-dashboard


Nice little self-contained example by Shing Lyu of a React app with no build step.

Via Shing Lyu

Simon Willison (Django): Minimal React.js Without A Build Step

Minimal React.js Without A Build Step

React is pretty dependent on a build phase, to handle things like JSX compilation. This is fine for most projects, but sometimes I just want to hot-link react and react-dom from a CDN and knock out a quick self-contained mini-application. Shing Lyu points out that this is much easier if you ditch JSX in favour of direct calls to React.createElement().

ProgrammableWeb: Daily API RoundUp: Airbnb, LIFX,, FormAPI

Every day, the ProgrammableWeb team is busy, updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

Simon Willison (Django): Quoting Tim Hunkin

Goldberg’s machines are always described as useless and my machines are too. But they both made us enough money to live off, which is quite useful. Also making people laugh is useful, a lot more beneficial than many ‘serious’ advances in technology like yet another new computer operating system. My aunt Lis, who is very religious, describes my arcade as my ministry.

Tim Hunkin

ProgrammableWeb: CloudRail Releases Universal Messaging API For Deploying Chatbots Across Platforms

CloudRail, an API integration solution provider, has announced the release of its Unified Messaging API which allows developers to deploy chatbots and other conversational user interfaces (UI) across platforms. The API provides a unified interface bundling Facebook Messenger, Line, Telegram, and Viber together so that developers can deploy conversational UIs across these platforms using a single API.

ProgrammableWeb: Thread Genius API Powers Visual Style Recommendation as a Service

Thread Genius recently launched with a goal of improving content recommendation based on computer vision. At the heart of their strategy is the Thread Genius API. The Thread Genius API drives what the company has termed visual style recommendation as a service. The service helps API users make product recommendations to consumers based on the consumers' taste through the use of images instead of words.

Simon Willison (Django): Quoting Laurie Voss

Serverless is a somewhat unhelpfully misleading term for "highly scalable stateless code". All the times I've seen serverless stuff work really well it was workloads that were usually zero but occasionally 30k/sec without warning. I've run a company with that kind of workload and serverless stuff would have saved us a ton of money. Publishing to the [npm] registry could be done as a serverless app but there's little benefit because we do not get huge spikes in publishing. We get huge spikes in *downloads* but serverless isn't useful there because it's a read-only case and very little processing is done. Serverless is a great solution to one type of problem. It's very seldom the case that you can convert all your problems into that shape.

Laurie Voss

Simon Willison (Django): Hey Siri: An On-device DNN-powered Voice Trigger for Apple’s Personal Assistant

Hey Siri: An On-device DNN-powered Voice Trigger for Apple’s Personal Assistant

“The “Hey Siri” detector uses a Deep Neural Network (DNN) to convert the acoustic pattern of your voice at each instant into a probability distribution over speech sounds. It then uses a temporal integration process to compute a confidence score that the phrase you uttered was “Hey Siri”. If the score is high enough, Siri wakes up.”

Via John Gruber

Simon Willison (Django): React is the new Dojo

React is the new Dojo

In which Mikeal Rogers provides his perspective on the history of Dojo, the earliest break-out JavaScript framework, describes how jQuery eclipsed it, and contemplates the same thing eventually happening to React.

ProgrammableWeb: How to Run UI-Driven API Tests with Postman

Today in our API Testing Series we'll take a look at Postman. Along the way you'll create an API test suite running against the Trello API, which is a real API with formal authentication tokens.

Amazon Web Services: Introducing Cost Allocation Tags for Amazon SQS

You have long had the ability to tag your AWS resources and to see cost breakouts on a per-tag basis. Cost allocation was launched in 2012 (see AWS Cost Allocation for Customer Bills) and we have steadily added support for additional services, most recently DynamoDB (Introducing Cost Allocation Tags for Amazon DynamoDB), Lambda (AWS Lambda Supports Tagging and Cost Allocations), and EBS (New – Cost Allocation for AWS Snapshots).

Today, we are launching tag-based cost allocation for Amazon Simple Queue Service (SQS). You can now assign tags to your queues and use them to manage your costs at any desired level: application, application stage (for a loosely coupled application that communicates via queues), project, department, or developer. After you have tagged your queues, you can use the AWS Tag Editor to search queues that have tags of interest.

Here’s how I would add three tags (app, stage, and department) to one of my queues:

This feature is available now in all AWS Regions and you can start using it today! To learn more about tagging, read Tagging Your Amazon SQS Queues. To learn more about cost allocation via tags, read Using Cost Allocation Tags. To learn more about how to use message queues to build loosely coupled microservices for modern applications, read our blog post (Building Loosely Coupled, Scalable, C# Applications with Amazon SQS and Amazon SNS) and watch the recording of our recent webinar, Decouple and Scale Applications Using Amazon SQS and Amazon SNS.

If you are coming to AWS re:Invent, plan to attend session ARC 330: How the BBC Built a Massive Media Pipeline Using Microservices. In the talk you will find out how they used SNS and SQS to improve the elasticity and reliability of the BBC iPlayer architecture.


Simon Willison (Django): Carbon


Beautiful little tool that you can paste source code into to generate an image of that code with syntax highlighting applied, ready to be tweeted or shared anywhere that lets you share an image. Built in Node and Next.js, with image generation handled client-side by the dom-to-image JavaScript library, which loads HTML into an SVG foreignObject (sadly not yet supported by Safari) and uses that to populate a canvas and produce a PNG.

Via Guillermo Rauch

ProgrammableWeb: Build Chatbots and Other Conversational Apps with the Conversational AI Toolkit, a Netherlands-based chatbot platform startup, has launched the Conversational AI Toolkit which provides tools to create chatbots, intelligent assistants, and other conversational applications. The platform allows users to build chatbots and other conversational interfaces for multiple channels such as Facebook Messenger, CRMs, and web clients. The platform is powered by machine learning and a proprietary natural language processing (NLP) engine.

Simon Willison (Django): Quoting Fredrik deBoer

By cutting out a hundred voices or fewer, things and people that everybody talks about became things and people that nobody talks about. The internet is a technology for creating small ponds for us to all be big fish in. But you change your perspective just slightly, move over just an inch, and suddenly you get a sense of just how few people know about you or could possibly care.

Fredrik deBoer

Simon Willison (Django): Streaming Dataframes

Streaming Dataframes

This is some deep and brilliant magic: Matthew Rocklin’s Streamz Python library provides some elegant abstractions for consuming infinite streams of data and calculating cumulative averages and rolling reductions... and now he’s added an integration with Jupyter that lets you embed Bokeh graphs and pandas dataframe tables that continue to update in real time as the stream continues! Check out the animated screenshots - this really is a phenomenal piece of work.
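
Streamz is the library to reach for here, but the core primitive - a running average over an unbounded stream - is easy to illustrate in a few lines of plain Python (this sketches the concept only, not the Streamz API):

```python
from typing import Iterable, Iterator


def cumulative_average(stream: Iterable[float]) -> Iterator[float]:
    """Yield the running mean after each element of a possibly infinite stream."""
    total = 0.0
    for count, value in enumerate(stream, start=1):
        total += value
        yield total / count


# Works on any iterable, including a never-ending generator of readings
print(list(cumulative_average([2, 4, 6])))  # [2.0, 3.0, 4.0]
```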

Daniel Glazman (Disruptive Innovations): OS X High Sierra installer hell (OSInstall.mpkg missing or corrupted)

Dear Apple, this is the fourth time in a row that one of your system upgrades on iOS or OS X has made me lose a day or two - when it does not make me lose a lot of data - and I am fed up with it. My last experience with your High Sierra upgrade is truly shocking:

  • this morning, I decided to finally upgrade my eligible MacBookPro to High Sierra
  • I did it the right way, and everything initially seemed to work fine
  • then suddenly the installer stopped, announcing that "mac OS could not be installed on your computer" because "file OSInstall.mpkg was missing or damaged". Uuuuh???? What the hell?!? I was really scared since my backup missed two days of data, some of them being extremely important to me.
  • I tried the Recovery mode to install, no result
  • I tried to locate the missing file somewhere else in the installer's filesystem, no result
  • I tried the Disk Utility and it was worse since the app was stuck on a spinning wheel...
  • I tried disk utils in the Terminal but my HD was gone. Just gone. Awful. I was so shaken I had to stay away from the computer for a few minutes.
  • then I discovered there are literally thousands of Mac users complaining about High Sierra's installer bricking their Mac with the same error... We're not speaking of a beta here, we're not speaking of something released yesterday. How can this remain broken?
  • fortunately, we have a few other Macs at home so I downloaded High Sierra from another one, downloaded the excellent and free Disk Creator to create a bootable USB version of the High Sierra installer
  • the install from that USB stick seemed to work and my data is still there, wooooof.

So for the visitor hitting this article and willing to upgrade a Mac to High Sierra, these are my VERY strong recommendations:

  1. full Time Machine backup first. Full. Mandatory. More than ever with the filesystem change. Make 100% sure your backup ended correctly and is usable. Do it, whatever the time cost.
  2. download High Sierra from the App Store but do NOT install; hit Cmd-Q to close the installer.
  3. download Disk Creator (link above) and create a bootable USB version of the High Sierra installer (located in your /Applications folder). Of course, you need a USB key...
  4. shut down your Mac; insert your bootable USB key and reboot while pressing the Alt/Option key. At the prompt, use the arrows and the Return key to select the USB bootable installer.
  5. install High Sierra on your disk that way and if it fails, use the Time Machine backup you fortunately did at step 1.

My Mac was bricked at 10am. All in all, it took me 6 hours and 36 minutes to find out how to get it fixed, stop being scared of launching a process that could wipe my whole HD, and do it. Let's be very clear: this is totally unacceptable. The High Sierra installer is still broken and thousands of people are hit by that breakage.

On the other hand, the last Windows 10 upgrade was so smooth it felt like old-days Apple, ahem.

I had to recommend that my less geeky dad, kids, and friends avoid High Sierra's installer if I am not around. Wake up Apple, you're reaching unacceptable limits here. Your hardware is starting to suck (incredibly noisy and ugly keyboard, bad touchpad design, useless and expensive touchbar, USB-C hell, no more SD slot) and some of your software is now below expectations. Wake up. Now!

Amazon Web Services: Getting Ready for AWS re:Invent 2017

With just 40 days remaining before AWS re:Invent begins, my colleagues and I want to share some tips that will help you to make the most of your time in Las Vegas. As always, our focus is on training and education, mixed in with some after-hours fun and recreation for balance.

Locations, Locations, Locations
The re:Invent Campus will span the length of the Las Vegas strip, with events taking place at the MGM Grand, Aria, Mirage, Venetian, Palazzo, the Sands Expo Hall, the Linq Lot, and the Encore. Each venue will host tracks devoted to specific topics:

MGM Grand – Business Apps, Enterprise, Security, Compliance, Identity, Windows.

Aria – Analytics & Big Data, Alexa, Container, IoT, AI & Machine Learning, and Serverless.

Mirage – Bootcamps, Certifications & Certification Exams.

Venetian / Palazzo / Sands Expo Hall – Architecture, AWS Marketplace & Service Catalog, Compute, Content Delivery, Database, DevOps, Mobile, Networking, and Storage.

Linq Lot – Alexa Hackathons, Gameday, Jam Sessions, re:Play Party, Speaker Meet & Greets.

Encore – Bookable meeting space.

If your interests span more than one topic, plan to take advantage of the re:Invent shuttles that will be making the rounds between the venues.

Lots of Content
The re:Invent Session Catalog is now live and you should start to choose the sessions of interest to you now.

With more than 1100 sessions on the agenda, planning is essential! Some of the most popular “deep dive” sessions will be run more than once and others will be streamed to overflow rooms at other venues. We’ve analyzed a lot of data, run some simulations, and are doing our best to provide you with multiple opportunities to build an action-packed schedule.

We’re just about ready to let you reserve seats for your sessions (follow me and/or @awscloud on Twitter for a heads-up). Based on feedback from earlier years, we have fine-tuned our seat reservation model. This year, 75% of the seats for each session will be reserved and the other 25% are for walk-up attendees. We’ll start to admit walk-up attendees 10 minutes before the start of the session.

Las Vegas never sleeps and neither should you! This year we have a host of late-night sessions, workshops, chalk talks, and hands-on labs to keep you busy after dark.

To learn more about our plans for sessions and content, watch the Get Ready for re:Invent 2017 Content Overview video.

Have Fun
After you’ve had enough training and learning for the day, plan to attend the Pub Crawl, the re:Play party, the Tatonka Challenge (two locations this year), our Hands-On LEGO Activities, and the Harley Ride. Stay fit with our 4K Run, Spinning Challenge, Fitness Bootcamps, and Broomball (a longstanding Amazon tradition).

See You in Vegas
As always, I am looking forward to meeting as many AWS users and blog readers as possible. Never hesitate to stop me and to say hello!




ProgrammableWeb: Web Web provides a WebSocket based API that allows you to send and receive messages from in real-time. It enables you to manage automated interactions and step in with human supervision, design and create conversational UIs with NLP via a drag and drop interface, integrate services and customize workflows, and connect with customers across different channels. provides a platform to make conversational AI more accessible.

Simon Willison (Django): A Brief Intro to Docker for Djangonauts

A Brief Intro to Docker for Djangonauts

This is great - a really clear introduction to both Docker and Docker Compose, aimed at Django developers. Includes line-by-line annotations of an example Dockerfile and docker-compose.yml.

Via @revsys on Twitter

Simon Willison (Django): SRI Hash Generator

SRI Hash Generator

Handy utility for generating SRI hashes - just give it a URL and it will show you the script or link element you need to use to safely embed that URL in your page with the correct SRI hash.
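
Under the hood there's no magic: an SRI value is just the algorithm name plus a base64-encoded digest of the file's bytes. A minimal sketch in Python (sri_hash is my own name for the helper):

```python
import base64
import hashlib


def sri_hash(resource: bytes, algorithm: str = "sha384") -> str:
    """Compute a Subresource Integrity value such as 'sha384-...'."""
    digest = hashlib.new(algorithm, resource).digest()
    return algorithm + "-" + base64.b64encode(digest).decode("ascii")


# Paste the result into integrity="..." on a <script> or <link> element
print(sri_hash(b"alert('hello');"))
```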

Simon Willison (Django): Subresource Integrity

Subresource Integrity

Now supported in Firefox 55, Chrome 49+ and Safari 11+. This makes me much more comfortable about hot-linking to JavaScript and CSS hosted by the various CDN providers, since it means that should they get breached any evil new scripts hosted at the same URL will be denied by modern browsers.

ProgrammableWeb: Microsoft Makes Two More Cognitive APIs Generally Available

Microsoft has announced the general availability of two of its cognitive services APIs: the Bing Custom Search API and Bing Search API v7. Microsoft originally introduced the two APIs last month at its Ignite conference. Both are currently available through the Azure Portal.

Simon Willison (Django): Select Transform: JSON Template over JSON

Select Transform: JSON Template over JSON

A barrage of interesting ideas here. Having clients transmit up a JSON template which is then executed against data on the server and used to return exactly the data the client needs is just one of them (significant overlap with GraphQL there).

Via Hacker News

ProgrammableWeb: How to Run Functional API Tests with SmartBear Ready API

This installment of the API Testing series will focus on using SmartBear's Ready API testing solution to test an API. If you follow these steps, you'll create an actual API test suite running against the Trello API, a real API with formal authentication tokens. Trello is a virtual kanban tool whose service is available through a web front-end, a mobile app, and programmatically via a RESTful API.

ProgrammableWeb: APIs: Thread Genius

The Thread Genius API provides visual style recognition as a service that allows you to tag photos with stylistic concepts, find shoppable products within the photos, perform a visual search on social media images, or manage and search against your own photos. This includes resources for Catalog, Public Catalogs, Prediction, Search and more. Thread Genius is a visual search and image recognition API for style focused applications.
Date Updated: 2017-10-18
Tags: Fashion, Applications, Images, Recognition, Search

ProgrammableWeb: APIs: IVA Entertainment Express

The Entertainment Express API is a gateway to building Movies, TV, and Game Content discovery experiences. The Entertainment Express service features Add ServiceStack Reference which allows adding generated Native Types for the most popular typed languages and client platforms directly from within most major IDE's including Visual Studio, Xamarin Studio, Xcode, Android Studio, IntelliJ and Eclipse. This includes resources for Analytics, Changes, Charts, Common, ExternalIds, Find and more. IVA's service allows access to their library of video assets, movies, trailers, TV series, music videos, game trailers, images and more.
Date Updated: 2017-10-18
Tags: Entertainment, Video

ProgrammableWeb: Recent API Security Incidents Show Perils of Ignoring Best Practices

Despite the fact that the costs associated with hacking and data breaches have arguably never been higher, recent API-related security incidents highlight the fact that basic API security best practices are still often not being adhered to.

Earlier this month, it was discovered that a vulnerability in the T-Mobile website made it possible for attackers to retrieve customer data, including email address, account number and phone IMSI number, using nothing more than a customer's phone number.

ProgrammableWeb: Daily API RoundUp: Grafeas, FBI, Mendix, Codemojo

Every day, the ProgrammableWeb team is busy, updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWeb: Airbnb Officially Launches an API

Airbnb has finally launched the official Airbnb API. While the developer community has pushed for an Airbnb API for quite some time, and the company indicated its interest in opening an API, Airbnb’s official integration partners have historically been limited to private, wall-blocked partnerships. Until now.

Amazon Web Services: Amazon Elasticsearch Service now supports VPC

Starting today, you can connect to your Amazon Elasticsearch Service domains from within an Amazon VPC without the need for NAT instances or Internet gateways. VPC support for Amazon ES is easy to configure, reliable, and offers an extra layer of security. With VPC support, traffic between other services and Amazon ES stays entirely within the AWS network, isolated from the public Internet. You can manage network access using existing VPC security groups, and you can use AWS Identity and Access Management (IAM) policies for additional protection. VPC support for Amazon ES domains is available at no additional charge.

Getting Started

Creating an Amazon Elasticsearch Service domain in your VPC is easy. Follow all the steps you would normally follow to create your cluster and then select “VPC access”.

That’s it. There are no additional steps. You can now access your domain from within your VPC!

Things To Know

To support VPCs, Amazon ES places an endpoint into at least one subnet of your VPC. Amazon ES places an Elastic Network Interface (ENI) into the VPC for each data node in the cluster. Each ENI uses a private IP address from the IPv4 range of your subnet and receives a public DNS hostname. If you enable zone awareness, Amazon ES creates endpoints in two subnets in different availability zones, which provides greater data durability.

You need to set aside three times as many IP addresses as there are nodes in your cluster. You can halve that number if zone awareness is enabled. Ideally, you would create separate subnets just for Amazon ES.
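The sizing rule above is easy to encode (a sketch of the arithmetic exactly as described here; check the Amazon ES documentation for the authoritative rule):

```python
import math

def reserved_ips(node_count, zone_awareness=False):
    """IP addresses to set aside: three times the node count,
    halved when zone awareness is enabled."""
    ips = 3 * node_count
    if zone_awareness:
        ips = math.ceil(ips / 2)
    return ips

print(reserved_ips(10))        # 30 addresses for a 10-node cluster
print(reserved_ips(10, True))  # 15 with zone awareness enabled
```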

A few notes:

  • Currently, you cannot move existing domains to a VPC or vice-versa. To take advantage of VPC support, you must create a new domain and migrate your data.
  • Currently, Amazon ES does not support Amazon Kinesis Firehose integration for domains inside a VPC.

To learn more, see the Amazon ES documentation.


Matt Webb (Schulze & Webb)Filtered for things I learned over the weekend


Computers can be trained to see. But they don't necessarily fixate on the features humans see.

Adversarial Machine Learning is a technique to change an image to be recognised as something else, without looking any different to humans.

For example: a panda that - with the right fuzz of pixels added to it - looks to the computer 99.3% like a gibbon.

A hack: adversarial stop signs.

the team was able to create a stop sign that just looks splotchy or faded to human eyes but that was consistently classified by a computer vision system as a Speed Limit 45 sign.

Examples are given.
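The core trick behind these attacks can be shown with a toy linear classifier: nudge every input dimension by a tiny epsilon in the direction that moves the score toward the wrong class. This is the fast gradient sign method in miniature - a hypothetical sketch, not the stop-sign attack itself:

```python
import random

random.seed(0)
n = 100
w = [random.gauss(0, 1) for _ in range(n)]   # toy classifier weights
x = [random.gauss(0, 1) for _ in range(n)]   # toy "image"

def score(v):
    # Positive score => class A ("panda"), negative => class B ("gibbon").
    return sum(wi * vi for wi, vi in zip(w, v))

# Rescale x so the classifier confidently says class A (score of 1.0).
s, ww = score(x), sum(wi * wi for wi in w)
x = [xi + wi * (1.0 - s) / ww for xi, wi in zip(x, w)]

# Adversarial step: move each coordinate just 0.05 against the gradient.
epsilon = 0.05
x_adv = [xi - epsilon * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(score(x))      # ~1.0 -> class A
print(score(x_adv))  # negative -> class B, though x_adv barely differs from x
```

The per-coordinate change is at most 0.05 - barely visible as pixel values - yet the classification flips.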


Ontology is the philosophical study of existence. Object-oriented ontology:

puts things at the center of this study. Its proponents contend that nothing has special status, but that everything exists equally -- plumbers, cotton, bonobos, DVD players, and sandstone, for example.

Things from their own perspective.

A desk telephone, from its own perspective, is constructed to entice (a curve of a handle, buttons that want to be pushed) to feed on sound. To be nourished by sound. And with that consumed energy, to reach out across the world and touch - out of an infinity of destinations and through the tangle - one other. And to breathe in relief at this connection, a sigh: another voice.


The Ethics of Mars Exploration, an interview with Lucianne Walkowicz:

it remains a fact that Mars is a place unto its own that has its own history, and what respect do we owe to that history? What rights does that history have?

Which makes me ask this:

Yes I believe there's a human imperative to go to Mars; yes I believe it has to be done in an inclusive way; yes space mustn't be about resource exploitation, a cosmic Gestell; yes potential life on Mars must be preserved.

But also, what Walkowicz said, the land, the land, the land.

I hike, and the land has an intrinsic right to be itself. But I also believe in the human experience of the land, that this is a component of meaning: so, paths? When you walk the trails of the American Southwest, you come to understand that the trail-makers are poets, giving the land a voice to sing through human experience: effort, surprise, endurance, revelation, breathlessness.

So there should be trails on Mars too.

Which makes me think this:

Who is working to understand this interplay of the subjectivity of the land, and the human gaze, right now? Not necessarily on Mars.

Landscape artists - landscape photographers - do this well.

And that's a process that, for Mars, could start today.

There is Mars exploration via rover right now. The rovers, of course, have cameras. Do they have landscape photographers on the team? Are those artists given free rein to look, be, and create?

Why Hasn’t David Hockney Been Given The Keys To The Mars Rover Yet.


A list of interstellar radio messages. That is, ones we've transmitted, not ones we've received.

The first one, from 1962, in Morse code: MIR LENIN SSSR. It was sent to Venus.

A more recent one, A Simple Response to an Elemental Message, was transmitted in October 2016 and comprised 3,755 crowdsourced responses to the question How will our present environmental interactions shape the future? It was transmitted towards Polaris and will take 434 years to arrive. (Then another 434 years to hear back.)

The Golden Record is not a radio transmission but a physical item, copies of which were placed on Voyagers 1 and 2 in 1977. It includes pictures, sounds, music, and greetings in 55 languages, among them these words in Amoy, spoken in southern China:

Friends of space, how are you all? Have you eaten yet? Come visit us if you have time.

Which I hope desperately isn't misinterpreted as offering humanity up for lunch.

Voyager 1 will make a flyby of a star in 40,000 years. Star AC +79 3888 is 17.6 lightyears away, so the earliest we will receive a radio message back is in 40,017.6 years. We should remember to listen out for that. Year 42,034. June.

The Rosetta Project is an archive of all the world's languages by the Long Now Foundation, and is intended to be a code for future civilisations to unlock... what? An archive that we leave behind.

Over the weekend I heard it asked:

Who is keeping an archive of all the messages we send into space, and how will that archive be maintained? We won't receive an answer from the stars, if any, for hundreds or maybe tens of thousands of years.

If, when, we receive a reply saying YES then how will we know what it's a YES about?

My weekend

I spent the weekend at Kickstarter HQ in Brooklyn for PWL Camp 2017 -- a 48-hour, 200-person unconference where the agenda is created by the attendees at the beginning of the meeting. Anyone who wants to initiate a discussion on a topic can claim a time and a space.

Tons of great conversations. A very open, generous, and talented crowd. My notebook is full but mostly incomprehensible. The above are four things that came up. I'm grateful for having been invited.

ProgrammableWebHow to Build a 3D Airport Experience with the Wrld.js SDK

Wrld.js is an SDK built on the Leaflet API that can be used for embedding interactive 3D indoor and outdoor maps in a web page.

This guide will walk you through the process of using wrld.js to build a 3D airport experience for LAX, including searching, point-of-interest markers, and indoor routing.

Simon Willison (Django)How to set up world-class continuous deployment using free hosted tools

I’m going to describe a way to put together a world-class continuous deployment infrastructure for your side-project without spending any money.

With continuous deployment every code commit is tested against an automated test suite. If the tests pass it gets deployed directly to the production environment! How’s that for an incentive to write comprehensive tests?

Each of the tools I’m using offers a free tier which is easily enough to handle most side-projects. And once you outgrow those free plans, you can solve those limitations in exchange for money!

Here’s the magic combination:

Step one: Publish some code to GitHub with some tests

I’ll be using the code for my blog as an example. It’s a classic Django application, with a small (OK, tiny) suite of unit tests. The tests are run using the standard Django ./manage.py test command.

Writing a Django application with tests is outside the scope of this article. Thankfully the official Django tutorial covers testing in some detail.

Step two: Hook up Travis CI

Travis CI is an outstanding hosted platform for continuous integration. Given a small configuration file it can check out code from GitHub, set up an isolated test environment (including hefty dependencies like a PostgreSQL database server, Elasticsearch, Redis etc), run your test suite and report the resulting pass/fail grade back to GitHub.

It’s free for publicly hosted GitHub projects. If you want to test code in a private repository you’ll have to pay them some money.

Here’s my .travis.yml configuration file:

language: python
python:
  - 2.7
services: postgresql
addons:
  postgresql: "9.6"
install:
  - pip install -r requirements.txt
before_script:
  - psql -c "CREATE DATABASE travisci;" -U postgres
  - python manage.py migrate --noinput
  - python manage.py collectstatic
script:
  - python manage.py test

And here’s the resulting Travis CI dashboard.

The integration of Travis with GitHub runs deep. Once you’ve set up Travis, it will automatically test every push to every branch - driven by GitHub webhooks, so test runs are set off almost instantly. Travis will then report the test results back to GitHub, where they’ll show up in a bunch of different places - including these pleasing green ticks on the branches page:

GitHub branches page showing CI results

Travis will also run tests against any open pull requests. This is a great incentive to build new features in a pull request even if you aren’t using them for code review:

GitHub pull request showing CI results

Circle CI deserves a mention as an alternative to Travis. The two are close competitors with very similar feature sets, and Circle CI's free plan includes up to 1,500 build minutes per month for private repositories.

Step three: Deploy to Heroku and turn on continuous deployment

I’m a big fan of Heroku for side projects, because it means not having to worry about ongoing server-maintenance. I’ve lost several side-projects to entropy and software erosion - getting an initial VPS set up may be pretty simple, but a year later security patches need applying and the OS needs upgrading and the log files have filled up the disk and you’ve forgotten how you set everything up in the first place…

It turns out Heroku has basic support for continuous deployment baked in, and it’s trivially easy to set up. You can tell Heroku to deploy on every commit to GitHub, and then if you’ve attached a CI service like Travis that reports build health back you can check the box for “Wait for CI to pass before deploy”:

Heroku deployment settings for continuous deployment

Since small dynos on Heroku are free, you can even set up a separate Heroku app as a staging environment. I started my continuous integration adventure just deploying automatically to my staging instance, then switched over to deploying to production once I gained some confidence in how it all fitted together.

If you’re using continuous deployment with Heroku and Django, it’s a good idea to set up Heroku to automatically run your migrations for every deploy - otherwise you might merge a pull request with a model change and forget to run the migrations before the deploy goes out. You can do that using Heroku’s release phase feature, by adding the line release: python manage.py migrate --noinput to your Heroku Procfile (here’s mine).

Once you go beyond Heroku’s free tier things get much more powerful: Heroku Flow combines pipelines, review apps and their own CI solution to provide a comprehensive solution for much larger teams.

Step four: Monitor errors with Sentry

If you’re going to move fast and break things, you need to know when things have broken. Sentry is a fantastic tool for collecting exceptions, aggregating them and spotting when something new crops up. It’s open source so you can host it yourself, but they also offer a robust hosted version with a free plan that can track up to 10,000 errors a month.

My favourite feature of Sentry is that it gives each exception it sees a “signature” based on an MD5 hash of its traceback. This means it can tell whether errors share the same underlying issue, de-dupe them, and alert you only the first time it spots an error it has not seen before.
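The grouping idea is simple enough to sketch - hash the traceback and alert only on new signatures. (Sentry's real grouping logic is more sophisticated than this toy version.)

```python
import hashlib
from collections import Counter

seen = Counter()

def signature(traceback_text):
    """Naive 'same underlying error?' key: an MD5 hash of the traceback."""
    return hashlib.md5(traceback_text.encode("utf-8")).hexdigest()

def should_alert(traceback_text):
    """Alert only the first time a given signature shows up."""
    sig = signature(traceback_text)
    seen[sig] += 1
    return seen[sig] == 1

tb = 'Traceback (most recent call last): ...\nKeyError: "username"'
print(should_alert(tb))  # True  - first sighting, send a notification
print(should_alert(tb))  # False - de-duped, just bump the counter
```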


Sentry has integrations for most modern languages, but it’s particularly easy to use with Django. Just install raven and add a few extra lines to your settings.py:

SENTRY_DSN = os.environ.get('SENTRY_DSN')
if SENTRY_DSN:
    INSTALLED_APPS += ('raven.contrib.django.raven_compat',)
    RAVEN_CONFIG = {
        'dsn': SENTRY_DSN,
        'release': os.environ.get('HEROKU_SLUG_COMMIT', ''),
    }

Here I’m using the Heroku pattern of keeping configuration in environment variables. SENTRY_DSN is provided by Sentry when you create your project there - you just have to add it as a Heroku config variable.

The HEROKU_SLUG_COMMIT line causes the currently deployed git commit hash to be fed to Sentry so that it knows what version of your code was running when it reports an error. To enable that variable, you’ll need to enable Dyno Metadata by running heroku labs:enable runtime-dyno-metadata against your application.

Step five: Hook it all together with Slack

Would you like a push notification to your phone every time your site gets code committed / the tests pass or fail / a deploy goes out / a new error is detected? All of the above tools can report such things to Slack, and Slack’s free plan is easily enough to collect all of these notifications and push them to your phone via the free Slack iOS or Android apps.

Notifications from Travis CI and GitHub in Slack

Here are instructions for setting up Slack with GitHub, Travis CI, Heroku and Sentry.

Need more? Pay for it!

Having run much of this kind of infrastructure myself in the past I for one am delighted by the idea of outsourcing it, especially when the hosted options are of such high quality.

Each of these tools offers a free tier which is generous enough to work great for small side projects. As you start scaling up, you can start paying for them - that’s why they gave you a free tier in the first place.

Comments or suggestions? Join this thread on Hacker News.

Amazon Web ServicesAmazon Lightsail Update – Launch and Manage Windows Virtual Private Servers

I first told you about Amazon Lightsail last year in my blog post, Amazon Lightsail – the Power of AWS, the Simplicity of a VPS. Since last year’s launch, thousands of customers have used Lightsail to get started with AWS, launching Linux-based Virtual Private Servers.

Today we are adding support for Windows-based Virtual Private Servers. You can launch a VPS that runs Windows Server 2012 R2, Windows Server 2016, or Windows Server 2016 with SQL Server 2016 Express and be up and running in minutes. You can use your VPS to build, test, and deploy .NET or Windows applications without having to set up or run any infrastructure. Backups, DNS management, and operational metrics are all accessible with a click or two.

Servers are available in five sizes, with 512 MB to 8 GB of RAM, 1 or 2 vCPUs, and up to 80 GB of SSD storage. Prices (including software licenses) start at $10 per month.

You can try out a 512 MB server for one month (up to 750 hours) at no charge.

Launching a Windows VPS
To launch a Windows VPS, log in to Lightsail, click on Create instance, and select the Microsoft Windows platform. Then click on Apps + OS if you want to run SQL Server 2016 Express, or OS Only if Windows is all you need:

If you want to use a PowerShell script to customize your instance after it launches for the first time, click on Add launch script and enter the script:

Choose your instance plan, enter a name for your instance(s), and select the quantity to be launched, then click on Create:

Your instance will be up and running within a minute or so:

Click on the instance, and then click on Connect using RDP:

This will connect using a built-in, browser-based RDP client (you can also use the IP address and the credentials with another client):

Available Today
This feature is available today in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (London), EU (Ireland), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.



ProgrammableWebVerdigris Launches APIs to Manage Energy for Smart Buildings

Verdigris, a company that provides an AI-powered energy management and IoT platform for commercial facilities, has announced the launch of a new set of APIs to power smart buildings. This new set of Verdigris APIs includes an Energy API, Forecasting API, and Disaggregation API.

ProgrammableWebHow Cost Analysis Tools Can Prevent Cloud Computing Calamity

AWS and other cloud computing giants have been enjoying bumper profits in recent years, which means startups must be spending more on cloud computing. Indeed, cloud costs can jump massively if you don’t keep an eye on your budget. Casey Benko over at SearchCloudComputing recommends investing in cloud cost analysis tools to keep costs down.

Simon Willison (Django)Quoting Graham Sutherland

TL;DR on the KRACK WPA2 stuff - you can repeatedly resend the 3rd packet in a WPA2 handshake and it'll reset the key state, which leads to nonce reuse, which leads to trivial decryption with known plaintext. Can be easily leveraged to dump TCP SYN traffic and hijack connections.

Graham Sutherland
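The jump from “nonce reuse” to “trivial decryption” is standard stream-cipher math: reusing a (key, nonce) pair means reusing the keystream, and two ciphertexts made with the same keystream XOR together to reveal the XOR of their plaintexts. A toy sketch with a random keystream (plain XOR, not the actual WPA2 cipher):

```python
import os

def xor(a, b):
    # XOR two byte strings, truncated to the shorter one.
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(64)   # stands in for cipher output under one (key, nonce)

p1 = b"known plaintext such as a TCP SYN"   # attacker knows or guesses this
p2 = b"the secret the victim transmitted"   # attacker wants this

c1 = xor(p1, keystream)      # first message
c2 = xor(p2, keystream)      # nonce reused => same keystream again

# c1 XOR c2 == p1 XOR p2, so knowing p1 recovers p2 (up to the shorter length):
recovered = xor(xor(c1, c2), p1)
print(recovered)
```

No key recovery is needed at any point, which is why the attack is described as trivial.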

ProgrammableWeb: APIsVerdigris

The Verdigris API allows fetching of data collected by buds in the Verdigris ecosystem for Buildings, Circuits, and more. Verdigris lets you get notifications when equipment is consuming too much energy, oscillating, or spiking. It enables you to track your energy consumption with smart sensors and see how your buildings consume energy from anywhere, at any time. You can receive detailed reports and recommendations to increase the energy efficiency of your buildings, and turn off power-consuming elements when Verdigris detects a higher-than-normal electrical load. Verdigris provides responsive energy intelligence through machine learning, artificial intelligence, and the Internet of Things.
Date Updated: 2017-10-16
Tags: Artificial Intelligence, Energy, Internet of Things, Machine Learning

ProgrammableWeb: APIsGrafeas

Grafeas is an open artifact metadata API from Google for auditing and governing a user's software supply chain. It allows users to store, query, and retrieve critical metadata about all of their software artifacts, add new metadata types and providers, query metadata across all of their components in real time, control read/write access to critical metadata, and more. Google Cloud provides developers with a way to build visionary cloud tools and infrastructure, applications, maps and devices.
Date Updated: 2017-10-16
Tags: DevOps, Cloud, Metadata, Open Source

Simon Willison (Django)Explorable Explanations

Explorable Explanations

I’m fascinated by web articles and essays that embed interactive visualizations - taking advantage of the unique capabilities of the medium to help explain complex concepts. Explorable Explanations collects exactly these, under the banner of “learning through play”. They also gather tools and tutorials to help build more of them.

Simon Willison (Django)Deploying an asynchronous Python microservice with Sanic and Zeit Now

Back in 2008 Natalie Downe and I deployed what today we would call a microservice: json-head, a tiny Google App Engine app that allowed you to make an HTTP HEAD request against a URL and get back the HTTP headers as JSON. One of our initial use-cases for this was Natalie’s addSizes.js, an unobtrusive jQuery script that could annotate links to PDFs and other large files with their corresponding file size pulled from the Content-Length header. Another potential use-case is detecting broken links, since the API can be used to spot 404 status codes (as in this example).

At some point in the following decade the original app stopped working. Today I’m bringing it back, mainly as an excuse to try out the combination of Python 3.5 async, the Sanic microframework and Zeit’s brilliant Now deployment platform.

First, a demo. Requesting the service with a ?url= parameter returns the following:

        "ok": true,
        "headers": {
            "Date": "Sat, 14 Oct 2017 18:37:52 GMT",
            "Content-Type": "text/html; charset=utf-8",
            "Connection": "keep-alive",
            "Set-Cookie": "__cfduid=dd0b71b4e89bbaca5b27fa06c0b95af4a1508006272; expires=Sun, 14-Oct-18 18:37:52 GMT; path=/;; HttpOnly; Secure",
            "Cache-Control": "s-maxage=200",
            "X-Frame-Options": "SAMEORIGIN",
            "Via": "1.1 vegur",
            "CF-Cache-Status": "HIT",
            "Vary": "Accept-Encoding",
            "Server": "cloudflare-nginx",
            "CF-RAY": "3adca70269a51e8f-SJC",
            "Content-Encoding": "gzip"
        "status": 200,
        "url": ""

Given a URL, the service performs an HTTP HEAD request and returns the resulting status code and the HTTP headers. Results are returned with the Access-Control-Allow-Origin: * header, so you can call the API using fetch() or XMLHttpRequest from JavaScript running on any page.

Sanic and Python async/await

A key new feature added to Python 3.5 back in September 2015 was built-in syntactic support for coroutine control via the async/await statements. Python now has some serious credibility as a platform for asynchronous I/O (the concept that got me so excited about Node.js back in 2009). This has led to an explosion of asynchronous innovation around the Python community.

json-head is the perfect application for async - it’s little more than a dumbed-down HTTP proxy, accepting incoming HTTP requests, making its own requests elsewhere and then returning the results.

Sanic is a Flask-like web framework built specifically to take advantage of async/await in Python 3.5. It’s designed for speed - built on top of uvloop, a Python wrapper for libuv (which itself was originally built to power Node.js). uvloop’s self-selected benchmarks are extremely impressive.

Zeit Now

To host this new microservice, I chose Zeit Now. It’s a truly beautiful piece of software design.

Now lets you treat deployments as immutable. Every time you deploy you get a brand new URL. You can then interact with your deployment directly, or point an existing alias to it if you want a persistent URL for your project.

Deployments are free, and deployed code stays available forever due to some clever engineering behind the scenes.

Best of all: deploying a project takes just a single command: type now and the code in your current directory will be deployed to their cloud and assigned a unique URL.

Now was originally built for Node.js projects, but last August Zeit added Docker support. If the directory you run it in contains a Dockerfile, running now will upload, build and run the corresponding image.

There’s just one thing missing: good examples of how to deploy Python projects to Now using Docker. I’m hoping this article can help fill that gap.

Here’s the complete Dockerfile I’m using for json-head:

FROM python:3
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 8006
CMD ["python", ""]

I’m using the official Docker Python image as a base, copying the current directory into the image, using pip install to install dependencies and then exposing port 8006 (for no reason other than that it’s the port I use for my local development environment) and running the script. Now is smart enough to forward incoming HTTP traffic on port 80 to the port that was exposed by the container.

If you set up Now yourself (npm install -g now or use one of their installers) you can deploy my code directly from GitHub to your own instance with a single command:

$ now simonw/json-head
> Didn't find directory. Searching on GitHub...
> Deploying GitHub repository "simonw/json-head" under simonw
> Ready! (copied to clipboard) [1s]
> Initializing…
> Building
> ▲ docker build
> Sending build context to Docker daemon 7.168 kB
> Step 1 : FROM python:3
> 3: Pulling from library/python
> ... lots more stuff here ...

Initial implementation

Here’s my first working version of json-head using Sanic:

from sanic import Sanic
from sanic import response
import aiohttp

app = Sanic(__name__)

async def head(session, url):
    try:
        async with session.head(url) as response:
            return {
                'ok': True,
                'headers': dict(response.headers),
                'status': response.status,
                'url': url,
            }
    except Exception as e:
        return {
            'ok': False,
            'error': str(e),
            'url': url,
        }

@app.route('/')
async def handle_request(request):
    url = request.args.get('url')
    if url:
        async with aiohttp.ClientSession() as session:
            head_info = await head(session, url)
            return response.json(
                head_info,
                headers={
                    'Access-Control-Allow-Origin': '*'
                },
            )
    else:
        return response.html('Try /?url=xxx')

if __name__ == '__main__':
    app.run(host="", port=8006)

This exact code is deployed and running on Now - since Now deployments are free, there’s no reason not to leave work-in-progress examples hosted as throwaway deployments.

In addition to Sanic, I’m also using the handy aiohttp asynchronous HTTP library - which features API design clearly inspired by my all-time favourite HTTP library, requests.

The key new pieces of syntax to understand in the above code are the async and await statements. async def is used to declare a function that acts as a coroutine. Coroutines need to be executed inside an event loop (which Sanic handles for us), but gain the ability to use the await statement.

The await statement is the real magic here: it suspends the current coroutine until the coroutine it is calling has finished executing. It is this that allows us to write asynchronous code without descending into a messy hell of callback functions.
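In miniature, and independent of Sanic, the mechanism looks like this (a toy sketch driving the plain asyncio event loop directly):

```python
import asyncio

async def fetch_fake(name):
    # Stand-in for real I/O: awaiting sleep(0) suspends this coroutine
    # and hands control back to the event loop.
    await asyncio.sleep(0)
    return "hello " + name

async def caller():
    # `await` suspends caller() until fetch_fake() finishes - no callbacks.
    greeting = await fetch_fake("world")
    return greeting

loop = asyncio.new_event_loop()
print(loop.run_until_complete(caller()))  # hello world
```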

Adding parallel requests

So far we haven’t really taken advantage of what async I/O can do - if every incoming HTTP request results in a single outgoing HTTP request, then async may help us scale to serve more incoming requests at once, but it’s not really giving us any new functionality.

Executing multiple outbound HTTP requests in parallel is a much more interesting use-case. Let’s add support for multiple ?url= parameters, such as the following:

        "ok": true,
        "headers": {
            "Date": "Sat, 14 Oct 2017 19:35:29 GMT",
            "Content-Type": "text/html; charset=utf-8",
            "Connection": "keep-alive",
            "Set-Cookie": "__cfduid=ded486c1faaac166e8ae72a87979c02101508009729; expires=Sun, 14-Oct-18 19:35:29 GMT; path=/;; HttpOnly; Secure",
            "Cache-Control": "s-maxage=200",
            "X-Frame-Options": "SAMEORIGIN",
            "Via": "1.1 vegur",
            "CF-Cache-Status": "EXPIRED",
            "Vary": "Accept-Encoding",
            "Server": "cloudflare-nginx",
            "CF-RAY": "3adcfb671c862888-SJC",
            "Content-Encoding": "gzip"
        "status": 200,
        "url": ""
        "ok": true,
        "headers": {
            "Date": "Sat, 14 Oct 2017 19:35:29 GMT",
            "Expires": "-1",
            "Cache-Control": "private, max-age=0",
            "Content-Type": "text/html; charset=ISO-8859-1",
            "P3P": "CP=\"This is not a P3P policy! See for more info.\"",
            "Content-Encoding": "gzip",
            "Server": "gws",
            "X-XSS-Protection": "1; mode=block",
            "X-Frame-Options": "SAMEORIGIN",
            "Set-Cookie": "1P_JAR=2017-10-14-19; expires=Sat, 21-Oct-2017 19:35:29 GMT; path=/;",
            "Alt-Svc": "quic=\":443\"; ma=2592000; v=\"39,38,37,35\"",
            "Transfer-Encoding": "chunked"
        "status": 200,
        "url": ""

We’re now accepting multiple URLs and executing multiple HEAD requests - and Python 3.5 async makes it easy to do this in parallel, so our overall request time should match that of the single slowest HEAD request we trigger.

Here’s an implementation that adds support for multiple, parallel outbound HTTP requests:

import asyncio

@app.route('/')
async def handle_request(request):
    urls = request.args.getlist('url')
    if urls:
        async with aiohttp.ClientSession() as session:
            head_infos = await asyncio.gather(*[
                head(session, url) for url in urls
            ])
            return response.json(
                head_infos,
                headers={'Access-Control-Allow-Origin': '*'},
            )
    else:
        return response.html(INDEX)

We’re using the asyncio module from the Python standard library here - in particular the gather function. asyncio.gather takes a list of coroutines and returns a future aggregating their results. This future will resolve (and return to a corresponding await statement) as soon as all of those coroutines have returned their values.
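That “resolves when the slowest one finishes” behaviour is easy to observe with asyncio.sleep standing in for network calls (a toy measurement, not the json-head code):

```python
import asyncio
import time

async def fake_head(delay):
    # Stand-in for an outbound HEAD request that takes `delay` seconds.
    await asyncio.sleep(delay)
    return delay

async def main():
    start = time.monotonic()
    results = await asyncio.gather(fake_head(0.1), fake_head(0.2), fake_head(0.3))
    return results, time.monotonic() - start

loop = asyncio.new_event_loop()
results, elapsed = loop.run_until_complete(main())
print(results)  # [0.1, 0.2, 0.3] - results come back in argument order
print(elapsed)  # roughly 0.3s: the slowest call, not the 0.6s sum
```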

My final code for json-head can be found on GitHub. As I hope I’ve demonstrated, the combination of Python 3.5+, Sanic and Now makes deploying asynchronous Python microservices trivially easy.

ProgrammableWebGoogle, IBM, and Others Introduce Grafeas Open Source API

Google, IBM, along with a number of other technology companies have announced Grafeas, an open source API that stores, queries, and retrieves crucial metadata on all types of software components. Using the Grafeas API, companies can combine data with other metadata to build a comprehensive model for security and governance at scale.

Simon Willison (Django)What's New In DevTools (Chrome 62)

What's New In DevTools (Chrome 62)

Some really neat stuff. Highlights include top-level "await" support in the console, the ability to take screenshots of specific HTML nodes, CSS grid highlighting and the ability to drop a .HAR file onto the network panel in order to view it as a waterfall.

