Simon Willison (Django): How to turn a list of JSON objects into a Datasette

How to turn a list of JSON objects into a Datasette

ramadis on GitHub cleaned up data on 184,879 crimes reported in Buenos Aires since 2016 and shared them on GitHub as a JSON file. Here are my notes on how to use Pandas to convert JSON into SQLite and publish it using Datasette.
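The core of that conversion is only a few lines; here is a minimal sketch of the Pandas-to-SQLite step (the file and table names are illustrative, not taken from the original notes, and it assumes the JSON is a flat array of objects):

    import pandas as pd
    import sqlite3

    # Load the list of JSON objects (one dict per crime report) into a DataFrame.
    df = pd.read_json("crimes.json")  # hypothetical file name

    # Write the DataFrame out as a table in a SQLite database that Datasette can serve.
    conn = sqlite3.connect("crimes.db")
    df.to_sql("crimes", conn, if_exists="replace", index=False)
    conn.close()

    # The database can then be served locally with: datasette crimes.db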

Amazon Web Services: Recent EC2 Goodies – Launch Templates and Spread Placement

We launched some important new EC2 instance types and features at AWS re:Invent. I’ve already told you about the M5, H1, T2 Unlimited and Bare Metal instances, and about Spot features such as Hibernation and the New Pricing Model. Randall told you about the Amazon Time Sync Service. Today I would like to tell you about two of the features that we launched: Spread placement groups and Launch Templates. Both features are available in the EC2 Console and from the EC2 APIs, and can be used in all of the AWS Regions in the “aws” partition:

Launch Templates
You can use launch templates to store the instance, network, security, storage, and advanced parameters that you use to launch EC2 instances; templates can also include any desired tags. Each template can include any desired subset of the full collection of parameters. You can, for example, define common configuration parameters such as tags or network configurations in a template, and allow the other parameters to be specified as part of the actual launch.

Templates give you the power to set up a consistent launch environment that spans instances launched in On-Demand and Spot form, as well as through EC2 Auto Scaling and as part of a Spot Fleet. You can use them to implement organization-wide standards and to enforce best practices, and you can give your IAM users the ability to launch instances via templates while withholding the ability to do so via the underlying APIs.

Templates are versioned and you can use any desired version when you launch an instance. You can create templates from scratch, base them on the previous version, or copy the parameters from a running instance.

Here’s how you create a launch template in the Console:

Here’s how to include network interfaces, storage volumes, tags, and security groups:

And here’s how to specify advanced and specialized parameters:

You don’t have to specify values for all of these parameters in your templates; enter the values that are common to multiple instances or launches and specify the rest at launch time.

When you click Create launch template, the template is created and can be used to launch On-Demand instances, create Auto Scaling Groups, and create Spot Fleets:

The Launch Instance button now gives you the option to launch from a template:

Simply choose the template and the version, and finalize all of the launch parameters:

You can also manage your templates and template versions from the Console:

To learn more about this feature, read Launching an Instance from a Launch Template.
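If you prefer to script this, the same flow is available programmatically. Here is a rough boto3 sketch (the AMI, instance type, and template name are placeholders, not values from this post):

    import boto3

    ec2 = boto3.client("ec2")

    # Store the parameters that are common to our launches in a template.
    ec2.create_launch_template(
        LaunchTemplateName="webserver-template",          # hypothetical name
        LaunchTemplateData={
            "ImageId": "ami-0123456789abcdef0",           # placeholder AMI
            "InstanceType": "t2.micro",
            "TagSpecifications": [
                {"ResourceType": "instance",
                 "Tags": [{"Key": "env", "Value": "prod"}]},
            ],
        },
    )

    # Launch from the template; anything not stored in it can be supplied here.
    ec2.run_instances(
        MinCount=1,
        MaxCount=1,
        LaunchTemplate={"LaunchTemplateName": "webserver-template", "Version": "1"},
    )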

Spread Placement Groups
Spread placement groups indicate that you do not want the instances in the group to share the same underlying hardware. Applications that rely on a small number of critical instances can launch them in a spread placement group to reduce the odds that one hardware failure will impact more than one instance. Here are a couple of things to keep in mind when you use spread placement groups:

  • Availability Zones – A single spread placement group can span multiple Availability Zones. You can have a maximum of seven running instances per Availability Zone per group.
  • Unique Hardware – Launch requests can fail if there is insufficient unique hardware available. The situation changes over time as overall usage changes and as we add additional hardware; you can retry failed requests at a later time.
  • Instance Types – You can launch a wide variety of M4, M5, C3, R3, R4, X1, X1e, D2, H1, I2, I3, HS1, F1, G2, G3, P2, and P3 instance types in spread placement groups.
  • Reserved Instances – Instances launched into a spread placement group can make use of reserved capacity. However, you cannot currently reserve capacity for a placement group and could receive an ICE (Insufficient Capacity Error) even if you have some RIs available.
  • Applicability – You cannot use spread placement groups in conjunction with Dedicated Instances or Dedicated Hosts.

You can create and use spread placement groups from the AWS Management Console, the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, and the AWS SDKs. The console has a new feature that will help you to learn how to use the command line:

You can specify an existing placement group or create a new one when you launch an EC2 instance:
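For example, a spread placement group can be created and used from boto3 along these lines (the group name and AMI are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Create a spread placement group (the name is illustrative).
    ec2.create_placement_group(GroupName="critical-spread", Strategy="spread")

    # Launch two instances into it; each should land on distinct underlying hardware.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="m5.large",
        MinCount=2,
        MaxCount=2,
        Placement={"GroupName": "critical-spread"},
    )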

To learn more, read about Placement Groups.

Jeff;

ProgrammableWeb: Webhose.io Adds Dark Web and Broadcast Data to its API

Webhose.io, an advanced data crawling API service provider, has added dark web and broadcast data to its Webhose.io API. The Webhose.io Dark Web Data Feed allows cyber security companies and researchers to monitor TOR network activity and provides programmatic access to anonymized web content available on TOR (The Onion Router). The API can be used to discover and parse specific information from within content located on .onion domains.

Simon Willison (Django): GaretJax/django-click

GaretJax/django-click

I've been using Click to write command-line tools in Python recently (both datasette and csvs-to-sqlite use it) and it's a delightful way of composing simple and complex CLI interfaces. I've always found Django's default management command syntax hard to fit in my head - django-click means I can combine the two.

Via Lacey Williams Henschel
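For a flavour of what this looks like, a django-click management command goes roughly like this (sketched from the project's README as I remember it; the file path follows the usual Django convention and the greeting is made up):

    # myapp/management/commands/hello.py
    import djclick as click

    @click.command()
    @click.argument("name")
    def command(name):
        # Invoked as: ./manage.py hello <name>
        click.secho(f"Hello, {name}!", fg="green")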

Jeremy Keith (Adactio): Heisenberg

I wrote about Google Analytics yesterday. As usual, I syndicated the post to Ev’s blog, and I got an interesting response over there. Kelly Burgett set me straight on some of the finer details of how goals work, and finished with this thought:

You mention “delivering a performant, accessible, responsive, scalable website isn’t enough” as if it should be, and I have to disagree. It’s not enough for a business to simply have a great website if you are unable to understand performance of channel marketing, track user demographics and behavior on-site, and optimize your site/brand based on that data. I’ve seen a lot of ugly sites who have done exceptionally well in terms of ROI, simply because they are getting the data they need from the site in order make better business decisions. If your site cannot do that (ie. through data collection, often third party scripts), then your beautifully-designed site can only take you so far.

That makes an excellent case for having analytics. But that’s not necessarily the same as having Google analytics, or even JavaScript-driven analytics at all.

By far the most useful information you get from analytics is where people came from, where they went next, and what kind of device they're using. None of that information requires JavaScript. It's all available from your server logs.
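As a rough illustration, referrer and user agent are sitting right there in a standard combined-format access log; a few lines of Python are enough to start counting them (the log path and format are assumptions about a typical nginx or Apache setup):

    import re
    from collections import Counter

    # Combined log format ends with: "request" status bytes "referrer" "user-agent"
    LINE = re.compile(r'"[^"]*" \d+ \S+ "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"')

    referrers, agents = Counter(), Counter()
    with open("/var/log/nginx/access.log") as f:   # assumed log location
        for line in f:
            m = LINE.search(line)
            if m:
                referrers[m.group("referrer")] += 1
                agents[m.group("agent")] += 1

    print(referrers.most_common(10))
    print(agents.most_common(10))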

I don’t want to come across all old-man-yell-at-cloud here, but I’m trying to remember at what point self-hosted software for analysing your log traffic became not good enough.

Here’s the thing: logging on the server has no effect on the user experience. It’s basically free, in terms of performance. Logging via JavaScript, by its very nature, has some cost. Even if its negligible, that’s one more request, and that’s one more bit of processing for the CPU.

All of the data that you can only get via JavaScript (in-page actions, heat maps, etc.) are, in my experience, better handled by dedicated software. To me, that kind of more precise data feels different to analytics in the sense of funnels, conversions, goals and all that stuff.

So in order to get more fine-grained data to analyse, our analytics software has now doubled down on a technology—JavaScript—that has an impact on the end user, where previously the act of observation could be done at a distance.

There are also blind spots that come with JavaScript-based tracking. According to Google Analytics, 0% of your customers don’t have JavaScript. That’s not necessarily true, but there’s literally no way for Google Analytics—which relies on JavaScript—to even do its job in the absence of JavaScript. That can lead to a dangerous situation where you might be led to think that 100% of your potential customers are getting by, when actually a proportion might be struggling, but you’ll never find out about it.

Related: according to Google Analytics, 0% of your customers are using ad-blockers that block requests to Google’s servers. Again, that’s not necessarily a true fact.

So I completely agree that analytics are a good thing to have for your business. But it does not follow that Google Analytics is a good thing for your business. Other options are available.

I feel like the assumption that “analytics = Google Analytics” is like the slippery slope in reverse. If we’re all agreed that analytics are important, then aren’t we also all agreed that JavaScript-based tracking is important?

In a word, no.

This reminds me of the arguments made in favour of intrusive, bloated advertising scripts. All of the arguments focus on the need for advertising—to stay in business, to pay the writers—which are all great reasons for advertising, but have nothing to do with JavaScript, which is at the root of the problem. Everyone I know who uses an ad-blocker—including me—doesn’t use it to stop seeing adverts, but to stop the performance of the page being degraded (and to avoid being tracked across domains).

So let’s not confuse the means with the ends. If you need to have advertising, that doesn’t mean you need to have horribly bloated JavaScript-based advertising. If you need analytics, that doesn’t mean you need an analytics script on your front end.

ProgrammableWeb: JoyToken Releases API for Building Online Casino Games Using Smart Contracts

JoyToken, a blockchain startup that aims to develop an infrastructure protocol for the online gaming industry, has released an API that demonstrates how games will run on its platform.

The platform employs smart contracts to create what the company calls a "'trustless' gambling ecosystem." The smart contracts determine the outcome of games, which the company says offers numerous advantages.

Jeremy Keith (Adactio): Analysing analytics

Hell is other people’s JavaScript.

There’s nothing quite so crushing as building a beautifully performant website only to have it infested with a plague of third-party scripts that add to the weight of each page and reduce the responsiveness, making a mockery of your well-considered performance budget.

Trent has been writing about this:

My latest realization is that delivering a performant, accessible, responsive, scalable website isn’t enough: I also need to consider the impact of third-party scripts.

He’s started the process by itemising third-party scripts. Frustratingly though, there’s rarely one single culprit that you can point to—it’s the cumulative effect of “just one more beacon” and “just one more analytics script” and “just one more A/B testing tool” that adds up to a crappy experience that warms your user’s hands by ensuring your site is constantly draining their battery.

Actually, having just said that there’s rarely one single culprit, Adobe Tag Manager is often at the root of third-party problems. That and adverts. It’s like opening the door of your beautifully curated dream home, and inviting a pack of diarrhetic elephants in: “Please, crap wherever you like.”

But even the more well-behaved third-party scripts can get out of hand. Google Analytics is so ubiquitous that it’s hardly even considered in the list of potentially harmful third-party scripts. On the whole, it’s a fairly well-behaved citizen of your site’s population of third-party scripts (y’know, leaving aside the whole surveillance capitalism business model that allows you to use such a useful tool for free in exchange for Google tracking your site’s visitors across the web and selling the insights from that data to advertisers).

The initial analytics script that you—asynchronously—load into your page isn’t very big. But depending on how you’ve configured your Google Analytics account, that might just be the start of a longer chain of downloads and event handlers.

Ed recently gave a lunchtime presentation at Clearleft on using Google Analytics—he professes modesty but he really knows his stuff. He was making sure that everyone knew how to set up goals’n’stuff.

As I understand it, there are two main categories of goals: events and destinations (there are also durations and pages, but they feel similar to destinations). You use events to answer questions like “Did the user click on this button?” or “Did the user click on that search field?”. You use destinations to answer questions like “Did the user arrive at this page?” or “Did the user come from that page?”

You can add as many goals to your site’s analytics as you want. That’s an intoxicating offer. The problem is that there is potentially a cost for each goal you create. It’s an invisible cost. It’s paid by the user in the currency of JavaScript sent down the wire (I wish that the Google Analytics admin interface were more like the old interface for Google Fonts, where each extra file you added literally pushed a needle higher on a dial).

It strikes me that the event-based goals would necessarily require more JavaScript in order to listen out for those clicks and fire off that information. The destination-based goals should be able to get all the information needed from regular page navigations.

So I have a hypothesis. I think that destination-based goals are less harmful to performance than event-based goals. I might well be wrong about that, and if I am, please let me know.

With that hypothesis in mind, and until I learn otherwise, I’ve got two rules of thumb to offer when it comes to using Google Analytics:

  1. Try to keep the number of goals to a minimum.
  2. If you must create a goal, favour destinations over events.

Simon Willison (Django): Quoting Steve Souders

The biggest bottleneck in web performance today is CPU. Compared to seven years ago, there’s 5x more JavaScript downloaded on the top 1000 websites over the last seven years, and 3x more CSS. Half of web activity comes from mobile devices with a smaller CPU and limited battery power.

Steve Souders

Daniel Glazman (Disruptive Innovations): Announcing WebBook Level 1, a new Web-based format for electronic books

TL;DR: the title says it all, and it's available there.

Eons ago, at a time when BlueGriffon was only a Wysiwyg editor for the Web, my friend Mohamed Zergaoui asked why I was not turning BlueGriffon into an EPUB editor... I had been observing the electronic book market since the early days of Cytale and its Cybook but I was not involved in it on a daily basis. That seemed not only an excellent idea, but also a fairly workable one. EPUB is based on flavors of HTML so I would not have to reinvent the wheel.

I started diving into the EPUB specs the very same day, EPUB 2.0.1 (released in 2009) at that time. I immediately discovered a technology that was not far away from the Web but that was also clearly not the Web. In particular, I immediately saw that two crucial features were missing: it was impossible to aggregate a set of Web pages into an EPUB book through a trivial zip, and it was impossible to unzip an EPUB book and make it trivially readable inside a Web browser, even with graceful degradation.

When the IDPF started working on EPUB 3.0 (with its 3.0.1 revision) and 3.1, I said this was coming too fast, and that the lack of Test Suites with interoperable implementations as we often have in W3C exit criteria was a critical issue. More importantly, the market was, in my opinion, not ready to absorb so quickly two major and one minor revisions of EPUB given the huge cost on both publishing chains and existing ebook bases. I also thought - and said - the EPUB 3.x specifications were suffering from clear technical issues, including the two missing features quoted above.

Today, times have changed and the Standards Committee that oversaw the future of EPUB, the IDPF, has now merged with the World Wide Web Consortium (W3C). As Jeff Jaffe, CEO of the W3C, said at that time,

Working together, Publishing@W3C will bring exciting new capabilities and features to the future of publishing, authoring and reading using Web technologies

Since the beginning of 2017, and with a steep acceleration during spring 2017, the Publishing@W3C activity has restarted work on the EPUB 3.x line and the future EPUB 4 line, creating an EPUB 3 Community Group (CG) for the former and a Publishing Working Group (WG) for the latter. If I had some reservations about the work division between these two entities, the whole thing seemed to be a very good idea. In fact, I started advocating for the merger between IDPF and W3C back in 2012, at a moment when only a handful of people were willing to listen. It seemed to me that Publishing was an underrated first-class user of Web technologies and EPUB's growth was suffering from two critical ailments:

  1. IDPF members were not at W3C so they could not test their technical choices against browser vendors and the Web industry. It also meant they were inventing new solutions in a silo, without bringing them to W3C standardization tables and too often without even knowing if the rendering engine vendors would implement them.
  2. on the other hand, W3C members had too little knowledge of the Publishing activity, which was historically quite skeptical about the Web... Working Groups at W3C were lacking ebook expertise and were therefore designing things without having ebooks in mind.

I was then particularly happy when the merger I advocated for was announced.

As I recently wrote on Medium, I am not any more. I am not convinced by the current approach implemented by Publishing@W3C on many counts:

  • the organization of the Publishing@W3C activity, with a Publishing Business Group (BG) formally ruling (see Process section, second paragraph) the EPUB3 CG and a Steering Committee (see Process section, first paragraph) recreated the former IDPF structure inside W3C.  The BG Charter even says that it « advises W3C on the direction of current and future publishing activity work » as if the IDPF and W3C did not merge and as if W3C was still only a Liaison. It also says « the initial members of the Steering Committee shall be the individuals who served on IDPF’s Board of Directors immediately prior to the effective date of the Combination of IDPF with W3C », maintaining the silo we wanted to eliminate.
  • the EPUB3 Community Group faces a major technical challenge, recently highlighted by representatives of the Japanese Publishing Industry: EPUB 3.1 represents too much of a technical change compared to EPUB 3.0.1 and is not implementable at a reasonable cost in a reasonable timeframe for them. Since EPUB 3 is recommended by the Japanese Government as the official ebook format in Japan, that's a bit of a blocker for EPUB 3.1 and its successors. The EPUB3 CG is then actively discussing a potential rescindment of EPUB 3.1, an extraction of the good bits we want to preserve, and the release of a EPUB 3.0.2 specification based on 3.0.1 plus those good bits. In short, the EPUB 3.1 line, that saw important clarifying changes from 3.0.1, is dead.
  • the Publishing Working Group is working on a collection of specifications known as Web Publications (WP), Packaged Web Publications (PWP), and EPUB 4. What these specifications represent is extremely complicated to describe. With a daily observation of the activities of the Working Group, I still can't firmly say what they're up to, even if I am already convinced that some technological choices (for instance JSON-LD for manifests) are highly questionable and do not « lead Publishing to its full Web potential », to paraphrase the famous W3C motto. It must also be said that the EPUB 3.1 hiatus in the EPUB3 CG shakes the EPUB 4 plan to the ground, since it's now extremely clear the ebook market is not ready at all to move to yet another EPUB version, potentially incompatible with EPUB 3.x (for the record, backwards-compatibility in the EPUB world is a myth).
  • the original sins of EPUB quoted above, including the two missing major features quoted in the second paragraph of the present article, are a minor requirement only. Editability of EPUB, one of the greatest flaws of that ecosystem, is still not a first-class requirement, if a requirement at all. Convergence with the Web is severely encumbered by personal agendas and technical choices made by one implementation vendor for its own sake; the whole W3C process based on consensus is worked around not because there is no consensus (the WG minutes show consensus all the time) but mostly because the rendering engine vendors are still not in the loop and their potential crucial contributions are sadly missed. And they are not in the loop because they don't understand a strategy that seems decorrelated from the Web; the financial impact of any commitment to Publishing@W3C is then an understandable no-go.
  • the original design choices of EPUB, using painful-to-edit-or-render XML dialects, were also an original sin. We're about to make the same mistake, again and again, either retaining things that partly block the software ecosystem or imagining new silos that won't be editable or grokable by a Web Browser. Simplicity, Web-centricity and mainstream implementations are not in sight.

Since the whole organization of Publishing@W3C is governed by the merger agreement between IDPF and W3C, I do not expect to change anyone's mind with the present article. I only felt the need to express my opinion, in both public and private fora. Unsurprisingly, the feedback to my private warnings was fairly negative. In short, it works as expected and I should stop spitting in the soup. Well, if that works as expected, the expectations were pretty low, sorry to say, and were not worth a merger between two Standard Bodies.

I have then decided to work on a different format for electronic books, called WebBook. A format strictly based on Web technologies and when I say "Web technologies", I mean the most basic ones: html, CSS, JavaScript, SVG and friends; the class of specifications all Web authors use and master on a daily basis. Not all details are decided or even ironed out, the proposal is still a work in progress at this point, but I know where I want to go.

I will of course happily accept all feedback. If people like my idea, great! If people disagree with it, too bad for me but fine! At least during the early moments of my proposal, and because my guts tell me my goals are A Good Thing™️, I'm running this as a Benevolent Dictator, not as a consensus-based effort. Convince me and your suggestions will make it in.

I have started from a list of requirements, something that was never done that way in the EPUB world:

  1. one URL is enough to retrieve a remote WebBook instance, there is no need to download every resource composing that instance

  2. the contents of a WebBook instance can be placed inside a Web site’s directory and are directly readable by a Web browser using the URL for that directory

  3. the contents of a WebBook instance can be placed inside a local directory and are directly readable by a Web browser opening its index.html or index.xhtml topmost file

  4. each individual resource in a WebBook instance, on a Web site or on a local disk, is directly readable by a Web browser

  5. any html document can be used as content document inside a WebBook instance, without restriction

  6. any stylesheet, replaced resource (images, audio, video, etc.) or additional resource useable by a html document (JavaScript, manifests, etc.) can be used inside the navigation document or the content documents of a WebBook instance, without restriction

  7. the navigation document and the content documents inside a WebBook instance can be created and edited by any html editor

  8. the metadata and table of contents contained in the navigation document of a WebBook instance can be created and edited by any html editor

  9. the WebBook specification is backwards-compatible

  10. the WebBook specification is forwards-compatible, at the potential cost of graceful degradation of some content

  11. WebBook instances can be recognized without having to detect their MIME type

  12. it’s possible to deliver electronic books in a form that is compatible with both WebBook and EPUB 3.0.1

I also made a strong design choice: the Level 1 of the specification will not be a fit-all-cases document. WebBook will start small, simple and extensible, and each use case will be evaluated individually and sequentially, resulting in light extensions at a speed the Publishing industry can bear. So don't tell me WebBook Level 1 doesn't support a given type of ebook or is not at parity level with EPUB 3.x. It's on purpose.

With that said, the WebBook Level 1 is available here and, again, I am happily accepting Issues and PRs on GitHub. You'll find in the spec references to:

  • « Moby Dick » released as a WebBook instance
  • « Moby Dick » released as an EPUB3-compatible WebBook instance
  • a script usable with Node.js to automagically convert an EPUB3 package into an EPUB3-compatible WebBook

My EPUB Editor BlueGriffon is already modified to deal with WebBook. The next public version will allow users to create EPUB3-compatible WebBooks.

I hope this proposal will show stakeholders of the Publishing@W3C activity that another path to greater convergence with the Web is possible. Should this proposal be considered by them, I will of course happily contribute to the debate, and hopefully the solution.

ProgrammableWeb: Daily API RoundUp: CronAlarm, The Things Network, TINT, ParallelDots

Every day, the ProgrammableWeb team is busy, updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

Simon Willison (Django): Generating polygon representing a rough 100km circle around latitude/longitude point using Python

Generating polygon representing a rough 100km circle around latitude/longitude point using Python

A question I posted to the GIS Stack Exchange - I found my own answer using a Python library called geog, then someone else posted a better solution using pyproj.
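For reference, the general technique looks something like this with pyproj (a sketch only, not necessarily the accepted answer from the thread; the coordinates are an arbitrary example):

    import numpy as np
    from pyproj import Geod

    geod = Geod(ellps="WGS84")
    lon, lat = -0.1278, 51.5074        # example point, not from the original question
    radius_m = 100_000
    azimuths = np.linspace(0, 360, 61)  # 60 segments around the circle

    # Project the centre point outwards along each bearing to get the ring of vertices.
    lons, lats, _ = geod.fwd(
        np.full_like(azimuths, lon),
        np.full_like(azimuths, lat),
        azimuths,
        np.full_like(azimuths, radius_m),
    )
    polygon = list(zip(lons, lats))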

Simon Willison (Django): API 2.0: Log-In with ZEIT, New Docs & More

API 2.0: Log-In with ZEIT, New Docs & More

Here's Zeit's write-up of their brand new API 2.0, which adds OAuth support and allows anything that can be done with their command-line tools to be achieved via their public API as well. This is the enabling technology that allowed me to build Datasette Publish.

Simon Willison (Django): Datasette Publish: a web app for publishing CSV files as an online database

I’ve just released Datasette Publish, a web tool for turning one or more CSV files into an online database with a JSON API.

Here’s a demo application I built using Datasette Publish, showing Californian campaign finance data using CSV files released by the California Civic Data Coalition.

And here’s an animated screencast showing exactly how I built it:

Animated demo of Datasette Publish

Datasette Publish combines my Datasette tool for publishing SQLite databases as an API with my csvs-to-sqlite tool for generating them.

It’s built on top of the Zeit Now hosting service, which means anything you deploy with it lives on your own account with Zeit and stays entirely under your control. I used the brand new Zeit API 2.0.

Zeit’s generous free plan means you can try the tool out as many times as you like - and if you want to use it for an API powering a production website you can easily upgrade to a paid hosting plan.

Who should use it

Anyone who has data they want to share with the world!

The fundamental idea behind Datasette is that publishing structured data as both a web interface and a JSON API should be as quick and easy as possible.

The world is full of interesting data that often ends up trapped in PDF blobs or other hard-to-use formats, if it gets published at all. Datasette encourages using SQLite instead: a powerful, flexible format that enables analysis via SQL queries and can easily be shared and hosted online.

Since so much of the data that IS published today uses CSV, this first release of Datasette Publish focuses on CSV conversion above anything else. I plan to add support for other useful formats in the future.

The three areas where I’m most excited to see adoption of Datasette are data journalism, civic open data and cultural institutions.

Data journalism because, when I worked at the Guardian, Datasette is the tool I wish I had had for publishing data. When we started the Guardian Datablog we ended up using Google Sheets for this.

Civic open data because it turns out the open data movement mostly won! It’s incredible how much high quality data is published by local and national governments these days. My San Francisco tree search project for example uses data from the Department of Public Works - a CSV of 190,000 trees around the city.

Cultural institutions because the museums and libraries of the world are sitting on enormous treasure troves of valuable information, and have an institutional mandate to share that data as widely as possible.

If you are involved in any of the above please get in touch. I’d love your help improving the Datasette ecosystem to better serve your needs.

How it works

Datasette Publish would not be possible without Zeit Now. Now is a revolutionary approach to hosting: it lets you instantly create immutable deployments with a unique URL, via a command-line tool or using their recently updated API. It’s by far the most productive hosting environment I’ve ever worked with.

I built the main Datasette Publish interface using React. Building a SPA here made a lot of sense, because it allowed me to construct the entire application without any form of server-side storage (aside from Keen for analytics).

When you sign in via Zeit OAuth I store your access token in a signed cookie. Each time you upload a CSV the file is stored directly using Zeit’s upload API, and the file metadata is persisted in JavaScript state in the React app. When you click “publish” the accumulated state is sent to the server where it is used to construct a new Zeit deployment.

The deployment itself consists of the CSV files plus a Dockerfile that installs Python, Datasette, csvs-to-sqlite and their dependencies, then runs csvs-to-sqlite against the CSV files and starts up Datasette against the resulting database.
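Outside of Zeit, the same pipeline can be reproduced locally with the two underlying command-line tools; roughly (the CSV and database names are placeholders):

    import subprocess

    # Convert one or more CSV files into a single SQLite database...
    subprocess.run(
        ["csvs-to-sqlite", "expenditures.csv", "candidates.csv", "data.db"],
        check=True,
    )

    # ...then serve it with Datasette's web UI and JSON API on localhost.
    subprocess.run(["datasette", "serve", "data.db"], check=True)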

If you specified a title, description, source or license I generate a Datasette metadata.json file and include that in the deployment as well.

Since free deployments to Zeit are “source code visible”, you can see exactly how the resulting application is structured by visiting https://datasette-onrlszntsq.now.sh/_src (the campaign finance app I built earlier).

Using the Zeit API in this way has the neat effect that I don’t ever store any user data myself - neither the access token used to access your account nor any of the CSVs that you upload. Uploaded files go straight to your own Zeit account and stay under your control. Access tokens are never persisted. The deployed application lives on your own hosting account, where you can terminate it or upgrade it to a paid plan without any further involvement from the tool I have built.

Not having to worry about storing encrypted access tokens or covering any hosting costs beyond the Datasette Publish tool itself is delightful.

This ability to build tools that themselves deploy other tools is fascinating. I can’t wait to see what other kinds of interesting new applications it enables.

Discussion on Hacker News.

Amazon Web Services: New AWS Auto Scaling – Unified Scaling For Your Cloud Applications

I’ve been talking about scalability for servers and other cloud resources for a very long time! Back in 2006, I wrote “This is the new world of scalable, on-demand web services. Pay for what you need and use, and not a byte more.” Shortly after we launched Amazon Elastic Compute Cloud (EC2), we made it easy for you to do this with the simultaneous launch of Elastic Load Balancing, EC2 Auto Scaling, and Amazon CloudWatch. Since then we have added Auto Scaling to other AWS services including ECS, Spot Fleets, DynamoDB, Aurora, AppStream 2.0, and EMR. We have also added features such as target tracking to make it easier for you to scale based on the metric that is most appropriate for your application.

Introducing AWS Auto Scaling
Today we are making it easier for you to use the Auto Scaling features of multiple AWS services from a single user interface with the introduction of AWS Auto Scaling. This new service unifies and builds on our existing, service-specific, scaling features. It operates on any desired EC2 Auto Scaling groups, EC2 Spot Fleets, ECS tasks, DynamoDB tables, DynamoDB Global Secondary Indexes, and Aurora Replicas that are part of your application, as described by an AWS CloudFormation stack or in AWS Elastic Beanstalk (we’re also exploring some other ways to flag a set of resources as an application for use with AWS Auto Scaling).

You no longer need to set up alarms and scaling actions for each resource and each service. Instead, you simply point AWS Auto Scaling at your application and select the services and resources of interest. Then you select the desired scaling option for each one, and AWS Auto Scaling will do the rest, helping you to discover the scalable resources and then creating a scaling plan that addresses the resources of interest.

If you have tried to use any of our Auto Scaling options in the past, you undoubtedly understand the trade-offs involved in choosing scaling thresholds. AWS Auto Scaling gives you a variety of scaling options: You can optimize for availability, keeping plenty of resources in reserve in order to meet sudden spikes in demand. You can optimize for costs, running close to the line and accepting the possibility that you will tax your resources if that spike arrives. Alternatively, you can aim for the middle, with a generous but not excessive level of spare capacity. In addition to optimizing for availability, cost, or a blend of both, you can also set a custom scaling threshold. In each case, AWS Auto Scaling will create scaling policies on your behalf, including appropriate upper and lower bounds for each resource.
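To the best of my recollection, the programmatic equivalent goes through the Auto Scaling Plans API; a hedged boto3 sketch follows (the stack ARN, group name, and target value are placeholders, and the exact parameter names should be checked against the current documentation):

    import boto3

    plans = boto3.client("autoscaling-plans")

    plans.create_scaling_plan(
        ScalingPlanName="my-app-plan",   # hypothetical name
        ApplicationSource={
            # Placeholder ARN identifying the CloudFormation stack for the application.
            "CloudFormationStackARN": "arn:aws:cloudformation:us-east-1:123456789012:stack/my-app/example"
        },
        ScalingInstructions=[
            {
                "ServiceNamespace": "autoscaling",
                "ResourceId": "autoScalingGroup/my-asg",   # placeholder ASG name
                "ScalableDimension": "autoscaling:autoScalingGroup:DesiredCapacity",
                "MinCapacity": 1,
                "MaxCapacity": 10,
                "TargetTrackingConfigurations": [
                    {
                        "PredefinedScalingMetricSpecification": {
                            "PredefinedScalingMetricType": "ASGAverageCPUUtilization"
                        },
                        "TargetValue": 50.0,
                    }
                ],
            }
        ],
    )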

AWS Auto Scaling in Action
I will use AWS Auto Scaling on a simple CloudFormation stack consisting of an Auto Scaling group of EC2 instances and a pair of DynamoDB tables. I start by removing the existing Scaling Policies from my Auto Scaling group:

Then I open up the new Auto Scaling Console and select the stack:

Behind the scenes, Elastic Beanstalk applications are always launched via a CloudFormation stack. In the screen shot above, awseb-e-sdwttqizbp-stack is an Elastic Beanstalk application that I launched.

I can click on any stack to learn more about it before proceeding:

I select the desired stack and click on Next to proceed. Then I enter a name for my scaling plan and choose the resources that I’d like it to include:

I choose the scaling strategy for each type of resource:

After I have selected the desired strategies, I click Next to proceed. Then I review the proposed scaling plan, and click Create scaling plan to move ahead:

The scaling plan is created and in effect within a few minutes:

I can click on the plan to learn more:

I can also inspect each scaling policy:

I tested my new policy by applying a load to the initial EC2 instance, and watched the scale out activity take place:

I also took a look at the CloudWatch metrics for the EC2 Auto Scaling group:

Available Now
We are launching AWS Auto Scaling today in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Singapore) Regions, with more to follow. There’s no charge for AWS Auto Scaling; you pay only for the CloudWatch Alarms that it creates and any AWS resources that you consume.

As is often the case with our new services, this is just the first step on what we hope to be a long and interesting journey! We have a long roadmap, and we’ll be adding new features and options throughout 2018 in response to your feedback.

Jeff;

ProgrammableWeb: New Research Predicts Continued Growth in API Testing Market

New research predicts continued growth in the API testing market through 2022. The research credits the Internet of Things (IoT), artificial intelligence, and machine learning as major drivers for growth in API testing. The research breaks the API testing market into components (testing tools and testing services), deployment type (cloud and premises), vertical, and region.

ProgrammableWeb: Monzo Releases Interim API to Comply with PSD2 Banking Regulations

Monzo, an API-first bank, has launched an interim API (the AIS API) to comply with Europe's Payment Services Directive (PSD2), which recently became law. The AIS API includes three core functions: listing accounts, reading an account's balance, and listing an account's transactions (all features specifically addressed by PSD2).

Matt Webb (Schulze & Webb): Filtered for recent computer exploits

1.

Recent hacks are about finding holes in the deep physics of computing.

Here's a technical explanation of Spectre and Meltdown, the two recent big ones. The words alone are beautiful: Spectre can be thought of as a (previously unknown) fundamental risk of speculative execution, one that can now be weaponized.

Here's a metaphor explaining both exploits, to do with librarians. In short, they involve measuring how long it takes for the computer to look up hidden data. Even if the data is eventually not shared, the computer has a terrible poker face.

I see this as a kind of information asymmetry. Computer chip architecture is about the regulated control of information. The design never anticipated that unregulated information - time - would be brought in from the outside.

See also: Rowhammer, which is an exploit of how memory chips work where the wild information, intruding from the outer reality, is electromagnetism and geography.

As DRAM manufacturing scales down chip features to smaller physical dimensions, to fit more memory capacity onto a chip, it has become harder to prevent DRAM cells from interacting electrically with each other. As a result, accessing one location in memory can disturb neighbouring locations, causing charge to leak into or out of neighbouring cells. With enough accesses, this can change a cell’s value from 1 to 0 or vice versa.

That is, the fact that two memory addresses happen to be physically close to one another is completely outside the computer's knowledge of itself. Geography and electromagnetism have no presence in the computer's inner reality. But bring that knowledge in from the outer reality...

Rowhammer was able to use this to induce bit flips ... and hence gain read-write access to all of physical memory.

2.

Long profile in the New Yorker of Sam Altman, the head of Y Combinator (the incubator behind startups such as Airbnb, Dropbox, Stripe, and reddit).

This line:

Many people in Silicon Valley have become obsessed with the simulation hypothesis, the argument that what we experience as reality is in fact fabricated in a computer; two tech billionaires have gone so far as to secretly engage scientists to work on breaking us out of the simulation.

Emphasis mine.

3.

Here's an interesting exploit that I feel should be better known: System Bus Radio.

Some computers are intentionally disconnected from the rest of the world. This includes having their internet, wireless, bluetooth, USB, external file storage and audio capabilities removed. This is called "air gapping". [However] Even in such a situation, this program can transmit radio.

Computers can now write to memory with a high enough frequency that it's in the radio spectrum. Now that you're hitting the RAM fast enough, you can play it like a xylophone and carve radio waves into the air.

There is demo code provided. And:

Run this using a 2015 model MacBook Air. Then use a Sony STR-K670P radio receiver with the included antenna and tune it to 1580 kHz on AM.

You should hear the "Mary Had a Little Lamb" tune playing repeatedly.

So what happens when my mobile web browser loads an ad that loads some Javascript that reads my bitcoin exchange password and then runs tight array loops that hammer out arpeggios on memory, broadcasting access to all my worldly possessions to anyone standing nearby with an old-fashioned AM radio tuned into 1580 kHz?

Breaking out of the simulation.

4.

By volume, the Sun produces about the same amount of heat as a reptile.

Also the average density of the Sun is 1,410 kg per cubic meter: 1.4x that of water. Or to put it another way, the same as honey.

So yeah. The Sun. A million times bigger than the Earth. As hot as a reptile. As thick as honey.
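The back-of-the-envelope arithmetic behind those claims (standard published values for the Sun; nothing here is from the original post):

    import math

    luminosity_w = 3.8e26      # total power output of the Sun, watts
    mass_kg = 1.99e30          # solar mass, kilograms
    radius_m = 6.96e8          # solar radius, metres
    volume_m3 = 4 / 3 * math.pi * radius_m ** 3

    # Volume-averaged heat output: roughly 0.27 W per cubic metre,
    # a tiny figure by animal standards, which is the point of the reptile comparison.
    print(luminosity_w / volume_m3)

    # Mean density: roughly 1,410 kg per cubic metre, about 1.4x water (honey territory).
    print(mass_kg / volume_m3)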

Amazon Web Services: Now Open – Third AWS Availability Zone in London

We expand AWS by picking a geographic area (which we call a Region) and then building multiple, isolated Availability Zones in that area. Each Availability Zone (AZ) has multiple Internet connections and power connections to multiple grids.

Today I am happy to announce that we are opening our 50th AWS Availability Zone, with the addition of a third AZ to the EU (London) Region. This will give you additional flexibility to architect highly scalable, fault-tolerant applications that run across multiple AZs in the UK.

Since launching the EU (London) Region, we have seen an ever-growing set of customers, particularly in the public sector and in regulated industries, use AWS for new and innovative applications. Here are a couple of examples, courtesy of my AWS colleagues in the UK:

Enterprise – Some of the UK’s most respected enterprises are using AWS to transform their businesses, including BBC, BT, Deloitte, and Travis Perkins. Travis Perkins is one of the largest suppliers of building materials in the UK and is implementing the biggest systems and business change in its history, including an all-in migration of its data centers to AWS.

Startups – Cross-border payments company Currencycloud has migrated its entire payments production and demo platform to AWS, resulting in a 30% saving on their infrastructure costs. Clearscore, with plans to disrupt the credit score industry, has also chosen to host their entire platform on AWS. UnderwriteMe is using the EU (London) Region to offer an underwriting platform to their customers as a managed service.

Public Sector – The Met Office chose AWS to support the Met Office Weather App, available for iPhone and Android phones. Since the Met Office Weather App went live in January 2016, it has attracted more than 4.5 million downloads. Using AWS, the Met Office has been able to increase agility, speed, and scalability while reducing costs. The Driver and Vehicle Licensing Agency (DVLA) is using the EU (London) Region for services such as the Strategic Card Payments platform, which helps the agency achieve PCI DSS compliance.

The AWS EU (London) Region has achieved Public Services Network (PSN) assurance, which provides UK Public Sector customers with an assured infrastructure on which to build UK Public Sector services. In conjunction with AWS’s Standardized Architecture for UK-OFFICIAL, PSN assurance enables UK Public Sector organizations to move their UK-OFFICIAL classified data to the EU (London) Region in a controlled and risk-managed manner.

For a complete list of AWS Regions and Services, visit the AWS Global Infrastructure page. As always, pricing for services in the Region can be found on the detail pages; visit our Cloud Products page to get started.

Jeff;

Simon Willison (Django): A SIM Switch Account Takeover (Mine)

A SIM Switch Account Takeover (Mine)

Someone walked into a T-Mobile store with a fake ID in his name and stole Albert Wenger's SIM identity, then used it to gain access to his Yahoo mail account, reset his Twitter password and post a tweet boosting a specific cryptocurrency. His accounts with Google Authenticator 2FA stayed safe.

ProgrammableWeb: Coursera Dogfoods REST-to-GraphQL Translation Tool

GraphQL is the alternative to REST APIs offered by Facebook. Developers using Coursera APIs love the flexibility, type-safety and documentation that GraphQL brings. Coursera’s own developers are no less fans. But that’s not to say it was always plain sailing. Coursera engineer and GraphQL enthusiast Bryan Kane describes the bumpy journey from REST to GraphQL over at the Apollo dev blog.

Simon Willison (Django): How the industry-breaking Spectre bug stayed secret for seven months

How the industry-breaking Spectre bug stayed secret for seven months

It's pretty amazing that the bug only became public knowledge a week before the intended embargo date, considering the number of individuals and companies that had to be looped in. The biggest public clues were patches being applied in public to the Linux kernel - one smart observer noted that the page table issue “has all the markings of a security patch being readied under pressure from a deadline.”

Simon Willison (Django): Telling stories through your commits

Telling stories through your commits

Joel Chippendale's excellent guide to writing a useful commit history. I spend a lot of time on my commit messages, because when I'm trying to understand code later on they are the only form of documentation that is guaranteed to remain up-to-date against the code at that exact point in time. These tips are clear, concise, readable and include some great examples.

Simon Willison (Django): Notes on Kafka in Python

Notes on Kafka in Python

Useful review by Matthew Rocklin of the three main open source Python Kafka client libraries as of October 2017.
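For a flavour of the simplest of those clients, a minimal kafka-python round trip looks something like this (the broker address and topic name are placeholders, and a broker must be running locally):

    from kafka import KafkaProducer, KafkaConsumer

    # Produce a message to a local broker.
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("test-topic", b"hello kafka")
    producer.flush()

    # Read it back.
    consumer = KafkaConsumer(
        "test-topic",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,
    )
    for message in consumer:
        print(message.value)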

ProgrammableWeb: Daily API RoundUp: BioID, SlipStream, ApilityIO

Every day, the ProgrammableWeb team is busy, updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWeb: How to Build an Industrial Sensor

One of the most effective ways for many businesses to safeguard their profits is by closely monitoring inventory. This is particularly important for businesses with perishable goods requiring specific storage conditions and inventory levels. If these are not monitored, the company runs the risk of inventory shortages or spoilage. Luckily, inventory monitoring solutions make it easy to predict production demands and inventory spoilage remotely in order to increase efficiency and profits.

ProgrammableWeb: Google's Apps Script for G Suite Adds API

Google today gave G Suite developers three new tools to put to use in their Apps Script projects. The updates include a new dashboard, a new API, and a new command line interface. Here's the skinny on Google's latest dev offerings.

ProgrammableWeb: How and Why to Transform Your Business into a Digital Ecosystem

By now, most business leaders know that in today's landscape, all businesses need to be technology businesses. It's the sort of catchy truism that can sound good in a PR statement—but when it comes to actually running a company, what does "be a technology business" really mean?

ProgrammableWeb: Confide Launches an SDK to Prevent Screenshots in iOS Apps

Private messaging app Confide is making the anti-screenshot technology it uses in its own app available to app developers via a new SDK for iOS versions 10 and 11.

Simon Willison (Django): Incident report: npm

Incident report: npm

Fascinating insight into the challenges involved in managing a massive scale community code repository. An algorithm incorrectly labeled a legit user as spam, an NPM staff member acted on the report, and dependent package installations started failing. Because the package had been removed as spam, other users were able to try and fix the bug by publishing fresh copies of the missing package to the same namespace.

Daniel Glazman (Disruptive Innovations): Web. Period.

Well. I have published something on Medium. #epub #web #future #eprdctn

ProgrammableWeb: New Relic Launches Insights Dashboard API

New Relic, a digital performance monitoring and management platform provider, has launched the New Relic Insights Dashboard API. The new API provides programmatic access to sub-account dashboards and accounts across an organization. The API includes five types of endpoints which allow users to programmatically list, show, create, update, and delete Insights dashboards.

Simon Willison (Django): How to compile and run the SQLite JSON1 extension on OS X

How to compile and run the SQLite JSON1 extension on OS X

Thanks, Stack Overflow! I've been battling this one for a while - it turns out you can download the SQLite source bundle, compile just the json1.c file using gcc and load that extension in Python's sqlite3 module (or with Datasette's --load-extension= option) to gain access to the full suite of SQLite JSON functions - json(), json_extract() etc.
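The loading step in Python ends up looking roughly like this (the compile command and file names follow the general recipe and may need adjusting for your platform; note that some builds of Python ship with extension loading disabled):

    # Compile the extension from the SQLite source bundle first, e.g.:
    #   gcc -fPIC -shared json1.c -o json1.so
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.enable_load_extension(True)
    conn.load_extension("./json1.so")   # path to the compiled extension

    row = conn.execute(
        """select json_extract('{"name": "Cleo", "age": 4}', '$.name')"""
    ).fetchone()
    print(row[0])   # Cleo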

Amazon Web Services: AWS IoT, Greengrass, and Machine Learning for Connected Vehicles at CES

Last week I attended a talk given by Bryan Mistele, president of Seattle-based INRIX. Bryan’s talk provided a glimpse into the future of transportation, centering around four principal attributes, often abbreviated as ACES:

Autonomous – Cars and trucks are gaining the ability to scan and to make sense of their environments and to navigate without human input.

Connected – Vehicles of all types have the ability to take advantage of bidirectional connections (either full-time or intermittent) to other cars and to cloud-based resources. They can upload road and performance data, communicate with each other to run in packs, and take advantage of traffic and weather data.

Electric – Continued development of battery and motor technology will make electric vehicles more convenient, cost-effective, and environmentally friendly.

Shared – Ride-sharing services will change usage from an ownership model to an as-a-service model (sound familiar?).

Individually and in combination, these emerging attributes mean that the cars and trucks we will see and use in the decade to come will be markedly different than those of the past.

On the Road with AWS
AWS customers are already using our AWS IoT, edge computing, Amazon Machine Learning, and Alexa products to bring this future to life – vehicle manufacturers, their tier 1 suppliers, and AutoTech startups all use AWS for their ACES initiatives. AWS Greengrass is playing an important role here, attracting design wins and helping our customers to add processing power and machine learning inferencing at the edge.

AWS customer Aptiv (formerly Delphi) talked about their Automated Mobility on Demand (AMoD) smart vehicle architecture in an AWS re:Invent session. Aptiv’s AMoD platform will use Greengrass and microservices to drive the onboard user experience, along with edge processing, monitoring, and control. Here’s an overview:

Another customer, Denso of Japan (one of the world’s largest suppliers of auto components and software) is using Greengrass and AWS IoT to support their vision of Mobility as a Service (MaaS). Here’s a video:

AWS at CES
The AWS team will be out in force at CES in Las Vegas and would love to talk to you. They’ll be running demos that show how AWS can help to bring innovation and personalization to connected and autonomous vehicles.

Personalized In-Vehicle Experience – This demo shows how AWS AI and Machine Learning can be used to create a highly personalized and branded in-vehicle experience. It makes use of Amazon Lex, Polly, and Amazon Rekognition, but the design is flexible and can be used with other services as well. The demo encompasses driver registration, login and startup (including facial recognition), voice assistance for contextual guidance, personalized e-commerce, and vehicle control. Here’s the architecture for the voice assistance:

Connected Vehicle Solution – This demo shows how a connected vehicle can combine local and cloud intelligence, using edge computing and machine learning at the edge. It handles intermittent connections and uses AWS DeepLens to train a model that responds to distracted drivers. Here’s the overall architecture, as described in our Connected Vehicle Solution:

Digital Content Delivery – This demo will show how a customer uses a web-based 3D configurator to build and personalize their vehicle. It will also show a high-resolution (4K) 3D image and an optional immersive AR/VR experience, both designed for use within a dealership.

Autonomous Driving – This demo will showcase the AWS services that can be used to build autonomous vehicles. There’s a 1/16th scale model vehicle powered and driven by Greengrass and an overview of a new AWS Autonomous Toolkit. As part of the demo, attendees drive the car, training a model via Amazon SageMaker for subsequent on-board inferencing, powered by Greengrass ML Inferencing.

To speak to one of my colleagues or to set up a time to see the demos, check out the Visit AWS at CES 2018 page.

Some Resources
If you are interested in this topic and want to learn more, the AWS for Automotive page is a great starting point, with discussions on connected vehicles & mobility, autonomous vehicle development, and digital customer engagement.

When you are ready to start building a connected vehicle, the AWS Connected Vehicle Solution contains a reference architecture that combines local computing, sophisticated event rules, and cloud-based data processing and storage. You can use this solution to accelerate your own connected vehicle projects.

Jeff;

Matt Webb (Schulze & Webb): Filtered for nice turns of phrase

1.

How to award the contracts to run the overground railways in London:

Risk is like a balloon with a price tag attached to it

Nice turn of phrase.

2.

PCalc is a calculator app, and it's 25 years old. From the announcement of the original version, in 1992:

Enclosed is a binhex file containing a submission for your archives. PCalc is a neat simulation of a programmable scientific calculator.

A simulation of a calculator! Now simply a calculator. Since the 90s, software has become part of the real world. The virtual no longer exists.

3.

I like words and I like how they change. I like that sometimes everyone is using a particular word or phrase for a year or two, but look at the word closely and you'll see how weird it really is. Or there are some new words that are weird now, but I know they will be commonplace in the future.

I keep a list of words on Twitter.

4.

From Rolling Stone's coverage of the unveiling of Magic Leap, the (potentially) groundbreaking augmented reality device:

"You're basically creating the visual world," he says. "You're really co-creating it with this massive visual signal which we call the dynamic analog light field signal. That is sort of our term for the totality of the photon wavefront and particle light field everywhere in the universe. It's like this gigantic ocean; it's everywhere. It's an infinite signal and it contains a massive amount of information."

Beautiful nonsense.

Matt Biddulph (Hackdiary): Making musical hardware

I’ve been interested in sound engineering with synthesizers, mixing desks and effects units since I was at school. A couple of years ago, I discovered Eurorack modular synths and started to combine these interests with my more recently-obtained skills in amateur hardware hacking. The Eurorack scene that’s been developing over the last 20 years is a fascinating one, attracting all sizes of makers from synth manufacturing giants to one-person DIY operations. Because it’s based on a simple analog standard of 3.5mm jacks and control voltages, it’s trivial to combine hardware from all over the world into a rack of modules that’s entirely your own design. Creating your own modules is also within the reach of a reasonably experienced Arduino hacker.

After buying some Eurorack modules in October 2015, I quickly decided that I wanted to integrate my laptop into the setup. Unfortunately most computers aren’t capable of generating the full range of positive and negative voltages (ideally from +/-5V or +/-10V) required to connect to Eurorack. There are a small number of external audio interfaces that are “DC Coupled” which allows a full range of voltages to pass. I was lucky enough to find one such interface on my local Craigslist for $75: a MOTU 828 Firewire unit from 2001 that is still perfectly compatible with modern Macs after adding a Firewire to Thunderbolt dongle.

Using the Expert Sleepers Silent Way plugin, I was able to generate waveforms through the MOTU to control my synth hardware. This was only a partial success, however: measuring the output signals on my oscilloscope I discovered that the minimum and maximum voltages at full gain were about +/-2.88 volts. I decided to dive into the analog electronics domain and fix this problem.

The Swiss Army knife of analog electronics is the op-amp. This incredibly flexible part can be used to construct signal amplifiers, adders, inverters, filters and all sorts of other circuits. It’s essentially an analog computer, performing realtime arithmetic on continuously-varying voltages. After years of only tinkering with Arduinos in the digital domain, this was a revelation to me. There is a world of signal between the binary zero and one.

Using a handy op-amp gain calculator, I calculated the resistor values needed to create a signal gain of around 1.77 without inverting or offsetting. This would boost my +/-2.88 volt signal to around +/-5V, good for most Eurorack hardware. Packaged ICs containing multiple op-amps are cheap and easily available, so I picked the TL074 quad op-amp package to give me four parallel channels of gain. The TL07x family are very common in the DIY synth community and are generally liked for their low levels of noise and distortion in musical applications. I wired up the 3.5mm jacks, op-amp and resistors on a breadboard and was thrilled that it worked the first time: my oscilloscope was now measuring a full +/-5V range for my output signals.
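To make that arithmetic concrete, here is a minimal Python sketch of the standard non-inverting op-amp gain formula (gain = 1 + Rf/Rg); the 7.5k/10k resistor pair is an illustrative assumption, not necessarily what the actual build used:

```python
# Back-of-the-envelope check on the gain needed to boost +/-2.88 V to +/-5 V.
# Resistor values below are illustrative, not the ones from the original build.

def non_inverting_gain(r_feedback: float, r_ground: float) -> float:
    """Gain of a standard non-inverting op-amp stage: 1 + Rf / Rg."""
    return 1 + r_feedback / r_ground

v_in_peak = 2.88      # measured peak output of the DC-coupled interface (volts)
v_target_peak = 5.0   # what most Eurorack hardware expects (volts)

print(f"required gain: {v_target_peak / v_in_peak:.2f}")   # ~1.74

gain = non_inverting_gain(7_500, 10_000)                    # Rf = 7.5k, Rg = 10k
print(f"gain with Rf=7.5k, Rg=10k: {gain:.2f}")             # 1.75
print(f"resulting swing: +/-{v_in_peak * gain:.2f} V")      # ~+/-5.04 V
```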

Next, it was time to learn Eagle and create some circuitboards. Here’s the schematic that I came up with:

At the time, the free version of Eagle was limited to a small size of circuit board. Luckily this was a close fit with the Eurorack size constraints, so I was able to lay out my schematic as a PCB with appropriate dimensions and send it off to OSH Park for fabbing. The boards arrived two weeks later and I soldered everything together:

The final step was to create a front panel for my 3U rack so that the PCB could sit alongside my other modules. I downloaded a laser-cutting template from Ponoko and designed a simple faceplate in Illustrator, using a PDF of the PCB from Eagle as a transparent layer to ensure that the holes for screws and audio jacks would line up. I uploaded the order for production, choosing bamboo wood under the mistaken impression that it would make an interesting alternative to the usual acrylic or metal Eurorack faceplates. Unfortunately it's not the strongest material for a faceplate, and the laser engraving burns look pretty ugly, but it worked out OK in the end:

This was a pretty involved process for such a simple outcome, but it was immensely satisfying and I learnt a lot of new skills. All the Eagle and Illustrator files are in this GitHub repository in case you're interested.

ProgrammableWebSwitching from Python to Go? Here's What You Need To Know

Switching from one programming language to another can be a monumental step for any team, but when a new language offers benefits that outshine the one you're using, you may find it's time to make the move.

ProgrammableWebDaily API RoundUp: Xillio, StockMarketClock, Twitter, Redis Labs

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWebSWIFT Institute Challenges Students to Protect Data in an Open API World

The SWIFT Institute, a living library for financial research managed by the Society for Worldwide Interbank Financial Telecommunication, announced its third annual student challenge, which asks students to find ways to protect personal information and other sensitive data in open data environments.

ProgrammableWebAria Solutions Announces API-Based Softphone Solution

Aria Solutions, a contact center and customer engagement solutions provider, announced an API-based softphone (the Velocity Softphone) that integrates Salesforce.com workflows with Cisco telephony systems.

Simon Willison (Django)ftfy - fix unicode that's broken in various ways

ftfy - fix unicode that's broken in various ways

I shipped a small web UI wrapper around the excellent Python ftfy library, which can take broken Unicode strings and suggest a sequence of operations that can be applied to get back sensible text.

Via Me on Twitter
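For anyone who would rather skip the web UI, here's a minimal sketch of calling the underlying library directly; the mangled sample string is invented for illustration:

```python
# Minimal sketch of using ftfy directly; the broken input is a made-up
# example of UTF-8 text that was mistakenly decoded as Latin-1.
import ftfy

broken = "The singer is CÃ©line Dion"
print(ftfy.fix_text(broken))   # expected to print: "The singer is Céline Dion"
```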

ProgrammableWeb: APIsIntrinio Trop-X Prices

The Intrinio Trop-X Prices API is a data feed that allows developers to retrieve end-of-day (EOD) prices from the Trop-X exchange in Seychelles. Available data includes high, low, open, and close prices, as well as prices adjusted for splits and dividends. This API is updated daily at the close of trading and includes historical data back to 2007.
Date Updated: 2018-01-09
Tags: Stocks, Financial

ProgrammableWeb: APIsIntrinio Muscat Securities Market Prices

The Exchange Data International EOD Muscat data feed brings end-of-day (EOD) prices from the Muscat, Oman-based exchange directly to Excel, Google Sheets or an API, allowing developers and analysts to keep up with this market. Prices are released after the close of trading in Oman's time zone. The Intrinio Muscat Stock Exchange Prices API provides access to Exchange Data International's end-of-day prices and other daily trading summaries from the Muscat Stock Exchange. Developers can access all historical prices for the exchange with simple syntax.
Date Updated: 2018-01-09
Tags: Stocks, Financial

ProgrammableWeb: APIsP2000

P2000 is the national communication service of the Dutch emergency services like firefighters, police and ambulances. Most traffic on this system is unencrypted and can be easily captured. This API allows you to query emergency services events.
Date Updated: 2018-01-09
Tags: Emergency, Audio, European, Messaging, Police

ProgrammableWeb: APIsGreendeck

Greendeck is a provider of AI-powered pricing optimization tools. The Greendeck API provides access to various endpoints that include authentication, products, events, transactions, and fetch price that represent the various price optimization tools of the company. The API is useful to determine competitive prices, customer segmentations, personalized product and feature bundling, maximum revenue and more. The RESTful API conveys requests and responses in JSON format.
Date Updated: 2018-01-09
Tags: Tools, Business, eCommerce, Prices

ProgrammableWeb: APIsIntrinio Mongolian Stock Exchange Prices

The Exchange Data International EOD Mongolian data feed brings end-of-day (EOD) prices from the Mongolian Stock Exchange directly to Excel, Google Sheets or an API, allowing developers and analysts to keep up with this market. Prices are released after the close of trading in Mongolia's time zone. The Intrinio Mongolian Stock Exchange Prices API provides access to Exchange Data International's end-of-day prices and other daily trading summaries from the Mongolian Stock Exchange. Developers can access all historical prices for the exchange with simple syntax.
Date Updated: 2018-01-09
Tags: Stocks, Financial

ProgrammableWebAmazon Brings the Alexa Smart Home Skill API to the Kitchen

Last week Amazon announced that it is adding cooking capabilities to the Smart Home Skill API, which enables its voice-controlled digital personal assistant, Alexa, to control and check the status of cloud-connected devices.

ProgrammableWebGo2mobi Announces Total Control API for Automated Mobile Advertising Campaigns

Go2mobi, a mobile programmatic advertising platform, has announced the Go2mobi Total Control API. The API allows developers to build custom programmatic features. Ad agencies, trade desks, performance marketers, and more gain access to Go2mobi's demand-side platform through the API.

Simon Willison (Django)csvkit

csvkit

"A suite of command-line tools for converting to and working with CSV" - includes a huge range of utilities for things like converting Excel and JSON to CSV, grepping, sorting and extracting a subset of columns, combining multiple CSV files together and exporting CSV to a relational database. Worth reading through the tutorial which shows how the different commands can be piped together.

Simon Willison (Django)Quoting Miguel de Icaza‏

[On Meltdown's impact on hosting costs] The reality is that we have been living with borrowed performance. The new reality is that security is too important and can not be exchanged for speed. Time to profile, tune and optimize.

Miguel de Icaza‏

Simon Willison (Django)Statistical NLP on OpenStreetMap

Statistical NLP on OpenStreetMap

libpostal is ferociously clever: it's a library for parsing and understanding worldwide addresses, built on top of a machine learning model trained on millions of addresses from OpenStreetMap. Al Barrentine describes how it works in this fascinating and detailed essay.
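If you want to try it from Python, here is a minimal sketch using the pypostal bindings, which require libpostal itself to be installed first; the address is just an arbitrary example:

```python
# Minimal sketch: parsing a free-form address with the pypostal bindings
# to libpostal. The example address is arbitrary.
from postal.parser import parse_address

print(parse_address("10 Downing Street, London SW1A 2AA, United Kingdom"))
# Roughly: [('10', 'house_number'), ('downing street', 'road'),
#           ('london', 'city'), ('sw1a 2aa', 'postcode'),
#           ('united kingdom', 'country')]
```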

Amazon Web ServicesAWS Online Tech Talks – January 2018

Happy New Year! Kick off 2018 right by expanding your AWS knowledge with a great batch of new Tech Talks. We’re covering some of the biggest launches from re:Invent including Amazon Neptune, Amazon Rekognition Video, AWS Fargate, AWS Cloud9, Amazon Kinesis Video Streams, AWS PrivateLink, AWS Single Sign-On and more!

January 2018 – Schedule

Noted below are the upcoming scheduled live, online technical sessions being held during the month of January. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.

Webinars featured this month are:

Monday, January 22

Analytics & Big Data
11:00 AM – 11:45 AM PT Analyze your Data Lake, Fast @ Any Scale  Lvl 300

Database
01:00 PM – 01:45 PM PT Deep Dive on Amazon Neptune Lvl 200

Tuesday, January 23

Artificial Intelligence
09:00 AM – 09:45 AM PT How to get the most out of Amazon Rekognition Video, a deep learning based video analysis service Lvl 300

Containers
11:00 AM – 11:45 AM PT Introducing AWS Fargate Lvl 200

Serverless
01:00 PM – 02:00 PM PT Overview of Serverless Application Deployment Patterns Lvl 400

Wednesday, January 24

DevOps
09:00 AM – 09:45 AM PT Introducing AWS Cloud9  Lvl 200

Analytics & Big Data
11:00 AM – 11:45 AM PT Deep Dive: Amazon Kinesis Video Streams Lvl 300

Database
01:00 PM – 01:45 PM PT Introducing Amazon Aurora with PostgreSQL Compatibility Lvl 200

Thursday, January 25

Artificial Intelligence
09:00 AM – 09:45 AM PT Introducing Amazon SageMaker Lvl 200

Mobile
11:00 AM – 11:45 AM PT Ionic and React Hybrid Web/Native Mobile Applications with Mobile Hub Lvl 200

IoT
01:00 PM – 01:45 PM PT Connected Product Development: Secure Cloud & Local Connectivity for Microcontroller-based Devices Lvl 200

Monday, January 29

Enterprise
11:00 AM – 11:45 AM PT Enterprise Solutions Best Practices 100: Achieving Business Value with AWS Lvl 100

Compute
01:00 PM – 01:45 PM PT Introduction to Amazon Lightsail Lvl 200

Tuesday, January 30

Security, Identity & Compliance
09:00 AM – 09:45 AM PT Introducing Managed Rules for AWS WAF Lvl 200

Storage
11:00 AM – 11:45 AM PT  Improving Backup & DR – AWS Storage Gateway Lvl 300

Compute
01:00 PM – 01:45 PM PT  Introducing the New Simplified Access Model for EC2 Spot Instances Lvl 200

Wednesday, January 31

Networking
09:00 AM – 09:45 AM PT  Deep Dive on AWS PrivateLink Lvl 300

Enterprise
11:00 AM – 11:45 AM PT Preparing Your Team for a Cloud Transformation Lvl 200

Compute
01:00 PM – 01:45 PM PT  The Nitro Project: Next-Generation EC2 Infrastructure Lvl 300

Thursday, February 1

Security, Identity & Compliance
09:00 AM – 09:45 AM PT  Deep Dive on AWS Single Sign-On Lvl 300

Storage
11:00 AM – 11:45 AM PT How to Build a Data Lake in Amazon S3 & Amazon Glacier Lvl 300

Simon Willison (Django)Himalayan Database: From Visual FoxPro GUI to JSON API with Datasette

Himalayan Database: From Visual FoxPro GUI to JSON API with Datasette

The Himalayan Database is a compilation of records for all expeditions that have climbed in the Nepalese Himalaya, originally compiled by journalist Elizabeth Hawley over several decades. The database is published as a Visual FoxPro database - here Raffaele Messuti‏ provides step-by-step instructions for extracting the data from the published archive, converting them to CSV using dbfcsv and then converting the CSVs to SQLite using csvs-to-sqlite so you can browse them using Datasette.

Via Raffaele Messuti‏
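As a rough sketch of the tail end of that pipeline, assuming the FoxPro tables have already been exported to CSV (the filenames here are hypothetical):

```python
# Bundle CSV files into one SQLite database with csvs-to-sqlite, then
# browse it with Datasette. The CSV and database names are hypothetical.
import subprocess

subprocess.run(
    ["csvs-to-sqlite", "expeditions.csv", "peaks.csv", "himalaya.db"],
    check=True,
)

# Datasette serves a browsable web UI (by default on http://localhost:8001).
subprocess.run(["datasette", "himalaya.db"], check=True)
```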

ProgrammableWeb: APIsSeedfoxy Torrent Search

Search the public torrent database of Seedfoxy, a French torrent website with 20,000 torrents, sorted by category, type, seeders, and leechers. The API returns magnet links, cover photos, categories, and descriptions. Seedfoxy provides access to 100% French torrents, making the archives of culture and the arts accessible to all.
Date Updated: 2018-01-08
Tags: Torrents, Adult, eBooks, File Sharing, French, Games, Movies

ProgrammableWebDaily API RoundUp: SparrowOne, DeepAI, 3play Media, Kobiton

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWebWhat Airport and API Security Have in Common

Just like airport security, a system hosting a public API has to deal with heavy loads of incoming traffic every day. Most of that traffic is legitimate most of the time, but some of it is not. David Andrzejek over at Help Net Security explains how, by taking a leaf out of the airport security book, you can keep the bad apples out of your API while still serving millions of requests each day without adding latency.
