ProgrammableWeb - Google Updates its Search Console Search Analytics API

Google has updated its Search Console Search Analytics API, which allows users to query their search traffic data, to support the retrieval of 25,000 rows of data per request, up from a previous limit of 5,000 rows per request.

In addition to increasing the number of rows that can be returned per request, Google has extended the API to cover 16 months of historical data.
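
For Python developers, a minimal sketch of a query that takes advantage of the new limits might look like the following; the property URL, key file, and date range are placeholders, and credentials are assumed to come from a Google service account.

from google.oauth2 import service_account
from googleapiclient.discovery import build

# Hedged sketch: a Search Analytics query using the new 25,000-row limit.
# The site URL, key file, and dates below are hypothetical.
creds = service_account.Credentials.from_service_account_file(
    'service-account.json',
    scopes=['https://www.googleapis.com/auth/webmasters.readonly'])
service = build('webmasters', 'v3', credentials=creds)
response = service.searchanalytics().query(
    siteUrl='https://www.example.com/',
    body={
        'startDate': '2017-03-13',  # the API now reaches back 16 months
        'endDate': '2018-07-13',
        'dimensions': ['query'],
        'rowLimit': 25000           # up from the previous 5,000 cap
    }).execute()
print(len(response.get('rows', [])))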

Simon Willison (Django) - Bowiebranchia

Bowiebranchia

I spent the weekend learning about Nudibranchs, beautiful sea slugs (common on the coast of California) that are definitely best explained by their resemblance to different eras of David Bowie.

ProgrammableWeb - Daily API RoundUp: AerisWeather, Loom, Zoom, Walmart, HTNG

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

Amazon Web Services - Amazon Kinesis Video Streams Adds Support For HLS Output Streams

Today I’m excited to announce and demonstrate the new HTTP Live Streaming (HLS) output feature for Amazon Kinesis Video Streams (KVS). If you’re not already familiar with KVS, Jeff covered the release for AWS re:Invent in 2017. In short, Amazon Kinesis Video Streams is a service for securely capturing, processing, and storing video for analytics and machine learning – from one device or millions. Customers are using Kinesis Video with machine learning algorithms to power everything from home automation and smart cities to industrial automation and security.

After iterating on customer feedback, we’ve launched a number of features in the past few months, including a plugin for GStreamer, the popular open source multimedia framework, and Docker containers that make it easy to start streaming video to Kinesis. We could talk about each of those features at length, but today is all about the new HLS output feature! Fair warning: there are a few pictures of my incredibly messy office in this post.

HLS output is a new feature that allows customers to create HLS endpoints for their Kinesis Video Streams, which is convenient for building custom UIs and tools that can play back live and on-demand video. The HLS-based playback capability is fully managed, so you don’t have to build any infrastructure to transmux the incoming media. You simply create a new streaming session (up to 5 at a time, for now) with the new GetHLSStreamingSessionURL API and you’re off to the races. The great thing about HLS is that it’s already an industry standard and really easy to leverage in existing web players like JW Player, hls.js, Video.js, and Google’s Shaka Player, or even to render natively in mobile apps with Android’s ExoPlayer and iOS’s AVFoundation. Let’s take a quick look at the API, and feel free to skip to the walk-through below as well.

Kinesis Video HLS Output API

The documentation covers this in more detail than we can go over in this blog post, but I’ll cover the broad components.

  1. Get an endpoint with the GetDataEndpoint API
  2. Use that endpoint to get an HLS streaming URL with the GetHLSStreamingSessionURL API
  3. Render the content in the HLS URL with whatever tools you want!

This is pretty easy in a Jupyter notebook with a quick bit of Python and boto3.

import boto3
STREAM_NAME = "RandallDeepLens"
kvs = boto3.client("kinesisvideo")
# Grab the endpoint from GetDataEndpoint
endpoint = kvs.get_data_endpoint(
    APIName="GET_HLS_STREAMING_SESSION_URL",
    StreamName=STREAM_NAME
)['DataEndpoint']
# Grab the HLS Stream URL from the endpoint
kvam = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
url = kvam.get_hls_streaming_session_url(
    StreamName=STREAM_NAME,
    PlaybackMode="LIVE"
)['HLSStreamingSessionURL']

You can even visualize everything right away in Safari which can render HLS streams natively.

from IPython.display import HTML
HTML(data='<video src="{0}" autoplay="autoplay" controls="controls" width="300" height="400"></video>'.format(url)) 

We can also stream directly from an AWS DeepLens with just a bit of code:

import DeepLens_Kinesis_Video as dkv
import time
aws_access_key = "super_fake"
aws_secret_key = "even_more_fake"
region = "us-east-1"
stream_name = "RandallDeepLens"
retention = 1  # in minutes
wait_time_sec = 60 * 300  # number of seconds to stream the data
# will create the stream if it does not already exist
producer = dkv.createProducer(aws_access_key, aws_secret_key, "", region)
my_stream = producer.createStream(stream_name, retention)
my_stream.start()
time.sleep(wait_time_sec)
my_stream.stop()

[Video demo: https://static.ranman.com/blogstuff/2018/ExampleHLS_cropped.mp4]

How to use Kinesis Video Streams HLS Output Streams

We definitely need a Kinesis Video Stream, which we can create easily in the Kinesis Video Streams Console.

Now, we need to get some content into the stream. We have a few options here. Perhaps the easiest is the Docker container. I decided to take the more adventurous route and compile the GStreamer plugin locally on my Mac, following the scripts on GitHub. Be warned: compiling this plugin takes a while and can cause your computer to transform into a space heater.

With our freshly compiled GStreamer binaries, like gst-launch-1.0 and the kvssink plugin, we can stream directly from my MacBook’s webcam, or any other GStreamer source, into Kinesis Video Streams. I just use the kvssink output plugin and my data will wind up in the video stream. There are a few parameters to configure around this, so pay attention.

Here’s an example command that I ran to stream my MacBook’s webcam to Kinesis Video Streams:

gst-launch-1.0 autovideosrc ! videoconvert \
! video/x-raw,format=I420,width=640,height=480,framerate=30/1 \
! vtenc_h264_hw allow-frame-reordering=FALSE realtime=TRUE max-keyframe-interval=45 bitrate=500 \
! h264parse \
! video/x-h264,stream-format=avc,alignment=au,width=640,height=480,framerate=30/1 \
! kvssink stream-name="BlogStream" storage-size=1024 aws-region=us-west-2 log-config=kvslog

Now that we’re streaming some data into Kinesis, I can use the getting started sample static website to test my HLS stream with a few different video players. I just fill in my AWS credentials and ask it to start playing. The GetHLSStreamingSessionURL API supports a number of parameters so you can play both on-demand segments and live streams from various timestamps.
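
To give a feel for those parameters, here’s a sketch that mirrors the earlier snippet but requests an on-demand session for a fixed time window (the timestamps are hypothetical):

import boto3
from datetime import datetime

STREAM_NAME = "RandallDeepLens"
kvs = boto3.client("kinesisvideo")
endpoint = kvs.get_data_endpoint(
    APIName="GET_HLS_STREAMING_SESSION_URL",
    StreamName=STREAM_NAME
)['DataEndpoint']
kvam = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
# ON_DEMAND plays back an archived window instead of the live edge
url = kvam.get_hls_streaming_session_url(
    StreamName=STREAM_NAME,
    PlaybackMode="ON_DEMAND",
    HLSFragmentSelector={
        'FragmentSelectorType': 'SERVER_TIMESTAMP',
        'TimestampRange': {
            'StartTimestamp': datetime(2018, 7, 10, 12, 0),
            'EndTimestamp': datetime(2018, 7, 10, 12, 10)
        }
    },
    Expires=3600  # how long the session URL stays valid, in seconds
)['HLSStreamingSessionURL']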

Additional Info

Data consumed from Kinesis Video Streams using HLS is charged at $0.0119 per GB in US East (N. Virginia) and US West (Oregon); pricing for other regions is available on the service pricing page. This feature is available now in all regions where Kinesis Video Streams is available.

The Kinesis Video team told me they’re working hard on getting more integration with the AWS Media services, like MediaLive, which will make it easier to serve Kinesis Video Stream content to larger audiences.

As always, let us know what you think on Twitter or in the comments. I’ve had a ton of fun playing around with this feature over the past few days and I’m excited to see customers build some new tools with it!

Randall

Simon Willison (Django) - XARs: An efficient system for self-contained executables

XARs: An efficient system for self-contained executables

Really interesting new open source project from Facebook: a XAR is a new way of packaging up a Python executable complete with its dependencies and resources such that it can be distributed and executed elsewhere as a single file. It's kind of like a Docker container without Docker - it uses the SquashFS compressed read-only filesystem. I can't wait to try this out with Datasette.

Via @llanga

Jeremy Keith (Adactio) - CSS grid in Internet Explorer 11

When I was in Boston, speaking on a lunchtime panel with Rachel at An Event Apart, we took some questions from the audience about CSS grid. Inevitably, a question about browser support came up—specifically about support in Internet Explorer 11.

(Technically, you can use CSS grid in IE11—in fact it was the first browser to ship a version of grid—but the prefixed syntax is different to the standard and certain features are missing.)

Rachel gave a great balanced response, saying that you need to look at your site’s stats to determine whether it’s worth the investment of your time trying to make a grid work in IE11.

My response was blunter. I said I just don’t consider IE11 as a browser that supports grid.

Now, that might sound harsh, but what I meant was: you’re already dividing your visitors into browsers that support grid, and browsers that don’t …and you’re giving something to those browsers that don’t support grid. So I’m suggesting that IE11 falls into that category and should receive the layout you’re giving to browsers that don’t support grid …because really, IE11 doesn’t support grid: that’s the whole reason why the syntax is namespaced by -ms.

You could jump through hoops to try to get your grid layout working in IE11, as detailed in a three-part series on CSS Tricks, but at that point, the amount of effort you’re putting in negates the time-saving benefits of using CSS grid in the first place.

Frankly, the whole point of prefixed CSS is that it is not used after a reasonable amount of time (originally, the idea was that it would not be used in production, but that didn’t last long). As we’ve moved away from prefixes to flags in browsers, I’m seeing the amount of prefixed properties dropping, and that’s very, very good. I’ve stopped using autoprefixer on new projects, and I’ve been able to remove it from some existing ones—please consider doing the same.

And when it comes to IE11, I’ll continue to categorise it as a browser that doesn’t support CSS grid. That doesn’t mean I’m abandoning users of IE11—far from it. It means I’m giving them the layout that’s appropriate for the browser they’re using.

Remember, websites do not need to look exactly the same in every browser.

Amazon Web Services - New – Lifecycle Management for Amazon EBS Snapshots

It is always interesting to zoom in on the history of a single AWS service or feature and watch how it has evolved over time in response to customer feedback. For example, Amazon Elastic Block Store (EBS) launched a decade ago and has been gaining more features and functionality ever since. Here are a few of the most significant announcements:

Several of the items that I chose to highlight above make EBS snapshots more useful and more flexible. As you may already know, it is easy to create snapshots. Each snapshot is a point-in-time copy of the blocks that have changed since the previous snapshot, with automatic management to ensure that only the data unique to a snapshot is removed when it is deleted. This incremental model reduces your costs and minimizes the time needed to create a snapshot.

Because snapshots are so easy to create and use, our customers create a lot of them, and make great use of tags to categorize, organize, and manage them. Going back to my list, you can see that we have added multiple tagging features over the years.

Lifecycle Management – The Amazon Data Lifecycle Manager
We want to make it even easier for you to create, use, and benefit from EBS snapshots! Today we are launching Amazon Data Lifecycle Manager to automate the creation, retention, and deletion of Amazon EBS volume snapshots. Instead of creating snapshots manually and deleting them in the same way (or building a tool to do it for you), you simply create a policy, indicating (via tags) which volumes are to be snapshotted, set a retention model, fill in a few other details, and let Data Lifecycle Manager do the rest. Data Lifecycle Manager is powered by tags, so you should start by setting up a clear and comprehensive tagging model for your organization (refer to the links above to learn more).

It turns out that many of our customers have invested in tools to automate the creation of snapshots, but have skimped on the retention and deletion. Sooner or later they receive a surprisingly large AWS bill and find that their scripts are not working as expected. The Data Lifecycle Manager should help them to save money and to rest assured that their snapshots are being managed as expected.

Creating and Using a Lifecycle Policy
Data Lifecycle Manager uses lifecycle policies to figure out when to run, which volumes to snapshot, and how long to keep the snapshots around. You can create the policies in the AWS Management Console, from the AWS Command Line Interface (CLI) or via the Data Lifecycle Manager APIs; I’ll use the Console today. Here are my EBS volumes, all suitably tagged with a department:

I access the Lifecycle Manager from the Elastic Block Store section of the menu:

Then I click Create Snapshot Lifecycle Policy to proceed:

Then I create my first policy:

I use tags to specify the volumes that the policy applies to. If I specify multiple tags, then the policy applies to volumes that have any of the tags:

I can create snapshots at 12 or 24 hour intervals, and I can specify the desired snapshot time. Snapshot creation will start no more than an hour after this time, with completion based on the size of the volume and the degree of change since the last snapshot.

I can use the built-in default IAM role or I can create one of my own. If I use my own role, I need to enable the EC2 snapshot operations and all of the DLM (Data Lifecycle Manager) operations; read the docs to learn more.

Newly created snapshots will be tagged with aws:dlm:lifecycle-policy-id and aws:dlm:lifecycle-schedule-name automatically; I can also specify up to 50 additional key/value pairs for each policy:

I can see all of my policies at a glance:

I took a short break and came back to find that the first set of snapshots had been created, as expected (I configured the console to show the two tags created on the snapshots):

Things to Know
Here are a couple of things to keep in mind when you start to use Data Lifecycle Manager to automate your snapshot management:

Data Consistency – Snapshots will contain the data from all completed I/O operations; this is also known as crash-consistent.

Pricing – You can create and use Data Lifecycle Manager policies at no charge; you pay the usual storage charges for the EBS snapshots that it creates.

Availability – Data Lifecycle Manager is available in the US East (N. Virginia), US West (Oregon), and EU (Ireland) Regions.

Tags and Policies – If a volume has more than one tag and the tags match multiple policies, each matching policy will create a separate snapshot and govern the retention of the snapshots it creates. No two policies can specify the same key/value pair for a tag.

Programmatic Access – You can create and manage policies programmatically! Take a look at the CreateLifecyclePolicy, GetLifecyclePolicies, and UpdateLifecyclePolicy functions to get started. You can also trigger an AWS Lambda function in response to the createSnapshot event.
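
As a rough sketch of that programmatic path (the account ID, role name, and tag values here are made up), creating a daily policy with the dlm client in boto3 might look like this:

import boto3

dlm = boto3.client('dlm')
policy = dlm.create_lifecycle_policy(
    ExecutionRoleArn='arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole',
    Description='Daily snapshots of tagged volumes',
    State='ENABLED',
    PolicyDetails={
        'ResourceTypes': ['VOLUME'],
        'TargetTags': [{'Key': 'Dept', 'Value': 'Engineering'}],  # volumes to snapshot
        'Schedules': [{
            'Name': 'DailySnapshots',
            'CreateRule': {'Interval': 24, 'IntervalUnit': 'HOURS', 'Times': ['09:00']},
            'RetainRule': {'Count': 7},  # keep the seven most recent snapshots
            'CopyTags': True
        }]
    })
print(policy['PolicyId'])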

Error Handling – Data Lifecycle Manager generates a “DLM Policy State Change” event if a policy enters the error state.

In the Works – As you might have guessed from the name, we plan to add support for additional AWS data sources over time. We also plan to support policies that will let you do weekly and monthly snapshots, and also expect to give you additional scheduling flexibility.

Jeff;

Simon Willison (Django) - future-fstrings

future-fstrings

Clever module that backports f-strings to versions of Python earlier than 3.6, by registering itself as a codec and abusing Python's # -*- coding: future_fstrings -*- feature. Via a conversation on Twitter that pointed out that the JavaScript community have been using transpilation to successfully experiment with new language features for years now.
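
A minimal example, adapted from the project README (assuming you've pip installed future-fstrings first):

# -*- coding: future_fstrings -*-
# The codec declared above rewrites the f-string below at import time,
# so this file runs on Python versions earlier than 3.6.
thing = 'world'
print(f'hello {thing}')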

Via @codewithanthony

ProgrammableWeb: APIs - OpenRates

The OpenRates API returns JSON data over HTTPS requests with the latest rates, historical rates, base currency, and specific currencies. For more information, email contact@openrates.io.
Date Updated: 2018-07-13
Tags: Currency
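
The entry above doesn't document endpoints, so as a loose sketch only (assuming the service follows the conventional /latest pattern of comparable rate APIs):

import requests

# Hypothetical call: the endpoint and parameters are assumptions,
# not taken from the directory entry above.
rates = requests.get('https://api.openrates.io/latest',
                     params={'base': 'USD', 'symbols': 'EUR,GBP'}).json()
print(rates['rates'])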

ProgrammableWeb: APIs - NREL Utility Rates

The NREL Utility Rates API returns JSON and XML annual average utility rates data for residential, commercial and industrial sectors. API Key is required. NREL is a national laboratory of the Office of Energy Efficiency and Renewable Energy, U.S. Department of Energy.
Date Updated: 2018-07-13
Tags: Energy, Government
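
As a hedged illustration of a GET request (the exact path is an assumption based on developer.nrel.gov conventions; DEMO_KEY is NREL's public trial key):

import requests

resp = requests.get(
    'https://developer.nrel.gov/api/utility_rates/v3.json',  # assumed path
    params={'api_key': 'DEMO_KEY', 'lat': 39.74, 'lon': -105.17})
print(resp.json()['outputs'])  # average rates for residential, commercial, industrial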

ProgrammableWeb: APIs - NREL Energy Incentives

The NREL Energy Incentives API returns JSON and XML data listed in the DSIRE energy incentives database available at http://www.dsireusa.org/. An API Key is required to authenticate. NREL is a national laboratory of the Office of Energy Efficiency and Renewable Energy, U.S. Department of Energy.
Date Updated: 2018-07-13
Tags: Energy, Government

ProgrammableWeb: APIs - NREL Emissions

The NREL Emissions API returns greenhouse gas JSON data after making GET requests and authenticating with an API Key. Sign up for a key at https://developer.nrel.gov/signup/. NREL is a national laboratory of the Office of Energy Efficiency and Renewable Energy, U.S. Department of Energy.
Date Updated: 2018-07-13
Tags: Energy, Government

ProgrammableWeb: APIs - NREL Electricity and Natural Gas

The NREL Electricity and Natural Gas API returns JSON data with energy usage, expenditures, and GHG emissions. API Key is required to make GET requests. NREL is a national laboratory of the Office of Energy Efficiency and Renewable Energy, U.S. Department of Energy.
Date Updated: 2018-07-13
Tags: Energy, Government

ProgrammableWeb: APIs - NREL Building Component

The NREL Building Component API, in REST architecture, returns XML, JSON, or YAML data ranging from whole buildings down to detailed component files, such as duct sealing information. NREL is a national laboratory of the Office of Energy Efficiency and Renewable Energy, U.S. Department of Energy.
Date Updated: 2018-07-13
Tags: Energy, Government

ProgrammableWeb: APIs - NREL Reopt Lite

The NREL Reopt Lite API returns JSON data after making POST requests with an API Key, providing recommendations for renewable energy, conventional generation, and energy storage technologies. NREL stands for National Renewable Energy Laboratory and is based in Golden, CO. NREL is a national laboratory of the Office of Energy Efficiency and Renewable Energy, U.S. Department of Energy.
Date Updated: 2018-07-13
Tags: Energy, Government

ProgrammableWeb: APIs - Melissa Cloud Property

The Melissa Cloud Property API is a WebSmart Property Service that returns information about a given parcel of property, including assessed value, last sale price, current mortgage, square footage, and more. Melissa specializes in global contact data quality and mailing preparation solutions for both small businesses and large enterprises.
Date Updated: 2018-07-13
Tags: Data, Location, Prices, Real Estate, Statistics

ProgrammableWeb: APIs - Melissa Cloud Global Phone V3

The Melissa Cloud Global Phone V3 API allows you to verify phone numbers and append useful geographic information. The API verifies and appends country dialing codes, international exit codes, and national prefixes; appends geographic information on the telephone line such as latitude, longitude, administrative area, and language; parses the phone number into its various components; and more. Melissa specializes in global contact data quality and mailing preparation solutions for both small businesses and large enterprises.
Date Updated: 2018-07-13
Tags: Data, Verification

ProgrammableWeb - Kik's Kin cryptocurrency Unveils a New SDK, Launches a Developer Program

Messaging service Kik launched a cryptocurrency called Kin through a near-$100 million initial coin offering (ICO) last September, and now its Kin Ecosystem Foundation will give as many as 25 developers up to $3 million in Kin to bolster the cryptocurrency's ecosystem as part of a newly launched Kin Developer Program.

ProgrammableWeb - GitHub Enterprise 2.14 Now Available

GitHub has released GitHub Enterprise version 2.14, which includes a number of new tools for developers. Among the new tools are unified search, the Checks API (public beta), and multiple issue templates.

Amazon Web Services - AWS Storage Gateway Recap – SMB Support, RefreshCache Event, and More

To borrow my own words, the AWS Storage Gateway is a service that includes a multi-protocol storage appliance that fits in between your existing application and the AWS Cloud. Your applications see the gateway as a file system, a local disk volume, or a Virtual Tape Library, depending on how it was configured.

Today I would like to share a few recent updates to the File Gateway configuration of the Storage Gateway, and also show you how they come together to enable some new processing models. First, the most recent updates:

SMB Support – The File Gateway already supports access from clients that speak NFS (versions 3 and 4.1 are supported). Last month we added support for the Server Message Block (SMB) protocol. This allows Windows applications that communicate using v2 or v3 of SMB to store files as objects in S3 through the gateway, enabling hybrid cloud use cases such as backup, content distribution, and processing of machine learning and big data workloads. You can control access to the gateway using your existing on-premises Active Directory (AD) domain or a cloud-based domain hosted in AWS Directory Service, or you can use authenticated guest access. To learn more about this update, read AWS Storage Gateway Adds SMB Support to Store and Access Objects in Amazon S3 Buckets.

Cross-Account Permissions – Some of our customers run their gateways in one AWS account and configure them to upload to an S3 bucket owned by another account. This allows them to track departmental storage and retrieval costs using chargeback and showback models. In order to simplify this important use case, you can configure the gateway to provide the bucket owner with full permissions. This avoids a pain point which could arise if the bucket owner was unable to see the objects. To learn how to set this up, read Using a File Share for Cross-Account Access.

Requester Pays – Bucket owners are responsible for storage costs. Owners pay for data transfer costs by default, but also have the option to have the requester pay. To support this use case, the File Gateway now supports S3’s Requester Pays Buckets. Data collectors and aggregators can use this feature to share data with research organizations such as universities and labs without incurring the costs of access themselves. File Gateway provides file-based access to the S3 objects and caches recently accessed data locally, helping requesters reduce latency and costs. To learn more, read about Creating an NFS File Share and Creating an SMB File Share.

File Upload Notification – The gateway caches files locally, and uploads them to a designated S3 bucket in the background. Late last year we gave you the ability to request notification (in the form of a CloudWatch Event) when new files have been uploaded. You can use this to initiate cloud-based processing or to implement advanced logging. To learn more, read Getting File Upload Notification and study the NotifyWhenUploaded function.

Cache Refresh Event – You have long had the ability to use the RefreshCache function to make sure that the gateway is aware of objects that have been added, removed, or replaced in the bucket. The new Storage Gateway Cache Refresh Event lets you know that the cache is now in sync with S3, and can be used as a signal to initiate local processing. To learn more, read Getting Refresh Cache Notification.
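
Here’s a quick sketch (with a made-up file share ARN) of how the two hooks pair up in boto3: NotifyWhenUploaded requests the upload notification, and RefreshCache kicks off the re-sync that ends with the new event:

import boto3

sgw = boto3.client('storagegateway')
share_arn = 'arn:aws:storagegateway:us-east-1:123456789012:share/share-12345678'

# Ask for a CloudWatch Event once all files written so far reach S3
notification_id = sgw.notify_when_uploaded(FileShareARN=share_arn)['NotificationId']

# Ask the gateway to re-list the bucket; completion raises the Cache Refresh Event
sgw.refresh_cache(FileShareARN=share_arn)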

Hybrid Processing Using File Gateway
You can use the File Upload Notification and Cache Refresh Event to automate some of your routine hybrid processing tasks!

Let’s say that you run a geographically distributed office or retail business, with locations all over the world. Raw data (metrics, cash register receipts, or time sheets) is collected at each location, and then uploaded to S3 using a File Gateway hosted at each location. As the data arrives, you use the File Upload Notifications to process each S3 object, perhaps using an AWS Lambda function that invokes Amazon Athena to run a stock set of queries against each one. The data arrives over the course of a couple of hours, and results accumulate in another bucket. At the end of the reporting period, the intermediate results are processed, custom reports are generated for each branch location, and then stored in another bucket (this bucket, as it turns out, is also associated with a gateway, and each gateway will have cached copies of the prior versions of the reports). After you generate your reports, you can refresh each of the gateway caches, wait for the corresponding notifications, and then send an email to the branch managers to tell them that their new report is available.
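
One possible shape for the per-object step in that story, sketched as a Lambda handler (the query, database, and output bucket are invented for illustration):

import boto3

athena = boto3.client('athena')

def handler(event, context):
    # Triggered by the File Upload Notification CloudWatch Event;
    # parsing of the event detail is omitted here for brevity.
    athena.start_query_execution(
        QueryString='SELECT register_id, SUM(total) FROM receipts GROUP BY register_id',
        QueryExecutionContext={'Database': 'branch_data'},
        ResultConfiguration={'OutputLocation': 's3://example-results/intermediate/'})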

Here’s a video (and presentation) with more information about this processing model:

Now Available
All of the features listed above are available now and you can start using them today in all regions where Storage Gateway is available.

Jeff;

ProgrammableWeb - Ding Introduces Mobile Top-Up API

Ding, a mobile top-up services provider, has announced that it will open up its services through its DingConnect API. Through the API, third parties can sell or offer mobile top-up through websites and apps.

Shelley Powers (Burningbird) - What’s at risk with Kavanaugh’s Appointment

I would like to claim prescience for correctly guessing that Brett Kavanaugh would be the Supreme Court pick, but his choice was fairly obvious. And as has been noted in various press publications, Justice Kennedy likely retired now because he knew of Kavanaugh’s pick. Kavanaugh’s rulings and writings are now being scrutinized, particularly when it comes …

The post What’s at risk with Kavanaugh’s Appointment appeared first on Burningbird.

Simon Willison (Django) - The Now CDN

The Now CDN

Huge announcement from Zeit Now today: all .now.sh deployments are now served through the Cloudflare CDN, which means they benefit from 150 worldwide CDN locations that obey HTTP caching headers. This is particularly relevant for Datasette, since it serves far-future cache headers by default and uses Cloudflare-compatible HTTP/2 push hints to accelerate 302 redirects. This means that both the "datasette publish now" CLI command and the Datasette Publish web app will now result in Cloudflare-accelerated deployments.

Via @zeithq

Simon Willison (Django) - Usage of ARIA attributes via HTTP Archive

Usage of ARIA attributes via HTTP Archive

A neat example of a Google BigQuery query you can run against the HTTP Archive public dataset (a crawl of the "top" websites run periodically by the Internet Archive, which captures the full details of every resource fetched) to see which ARIA attributes are used the most often. Linking to this because I used it successfully today as the basis for my own custom query - I love that it's possible to analyze a huge representative sample of the modern web in this way.
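
For reference, a query of roughly that shape can be run from Python with the BigQuery client library; the table name and regex here are my guesses rather than the linked query (and note that scanning response_bodies processes terabytes):

from google.cloud import bigquery

client = bigquery.Client()
sql = '''
SELECT attr, COUNT(*) AS freq
FROM (
  SELECT REGEXP_EXTRACT_ALL(LOWER(body), r'aria-[a-z]+') AS attrs
  FROM `httparchive.response_bodies.2018_07_01_desktop`
) t, UNNEST(t.attrs) AS attr
GROUP BY attr
ORDER BY freq DESC
'''
for row in client.query(sql).result():
    print(row.attr, row.freq)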

ProgrammableWeb - Daily API RoundUp: Plino, Coinlayers, MapQuest, Erste Group

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

Jeremy Keith (Adactio) - Links, tags, and feeds

A little while back, I switched from using Chrome as my day-to-day browser to using Firefox. I could feel myself getting a bit too comfortable with one particular browser, and that’s not good. I reckon it’s good to shake things up a little every now and then. Besides, there really isn’t that much difference once you’ve transferred over bookmarks and cookies.

Unfortunately I’m being bitten by this little bug in Firefox. It causes some of my bookmarklets to fail on certain sites with strict Content Security Policies (and CSPs shouldn’t affect bookmarklets). I might have to switch back to Chrome because of this.

I use bookmarklets throughout the day. There’s the Huffduffer bookmarklet, of course, for whenever I come across a podcast episode or other piece of audio that I want to listen to later. But there’s also my own home-rolled bookmarklet for posting links to my site. It doesn’t do anything clever—it grabs the title and URL of the currently open page and pre-populates a form in a new window, leaving me to add a short description and some tags.

If you’re reading this, then you’re familiar with the “journal” section of adactio.com, but the “links” section is where I post the most. Here, for example, are all the links I posted yesterday. It varies from day to day, but there’s generally a handful.

Should you wish to keep track of everything I’m linking to, there’s a twitterbot you can follow called @adactioLinks. It uses a simple IFTTT recipe to poll my RSS feed of links and send out a tweet whenever there’s a new entry.

Or you can drink straight from the source and subscribe to the RSS feed itself, if you’re still rocking it old-school. But if RSS is your bag, then you might appreciate a way to filter those links…

All my links are tagged. Heavily. This is because all my links are “notes to future self”, and all my future self has to do is ask “what would past me have tagged that link with?” when I’m trying to find something I previously linked to. I end up using my site’s URLs as an interface:

At the front-end gatherings at Clearleft, I usually wrap up with a quick tour of whatever I’ve added that week to:

Well, each one of those tags also has a corresponding RSS feed:

…and so on.

That means you can subscribe to just the links tagged with something you’re interested in. Here’s the full list of tags if you’re interested in seeing the inside of my head.

This also works for my journal entries. If you’re only interested in my blog posts about frontend development, you might want to subscribe to:

Here are all the tags from my journal.

You can even mix them up. For everything I’ve tagged with “typography”—whether it’s links, journal entries, or articles—the URL is:

The corresponding RSS feed is:

You get the idea. Basically, if something on my site is a list of items, chances are there’s a corresponding RSS feed. Sometimes there might even be a JSON feed. Hack some URLs to see.

Meanwhile, I’ll be linking, linking, linking…

Amazon Web Services - AWS re:Invent 2018 is Coming – Are You Ready?

As I write this, there are just 138 days until re:Invent 2018. My colleagues on the events team are going all-out to make sure that you, our customer, will have the best possible experience in Las Vegas. After meeting with them, I decided to write this post so that you can have a better understanding of what we have in store, know what to expect, and have time to plan and to prepare.

Dealing with Scale
We started out by talking about some of the challenges that come with scale. Approximately 43,000 people (AWS customers, partners, members of the press, industry analysts, and AWS employees) attended in 2017 and we are expecting an even larger crowd this year. We are applying many of the scaling principles and best practices that apply to cloud architectures to the physical, logistical, and communication challenges that are part-and-parcel of an event that is this large and complex.

We want to make it easier for you to move from place to place, while also reducing the need for you to do so! Here’s what we are doing:

Campus Shuttle – In 2017, hundreds of buses traveled on routes that took them to a series of re:Invent venues. This added a lot of latency to the system and we were not happy about that. In 2018, we are expanding the fleet and replacing the multi-stop routes with a larger set of point-to-point connections, along with additional pick-up and drop-off points at each venue. You will be one hop away from wherever you need to go.

Ride Sharing – We are partnering with Lyft and Uber (both powered by AWS) to give you another transportation option (download the apps now to be prepared). We are partnering with the Las Vegas Monorail and the taxi companies, and are also working on a teleportation service, but do not expect it to be ready in time.

Session Access – We are setting up a robust overflow system that spans multiple re:Invent venues, and are also making sure that the most popular sessions are repeated in more than one venue.

Improved Mobile App – The re:Invent mobile app will be more lively and location-aware. It will help you to find sessions with open seats, tell you what is happening around you, and keep you informed of shuttle and other transportation options.

Something for Everyone
We want to make sure that re:Invent is a warm and welcoming place for every attendee, with business and social events that we hope are progressive and inclusive. Here’s just some of what we have in store:

You can also take advantage of our mother’s rooms, gender-neutral restrooms, and reflection rooms. Check out the community page to learn more!

Getting Ready
Now it is your turn! Here are some suggestions to help you to prepare for re:Invent:

  • Register – Registration is now open! Every year I get email from people I have not talked to in years, begging me for last-minute access after re:Invent sells out. While it is always good to hear from them, I cannot always help, even if we were in first grade together.
  • Watch – We’re producing a series of How to re:Invent webinars to help you get the most from re:Invent. Watch What’s New and Breakout Content Secret Sauce ASAP, and stay tuned for more.
  • Plan – The session catalog is now live! View the session catalog to see the initial list of technical sessions. Decide on the topics of interest to you and to your colleagues, and choose your breakout sessions, taking care to pay attention to the locations. There will be over 2,000 sessions so choose with care and make this a team effort.
  • Pay Attention – We are putting a lot of effort into preparatory content – this blog post, the webinars, and more. Watch, listen, and learn!
  • Train – Get to work on your cardio! You can easily walk 10 or more miles per day, so bring good shoes and arrive in peak condition.

Partners and Sponsors
Participating sponsors are a core part of the learning, networking, and after hours activities at re:Invent.

For APN Partners, re:Invent is the single largest opportunity to interact with AWS customers, delivering both business development and product differentiation. If you are interested in becoming a re:Invent sponsor, read the re:Invent Sponsorship Prospectus.

For re:Invent attendees, I urge you to take time to meet with Sponsoring APN Partners in both the Venetian and Aria Expo halls. Sponsors offer diverse skills, Competencies, services and expertise to help attendees solve a variety of different business challenges. Check out the list of re:Invent Sponsors to learn more.

See You There
Once you are on site, be sure to take advantage of all that re:Invent has to offer.

If you are not sure where to go or what to do next, we’ll have some specially trained content experts to guide you.

I am counting down the days, gearing up to crank out a ton of blog posts for re:Invent, and looking forward to saying hello to friends new and old.

Jeff;

PS – We will be adding new sessions to the session catalog over the summer, so be sure to check back every week!

 

ProgrammableWeb - SMG Makes Lenderlink API Freely Available to Mortgage Packagers

Specialist Mortgage Group (SMG), a group of specialist finance brokers, has announced that its Lenderlink API is freely available to all second charge mortgage packagers. Lenderlink acts as a single platform that programmatically connects second charge mortgage packagers to second charge lenders.

Simon Willison (Django) - scrapely

scrapely

Neat twist on a screen scraping library: this one lets you "train it" by feeding it examples of URLs paired with a dictionary of the data you would like to have extracted from that URL, then uses an instance-based learning algorithm to run against new URLs. Slightly confusing name, since it's maintained by the Scrapy team but is a totally independent project from the Scrapy web crawling framework.
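
The train/scrape workflow looks like this, going by the project README (the URLs and fields are from their example):

from scrapely import Scraper

s = Scraper()
# Train on one example page plus the values you want extracted from it
s.train('http://pypi.python.org/pypi/w3lib/1.1',
        {'name': 'w3lib 1.1', 'author': 'Scrapy project'})
# Then extract the same fields from a similarly structured page
print(s.scrape('http://pypi.python.org/pypi/Django/1.3'))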

Jeremy Keith (Adactio) - Twitter and Instagram progressive web apps

Since support for service workers landed in Mobile Safari on iOS, I’ve been trying a little experiment. Can I replace some of the native apps I use with progressive web apps?

The two major candidates are Twitter and Instagram. I added them to my home screen, and banished the native apps off to a separate screen. I’ve been using both progressive web apps for a few months now, and I have to say, they’re pretty darn great.

There are a few limitations compared to the native apps. On Twitter, if you follow a link from a tweet, it pops open in Safari, which is fine, but when you return to Twitter, it loads anew. This isn’t any fault of Twitter—this is the way that web apps have worked on iOS ever since they introduced their weird web-app-capable meta element. I hope this behaviour will be fixed in a future update.

Also, until we get web notifications on iOS, I need to keep the Twitter native app around if I want to be notified of a direct message (the only notification I allow).

Apart from those two little issues though, Twitter Lite is on par with the native app.

Instagram is also pretty great. It too suffers from some navigation issues. If I click through to someone’s profile, and then return to the main feed, it also loads it anew, losing my place. It would be great if this could be fixed.

For some reason, the Instagram web app doesn’t allow uploading multiple photos …which is weird, because I can upload multiple photos on my own site by adding the multiple attribute to the input type="file" in my posting interface.

Apart from that, though, it works great. And as I never wanted notifications from Instagram anyway, the lack of web notifications doesn’t bother me at all. In fact, because the progressive web app doesn’t keep nagging me about enabling notifications, it’s a more pleasant experience overall.

Something else that was really annoying with the native app was the preponderance of advertisements. It was really getting out of hand.

Well …(looks around to make sure no one is listening)… don’t tell anyone, but the Instagram progressive web app—i.e. the website—doesn’t have any ads at all!

Here’s hoping it stays that way.

Amazon Web Services - DeepLens Challenge #1 Starts Today – Use Machine Learning to Drive Inclusion

Are you ready to develop and show off your machine learning skills in a way that has a positive impact on the world? If so, get your hands on an AWS DeepLens video camera and join the AWS DeepLens Challenge!

About the Challenge
Working together with our friends at Intel, we are launching the first in a series of eight themed challenges today, all centered around improving the world in some way. Each challenge will run for two weeks and is designed to help you to get some hands-on experience with machine learning.

We will announce a fresh challenge every two weeks on the AWS Machine Learning Blog. Each challenge will have a real-world theme, a technical focus, a sample project, and a subject matter expert. You have 12 days to invent and implement a DeepLens project that resonates with the theme, and to submit a short, compelling video (four minutes or less) to represent and summarize your work.

We’re looking for cool submissions that resonate with the theme and that make great use of DeepLens. We will watch all of the videos and then share the most intriguing ones.

Challenge #1 – Inclusivity Challenge
The first challenge was inspired by the Special Olympics, which took place in Seattle last week. We invite you to use your DeepLens to create a project that drives inclusion, overcomes barriers, and strengthens the bonds between people of all abilities. You could gauge the physical accessibility of buildings, provide audio guidance using Polly for people with impaired sight, or create educational projects for children with learning disabilities. Any project that supports this theme is welcome.

For each project that meets the entry criteria we will make a donation of $249 (the retail price of an AWS DeepLens) to the Northwest Center, a non-profit organization based in Seattle. This organization works to advance equal opportunities for children and adults of all abilities and we are happy to be able to help them to further their mission. Your work will directly benefit this very worthwhile goal!

As an example of what we are looking for, ASLens is a project created by Chris Coombs of Melbourne, Australia. It recognizes and understands American Sign Language (ASL) and plays the audio for each letter. Chris used Amazon SageMaker and Polly to implement ASLens (you can watch the video, learn more and read the code).

To learn more, visit the DeepLens Challenge page. Entries for the first challenge are due by midnight (PT) on July 22nd and I can’t wait to see what you come up with!

Jeff;

PS – The DeepLens Resources page is your gateway to tutorial videos, documentation, blog posts, and other helpful information.

Jeremy Keith (Adactio) - Components and concerns

We tend to like false dichotomies in the world of web design and web development. I’ve noticed one recently that keeps coming up in the realm of design systems and components.

It’s about separation of concerns. The web has a long history of separating structure, presentation, and behaviour through HTML, CSS, and JavaScript. It has served us very well. If you build in that order, ensuring that something works (to some extent) before adding the next layer, the result will be robust and resilient.

But in this age of components, many people are pointing out that it makes sense to separate things according to their function. Here’s Diana Mounter in her excellent article about design systems at GitHub:

Rather than separating concerns by languages (such as HTML, CSS, and JavaScript), we’re working towards a model of separating concerns at the component level.

This echoes a point made previously in a slidedeck by Cristiano Rastelli.

Separating interfaces according to the purpose of each component makes total sense …but that doesn’t mean we have to stop separating structure, presentation, and behaviour! Why not do both?

There’s nothing in the “traditional” separation of concerns on the web (HTML/CSS/JavaScript) that restricts it only to pages. In fact, I would say it works best when it’s applied on smaller scales.

In her article, Pattern Library First: An Approach For Managing CSS, Rachel advises starting every component with good markup:

Your starting point should always be well-structured markup.

This ensures that your content is accessible at a very basic level, but it also means you can take advantage of normal flow.

That’s basically an application of starting with the rule of least power.

In chapter 6 of Resilient Web Design, I outline the three-step process I use to build on the web:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

That chapter is filled with examples of applying those steps at the level of an entire site or product, but it doesn’t need to end there:

We can apply the three‐step process at the scale of individual components within a page. “What is the core functionality of this component? How can I make that functionality available using the simplest possible technology? Now how can I enhance it?”

There’s another shared benefit to separating concerns when building pages and building components. In the case of pages, asking “what is the core functionality?” will help you come up with a good URL. With components, asking “what is the core functionality?” will help you come up with a good name …something that’s at the heart of a good design system. In her brilliant Design Systems book, Alla advocates asking “what is its purpose?” in order to get a good shared language for components.

My point is this:

  • Separating structure, presentation, and behaviour is a good idea.
  • Separating an interface into components is a good idea.

Those two good ideas are not in conflict. Presenting them as though they were binary choices is like saying “I used to eat Italian food, but now I drink Italian wine.” They work best when they’re done in combination.

ProgrammableWeb: APIs - GroupDocs.Editor Java

This is indirect access to the GroupDocs.Editor Java API. Please refer to the corresponding SDK below. The API enables document editing in HTML and supports multiple document formats. It loads documents, converts them to HTML, provides the HTML to an external UI, and more. GroupDocs provides document manipulation APIs that allow you to view, convert, annotate, compare, sign, assemble and search documents in your applications.
Date Updated: 2018-07-10
Tags: Documents, Editing, Viewer

ProgrammableWeb: APIs - GroupDocs.Editor .Net

This is indirect access to the GroupDocs.Editor .Net API. Please refer to the corresponding SDK below. The API enables document editing in HTML and supports multiple document formats. It loads documents, converts them to HTML, provides the HTML to an external UI, and more. GroupDocs provides document manipulation APIs that allow you to view, convert, annotate, compare, sign, assemble and search documents in your applications.
Date Updated: 2018-07-10
Tags: Documents, Editing, Viewer

ProgrammableWeb - How Postman Can Improve Your API Documentation

Just as diners complain about restaurants - “The food is terrible and the portions are too small!” - developers bemoan API documentation: we want it to be better, and we want more of it. Despite modern technology’s reliance on APIs to bridge gaps between services and platforms, the actual process of developing and understanding APIs is fraught with gaps in communication among people, teams and business verticals.

ProgrammableWeb: APIs - Erste Group Netbanking

Erste Group Netbanking API is a REST service that provides access to user and accounting data of Česká spořitelna customers. It allows you to list and read data about a client's products and more. The Erste Group is a banking service provider in Central and Eastern Europe.
Date Updated: 2018-07-09
Tags: Banking, Accounting, Data, European, Financial

ProgrammableWeb: APIs - Erste Group Know Your Customer

The Erste Group Know Your Customer API is a REST service that provides verification of Česká spořitelna customers. It allows you to verify a user by bank and includes services for Profile and Healthcheck. The Erste Group is a banking service provider in Central and Eastern Europe.
Date Updated: 2018-07-09
Tags: Banking, Accounts, European, Financial, Verification

ProgrammableWeb: APIs - Erste Group Personal Accounts

The Erste Group Personal Accounts API is a REST service that provides access to user and accounting data of Česká spořitelna customers. It allows you to list and read data about a client's products and includes services for Accounts, Cards, Building Savings, Pensions, and more. The Erste Group is a banking service provider in Central and Eastern Europe.
Date Updated: 2018-07-09
Tags: Banking, Accounts, European, Financial

ProgrammableWeb - Google Cloud’s Threat to Shut Down App Highlights Lack of Customer Service

A recent post on Medium has brought to light the lack of customer service provided to users of Google Cloud Platform (GCP). The post was written by an administrator of a project used to monitor numerous wind turbines and solar plants across eight countries.

ProgrammableWeb - Daily API RoundUp: CryptoStandardizer, Voxprox, Meqem, Seobility

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

Simon Willison (Django) - Quoting James Donohue

Over the last twenty years, publishing systems for content on [BBC] News pages have come and gone, having been replaced or made obsolete. Although newer content is published through dynamic web applications that can be readily modified, what lies beneath this sometimes resembles layers of sedimentary rock.

James Donohue

ProgrammableWeb - How APIs Could Save Facebook's User Experience (and others), But Won't

Have you used the Facebook app on Android or the Web recently? It used to be pretty solid. But now, it's atrocious (see below for more). Twitter was this way a long time ago. Some would argue it still is. Then, through Twitter's APIs, some clever people at a startup called Tweetdeck built a better user experience.

ProgrammableWeb - Twitter API Updates Eliminate Key Features to Twitterrific App

Twitterrific, a macOS and iOS client for Twitter, announced significant changes tied to the upcoming shutdown of legacy Twitter APIs.

ProgrammableWeb: APIs - Erste Group Mortgage Calculator

The Erste Group Mortgage Calculator API provides a REST interface with access to basic mortgage calculations of Česká spořitelna. This includes services to retrieve validation criteria for mortgage loan calculation. The Erste Group is a banking service provider in Central and Eastern Europe.
Date Updated: 2018-07-06
Tags: Banking, European, Financial

ProgrammableWeb: APIs - Erste Group Transparent Accounts

The Erste Group Transparent Accounts API provides a REST interface with access to information about Česká spořitelna transparent accounts. It allows access to user and accounting data of Erste Bank customers and includes services for the list of available transparent accounts, transactions on them, the detail of a specific transparent account, and more. The Erste Group is a banking service provider in Central and Eastern Europe.
Date Updated: 2018-07-06
Tags: Banking, Accounts, European, Financial

ProgrammableWeb: APIs - Erste Group Places

The Erste Group Places API provides a REST interface to access Česká spořitelna POI location information. This includes services for Places, Branches, ATMs, and more. The Erste Group is a banking service provider in Central and Eastern Europe.
Date Updated: 2018-07-06
Tags: Banking, European, Financial, Location

ProgrammableWeb: APIs - Erste Group Exchange Rates

The Erste Group Exchange Rates API provides a REST interface that allows access to currently valid and historical exchange rates for the previous 2 years. The Erste Group is a banking service provider in Central and Eastern Europe.
Date Updated: 2018-07-06
Tags: Banking, European, Financial

ProgrammableWeb: APIs - Erste Group Corporate Accounts

The Erste Group Corporate Accounts API is a REST interface to access bank corporate accounts and data of the authenticated user. This includes services for Companies, Company data, Passive transactions, and more. The Erste Group is a banking service provider in Central and Eastern Europe.
Date Updated: 2018-07-06
Tags: Banking, Accounts, European, Financial, Transactions

ProgrammableWeb: APIs - Erste Group Accounts

The Erste Group Accounts API provides a paginated list of a client's accounts, each containing a unique id to be used as a URI reference. This includes services for Account balance, List accounts, List account transactions, and more. The Erste Group is a banking service provider in Central and Eastern Europe.
Date Updated: 2018-07-06
Tags: Banking, Accounts, European, Financial

ProgrammableWeb: APIs - Erste Group Basic Payments

The Erste Group Basic Payments API provides a listing of the client's payment accounts that are normally available for TPPs with an AISP license. This includes services for Account balance check, Payments, Payments Authorization, and more. The Erste Group is a banking service provider in Central and Eastern Europe.
Date Updated: 2018-07-06
Tags: Banking, European, Financial, Payments

ProgrammableWeb: APIs - Autodesk Forge BIM 360

This Autodesk Forge API allows developers to create applications that integrate with the Autodesk BIM 360 platform. BIM 360 provides a master project directory to manage your projects across services and includes the following supported endpoints: Projects, Companies, Account Users, Project Users, Project Roles, and more. Autodesk Forge is a cloud-based developer platform that powers the building blocks for your next tool or product.
Date Updated: 2018-07-06
Tags: Platform-as-a-Service, Cloud, Images, Project Management

ProgrammableWeb: APIs - Api2Pdf

Api2Pdf is an HTML-to-PDF API that supports converting HTML, URLs, images, and office documents (Word, Excel, PowerPoint) to PDF, converting images (GIF, JPG, PNG, BMP) to PDF, and merging/concatenating PDFs. It is a wrapper for popular libraries like wkhtmltopdf, Headless Chrome, LibreOffice, and more. Api2Pdf.com is a REST API for generating PDF documents.
Date Updated: 2018-07-06
Tags: PDF, API, Conversions, Documents, Images

ProgrammableWeb - It's the End of the API Economy As We Know It

Companies that rely upon third-party public APIs (for example, those from Facebook, Twitter, and other API providers) to do business have always been at risk. This is nothing new.

ProgrammableWeb - Google Introduces Beta 3 for Android P

This week, Google unveiled its third beta of Android P.

Amazon Web Services - AWS Heroes – New Categories Launch

As you may know, in 2014 we launched the AWS Community Heroes program to recognize a vibrant group of AWS experts. These standout individuals use their extensive knowledge to teach customers and fellow techies about AWS products and services across a range of mediums. As AWS grows, new groups of Heroes emerge.

Today, we’re excited to recognize prominent community leaders by expanding the AWS Heroes program. Unlike Community Heroes (who tend to focus on advocating a wide range of AWS services within their community), these new Heroes are specialists who focus their efforts and advocacy on a specific technology. Our first new Heroes are the AWS Serverless Heroes and AWS Container Heroes. Please join us in welcoming them as the passion and enthusiasm for AWS knowledge-sharing continues to grow in technical communities.

AWS Serverless Heroes

Serverless Heroes are early adopters and spirited pioneers of the AWS serverless ecosystem. Through online and in-person evangelism of AWS serverless technologies, as well as open source contributions to GitHub and the AWS Serverless Application Repository, these Serverless Heroes help evolve the way developers, companies, and the community at large build modern applications. Our initial cohort of Serverless Heroes includes:

Yan Cui

Aleksandar Simovic

Forrest Brazeal

Marcia Villalba

Erica Windisch

Peter Sbarski

Slobodan Stojanović

Rob Gruhl

Michael Hart

Ben Kehoe

Austen Collins

Announcing AWS Container Heroes

Container Heroes are prominent trendsetters who are deeply connected to the ever-evolving container community. They possess extensive knowledge of multiple Amazon container services, are always keen to learn the latest trends, and are passionate about sharing their insights with anyone running containers on AWS. Please meet the first AWS Container Heroes:

 

Casey Lee

Tung Nguyen

Philipp Garbe

Yusuke Kuoka

Mike Fiedler

The trends within the AWS community are ever-changing. We look forward to recognizing a wide variety of Heroes in the future. Stay tuned for additional updates to the Hero program in coming months, and be sure to visit the Heroes website to learn more.

Shelley Powers (Burningbird) - SCOTUS Pick: Kavanaugh’s Advantage

According to media reports, Trump has narrowed his SCOTUS pick to three people: Amy Coney Barrett, Brett Kavanaugh, and Raymond Kethledge. I suspect that Trump had narrowed his choice even before Kennedy retired. And I also suspect Trump’s short list was communicated to Kennedy, leading Kennedy to feel he could safely retire. After all, two …

The post SCOTUS Pick: Kavanaugh’s Advantage appeared first on Burningbird.

Jeremy Keith (Adactio) - The trimCache function in Going Offline

Paul Yabsley wrote to let me know about an error in Going Offline. It’s rather embarrassing because it’s code that I’m using in the service worker for adactio.com but for some reason I messed it up in the book.

It’s the trimCache function in Chapter 7: Tidying Up. That’s the reusable piece of code that recursively reduces the number of items in a specified cache (cacheName) to a specified amount (maxItems). On page 95 and 96 I describe the process of creating the function which, in the book, ends up like this:

 function trimCache(cacheName, maxItems) {
   cacheName.open( cache => {
     cache.keys()
     .then( items => {
       if (items.length > maxItems) {
         cache.delete(items[0])
         .then(
           trimCache(cacheName, maxItems)
         ); // end delete then
       } // end if
     }); // end keys then
   }); // end open
 } // end function

See the problem? It’s right there at the start when I try to open the cache like this:

cacheName.open( cache => {

That won’t work. The open method only works on the caches object—I should be passing the name of the cache into the caches.open method. So the code should look like this:

caches.open( cacheName )
.then( cache => {

Everything else remains the same. The corrected trimCache function is here:

function trimCache(cacheName, maxItems) {
  caches.open(cacheName)
  .then( cache => {
    cache.keys()
    .then(keys => {
      if (keys.length > maxItems) {
        cache.delete(keys[0])
        .then(
          trimCache(cacheName, maxItems)
        ); // end delete then
      } // end if
    }); // end keys then
  }); // end open then
} // end function

Sorry about that! I must’ve had some kind of brainfart when I was writing (and describing) that one line of code.

You may want to deface your copy of Going Offline by taking a pen to that code example. Normally I consider the practice of writing in books to be barbarism, but in this case …go for it.

ProgrammableWeb - Daily API RoundUp: Ontology, Kontomatik, Deutsche Bank, Contentjet, NeverBounce

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWeb: APIs - CryptoStandardizer

Crypto exchanges have an awful tendency to use non-standardized symbols, such as "DRK" for "DASH". CryptoStandardizer provides a way to standardize those symbols and ensure your system's stability.
Date Updated: 2018-07-05
Tags: Cryptocurrency, Financial
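
A toy illustration of the underlying idea, using the DRK/DASH example from the entry (this is the mapping concept only, not the service's actual API):

# Hypothetical alias table; the real service maintains these per exchange.
ALIASES = {('someexchange', 'DRK'): 'DASH', ('someexchange', 'XBT'): 'BTC'}

def standardize(exchange: str, symbol: str) -> str:
    return ALIASES.get((exchange.lower(), symbol.upper()), symbol.upper())

assert standardize('SomeExchange', 'drk') == 'DASH'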

ProgrammableWeb - How and Why Meltwater Rolled Its Own Custom Testing Solution For Its API

Haven't we all been a bit nervous at times about pressing that "Deploy" button, even with amazing test coverage? Especially in the scary world of microservices (aka distributed systems) where the exact constellation of services and their versions is hard to predict.
