Jeremy Keith (Adactio): Preparing a conference talk

I gave a talk at the State Of The Browser conference in London earlier this month. It was a 25 minute spiel called The Web Is Agreement.

While I was putting the talk together, I posted some pictures of my talk preparation process. People seemed to be quite interested in that peek behind the curtain, so I thought I’d jot down the process I used.

There are two aspects to preparing a talk: the content and the presentation. I like to keep the preparation of those two parts separate. It’s kind of like writing: instead of writing and editing at the same time, it’s more productive to write any old crap first (to get it out of your head) and then go back and edit—“write drunk and edit sober”. Separating out those two mindsets allows you to concentrate on the task at hand.

So, to begin with, I’m not thinking about how I’m going to present the material at all. I’m only concerned with what I want to say.

When it comes to preparing the subject matter for a talk, step number zero is to banish your inner critic. You know who I mean. That little asshole with the sneering voice that says things like “you’re not qualified to talk about this” or “everything has already been said.” Find a way to realise that this demon is a) speaking from inside your head and b) not real. Maybe try drawing your inner critic. Ridiculous? Yes. Yes, it is.

Alright, time to start. There’s nothing more intimidating than a blank slidedeck, except maybe a blank Photoshop file, or a blank word processing document. In each of those cases, I find that starting with software is rarely a good idea. Paper is your friend.

I get a piece of A3 paper and start scribbling out a mind map. “Mind map” is a somewhat grandiose term for what is effectively a lo-fi crazy wall.

<figure> <picture> <source media="all and (min-width: 46em)" srcset="https://adactio.com/images/uploaded/14275-1/large.jpg"> <source media="all and (min-width: 27em)" srcset="https://adactio.com/images/uploaded/14275-1/medium.jpg, https://adactio.com/images/uploaded/14275-1/large.jpg 1.25x"> Mind mapping. </picture> <figcaption>Step 1</figcaption> </figure>

The idea here is to get everything out of my head. Don’t self-censor. At this stage, there are no bad ideas. This is a “yes, and…” exercise, not a “no, but…” exercise. Divergent, not convergent.

Not everything will make it into the final talk. That’s okay. In fact, I often find that there’s one thing that I’m really attached to, that I’m certain will be in the talk, that doesn’t make the cut. Kill your darlings.

I used to do this mind-mapping step by opening a text file and dumping my thoughts into it. I told myself that they were in no particular order, but because a text file reads left to right and top to bottom, they are in an order, whether I intended it or not. By using a big sheet of paper, I can genuinely get things down in a disconnected way (and later, I can literally start drawing connections).

For this particular talk, I knew that the subject matter would be something to do with web standards. I brain-dumped every vague thought I had about standards in general.

The next step is what I call chunking. I start to group related ideas together. Then I give a label to each of these chunks. Personally, I like to use a post-it note per chunk. I put one word or phrase on the post-it note, but it could just as easily be a doodle. The important thing is that you know what the word or doodle represents. Each chunk should represent a self-contained little topic that you might talk about for 3 to 5 minutes.

<figure> <picture> <source media="all and (min-width: 46em)" srcset="https://adactio.com/images/uploaded/14275-2/large.jpg"> <source media="all and (min-width: 27em)" srcset="https://adactio.com/images/uploaded/14275-2/medium.jpg, https://adactio.com/images/uploaded/14275-2/large.jpg 1.25x"> Chunking. </picture> <figcaption>Step 2</figcaption> </figure>

At this point, I can start thinking about the structure of the talk: “Should I start with this topic? Or should I put that in the middle?” The cost of changing my mind is minimal—I’m just moving post-it notes around.

With topics broken down into chunks like this, I can flesh out each one. The nice thing about this is that I’ve taken one big overwhelming task—prepare a presentation!—and I’ve broken it down into smaller, more manageable tasks. I can take a random post-it note and set myself, say, ten or fifteen minutes to jot down an explanation of it.

The explanation could just be bullet points. For this particular talk, I decided to write full sentences.

<figure> <picture> <source media="all and (min-width: 46em)" srcset="https://adactio.com/images/uploaded/14275-3/large.jpg"> <source media="all and (min-width: 27em)" srcset="https://adactio.com/images/uploaded/14275-3/medium.jpg, https://adactio.com/images/uploaded/14275-3/large.jpg 1.25x"> Expanding. </picture> <figcaption>Step 3</figcaption> </figure>

Even though, in this case, I was writing out my thoughts word for word, I still kept the topics in separate files. That way, I can still move things around easily.

Crafting the narrative structure of a talk is the part I find most challenging …but it’s also the most rewarding. By having the content chunked up like this, I can experiment with different structures. I like to try out different narrative techniques from books and films, like say, flashback: find the most exciting part of the talk; start with that, and then give the backstory that led up to it. That’s just one example. There are many possible narrative structures.

What I definitely don’t do is enact the advice that everyone is given for their college presentations:

  1. say what you’re going to say,
  2. say it, and
  3. recap what you’ve said.

To me, that’s the equivalent of showing an audience the trailer for a film right before watching the film …and then reading a review of the film right after watching it. Just play the film! Give the audience some credit—assume the audience has no knowledge but infinite intelligence.

Oh, and there’s one easy solution to cracking the narrative problem: make a list. If you’ve got 7 chunks, you can always give a talk on “Seven things about whatever” …but it’s a bit of a cop-out. I mean, most films have a three-act structure, but they don’t start the film by telling the audience that, and they don’t point out when one act finishes and another begins. I think it’s much more satisfying—albeit more challenging—to find a way to segue from chunk to chunk.

Finding the narrative thread is tricky work, but at least, by this point, it’s its own separate task: if I had tried to figure out the narrative thread at the start of the process, or even when I was chunking things out, it would’ve been overwhelming. Now it’s just the next task in my to-do list.

I suppose, at this point, I might as well make some slides.

<figure> <picture> <source media="all and (min-width: 46em)" srcset="https://adactio.com/images/uploaded/14275-4/large.jpg"> <source media="all and (min-width: 27em)" srcset="https://adactio.com/images/uploaded/14275-4/medium.jpg, https://adactio.com/images/uploaded/14275-4/large.jpg 1.25x"> Slides. </picture> <figcaption>Step 4</figcaption> </figure>

I’m not trying to be dismissive of slides—I think having nice slides is a very good thing. But I do think that they can become the “busy work” of preparing a presentation. If I start on the slides too soon, I find they’ll take up all my time. I think it’s far more important to devote time to the content and structure of the talk. The slides should illustrate the talk …but the slides are not the talk.

If you don’t think of the slides as being subservient to the talk, there’s a danger that you’ll end up with a slidedeck that’s more for the speaker’s benefit than the audience’s.

It’s all too easy to use the slides as a defence mechanism. You’re in a room full of people looking towards you. It’s perfectly reasonable for your brain to scream, “Don’t look at me! Look at the slides!” But taken too far, that can be interpreted as “Don’t listen to me!”

For this particular talk, there were moments when I wanted to make sure the audience was focused on some key point I was making. I decided that having no slide at all was the best way of driving home my point.

But slidedeck style is quite a personal thing, so use whatever works for you.

It’s a similar story with presentation style. Apart from some general good advice—like, speak clearly—the way you present should be as unique as you are. I have just one piece of advice, and it’s this: read Demystifying Public Speaking by Lara Hogan—it’s really, really good!

(I had to apologise to Lara last time I saw her, because I really wanted her to sign my copy of her book …but I didn’t have it, because it’s easily the book I’ve loaned out to other people the most.)

I did a good few run-throughs of my talk. There were a few sentences that sounded fine written down, but were really clumsy to say out loud. It reminded me of what Harrison Ford told George Lucas during the filming of Star Wars: “You can type this shit, George, but you can’t say it.”

I gave a final run-through at work to some of my Clearleft colleagues. To be honest, I find that more nerve-wracking than speaking on a stage in front of a big room full of strangers. I think it’s something to do with the presentation of self.

Finally, the day of the conference rolled around, and I was feeling pretty comfortable with my material. I’m pretty happy with how it turned out. You can read The Web Is Agreement, and you can look at the slides, but as with any conference talk, you kinda had to be there.

ProgrammableWeb: Accept Cryptocurrency Payments with Stronghold Platform APIs

Stronghold, a company that provides a distributed exchange platform built on the Stellar Network, has announced the release of a set of Stronghold Platform APIs that can be used to add a variety of payment instruments to applications, including (but not limited to) Bitcoin, Ethereum, and Lumens.

ProgrammableWeb: Arxan Launches Protection Solution for Client-Side Web Apps

Arxan Technologies, provider of application protection solutions, announced this week the launch of Arxan for Web, the latest enhancement to its protection solution for client-side web apps. Enabling organizations to defend against server side API attacks and credential theft, Arxan for Web provides a multi-layered defensive approach including:

ProgrammableWeb: What Options do you Have for API-access Control?

If you are building an internet-facing application, you will certainly have to tackle setting up access control for it. Access control is the process of granting your users permission to access certain systems, resources or information. This is commonly known as authentication and authorization. What are the different approaches to access control on the web and which approaches are best suited to different use cases?

Jeremy Keith (Adactio): Service workers in Samsung Internet browser

I was getting reports of some odd behaviour with the service worker on thesession.org, the Irish music website I run. Someone emailed me to say that they kept getting the offline page, even when their internet connection was perfectly fine and the site was up and running.

They didn’t mind answering my pestering follow-on questions to isolate the problem. They told me that they were using the Samsung Internet browser on Android. After a little searching, I found this message on a Github thread about using waitUntil. It’s from someone who works on the Samsung Internet team:

Sadly, the asynchronos waitUntil() is not implemented yet in our browser. Yes, we will implement it but our release cycle is so far. So, for a long time, we might not resolve the issue.

A-ha! That explains the problem. See, here’s the pattern I was using:

  1. When someone requests a file,
  2. fetch that file from the network,
  3. create a copy of the file and cache it,
  4. return the contents.

Step 1 is the event listener:

// 1. When someone requests a file
addEventListener('fetch', fetchEvent => {
  let request = fetchEvent.request;
  fetchEvent.respondWith(

Steps 2, 3, and 4 are inside that respondWith:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  caches.open(cacheName)
  .then( cache => {
    cache.put(request, copy);
  })
  // 4. return the contents.
  return responseFromFetch;
})

Step 4 might well complete while step 3 is still running (remember, everything in a service worker script is asynchronous so even though I’ve written out the steps sequentially, you never know what order the steps will finish in). That’s why I’m wrapping that third step inside fetchEvent.waitUntil:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  fetchEvent.waitUntil(
    caches.open(cacheName)
    .then( cache => {
      cache.put(request, copy);
    })
  );
  // 4. return the contents.
  return responseFromFetch;
})

If a browser (like Samsung Internet) doesn’t understand the bit where I say fetchEvent.waitUntil, then it will throw an error and execute the catch clause. That’s where I have my fifth and final step: “try looking in the cache instead, but if that fails, show the offline page”:

.catch( fetchError => {
  console.log(fetchError);
  return caches.match(request)
  .then( responseFromCache => {
    return responseFromCache || caches.match('/offline');
  });
})

Normally in this kind of situation, I’d use feature detection to check whether a browser understands a particular API method. But it’s a bit tricky to test for support for asynchronous waitUntil. That’s okay. I can use a try/catch statement instead. Here’s what my revised code looks like:

fetch(request)
.then( responseFromFetch => {
  let copy = responseFromFetch.clone();
  try {
    fetchEvent.waitUntil(
      caches.open(cacheName)
      .then( cache => {
        cache.put(request, copy);
      })
    );
  } catch (error) {
    console.log(error);
  }
  return responseFromFetch;
})

Now I’ve managed to localise the error. If a browser doesn’t understand the bit where I say fetchEvent.waitUntil, it will execute the code in the catch clause, and then carry on as usual. (I realise it’s a bit confusing that there are two different kinds of catch clauses going on here: on the outside there’s a .then()/.catch() combination; inside is a try{}/catch{} combination.)

At some point, when support for async waitUntil statements is universal, this precautionary measure won’t be needed, but for now wrapping them inside try doesn’t do any harm.
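For reference, here's how those fragments fit together in a single fetch handler. This is just the snippets from above assembled in one place, assuming (as in those snippets) that cacheName is defined elsewhere in the service worker script and that an /offline page has already been cached:

// 1. When someone requests a file
addEventListener('fetch', fetchEvent => {
  let request = fetchEvent.request;
  fetchEvent.respondWith(
    // 2. fetch that file from the network
    fetch(request)
    .then( responseFromFetch => {
      // 3. create a copy of the file and cache it
      let copy = responseFromFetch.clone();
      try {
        // wrapped in try/catch for browsers (like Samsung Internet at the
        // time of writing) that don't support asynchronous waitUntil
        fetchEvent.waitUntil(
          caches.open(cacheName)
          .then( cache => {
            cache.put(request, copy);
          })
        );
      } catch (error) {
        console.log(error);
      }
      // 4. return the contents
      return responseFromFetch;
    })
    .catch( fetchError => {
      // 5. try looking in the cache instead, but if that fails, show the offline page
      console.log(fetchError);
      return caches.match(request)
      .then( responseFromCache => {
        return responseFromCache || caches.match('/offline');
      });
    })
  );
});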

There are a few places in chapter five of Going Offline—the chapter about service worker strategies—where I show examples using async waitUntil. There’s nothing wrong with the code in those examples, but if you want to play it safe (especially while Samsung Internet doesn’t support async waitUntil), feel free to wrap those examples in try/catch statements. But I’m not going to make those changes part of the errata for the book. In this case, the issue isn’t with the code itself, but with browser support.

Bob DuCharme (Innodata Isogen): Panic over "superhuman" AI

Robot overlords not on the way.

(Image: Robot Overlords movie poster)

When someone describes their worries about AI taking over the world, I usually think to myself "I recently bookmarked a good article about why this is silly and I should point this person to it", but in that instant I can't remember what the article was. I recently re-read a few and thought I'd summarize them here in case anyone wants to point their friends to some sensible discussions of why such worries are unfounded.

The impossibility of intelligence explosion by François Chollet

Chollet is an AI researcher at Google and the author of the Keras deep learning framework and the Manning books "Deep Learning with Python" and "Deep Learning with R". Like some of the other articles covered here, his piece takes on the idea that we will someday build an AI system that can build a better one on its own, and then that one will build a better one, and so on until the singularity.

His outline gives you a general idea of his line of reasoning; the bulleted lists in his last two sections are also good:

  • A flawed reasoning that stems from a misunderstanding of intelligence

  • Intelligence is situational

  • Our environment puts a hard limit on our individual intelligence

  • Most of our intelligence is not in our brain, it is externalized as our civilization

  • An individual brain cannot implement recursive intelligence augmentation

  • What we know about recursively self-improving systems

  • Conclusions

One especially nice paragraph:

In particular, there is no such thing as "general" intelligence. On an abstract level, we know this for a fact via the "no free lunch" theorem -- stating that no problem-solving algorithm can outperform random chance across all possible problems. If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem. In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks -- like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.

'The discourse is unhinged': how the media gets AI alarmingly wrong by Oscar Schwartz

This Guardian piece focuses on how the media encourages silly thinking about the future of AI. As the article's subtitle tells us,

Social media has allowed self-proclaimed 'AI influencers' who do nothing more than paraphrase Elon Musk to cash in on this hype with low-quality pieces. The result is dangerous.

Much of the article focuses on the efforts of Zachary Lipton, a machine learning assistant professor at Carnegie Mellon, to call out bad journalism on the topic. One example is an article that I was also guilty of taking too seriously: Fast Company's AI Is Inventing Languages Humans Can't Understand. Should We Stop It? The actual "language" was just overly repetitive sentences made possible by recursive grammar rules, which I had experienced myself many years ago doing a LISP-based project for a Natural Language Processing course. Schwartz quotes the Sun article Facebook shuts off AI experiment after two robots begin speaking in their OWN language only they can understand as saying that the incident "closely resembled the plot of The Terminator in which a robot becomes self-aware and starts waging a war on humans". (The Sun article also says "Experts have called the incident exciting but also incredibly scary"; according to the Guardian article, "These findings were considered to be fairly interesting by other experts in the field, but not totally surprising or groundbreaking".)

Schwartz's piece describes how the term "electronic brain" is as old as electronic computers, and how overhyped media coverage of machines that "think" as far back as the 1940s led to inflated expectations about AI that greatly contributed to the several AI winters we've had since then.

Ways to Think About Machine Learning by Benedict Evans

If you're going to read only one of the articles I describe here all the way through, I recommend this one. I don't listen to every episode of the a16z podcast, but I do listen to every one that includes Benedict Evans (this week's episode, on Tesla and the Nature of Disruption, was typically excellent), and I have subscribed to his newsletter for years. He's a sharp guy with sensible attitudes about how technologies and societies fit together and where it may lead.

One theme of many of the articles I describe here is the false notion that intelligence is a single thing that can be measured on a one-dimensional scale. As Evans puts it,

This gets to the heart of the most common misconception that comes up in talking about machine learning - that it is in some way a single, general purpose thing, on a path to HAL 9000, and that Google or Microsoft have each built *one*, or that Google 'has all the data', or that IBM has an actual thing called 'Watson'. Really, this is always the mistake in looking at automation: with each wave of automation, we imagine we're creating something anthropomorphic or something with general intelligence. In the 1920s and 30s we imagined steel men walking around factories holding hammers, and in the 1950s we imagined humanoid robots walking around the kitchen doing the housework. We didn't get robot servants - we got washing machines.

Washing machines are robots, but they're not 'intelligent'. They don't know what water or clothes are. Moreover, they're not general purpose even in the narrow domain of washing - you can't put dishes in a washing machine, nor clothes in a dishwasher (or rather, you can, but you won't get the result you want). They're just another kind of automation, no different conceptually to a conveyor belt or a pick-and-place machine. Equally, machine learning lets us solve classes of problem that computers could not usefully address before, but each of those problems will require a different implementation, and different data, a different route to market, and often a different company. Each of them is a piece of automation. Each of them is a washing machine.

After bringing up relational databases as a point of comparison for what new technology can do ("Relational databases gave us Oracle, but they also gave us SAP, and SAP and its peers gave us global just-in-time supply chains - they gave us Apple and Starbucks"), he asks "What, then, are the washing machines of machine learning, for real companies?" He offers some good suggestions, some of which can be summarized as "AI will allow the automation of more things".

He also discusses low-hanging fruit for what new things AI may automate. As an excellent followup to that, I recommend Kathryn Hume's Harvard Business Review article How to Spot a Machine Learning Opportunity, Even If You Aren't a Data Scientist.

The Myth of a Superhuman AI by Kevin Kelly

In this Wired Magazine article by one of their founders, after a discussion of some of the panicky scenarios out there we read that "buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence". He lists them, then lists five "heresies [that] have more evidence to support them"; these five provide the structure for the rest of his piece:

  • Intelligence is not a single dimension, so "smarter than humans" is a meaningless concept.

  • Humans do not have general purpose minds, and neither will AIs.

  • Emulation of human thinking in other media will be constrained by cost.

  • Dimensions of intelligence are not infinite.

  • Intelligences are only one factor in progress.

A good point about how artificial general intelligence is not something to worry about makes a nice analogy with artificial flight:

When we invented artificial flying we were inspired by biological modes of flying, primarily flapping wings. But the flying we invented -- propellers bolted to a wide fixed wing -- was a new mode of flying unknown in our biological world. It is alien flying. Similarly, we will invent whole new modes of thinking that do not exist in nature. In many cases they will be new, narrow, "small," specific modes for specific jobs -- perhaps a type of reasoning only useful in statistics and probability.

(This reminds me of Evans writing "We didn't get robot servants - we got washing machines".) Another good metaphor is Kelly's comparison of attitudes about superhuman AI with cargo cults:

It is possible that superhuman AI could turn out to be another cargo cult. A century from now, people may look back to this time as the moment when believers began to expect a superhuman AI to appear at any moment and deliver them goods of unimaginable value. Decade after decade they wait for the superhuman AI to appear, certain that it must arrive soon with its cargo.

19 A.I. experts reveal the biggest myths about robots by Guia Marie Del Prado

This Business Insider piece is almost three years old but still relevant. Most of the experts it quotes are actual computer scientist professors, so you get much more sober assessments than you'll see in the panicky articles out there. Here's a good one from Berkeley computer scientist Stuart Russell:

The most common misconception is that what AI people are working towards is a conscious machine, that until you have a conscious machine there's nothing to worry about. It's really a red herring.

To my knowledge, nobody, no one who is publishing papers in the main field of AI, is even working on consciousness. I think there are some neuroscientists who are trying to understand it, but I'm not aware that they've made any progress.

As far as AI people, nobody is trying to build a conscious machine, because no one has a clue how to do it, at all. We have less clue about how to do that than we have about build a faster-than-light spaceship.

From Pieter Abbeel, another Berkeley computer scientist:

In robotics there is something called Moravec's Paradox: "It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".

This is well appreciated by researchers in robotics and AI, but can be rather counter-intuitive to people not actively engaged in the field.

Replicating the learning capabilities of a toddler could very well be the most challenging problem for AI, even though we might not typically think of a one-year-old as the epitome of intelligence.

I was happy to see the article quote NYU's Ernie Davis, whose AI class I took over 20 years ago while working on my master's degree there. (Reviewing my class notebook I see a lot of LISP and Prolog code, so things have changed a lot.)

This article implicitly has a nice guideline for when to take predictions about the future of AI seriously: are they computer scientists familiar with the actual work going on lately? If they're experts in other fields engaging in science fiction riffing (or as the Guardian article put it more cleverly, paraphrasing Elon Musk), take it all with a big grain of salt.

I don't mean to imply that the progress of technologies labeled as "Artificial Intelligence" has no potential problems to worry about. Just as automobiles and chain saws and a lot of other technology invented over the years can do harm as well as good, the new power brought by advanced processors, storage, and memory can be misused intentionally or accidentally, so it's important to think through all kinds of scenarios when planning for the future. In fact, this is all the more reason not to worry about sentient machines: as the Guardian piece quotes Lipton, "There are policymakers earnestly having meetings to discuss the rights of robots when they should be talking about discrimination in algorithmic decision making. But this issue is terrestrial and sober, so not many people take an interest." Sensible stuff to keep in mind.


Please add any comments to this Google+ post.

ProgrammableWeb: Daily API RoundUp: CKMIO, MAC Address, Bambora, Telnyx

Every day, the ProgrammableWeb team is busy, updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWeb: Facebook will Soon Auto-Enroll Apps in its App Review Process

Earlier this year, Facebook introduced a new app review process for third party developers leveraging various Facebook APIs.

ProgrammableWeb: Integrate Trade Service Industry Business Management Software with simPRO API

simPRO, a business management software provider, recently introduced its RESTful simPRO API. To simplify developer access to the many resources available through the API, simPRO built an intuitive developer center where the API docs and user instructions are kept.

ProgrammableWeb: Daily API RoundUp: Zloadr, Telstra, Twinword, VWO

Every day, the ProgrammableWeb team is busy, updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

Amazon Web Services: Meet the Newest AWS Heroes (September 2018 Edition)

AWS Heroes are passionate AWS enthusiasts who use their extensive knowledge to teach others about all things AWS across a range of mediums. Many Heroes eagerly share knowledge online via forums, social media, or blogs; while others lead AWS User Groups or organize AWS Community Day events. Their extensive efforts to spread AWS knowledge have a significant impact within their local communities. Today we are excited to introduce the newest AWS Heroes:

Jaroslaw Zielinski – Poznan, Poland

AWS Community Hero Jaroslaw Zielinski is a Solutions Architect at Vernity in Poznan (Poland), where his responsibility is to support customers on their road to the cloud using cloud adoption patterns. Jaroslaw is a leader of AWS User Group Poland operating in 7 different cities around Poland. Additionally, he connects the community with the biggest IT conferences in the region – PLNOG, DevOpsDay, Amazon@Innovation to name just a few.

He supports numerous projects connected with evangelism, like Zombie Apocalypse Workshops or Cloud Builder’s Day. Bringing together various IT communities, he hosts a conference Cloud & Datacenter Day – the biggest community conference in Poland. In addition, his passion for IT is transferred into his own blog called Popołudnie w Sieci. He also publishes in various professional papers.

 

Jerry Hargrove – Kalama, USA

AWS Community Hero Jerry Hargrove is a cloud architect, developer and evangelist who guides companies on their journey to the cloud, helping them to build smart, secure and scalable applications. Currently with Lucidchart, a leading visual productivity platform, Jerry is a thought leader in the cloud industry and specializes in AWS product and services breakdowns, visualizations and implementation. He brings with him over 20 years of experience as a developer, architect & manager for companies like Rackspace, AWS and Intel.

You can find Jerry on Twitter compiling his famous sketch notes and creating Lucidchart templates that pinpoint practical tips for working in the cloud and helping developers increase efficiency. Jerry is the founder of the AWS Meetup Group in Salt Lake City, often contributes to meetups in the Pacific Northwest and San Francisco Bay area, and speaks at developer conferences worldwide. Jerry holds several professional AWS certifications.

 

Martin Buberl – Copenhagen, Denmark

AWS Community Hero Martin Buberl brings the New York hustle to Scandinavia. As VP Engineering at Trustpilot he is on a mission to build the best engineering teams in the Nordics and Baltics. With a person-centered approach, his focus is on high-leverage activities to maximize impact, customer value and iteration speed — and utilizing cloud technologies checks all those boxes.

His cloud-obsession made him an early adopter and evangelist of all types of AWS services throughout his career. Nowadays, he is especially passionate about Serverless, Big Data and Machine Learning and excited to leverage the cloud to transform those areas.

Martin is an AWS User Group Leader, organizer of the AWS Community Day Nordics and founder of the AWS Community Nordics Slack. He has spoken at multiple international AWS events — AWS User Groups, AWS Community Days and AWS Global Summits — and is looking forward to continuing to share his passion for software engineering and cloud technologies with the Community.

To learn more about the AWS Heroes program or to connect with an AWS Hero in your community, click here.

Amazon Web Services: New – Parallel Query for Amazon Aurora

Amazon Aurora is a relational database that was designed to take full advantage of the abundance of networking, processing, and storage resources available in the cloud. While maintaining compatibility with MySQL and PostgreSQL on the user-visible side, Aurora makes use of a modern, purpose-built distributed storage system under the covers. Your data is striped across hundreds of storage nodes distributed over three distinct AWS Availability Zones, with two copies per zone, on fast SSD storage. Here’s what this looks like (extracted from Getting Started with Amazon Aurora):

New Parallel Query
When we launched Aurora we also hinted at our plans to apply the same scale-out design principle to other layers of the database stack. Today I would like to tell you about our next step along that path.

Each node in the storage layer pictured above also includes plenty of processing power. Aurora is now able to make great use of that processing power by taking your analytical queries (generally those that process all or a large part of a good-sized table) and running them in parallel across hundreds or thousands of storage nodes, with speed benefits approaching two orders of magnitude. Because this new model reduces network, CPU, and buffer pool contention, you can run a mix of analytical and transactional queries simultaneously on the same table while maintaining high throughput for both types of queries.

The instance class determines the number of parallel queries that can be active at a given time:

  • db.r*.large – 1 concurrent parallel query session
  • db.r*.xlarge – 2 concurrent parallel query sessions
  • db.r*.2xlarge – 4 concurrent parallel query sessions
  • db.r*.4xlarge – 8 concurrent parallel query sessions
  • db.r*.8xlarge – 16 concurrent parallel query sessions
  • db.r4.16xlarge – 16 concurrent parallel query sessions

You can use the aurora_pq parameter to enable and disable the use of parallel queries at the global and the session level.

Parallel queries enhance the performance of over 200 types of single-table predicates and hash joins. The Aurora query optimizer will automatically decide whether to use Parallel Query based on the size of the table and the amount of table data that is already in memory; you can also use the aurora_pq_force session variable to override the optimizer for testing purposes.

Parallel Query in Action
You will need to create a fresh cluster in order to make use of the Parallel Query feature. You can create one from scratch, or you can restore a snapshot.

To create a cluster that supports Parallel Query, I simply choose Provisioned with Aurora parallel query enabled as the Capacity type:

I used the CLI to restore a 100 GB snapshot for testing, and then explored one of the queries from the TPC-H benchmark. Here’s the basic query:

SELECT
  l_orderkey,
  SUM(l_extendedprice * (1-l_discount)) AS revenue,
  o_orderdate,
  o_shippriority

FROM customer, orders, lineitem

WHERE
  c_mktsegment='AUTOMOBILE'
  AND c_custkey = o_custkey
  AND l_orderkey = o_orderkey
  AND o_orderdate < date '1995-03-13'
  AND l_shipdate > date '1995-03-13'

GROUP BY
  l_orderkey,
  o_orderdate,
  o_shippriority

ORDER BY
  revenue DESC,
  o_orderdate LIMIT 15;

The EXPLAIN command shows the query plan, including the use of Parallel Query:

+----+-------------+----------+------+-------------------------------+------+---------+------+-----------+--------------------------------------------------------------------------------------------------------------------------------+
| id | select_type | table    | type | possible_keys                 | key  | key_len | ref  | rows      | Extra                                                                                                                          |
+----+-------------+----------+------+-------------------------------+------+---------+------+-----------+--------------------------------------------------------------------------------------------------------------------------------+
|  1 | SIMPLE      | customer | ALL  | PRIMARY                       | NULL | NULL    | NULL |  14354602 | Using where; Using temporary; Using filesort                                                                                   |
|  1 | SIMPLE      | orders   | ALL  | PRIMARY,o_custkey,o_orderdate | NULL | NULL    | NULL | 154545408 | Using where; Using join buffer (Hash Join Outer table orders); Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra)   |
|  1 | SIMPLE      | lineitem | ALL  | PRIMARY,l_shipdate            | NULL | NULL    | NULL | 606119300 | Using where; Using join buffer (Hash Join Outer table lineitem); Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra) |
+----+-------------+----------+------+-------------------------------+------+---------+------+-----------+--------------------------------------------------------------------------------------------------------------------------------+
3 rows in set (0.01 sec)

Here is the relevant part of the Extra column:

Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra)

The query runs in less than 2 minutes when Parallel Query is used:

+------------+-------------+-------------+----------------+
| l_orderkey | revenue     | o_orderdate | o_shippriority |
+------------+-------------+-------------+----------------+
|   92511430 | 514726.4896 | 1995-03-06  |              0 |
|  593851010 | 475390.6058 | 1994-12-21  |              0 |
|  188390981 | 458617.4703 | 1995-03-11  |              0 |
|  241099140 | 457910.6038 | 1995-03-12  |              0 |
|  520521156 | 457157.6905 | 1995-03-07  |              0 |
|  160196293 | 456996.1155 | 1995-02-13  |              0 |
|  324814597 | 456802.9011 | 1995-03-12  |              0 |
|   81011334 | 455300.0146 | 1995-03-07  |              0 |
|   88281862 | 454961.1142 | 1995-03-03  |              0 |
|   28840519 | 454748.2485 | 1995-03-08  |              0 |
|  113920609 | 453897.2223 | 1995-02-06  |              0 |
|  377389669 | 453438.2989 | 1995-03-07  |              0 |
|  367200517 | 453067.7130 | 1995-02-26  |              0 |
|  232404000 | 452010.6506 | 1995-03-08  |              0 |
|   16384100 | 450935.1906 | 1995-03-02  |              0 |
+------------+-------------+-------------+----------------+
15 rows in set (1 min 53.36 sec)

I can disable Parallel Query for the session (I can use an RDS custom cluster parameter group for a longer-lasting effect):

set SESSION aurora_pq=OFF;

The query runs considerably slower without it:

+------------+-------------+-------------+----------------+
| l_orderkey | o_orderdate | revenue     | o_shippriority |
+------------+-------------+-------------+----------------+
|   92511430 | 1995-03-06  | 514726.4896 |              0 |
...
|   16384100 | 1995-03-02  | 450935.1906 |              0 |
+------------+-------------+-------------+----------------+
15 rows in set (1 hour 25 min 51.89 sec)

This was on a db.r4.2xlarge instance; other instance sizes, data sets, access patterns, and queries will perform differently. I can also override the query optimizer and insist on the use of Parallel Query for testing purposes:

set SESSION aurora_pq_force=ON;

Things to Know
Here are a couple of things to keep in mind when you start to explore Amazon Aurora Parallel Query:

Engine Support – We are launching with support for MySQL 5.6, and are working on support for MySQL 5.7 and PostgreSQL.

Table Formats – The table row format must be COMPACT; partitioned tables are not supported.

Data Types – The TEXT, BLOB, and GEOMETRY data types are not supported.

DDL – The table cannot have any pending fast online DDL operations.

Cost – You can make use of Parallel Query at no extra charge. However, because it makes direct access to storage, there is a possibility that your IO cost will increase.

Give it a Shot
This feature is available now and you can start using it today!

Jeff;

 

ProgrammableWeb: Build Innovative Healthcare Apps with the Newly Upgraded DrChrono API

DrChrono, a healthcare technology platform provider, has announced major upgrades to the DrChrono API which includes a number of new endpoints. Among the new endpoints are Clinical Quality Measures (CQM), Clinical Notes, Tasks, Billing, and Labs. A wide range of new webhooks have been added to the DrChrono API. The webhooks provide developers with many more options when it comes to checking for changes in DrChrono.

ProgrammableWeb: Facebook Expands Bug Bounty Program to Include Third Party Apps

This week Facebook expanded their bug bounty program that has been running since 2011 and has paid out more than $6 million in that time.

Jeremy Keith (Adactio): A framework for web performance

Here at Clearleft, we’ve recently been doing some front-end consultancy. That prompted me to jot down thoughts on design principles and performance:

We continued with some more performance work this week. Having already covered some of the nitty-gritty performance tactics like font-loading, image optimisation, etc., we wanted to take a step back and formulate an ongoing strategy for performance.

When it comes to web performance, the eternal question is “What should we measure?” The answer to that question will determine where you then concentrate your efforts—whatever it is you’re measuring, that’s what you’ll be looking to improve.

I started by drawing a distinction between measurements of quantities and measurements of time. Quantities are quite easy to measure. You can measure these quantities using nothing more than browser dev tools (there’s a small script sketch after this list):

  • overall file size (page weight + assets), and
  • number of requests.
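If you want numbers you can log rather than read off by hand, the Resource Timing API gives a rough version of both quantities from the browser console. This is a sketch only: cross-origin assets served without a Timing-Allow-Origin header will report a transferSize of zero, so dev tools remain the more reliable source.

let resources = performance.getEntriesByType('resource');
let [navigation] = performance.getEntriesByType('navigation');

// number of requests: every fetched asset, plus the HTML document itself
let requestCount = resources.length + 1;

// overall page weight: bytes transferred for the document and its assets
let pageWeight = resources.reduce(
  (total, entry) => total + (entry.transferSize || 0),
  navigation ? navigation.transferSize : 0
);

console.log(requestCount + ' requests, ' + (pageWeight / 1024).toFixed(1) + 'KB transferred');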

I think it’s good to measure these quantities, and I think it’s good to have a performance budget for them. But I also think they’re table stakes. They don’t actually tell you much about the impact that performance is having on the user experience. For that, we need to enumerate moments in time:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, and
  • time to first meaningful interaction.

There’s one more moment in time, which is the time until DOM content is loaded. But I’m not sure that has a direct effect on how performance is perceived, so it feels like it belongs more in the category of quantities than time.
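For the first two of those moments in time (time to first byte and time to first render), you can pull rough numbers straight from the browser. Here's a minimal sketch using the Navigation Timing and Paint Timing APIs; first meaningful paint and first meaningful interaction aren't exposed as standard metrics, so a tool like Web Page Test is still the way to get those:

let [navigation] = performance.getEntriesByType('navigation');
let [firstPaint] = performance.getEntriesByType('paint')
  .filter(entry => entry.name === 'first-paint');

if (navigation) {
  // time to first byte: when the first byte of the response arrived
  console.log('Time to first byte: ' + (navigation.responseStart - navigation.startTime).toFixed(0) + 'ms');
}
if (firstPaint) {
  // a rough stand-in for time to first render
  console.log('First paint: ' + firstPaint.startTime.toFixed(0) + 'ms');
}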

Next, we listed out all the factors that could affect each of the moments in time. For example, the time to first byte depends on the speed of the network that the user is on. It also depends on how speedily your server (or Content Delivery Network) can return a response. Meanwhile, time to first render is affected by the speed of the user’s network, but it’s also affected by how many blocking elements are on the critical path.

By listing all the factors out, we can draw a distinction between the factors that are outside of our control, and the factors that we can do something about. So while we might not be able to do anything about the speed of the user’s network, we might well be able to optimise the speed at which our server returns a response, or we might be able to defer some assets that are currently blocking the critical path.

<figure>

Factors
1st byte
  • server speed
  • network speed
1st render
  • network speed
  • critical path assets
1st meaningful paint
  • network speed
  • font-loading strategy
  • image optimisation
1st meaningful interaction
  • network speed
  • device processing power
  • JavaScript size

</figure>

So far, everything in our list of performance-affecting factors is related to the first visit. It’s worth drawing up a second list to document all the factors for subsequent visits. This will look the same as the list for first visits, but with the crucial difference that caching now becomes a factor.

<figure>

Moment | First visit factors | Repeat visit factors
1st byte | server speed, network speed | server speed, network speed, caching
1st render | network speed, critical path assets | network speed, critical path assets, caching
1st meaningful paint | network speed, font-loading strategy, image optimisation | network speed, font-loading strategy, image optimisation, caching
1st meaningful interaction | network speed, device processing power, JavaScript size | network speed, device processing power, JavaScript size, caching

</figure>

Alright. Now it’s time to get some numbers for each of the four moments in time. I use Web Page Test for this. Choose a realistic setting, like 3G on an Android from the East coast of the USA. Under advanced settings, be sure to select “First View and Repeat View” so that you can put those numbers in two different columns.

Here are some numbers for adactio.com:

<figure>

Moment | First visit time | Repeat visit time
1st byte | 1.476 seconds | 1.215 seconds
1st render | 2.633 seconds | 1.930 seconds
1st meaningful paint | 2.633 seconds | 1.930 seconds
1st meaningful interaction | 2.868 seconds | 2.083 seconds

</figure>

I’m getting the same numbers for first render as first meaningful paint. That tells me that there’s no point in trying to optimise my font-loading, for example …which makes total sense, because adactio.com isn’t using any web fonts. But on a different site, you might see a big gap between those numbers.

I am seeing a gap between time to first byte and time to first render. That tells me that I might be able to get some blocking requests off the critical path. Sure enough, I’m currently referencing an external stylesheet in the head of adactio.com—if I were to inline critical styles and defer the loading of that stylesheet, I should be able to narrow that gap.
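One common way to do that deferral (a sketch only, with a hypothetical stylesheet path) is to inline the critical styles in a style element in the head and then load the full stylesheet from a small script, so that it doesn't block the first render:

// critical styles are inlined in a <style> element in the head;
// the full stylesheet (hypothetical path) is added once the page has loaded
window.addEventListener('load', () => {
  let link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = '/css/full.css';
  document.head.appendChild(link);
});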

A straightforward site like adactio.com isn’t going to have much to worry about when it comes to the time to first meaningful interaction, but on other sites, this can be a significant bottleneck. If you’re sending UI elements in the initial HTML, but then waiting for JavaScript to “hydrate” those elements into working, the user can end up in an uncanny valley of tapping on page elements that look fine, but aren’t ready yet.

My point is, you’re going to see very different distributions of numbers depending on the kind of site you’re testing. There’s no one-size-fits-all metric to focus on.

Now that you’ve got numbers for how your site is currently performing, you can create two new columns: one of those is a list of first-visit targets, the other is a list of repeat-visit targets for each moment in time. Try to keep them realistic.

For example, if I could reduce the time to first render on adactio.com by 0.5 seconds, my goals would look like this:

<figure>

Moment | First visit goal | Repeat visit goal
1st byte | 1.476 seconds | 1.215 seconds
1st render | 2.133 seconds | 1.430 seconds
1st meaningful paint | 2.133 seconds | 1.430 seconds
1st meaningful interaction | 2.368 seconds | 1.583 seconds

</figure>

See how the 0.5 seconds saving cascades down into the other numbers?

Alright! Now I’ve got something to aim for. It might also be worth having an extra column to record which of the moments in time are high priority, which are medium priority, and which are low priority.

<figure>

Priority
1st byte Medium
1st render High
1st meaningful paint Low
1st meaningful interaction Low

</figure>

Your goals and priorities may be quite different.

I think this is a fairly useful framework for figuring out where to focus when it comes to web performance. If you’d like to give it a go, I’ve made a web performance chart for you to print out and fill in. Here’s a PDF version if that’s easier for printing. Or you can download the HTML version if you want to edit it.

I have to say, I’m really enjoying the front-end consultancy work we’ve been doing at Clearleft around performance and related technologies, like offline functionality. I’d like to do more of it. If you’d like some help in prioritising performance at your company, please get in touch. Let’s make the web faster together.

ProgrammableWeb: Google's Flutter Reaches Penultimate Preview

Google today released a new preview version of Flutter to developers. Flutter is Google's set of tools for quickly designing and implementing Android and iOS user interfaces. The company hopes to have a 1.0 release prepared in the near future.

ProgrammableWeb: How the OnSched API can Integrate Scheduling Into Your Chat-bot

Chatbot systems have greatly evolved the customer service space with rapid, personalized responses that can save businesses hundreds of thousands of dollars. Moving into 2019 these bots are looking for ways to break the mold with innovations in automation, as well as new integrations with other powerful software tools.

ProgrammableWeb: Google Announces Cloud Inference API

Google recently introduced the new Google Cloud Inference API. The API allows developers to run large-scale correlations over typed time-series datasets. In operation, these correlations span a number of real-world use case scenarios. Retailers can analyze foot traffic to sales conversion numbers using sensor data.

Simon Willison (Django): Build impossible programs

Delightful talk by Julia Evans describing how she went about building a Ruby profiler in Rust despite having no knowledge of Ruby internals and only beginner's knowledge of Rust.

Simon Willison (Django): Sqorn

JavaScript library for building SQL queries that makes really smart usage of ES6 tagged template literals. The magic of tagged template literals is that they let you intercept and process interpolated values, making them ideally suited to escaping parameters in SQL queries. Sqorn takes that basic ability and layers on some really interesting API design to allow you to further compose queries.
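To illustrate the underlying mechanism, here's a generic sketch of a tagged template literal (not Sqorn's actual API): the tag function receives the literal string chunks and the interpolated values separately, so the values can become bound parameters instead of being concatenated into the SQL string.

// a generic tag function that turns interpolations into placeholders
function sql(strings, ...values) {
  return {
    text: strings.reduce((query, chunk, i) => query + '$' + i + chunk),
    values: values
  };
}

let name = "O'Brien";
let query = sql`select * from users where name = ${name}`;
// query.text   -> "select * from users where name = $1"
// query.values -> ["O'Brien"]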

Via Michele Nasti

WHATWG blog: The state of fieldset interoperability

As part of my work at Bocoup, I recently started working with browser implementers to improve the state of fieldset, the 21-year-old HTML feature that provides form accessibility benefits to assistive technologies like screen readers. It suffers from a number of interoperability bugs that make it difficult for web developers to use.

Here is an example form grouped with a <legend> caption in a <fieldset> element:

(Rendered example: a fieldset with the legend "Pronouns" and a group of radio buttons.)

And the corresponding markup for the above example.

<fieldset>
 <legend>Pronouns</legend>
 <label><input type=radio name=pronouns value=he> He/him</label>
 <label><input type=radio name=pronouns value=she> She/her</label>
 <label><input type=radio name=pronouns value=they> They/them</label>
 <label><input type=radio name=pronouns value=other> Write your own</label>
 <input type=text name=pronouns-other placeholder=…>
</fieldset>

The element is defined in the HTML standard, along with rendering rules in the Rendering section. Further developer documentation is available on MDN.

Usage

Based on a query of the HTTP Archive data set, containing the raw content of the top 1.3 million web pages, we find the relative usage of each HTML element. The fieldset element is used on 8.41% of the web pages, which is higher than other popular features, such as the video and canvas elements; however, the legend element is used on 2.46% of web pages, which is not ideal for assistive technologies. Meanwhile, the form element appears on 70.55% of pages, and we believe that if interoperability bugs were fixed, correct and semantic fieldset and legend use would increase, and have a positive impact on form accessibility for the web.

Fieldset standards history

In January 1997, HTML 3.2 introduces forms and some form controls, but does not include the fieldset or legend elements.

In July 1997, the first draft of HTML 4.0 introduces the fieldset and legend elements:

The FIELDSET element allows form designers to group thematically related controls together. Grouping controls makes it easier for users to understand their purpose while simultaneously facilitating tabbing navigation for visual user agents and speech navigation for speech-oriented user agents. The proper use of this element makes documents more accessible to people with disabilities.

The LEGEND element allows designers to assign a caption to a FIELDSET. The legend improves accessibility when the FIELDSET is rendered non-visually. When rendered visually, setting the align attribute on the LEGEND element aligns it with respect to the FIELDSET.

In December 1999, HTML 4.01 is published as a W3C Recommendation, without changing the definitions of the fieldset and legend elements.

In December 2003, Ian Hickson extends the fieldset element with the disabled and form attributes in the Proposed XHTML Module: XForms Basic, later renamed to Web Forms 2.0.

In September 2008, Ian Hickson adds the fieldset element to the HTML standard.

In February 2009, Ian Hickson specifies rendering rules for the fieldset element. The specification has since gone through some minor revisions, e.g., specifying that fieldset establishes a block formatting context in 2009 and adding min-width: min-content; in 2014.

In August 2018, I proposed a number of changes to the standard to better define how it should work, and resolve ambiguity between browser implementer interpretations.

Current state

As part of our work at Bocoup to improve the interoperability of the fieldset element and its legend child element, we talked to web developers and browser implementers, proposed changes to the standard, and wrote a lot of tests. At the time of this writing, 26 issues have been reported on the HTML specification for the fieldset element, and the tests that we wrote show a clear lack of interoperability among browser engines.

The results for fieldset and legend tests show some tests failing in all browsers, some tests passing in all browsers, and some passing and failing in different browsers.

Of the 26 issues filed against the specification, 17 are about rendering interoperability. These rendering issues affect use cases such as making a fieldset scrollable, which currently results in broken scroll-rendering in some browsers. These issues also affect consistent legend rendering, which is causing web developers to avoid using the fieldset element altogether. Since the fieldset element is intended to help people who use assistive technologies to navigate forms, the current situation is less than ideal.

HTML spec rendering issues

In April of this year, Mozilla developers filed a meta-issue on the HTML specification “Need to spec fieldset layout” to address the ambiguities which have been leading to interoperability issues between browser implementations. During the past few weeks of work on fieldset, we made initial proposed changes to the rendering section of the HTML standard to address these 17 issues. At the time of this writing, these changes are under review.

Proposal to extend -webkit-appearance

Web developers also struggle with changing the default behaviors of fieldset and legend and seek ways to turn off the “magic” to have the elements render as normal elements. To address this, we created a proposal to extend the -webkit-appearance CSS property with a new value called fieldset and a new property called legend that together are capable of giving grouped rendering behavior to regular elements, as well as resetting fieldset/legend elements to behave like normal elements.

fieldset {
  -webkit-appearance: none;
  margin: 0;
  padding: 0;
  border: none;
  min-inline-size: 0;
}
legend {
  legend: none;
  padding: 0;
}

The general-purpose proposed specification for an "unprefixed" CSS ‘appearance’ property has been blocked by Mozilla's statement that it is not web-compatible as currently defined, meaning that implementing appearance would break the existing behavior of websites that are currently using CSS appearance in a different way.

We asked the W3C CSS working group for feedback on the above approach, and they had some reservations and will develop an alternative proposal. When there is consensus for how it should work, we will update the specification and tests accordingly.

We had also considered defining new display values for fieldset and legend, but care needs to be taken to preserve web compatibility. There are thousands of pages in HTTP Archive that set ‘display’ to something on fieldset or legend, but browsers typically behave as display: block was set. For example, specifying display: inline on the legend needs to render the same as it does by default.

In parallel, we authored an initial specification for the ‘-webkit-appearance’ property in Mike Taylor's WHATWG Compatibility standard (which reverse engineers web platform wonk into status quo specifications), along with accompanying tests. More work needs to be done for the ‘-webkit-appearance’ (or unprefixed ‘appearance’) to define what the values mean and to reach interoperability on the supported values.

Accessibility Issues

We have started looking into testing accessibility explicitly, to ensure that the elements remain accessible even when they are styled in particular ways.

This work has uncovered ambiguities in the specification, which we have submitted a proposal to address. We have also identified interoperability issues in the accessibility mappings in implementations, which we have reported.

Implementation fixes

Meta bugs have been reported for each browser engine (Gecko, Chromium, WebKit, EdgeHTML), which depend on more specific bugs.

As of September 18 2018, the following issues have been fixed in Gecko:

In Gecko, the bug Implement fieldset/legend in terms of '-webkit-appearance' currently has a work-in-progress patch.

The following issues have been fixed in Chromium:

The WebKit and Edge teams are aware of bugs, and we will follow up with them to track progress.

Conclusion

The fieldset and legend elements are useful to group related form controls, in particular to aid people who use assistive technologies. They are currently not interoperable and are difficult for web developers to style. With our work and proposal, we aim to resolve the problems so that they can be used without restrictions and behave the same in all browser engines, which will benefit browser implementers, web developers, and end users.

(This post is cross-posted on Bocoup's blog.)

Amazon Web ServicesAWS Data Transfer Price Reductions – Up to 34% (Japan) and 28% (Australia)

I’ve got good news for AWS customers who make use of our Asia Pacific (Tokyo) and Asia Pacific (Sydney) Regions. Effective September 1, 2018 we are reducing prices for data transfer from Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), and Amazon CloudFront by up to 34% in Japan and 28% in Australia.

EC2 and S3 Data Transfer
Here are the new prices for data transfer from EC2 and S3 to the Internet:

EC2 & S3 Data Transfer Out to Internet, per GB (old rate → new rate, change):

  • Up to 1 GB / Month: Japan $0.000 → $0.000 (0%); Australia $0.000 → $0.000 (0%)
  • Next 9.999 TB / Month: Japan $0.140 → $0.114 (-19%); Australia $0.140 → $0.114 (-19%)
  • Next 40 TB / Month: Japan $0.135 → $0.089 (-34%); Australia $0.135 → $0.098 (-27%)
  • Next 100 TB / Month: Japan $0.130 → $0.086 (-34%); Australia $0.130 → $0.094 (-28%)
  • Greater than 150 TB / Month: Japan $0.120 → $0.084 (-30%); Australia $0.120 → $0.092 (-23%)

You can consult the EC2 Pricing and S3 Pricing pages for more information.
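
Because the rates are tiered, your actual saving depends on how much you transfer in a month: the first 1 GB is free, the next 9.999 TB is billed at one rate, and so on down the table. Here is a rough, illustrative Python sketch (not an official calculator) that applies the Japan tiers from the table above to estimate a monthly bill under the old and new rates; the 50 TB example volume and the 1 TB = 1,024 GB convention are assumptions of mine:

# Rough estimate of monthly EC2/S3 data transfer cost to the Internet (Japan),
# using the tiered per-GB rates from the table above.
TB = 1024  # GB per TB (an assumption for this back-of-the-envelope sketch)

OLD_TIERS = [(1, 0.000), (9.999 * TB, 0.140), (40 * TB, 0.135), (100 * TB, 0.130), (float("inf"), 0.120)]
NEW_TIERS = [(1, 0.000), (9.999 * TB, 0.114), (40 * TB, 0.089), (100 * TB, 0.086), (float("inf"), 0.084)]

def monthly_cost(gb, tiers):
    cost = 0.0
    for size, rate in tiers:
        used = min(gb, size)  # fill this tier before moving on to the next one
        cost += used * rate
        gb -= used
        if gb <= 0:
            break
    return cost

volume = 50 * TB  # e.g. 50 TB transferred out in one month
print(f"old: ${monthly_cost(volume, OLD_TIERS):,.2f}")
print(f"new: ${monthly_cost(volume, NEW_TIERS):,.2f}")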

CloudFront Data Transfer
Here are the new prices for data transfer from CloudFront edge nodes to the Internet:

CloudFront Data Transfer Out to Internet, per GB (old rate → new rate, change):

  • Up to 10 TB / Month: Japan $0.140 → $0.114 (-19%); Australia $0.140 → $0.114 (-19%)
  • Next 40 TB / Month: Japan $0.135 → $0.089 (-34%); Australia $0.135 → $0.098 (-27%)
  • Next 100 TB / Month: Japan $0.120 → $0.086 (-28%); Australia $0.120 → $0.094 (-22%)
  • Next 350 TB / Month: Japan $0.100 → $0.084 (-16%); Australia $0.100 → $0.092 (-8%)
  • Next 524 TB / Month: Japan $0.080 → $0.080 (0%); Australia $0.095 → $0.090 (-5%)
  • Next 4 PB / Month: Japan $0.070 → $0.070 (0%); Australia $0.090 → $0.085 (-6%)
  • Over 5 PB / Month: Japan $0.060 → $0.060 (0%); Australia $0.085 → $0.080 (-6%)

Visit the CloudFront Pricing page for more information.

We have also reduced the price of data transfer from CloudFront to your Origin. The price for CloudFront Data Transfer to Origin from edge locations in Australia has been reduced 20% to $0.080 per GB. This represents content uploads via POST and PUT.

Things to Know
Here are a couple of interesting things that you should know about AWS and data transfer:

AWS Free Tier – You can use the AWS Free Tier to get started with, and to learn more about, EC2, S3, CloudFront, and many other AWS services. The AWS Getting Started page contains lots of resources to help you with your first project.

Data Transfer from AWS Origins to CloudFront – There is no charge for data transfers from an AWS origin (S3, EC2, Elastic Load Balancing, and so forth) to any CloudFront edge location.

CloudFront Reserved Capacity Pricing – If you routinely use CloudFront to deliver 10 TB or more of content per month, you should investigate our Reserved Capacity pricing. You can receive a significant discount by committing to transfer 10 TB or more of content from a single region, with additional discounts at higher levels of usage. To learn more or to sign up, simply Contact Us.

Jeff;

 

ProgrammableWebAmazon Releases Alexa Gadgets Toolkit Which Includes Self-Service APIs

Amazon has announced the release of the Alexa Gadgets Toolkit (Beta) which includes a number of self-service APIs, technical documentation, and sample code that developers can use to build an Alexa Gadget that incorporates some of the available Gadget Interfaces.

Amazon Web ServicesNew – AWS Storage Gateway Hardware Appliance

AWS Storage Gateway connects your on-premises applications to AWS storage services such as Amazon Simple Storage Service (S3), Amazon Elastic Block Store (EBS), and Amazon Glacier. It runs in your existing virtualized environment and is visible to your applications and your client operating systems as a file share, a local block volume, or a virtual tape library. The resulting hybrid storage model gives our customers the ability to use their AWS Storage Gateways for backup, archiving, disaster recovery, cloud data processing, storage tiering, and migration.

New Hardware Appliance
Today we are making Storage Gateway available as a hardware appliance, adding to the existing support for VMware ESXi, Microsoft Hyper-V, and Amazon EC2. This means that you can now make use of Storage Gateway in situations where you do not have a virtualized environment, server-class hardware or IT staff with the specialized skills that are needed to manage them. You can order appliances from Amazon.com for delivery to branch offices, warehouses, and “outpost” offices that lack dedicated IT resources. Setup (as you will see in a minute) is quick and easy, and gives you access to three storage solutions:

File Gateway – A file interface to Amazon S3, accessible via NFS or SMB. The files are stored as S3 objects, allowing you to make use of specialized S3 features such as lifecycle management and cross-region replication. You can trigger AWS Lambda functions, run Amazon Athena queries, and use Amazon Macie to discover and classify sensitive data.

Volume Gateway – Cloud-backed storage volumes, accessible as local iSCSI volumes. Gateways can be configured to cache frequently accessed data locally, or to store a full copy of all data locally. You can create EBS snapshots of the volumes and use them for disaster recovery or data migration.

Tape Gateway – A cloud-based virtual tape library (VTL), accessible via iSCSI, so you can replace your on-premises tape infrastructure, without changing your backup workflow.

To learn more about each of these solutions, read What is AWS Storage Gateway.

The AWS Storage Gateway Hardware Appliance is based on a specially configured Dell EMC PowerEdge R640 Rack Server that is pre-loaded with AWS Storage Gateway software. It has 2 Intel® Xeon® processors, 128 GB of memory, 6 TB of usable SSD storage for your locally cached data, and redundant power supplies. You can order one from Amazon.com:

If you have an Amazon Business account (they’re free) you can use a purchase order for the transaction. In addition to simplifying deployment, using this standardized configuration helps to assure consistent performance for your local applications.

Hardware Setup
As you know, I like to go hands-on with new AWS products. My colleagues shipped a pre-release appliance to me; I left it under the watchful eye of my CSO (Canine Security Officer) until I was ready to write this post:

I don’t have a server room or a rack, so I set it up on my hobby table for testing:

In addition to the appliance, I also scrounged up a VGA cable, a USB keyboard, a small monitor, and a power adapter (C13 to NEMA 5-15). The adapter is necessary because the cord included with the appliance is intended to plug into the kind of power distribution jack commonly found in a data center. I connected it all up, turned it on, watched it boot up, and then entered a new administrative password.

Following the directions in the documentation, I configured an IPv4 address, using DHCP for convenience:

I captured the IP address for use in the next step, selected Back (the UI is keyboard-driven) and then logged out. This is the only step that takes place on the local console.

Gateway Configuration
At this point I will switch from past to present, and walk you through the configuration process. As directed by the Getting Started Guide, I open the Storage Gateway Console on the same network as the appliance, select the region where I want to create my gateway, and click Get started:

I select File gateway and click Next to proceed:

I select Hardware Appliance as my host platform (I can click Buy on Amazon to purchase one if necessary), and click Next:

Then I enter the IP address of my appliance and click Connect:

I enter a name for my gateway (jbgw1), set the time zone, pick ZFS as my RAID Volume Manager, and click Activate to proceed:

My gateway is activated within a second or two and I can see it in the Hardware section of the console:

At this point I am free to use a console that is not on the same network, so I’ll switch back to my trusty WorkSpace!

Now that my hardware has been activated, I can launch the actual gateway service on it. I select the appliance, and choose Launch Gateway from the Actions menu:

I choose the desired gateway type, enter a name (fgw1) for it, and click Launch gateway:

The gateway will start off in the Offline status, and transition to Online within 3 to 5 minutes. The next step is to allocate local storage by clicking Edit local disks:

Since I am creating a file gateway, all of the local storage is used for caching:

Now I can create a file share on my appliance! I click Create file share, enter the name of an existing S3 bucket, and choose NFS or SMB, then click Next:

I configure a couple of S3 options, request creation of a new IAM role, and click Next:

I review all of my choices and click Create file share:
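
If you would rather script this step than click through the console, the same kind of file share can be created with the Storage Gateway API. Here is a minimal boto3 sketch of that call; the region, gateway ARN, IAM role, and bucket name are placeholders rather than the values from my walkthrough:

import uuid

import boto3

# Minimal sketch: create an NFS file share on an already-activated gateway.
# All ARNs below are placeholders.
storagegateway = boto3.client("storagegateway", region_name="us-west-2")

response = storagegateway.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),  # idempotency token for safe retries
    GatewayARN="arn:aws:storagegateway:us-west-2:123456789012:gateway/sgw-EXAMPLE",
    Role="arn:aws:iam::123456789012:role/StorageGatewayS3Access",
    LocationARN="arn:aws:s3:::my-file-gateway-bucket",  # the existing S3 bucket
    DefaultStorageClass="S3_STANDARD",
)
print(response["FileShareARN"])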

After I create the share I can see the commands that are used to mount it in each client environment:

I mount the share on my Ubuntu desktop (I had to install the nfs-client package first) and copy a bunch of files to it:

Then I visit the S3 bucket and see that the gateway has already uploaded the files:

Finally, I have the option to change the configuration of my appliance. After making sure that all network clients have unmounted the file share, I remove the existing gateway:

And launch a new one:

And there you have it. I installed and configured the appliance, created a file share that was accessible from my on-premises systems, and then copied files to it for replication to the cloud.

Now Available
The Storage Gateway Hardware Appliance is available now and you can purchase one today. Start in the AWS Storage Gateway Console and follow the steps above!

Jeff;

 

 

ProgrammableWebAnother Grindr API Flaw Exposes Users’ Location

Grindr, a social networking app for the LGBTQ community, continues to struggle with API security. Earlier this year, a security flaw in Grindr's public API allowed access to users' location information, even if those users had blocked this functionality in their app settings. The flaw also enabled the creation of an app that allowed Grindr users to find out who had blocked them on the app.

Simon Willison (Django)Letterboxing on Lundy

Last week Natalie and I spent a delightful two days with our friends Hannah and Adam on the beautiful island of Lundy in the Bristol Channel, 12 miles off the coast of North Devon.

I’ve been wanting to visit Lundy for years. The island is managed by the Landmark Trust, a UK charity who look after historic buildings and make them available as holiday rentals.

Our first experience with the Landmark Trust was the original /dev/fort back in 2008 when we rented a Napoleonic Sea Fortress on Alderney in the Channel Islands. Ever since then I’ve been keeping an eye out for opportunities to try out more of their properties: just two weeks ago we stayed in Wortham Manor and used it as a staging ground to help prepare a family wedding.

I cannot recommend the Landmark Trust experience strongly enough: each property is unique and fascinating, they are kept in great condition and if you split the cost of a larger rental among a group of friends the price can be comparable to a youth hostel.

Lundy is their Crown Jewels: they’ve been looking after the island since the 1960s and now offer 23 self-catering properties there.

We took the ferry out on Tuesday morning (a truly horrific two-hour voyage) and back again on Thursday evening (thankfully much calmer). Once on Lundy we stayed in Castle Keep South, a two-bedroom house in the keep of a castle built in the 13th century by Henry III, after he retook the island from the apparently traitorous William de Marisco (who was then hanged, drawn and quartered for good measure - apparently one of the first ever uses of that punishment). Lundy has some very interesting history attached to it.

The island itself is utterly spectacular. Three miles long, half a mile wide, surrounded by craggy cliffs and mostly topped with ferns and bracken. Not a lot of trees except for the more sheltered eastern side. A charming population of sheep, goats, Lundy Ponies and some highland cattle with extremely intimidating horns.

A highland cow that looks like Boris Johnson

(“They’re complete softies. We call that one Boris because he looks like Boris Johnson” - a lady who works in the Tavern)

Lundy has three light houses (two operational, one retired), the aforementioned castle, a charming little village, a church and numerous fascinating ruins and isolated buildings, many of which you can stay in. It has the remains of two crashed WWII German Heinkel He 111 bombers (which we eventually tracked down).

It also hosts what is quite possibly the world’s best Letterboxing trail.

Letterboxing?

Letterboxing is an outdoor activity that is primarily pursued in the UK. It consists of weatherproof boxes hidden in remote locations, usually under a pile of rocks, containing a notebook and a custom stamp. The location of the boxes is provided by a set of clues. Given the clues, your challenge is to find all of the boxes and collect their stamps in your notebook.

On Lundy the clues can be purchased from the village shop.

I had dabbled with Letterboxing a tiny bit in the past but it hadn’t really clicked with me until Natalie (a keen letterboxer) encouraged us to give it a go on Lundy.

It ended up occupying almost every waking moment of our time there, and taking us to every far-flung corner of the island.

There are 28 letterboxes on Lundy. We managed to get 27 of them - and we would have got them all, if the last one hadn’t been located on a beach that was shut off from the public due to grey seals using it to raise their newly born pups! The pups were cute enough that we forgave them.

To give you an idea for how it works, here’s the clue for letterbox 27, “The Ugly”:

The Ugly: From the lookout hut near the flagpole on the east side of Millcombe, walk in the direction of the South Light until it is on a bearing of 130°, Bramble Villa 210° and Millcombe due west. The letterbox is beneath you.

There were letterboxes in lighthouses, letterboxes in ruins, letterboxes perilously close to cliff-faces, letterboxes in church pews, letterboxes in quarries, letterboxes in caves. If you thought that letterboxing was for kids, after scrabbling down more perilous cliff paths than I can count I can assure you it isn't!

On Thursday I clocked up 24,000 steps walking 11 miles and burned 1,643 calories. For comparison, when I ran the half marathon last year I only burned 1,222. These GPS tracks from my Apple Watch give a good impression of how far we ended up walking on our second day of searching.

When we checked the letterboxing log book in the Tavern on Wednesday evening we found most people who attempt to hit all 28 letterboxes spread it out over a much more sensible timeframe. I’m not sure that I would recommend trying to fit it in to just two days, but it’s hard to imagine a better way of adding extra purpose to an exploration of the island.

Should you attempt letterboxing on Lundy (and if you can get out there you really should consider it), a few tips:

  • If in doubt, look for the paths. Most of the harder to find letterboxes were at least located near an obvious worn path.
  • “Earthquake” is a nightmare. The clue really didn’t help us - we ended up performing a vigorous search of most of the area next to (not inside) the earthquake fault.
  • The iPhone compass app is really useful for finding bearings. We didn’t use a regular compass at all.
  • If you get stuck, check for extra clues in the letterboxing log book in the tavern. This helped us crack Earthquake.
  • There’s more than one pond. The quarry pond is very obvious once you find it.
  • Take as many different maps as you can find - many of the clues reference named landmarks that may not appear on the letterboxing clue map. We forgot to grab an offline copy of Lundy in the Google Maps app and regretted it.

If you find yourself in Ilfracombe on the way to or from Lundy, the Ilfracombe Museum is well worth your time. It’s a classic example in the genre of “eccentric collects a wide variety of things, builds a museum for them”. Highlights include a cupboard full of pickled bats and a drawer full of 100-year-old wedding cake samples.

ProgrammableWebRookout Workspaces Brings Teamwork to Production Debugging

Rookout, a company focused on production debugging, has released the first collaborative solution to debug live code on demand, including in production.

ProgrammableWebKong Announces Version 1.0 of its Open-Source Microservice API Gateway

Kong Inc., a microservices API company, has announced Kong 1.0, the latest version of its open-source microservice API gateway. The company is known for creating and supporting the Kong platform. This latest version of the Kong platform includes a variety of features such as support for service mesh deployments and Kubernetes Ingress controller.

Simon Willison (Django)Extended Validation Certificates are Dead

Troy Hunt has been writing about the flaws of Extended Validation certificates for a while. Now iOS 12 is out and Mobile Safari no longer displays their visual indicator in the URL bar (and desktop Safari will stop doing so next week when macOS Mojave ships). EV certificates are being dropped by many of the larger companies that were using them. "This turned out to be a long blog post because every time I sat down to write, more and more evidence on the absolute pointlessness of EV presented itself".

ProgrammableWebIdentify Emotions in Written Text With New Twinword Emotion Analysis API

Twinword, a company providing text analysis APIs, has announced the launch of their Emotion Analysis API. The API is capable of analyzing a paragraph of text and identifying six different emotions including joy, anger, sadness, disgust, surprise and fear.

ProgrammableWebChrome 70 Beta Release Includes Shape Detection API

Google has announced the beta release of Chrome 70. A complete list of Chrome 70 features can be found at ChromeStatus. Two notable features of Chrome 70 include the Shape Detection API, and updates to the Web Authentication API.

ProgrammableWebDaily API RoundUp: Doba, Aidlab, Dream Payments, Wishpond

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWebUrban Airship Releases Beta iOS SDK with New App Notification Practices

This week, customer engagement company Urban Airship released a beta SDK for iOS in advance of the public release of iOS 12.

ProgrammableWebMicrosoft Releases New Microsoft 365 Secure Score API

Microsoft has released a new Microsoft 365 Secure Score API which includes a number of new features and improvements. Among the new updates to the API are integration with Microsoft’s Security Graph API, addition of dual entities, and availability in the Microsoft Graph Explorer.

ProgrammableWebHubSpot Introduces Beta Version of Calling API

HubSpot recently announced a beta release of its new Calling API. The API enables users to utilize third-party telephony provider solutions instead of HubSpot's native calling features. Currently, HubSpot has made the beta version open to Sales Hub Enterprise and Service Hub Enterprise Customers.

ProgrammableWebWeather APIs to Track Hurricane Florence

ProgrammableWeb recently covered the Call for Code Global Initiative, a competition that asks developers to create tools that will aid in natural disaster preparedness and relief. In light of this, and with Hurricane Florence fast approaching the Carolinas, we wanted to make sure developers could find the tools they need to build life-saving apps. Below is a brief summary of some of the related APIs in our directory.

ProgrammableWebeBay’s HeadGaze Controls iPhone X Screen through Head Movements

eBay has introduced new technology that monitors users' head motions to navigate an iPhone's user interface. The technology, HeadGaze, was developed by a team at eBay and resulted in a test app (HeadSwipe) that allows users to swipe deals on eBay through head movements.

ProgrammableWebBloomberg Launches Open Data and Linked Data Website

Bloomberg, a global business and financial data, news, and analytics provider, has launched Enterprise Access Point, an Open Data and Linked Data website for Bloomberg Data License clients. The website provides a single access point for a number of "ready-to-use" datasets.

ProgrammableWebApple Tasks Devs to Prep Apps for New iPhones and Apple Watch

Apple today announced three new iPhones and a new pair of Apple Watches. This hardware will run the latest operating systems from Apple, meaning iOS 12 for the iPhones and watchOS 5 for the Watch. Apple has put together a guide for developers to help them ensure their apps are prepared for the new phones and wearables on day one. 

ProgrammableWebTwitter Announces Immediate Changes to Image Access through Twitter’s Direct Message API

Yesterday, Twitter announced immediate changes to its Direct Message API. The changes specifically address the way images are sent in Direct Messages. Twitter was unable to give developers warning prior to the change "due to the potential sensitivity of the prior state."

ProgrammableWebClearcover Expands its Car Insurance API Platform

Clearcover, a technology-driven car insurance provider, has announced that the company has expanded its car insurance API platform. In addition to the Clearcover Quote API, the platform now features SmartLink and the Clearcover Lead API. Using these APIs, partners can integrate Clearcover platform capabilities into their applications, such as collecting data, displaying prices, and viewing changes to coverage elections.

ProgrammableWebXillio Announces Updates to Its Unified Content Connectivity Platform

Xillio, a provider of content migration and integration software, has released a new version of the Xillio API platform. The new features aim to further Xillio’s goal of delivering a unified and uniform approach to making business content stored in any content repository or file share – be it cloud, on-premises or legacy systems – easily accessible to any application.
 

Jeremy Keith (Adactio)Several people are writing

Anne Gibson writes:

It sounds easy to make writing a habit, but like every other habit that doesn’t involve addictive substances like nicotine or dopamine it’s hard to start and easy to quit.

Alice Bartlett writes:

Anyway, here we are, on my blog, or in your RSS reader. I think I’ll do weaknotes. Some collections of notes. Sometimes. Not very well written probably. Generally written with the urgency of someone who is waiting for a baby to wake up.

Patrick Rhone writes:

Bottom line; please place any idea worth more than 280 characters and the value Twitter places on them (which is zero) on a blog that you own and/or can easily take your important/valuable/life-changing ideas with you and make them easy for others to read and share.

Sara Soueidan writes:

What you write might help someone understand a concept that you may think has been covered enough before. We each have our own unique perspectives and writing styles. One writing style might be more approachable to some, and can therefore help and benefit a large (or even small) number of people in ways you might not expect.

Just write.

Even if only one person learns something from your article, you’ll feel great, and that you’ve contributed — even if just a little bit — to this amazing community that we’re all constantly learning from. And if no one reads your article, then that’s also okay. That voice telling you that people are just sitting somewhere watching our every step and judging us based on the popularity of our writing is a big fat pathetic attention-needing liar.

Laura Kalbag writes:

The web can be used to find common connections with folks you find interesting, and who don’t make you feel like so much of a weirdo. It’d be nice to be able to do this in a safe space that is not being surveilled.

Owning your own content, and publishing to a space you own can break through some of these barriers. Sharing your own weird scraps on your own site makes you easier to find by like-minded folks.

Brendan Dawes writes:

At times I think “will anyone reads this, does anyone care?”, but I always publish it anyway — and that’s for two reasons. First it’s a place for me to find stuff I may have forgotten how to do. Secondly, whilst some of this stuff is seemingly super-niche, if one person finds it helpful out there on the web, then that’s good enough for me. After all I’ve lost count of how many times I’ve read similar posts that have helped me out.

Robin Rendle writes:

My advice after learning from so many helpful people this weekend is this: if you’re thinking of writing something that explains a weird thing you struggled with on the Internet, do it! Don’t worry about the views and likes and Internet hugs. If you’ve struggled with figuring out this thing then be sure to jot it down, even if it’s unedited and it uses too many commas and you don’t like the tone of it.

Khoi Vinh writes:

Maybe you feel more comfortable writing in short, concise bullets than at protracted, grandiose length. Or maybe you feel more at ease with sarcasm and dry wit than with sober, exhaustive argumentation. Or perhaps you prefer to knock out a solitary first draft and never look back rather than polishing and tweaking endlessly. Whatever the approach, if you can do the work to find a genuine passion for writing, what a powerful tool you’ll have.

Amber Wilson writes:

I want to finally begin writing about psychology. A friend of mine shared his opinion that writing about this is probably best left to experts. I tried to tell him I think that people should write about whatever they want. He argued that whatever he could write about psychology has probably already been written about a thousand times. I told him that I’m going to be writer number 1001, and I’m going to write something great that nobody has written before.

Austin Kleon writes:

Maybe I’m weird, but it just feels good. It feels good to reclaim my turf. It feels good to have a spot to think out loud in public where people aren’t spitting and shitting all over the place.

Tim Kadlec writes:

I write to understand and remember. Sometimes that will be interesting to others, often it won’t be.

But it’s going to happen. Here, on my own site.

You write…

ProgrammableWebDaily API RoundUp: timeBuzzer, Reviewshake, Boltron, Synapse

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWebGoogle Photos Library API now Generally Available

Google has introduced the Google Photos Library API. The API aims to give developers a better ability to build photo and video experiences. With connecting, uploading, and sharing functionality built in, Google hopes the API will help developers, from front end to back end, better consume and use photos and videos.

Amazon Web ServicesNew – AWS Systems Manager Session Manager for Shell Access to EC2 Instances

It is a very interesting time to be a corporate IT administrator. On the one hand, developers are talking about (and implementing) an idyllic future built on infrastructure as code, where servers and other resources are treated as cattle. On the other hand, legacy systems still must be treated as pets, set up and maintained by hand or with the aid of limited automation. Many of the customers that I speak with are making the transition to the future at a rapid pace, but need to work in the world that exists today. For example, they still need shell-level access to their servers on occasion. They might need to kill runaway processes, consult server logs, fine-tune configurations, or install temporary patches, all while maintaining a strong security profile. They want to avoid the hassle that comes with running bastion hosts and the risks that arise when opening up inbound SSH ports on the instances.

We’ve already addressed some of the need for shell-level access with the AWS Systems Manager Run Command. This AWS facility gives administrators secure access to EC2 instances. It allows them to create command documents and run them on any desired set of EC2 instances, with support for both Linux and Microsoft Windows. The commands are run asynchronously, with output captured for review.

New Session Manager
Today we are adding a new option for shell-level access. The new Session Manager makes the AWS Systems Manager even more powerful. You can now use a new browser-based interactive shell and a command-line interface (CLI) to manage your Windows and Linux instances. Here’s what you get:

Secure Access – You don’t have to manually set up user accounts, passwords, or SSH keys on the instances and you don’t have to open up any inbound ports. Session Manager communicates with the instances via the SSM Agent across an encrypted tunnel that originates on the instance, and does not require a bastion host.

Access Control – You use IAM policies and users to control access to your instances, and don’t need to distribute SSH keys. You can limit access to a desired time/maintenance window by using IAM’s Date Condition Operators.

Auditability – Commands and responses can be logged to Amazon CloudWatch and to an S3 bucket. You can arrange to receive an SNS notification when a new session is started.

Interactivity – Commands are executed synchronously in a fully interactive bash (Linux) or PowerShell (Windows) environment.

Programming and Scripting – In addition to the console access that I will show you in a moment, you can also initiate sessions from the command line (aws ssm ...) or via the Session Manager APIs.
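
To give a flavour of the programmatic option, here is a rough boto3 sketch that starts and then ends a session for a given instance (the instance ID and region are placeholders); the interactive terminal itself is normally handled for you by the console or the AWS CLI:

import boto3

# Rough sketch: start a Session Manager session via the API. The call returns a
# session ID plus a streaming URL and token that an interactive client uses to
# open the actual shell; the instance ID below is a placeholder.
ssm = boto3.client("ssm", region_name="us-east-1")

session = ssm.start_session(Target="i-0123456789abcdef0")
print(session["SessionId"], session["StreamUrl"])

# ...and end it when you are done:
ssm.terminate_session(SessionId=session["SessionId"])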

The SSM Agent running on the EC2 instances must be able to connect to Session Manager’s public endpoint. You can also set up a PrivateLink connection to allow instances running in private VPCs (without Internet access or a public IP address) to connect to Session Manager.

Session Manager in Action
In order to use Session Manager to access my EC2 instances, the instances must be running the latest version (2.3.12 or above) of the SSM Agent. The instance role for the instances must reference a policy that allows access to the appropriate services; you can create your own or use AmazonEC2RoleForSSM. Here are my EC2 instances (sk1 and sk2 are running Amazon Linux; sk3-win and sk4-win are running Microsoft Windows):

Before I run my first command, I open AWS Systems Manager and click Preferences. Since I want to log my commands, I enter the name of my S3 bucket and my CloudWatch log group. If I enter either or both values, the instance policy must also grant access to them:

I’m ready to roll! I click Sessions, see that I have no active sessions, and click Start session to move ahead:

I select a Linux instance (sk1), and click Start session again:

The session opens up immediately:

I can do the same for one of my Windows instances:

The log streams are visible in CloudWatch:

Each stream contains the content of a single session:

In the Works
As usual, we have some additional features in the works for Session Manager. Here’s a sneak peek:

SSH Client – You will be able to create SSH sessions atop Session Manager without opening up any inbound ports.

On-Premises Access – We plan to give you the ability to access your on-premises instances (which must be running the SSM Agent) via Session Manager.

Available Now
Session Manager is available in all AWS regions (including AWS GovCloud) at no extra charge.

Jeff;

Amazon Web ServicesAWS – Ready for the Next Storm

As I have shared in the past (AWS – Ready to Weather the Storm) we take extensive precautions to help ensure that AWS will remain operational in the face of hurricanes, storms, and other natural disasters. With Hurricane Florence heading for the east coast of the United States, I thought it would be a good time to review and update some of the most important points from that post. Here’s what I want you to know:

Availability Zones – We replicate critical components of AWS across multiple Availability Zones to ensure high availability. Common points of failure, such as generators, UPS units, and air conditioning, are not shared across Availability Zones. Electrical power systems are designed to be fully redundant and can be maintained without impacting operations. The AWS Well-Architected Framework provides guidance on the proper use of multiple Availability Zones to build applications that are reliable and resilient, as does the Building Fault-Tolerant Applications on AWS whitepaper.

Contingency Planning – We maintain contingency plans and regularly rehearse our responses. We maintain a series of incident response plans and update them regularly to incorporate lessons learned and to prepare for emerging threats. In the days leading up to a known event such as a hurricane, we increase fuel supplies, update staffing plans, and add provisions to ensure the health and safety of our support teams.

Data Transfer – With a storage capacity of 100 TB per device, AWS Snowball Edge appliances can be used to quickly move large amounts of data to the cloud (see the back-of-the-envelope sketch below).

Disaster Response – When call volumes spike before, during, or after a disaster, Amazon Connect can supplement your existing call center resources and allow you to provide a better response.

Support – You can contact AWS Support if you are in need of assistance with any of these issues.
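
To put the Snowball Edge option into perspective, here is a quick back-of-the-envelope sketch comparing a single 100 TB device against pushing the same amount of data over a sustained 1 Gbps link (the link speed is purely an illustrative assumption):

# Back-of-the-envelope: how long would 100 TB take over a 1 Gbps link?
# The 1 Gbps figure is an illustrative assumption, not an AWS service limit.
capacity_tb = 100
link_gbps = 1.0

bits_to_move = capacity_tb * 1e12 * 8       # decimal terabytes -> bits
seconds = bits_to_move / (link_gbps * 1e9)  # at full, sustained line rate
print(f"{seconds / 86400:.1f} days")        # roughly 9.3 days per device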

Jeff;

 

 

Jeremy Keith (Adactio)The top four web performance challenges

Danielle and I have been doing some front-end consultancy for a local client recently.

We’ve both been enjoying it a lot—it’s exhausting but rewarding work. So if you’d like us to come in and spend a few days with your company’s dev team, please get in touch.

I’ve certainly enjoyed the opportunity to watch Danielle in action, leading a workshop on refactoring React components in a pattern library. She’s incredibly knowledgable in that area.

I’m clueless when it comes to React, but I really enjoy getting down to the nitty-gritty of browser features—HTML, CSS, and JavaScript APIs. Our skillsets complement one another nicely.

This recent work was what prompted my thoughts around the principles of robustness and least power. We spent a day evaluating a continuum of related front-end concerns: semantics, accessibility, performance, and SEO.

When it came to performance, a lot of the work was around figuring out the most suitable metric to prioritise:

  • time to first byte,
  • time to first render,
  • time to first meaningful paint, or
  • time to first meaningful interaction.

And that doesn’t even cover the more easily-measurable numbers like:

  • overall file size,
  • number of requests, or
  • PageSpeed Insights score.

One outcome was to realise that there’s a tendency (in performance, accessibility, or SEO) to focus on what’s easily measurable, not because it’s necessarily what matters, but precisely because it is easy to measure.
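
As an illustration of just how low the bar is for the “easy” numbers, here is a rough Python sketch (using the requests library, with a placeholder URL) that samples two of them for a single page: an approximation of time to first byte, and the transferred body size. It says nothing about rendering, meaningful paint, or interaction, which is exactly the point:

import requests

# Sample two of the "easy" metrics for one URL: approximate time to first byte
# (time until the response headers arrive) and the size of the response body.
url = "https://example.com/"  # placeholder
response = requests.get(url)

ttfb_ms = response.elapsed.total_seconds() * 1000
size_kb = len(response.content) / 1024

print(f"~TTFB: {ttfb_ms:.0f} ms, body: {size_kb:.1f} KB")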

Then we got down to some nuts’n’bolts technology decisions. I took a step back and looked at the state of performance across the web. I thought it would be fun to rank the most troublesome technologies in order of tricksiness. I came up with a top four list.

Here we go, counting down from four to the number one spot…

4. Web fonts

Coming in at number four, it’s web fonts. Sometimes it’s the combined weight of multiple font files that’s the problem, but more often than not, it’s the perceived performance that suffers (mostly because of when the web fonts appear).

Fortunately there’s a straightforward question to ask in this situation: WWZD—What Would Zach Do?

3. Images

At the number three spot, it’s images. There are more of them and they just seem to be getting bigger all the time. And yet, we have more tools at our disposal than ever—better file formats, and excellent browser support for responsive images. Heck, we’re even getting the ability to lazy load images in HTML now.

So, as with web fonts, it feels like the impact of images on performance can be handled, as long as you give them some time and attention.

2. Our JavaScript

Just missing out on making the top spot is the JavaScript that we send down the pipe to our long-suffering users. There’s nothing wrong with the code itself—I’m sure it’s very good. There’s just too damn much of it. And that’s a real performance bottleneck, especially on mobile.

So stop sending so much JavaScript—a solution as simple as Monty Python’s instructions for playing the flute.

1. Other people’s JavaScript

At number one with a bullet, it’s all the crap that someone else tells us to put on our websites. Analytics. Ads. Trackers. Beacons. “It’s just one little script”, they say. And then that one little script calls in another, and another, and another.

It’s so disheartening when you’ve devoted your time and energy into your web font loading strategy, and optimising your images, and unbundling your JavaScript …only to have someone else’s JavaScript just shit all over your nice performance budget.

Here’s the really annoying thing: when I go to performance conferences, or participate in performance discussions, you know who’s nowhere to be found? The people making those third-party scripts.

The narrative around front-end performance is that it’s up to us developers to take responsibility for how our websites perform. But by far the biggest performance impact comes from third-party scripts.

There is a solution to this, but it’s not a technical one. We could refuse to add overweight (and in many cases, unethical) third-party scripts to the sites we build.

I have many, many issues with Google’s AMP project, but I completely acknowledge that it solves a political problem:

No external JavaScript is allowed in an AMP HTML document. This covers third-party libraries, advertising and tracking scripts. This is A-okay with me.

The reasons given for this ban are related to performance and I agree with them completely. Big bloated JavaScript libraries are one of the biggest performance killers on the web.

But how can we take that lesson from AMP and apply it to all our web pages? If we simply refuse to be the one to add those third-party scripts, we get fired, and somebody else comes in who is willing to poison web pages with third-party scripts. There’s nothing to stop companies doing that.

Unless…

Suppose we were to all make a pact that we would stand in solidarity with any of our fellow developers in that sort of situation. A sort of joining-together. A union, if you will.

There is power in a factory, power in the land, power in the hands of the worker, but it all amounts to nothing if together we don’t stand.

There is power in a union.

Matt Webb (Schulze & Webb)What has the EU ever done for us? Some thoughts on a new mapping project

There’s a new project being shared round today that maps EU-funded projects in the UK: here it is. It’s easy to use and very interesting to find out, for example, what projects have been funded in my home town.

Kudos to the folks who built it. Creating sites like this is hard work, and a vital part of the national discussion about the EU.

So for transparency (which is a good thing) I’m hugely in favour of this.

But in terms of what I think about the EU funding itself, I’m not so sure. The strapline for the site is What has the EU done for your area? and while in one sense that’s true, it makes me think: but this was the UK’s money to begin with, right?

Looking at the breakdown of the EU membership fee, the UK paid £13.1bn into the EU budget in 2016, and received back £5.5bn in various forms of funding (£4.5bn channeled through the public sector, and approx. £1bn direct to the private sector). So the first thing the mapping project highlights is that the UK pays more to the EU than it "gets back."

That purely budgetary framing makes me uncomfortable. Should we really be looking at what we "get back" from the EU in terms of project funding? How do we value reduced friction to trade (and the associated economic boost), the reduction in defence and diplomatic spending (by being part of a bloc), the cultural benefits of having a stronger voice on the world stage, and so on?

What this mapping project also highlights is, well, should the EU be choosing how this money is spent at all? When money is spent directly by the UK government there is a certain amount of democratic accountability. I know who I can complain to, I know how I can try to influence the spending criteria, and I can campaign to vote out the people ultimately responsible if I really disagree.

But for EU spending? It’s more abstract. When I see this map of EU spending in the UK, what it makes me ask is why the UK government isn’t in charge of it. That same discomfort was, of course, a reason why people voted for Brexit in the 2016 referendum. Although if the UK government controlled the spending, that doesn’t imply that it could all go to the NHS instead--regardless of what was written on the side of a bus--as many of these projects are vital for our agriculture, regional growth, jobs, and industrial strategy, and you wouldn’t want to stop them. So leaving the EU wouldn’t mean we’d get this money "back" in the national budget by any means.

Now, on balance, I believe Brexit is a bad idea. The UK’s contribution to the EU is only 2% of our total national budget and, as I said, the "non-spending" benefits of EU membership matter significantly, and many of these projects would be funded anyhow.

But I’m not an unequivocal booster of everything the EU does. This mapping project is fixing a huge lack of transparency. The level of democratic accountability worries me: how do we know that all of these projects are within the mandate that we’ve given to the EU under the treaties, and how can we influence the allocations? I happen to believe that these problems are addressable as a member of the EU. (And to be honest, the same concerns could be levelled at the UK government about the project grants we do control.)

So while I’m pleased (and relieved) that this spending is broadly sensible, I’m not sure it should be waved around as "hey look at all this awesome stuff we’re getting from the EU." I don’t think that’s the case it makes at all.

--

My brand of weight-it-all-up ambivalence doesn’t play particularly well in this era of hyper-shareable Facebook posts and 24 hour news cycle sound bites.

However it’s by taking into account evidence like this that I’m able to say with increasing confidence that Brexit doesn’t add up. I look at what’s going on, consider alternatives, and... well, the Brexit options currently on the table look terrible, and the impending exit day of 2019 (which occurs well ahead of any trade deal being done) is so close with so little certainty around which to prepare that I fear a lot of damage and hurt will result.

It’s also this kind of easy-to-read evidence that we were lacking in the 2016 referendum, which is why I don’t think anyone (however they voted) really knew enough to make an informed decision, and why it’s perfectly ok to revisit the issue now and have a re-think before it’s too late. Given the circumstances it’s ok to change your mind.

That mapping project again: myeu.uk. More like this please!

ProgrammableWebMux Launches API for Live Streaming Video to Large Audiences

Mux, a streaming video technology company, has launched its Live Streaming API which developers can use to build applications that support scalable live video streaming on mobile, desktop, and TV. The Mux Video Live Streaming API makes it possible for developers to integrate Mux live streaming video capabilities with applications by adding only a few lines of code. Once a live stream is provisioned with Mux, it is available for streaming immediately as opposed to having a 3-5-minute delay.

Jeremy Keith (Adactio)Robustness and least power

There’s a great article by Steven Garrity over on A List Apart called Design with Difficult Data. It runs through the advantages of using unusual content to stress-test interfaces, referencing Postel’s Law, AKA the robustness principle:

Be conservative in what you send, be liberal in what you accept.

Even though the robustness principle was formulated for packet-switching, I see it at work in all sorts of disciplines, including design. A good example is in best practices for designing forms:

Every field you ask users to fill out requires some effort. The more effort is needed to fill out a form, the less likely users will complete the form. That’s why the foundational rule of form design is shorter is better — get rid of all inessential fields.

In other words, be conservative in the number of form fields you send to users. But then, when it comes to users filling in those fields:

It’s very common for a few variations of an answer to a question to be possible; for example, when a form asks users to provide information about their state, and a user responds by typing their state’s abbreviation instead of the full name (for example, CA instead of California). The form should accept both formats, and it’s the developer’s job to convert the data into a consistent format.

In other words, be liberal in what you accept from users.
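
As a tiny, illustrative sketch of that liberal-in-what-you-accept spirit (the lookup table here is deliberately small and entirely my own), server-side normalisation might look something like this:

from typing import Optional

# Accept a few reasonable variations of a state name and normalise them to one
# canonical value; a real form would use a complete table of states.
STATES = {
    "ca": "California",
    "california": "California",
    "ny": "New York",
    "new york": "New York",
}

def normalise_state(raw: str) -> Optional[str]:
    """Return the canonical state name, or None if it isn't recognised."""
    return STATES.get(raw.strip().lower())

assert normalise_state("  CA ") == "California"
assert normalise_state("california") == "California"
assert normalise_state("Narnia") is None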

I find the robustness principle to be an immensely powerful way of figuring out how to approach many design problems. When it comes to figuring out what specific tools or technologies to use, there’s an equally useful principle: the rule of least power:

Choose the least powerful language suitable for a given purpose.

On the face of it, this sounds counter-intuitive; why forego a powerful technology in favour of something less powerful?

Well, power comes with a price. Powerful technologies tend to be more complex, which means they can be trickier to use and trickier to swap out later.

Take the front-end stack, for example: HTML, CSS, and JavaScript. HTML and CSS are declarative, so you don’t get as much precise control as you get with an imperative language like JavaScript. But JavaScript comes with a steeper learning curve and a stricter error-handling model than HTML or CSS.

As a general rule, it’s always worth asking if you can accomplish something with a less powerful technology:

In the web front-end stack — HTML, CSS, JS, and ARIA — if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.

  • Instead of using JavaScript to do animation, see if you can do it in CSS instead.
  • Instead of using JavaScript to do simple client-side form validation, try to use HTML input types and attributes like required.
  • Instead of using ARIA to give a certain role value to a div or span, try to use a more suitable HTML element instead.

It sounds a lot like the KISS principle: Keep It Simple, Stupid. But whereas the KISS principle can be applied within a specific technology—like keeping your CSS manageable—the rule of least power is all about evaluating technology; choosing the most appropriate technology for the task at hand.

There are some associated principles, like YAGNI: You Ain’t Gonna Need It. That helps you avoid picking a technology that’s too powerful for your current needs, but which might be suitable in the future: premature optimisation. Or, as Rachel put it, stop solving problems you don’t yet have:

So make sure every bit of code added to your project is there for a reason you can explain, not just because it is part of some standard toolkit or boilerplate.

There’s no shortage of principles, laws, and rules out there, and I find many of them very useful, but if I had to pick just two that are particularly applicable to my work, they would be the robustness principle and the rule of least power.

After all, if they’re good enough for Tim Berners-Lee…

Amazon Web ServicesLearn about AWS Services and Solutions – September AWS Online Tech Talks

AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. Join us this month to learn about AWS services and solutions. We’ll have experts online to help answer any questions you may have.

Featured this month is our first ever fireside chat discussion. Join Debanjan Saha, General Manager of Amazon Aurora and Amazon RDS, to learn how customers are using our relational database services and leveraging database innovations.

Register today!

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

Compute

September 24, 2018 | 09:00 AM – 09:45 AM PT – Accelerating Product Development with HPC on AWS – Learn how you can accelerate product development by harnessing the power of high performance computing on AWS.

September 26, 2018 | 09:00 AM – 10:00 AM PT – Introducing New Amazon EC2 T3 Instances – General Purpose Burstable Instances – Learn about new Amazon EC2 T3 instance types and how they can be used for various use cases to lower infrastructure costs.

September 27, 2018 | 09:00 AM – 09:45 AM PT – Hybrid Cloud Customer Use Cases on AWS: Part 2 – Learn about popular hybrid cloud customer use cases on AWS.

Containers

September 19, 2018 | 11:00 AM – 11:45 AM PT – How Talroo Used AWS Fargate to Improve their Application Scaling – Learn how Talroo, a data-driven solution for talent and jobs, migrated their applications to AWS Fargate so they can run their application without worrying about managing infrastructure.

Data Lakes & Analytics

September 17, 2018 | 11:00 AM – 11:45 AM PT – Secure Your Amazon Elasticsearch Service Domain – Learn about the multi-level security controls provided by Amazon Elasticsearch Service (Amazon ES) and how to set the security for your Amazon ES domain to prevent unauthorized data access.

September 20, 2018 | 11:00 AM – 12:00 PM PT – New Innovations from Amazon Kinesis for Real-Time Analytics – Learn about the new innovations from Amazon Kinesis for real-time analytics.

Databases

September 17, 2018 | 01:00 PM – 02:00 PM PT – Applied Live Migration to DynamoDB from Cassandra – Learn how to migrate a live Cassandra-based application to DynamoDB.

September 18, 2018 | 11:00 AM – 11:45 AM PT – Scaling Your Redis Workloads with Redis Cluster – Learn how Redis cluster with Amazon ElastiCache provides scalability and availability for enterprise workloads.

Featured: September 20, 2018 | 09:00 AM – 09:45 AM PT – Fireside Chat: Relational Database Innovation at AWS – Join our fireside chat with Debanjan Saha, GM, Amazon Aurora and Amazon RDS, to learn how customers are using our relational database services and leveraging database innovations.

DevOps

September 19, 2018 | 09:00 AM – 10:00 AM PT – Serverless Application Debugging and Delivery – Learn how to bring traditional best practices to serverless application debugging and delivery.

Enterprise & Hybrid

September 26, 2018 | 11:00 AM – 12:00 PM PT – Transforming Product Development with the Cloud – Learn how to transform your development practices with the cloud.

September 27, 2018 | 11:00 AM – 12:00 PM PT – Fueling High Performance Computing (HPC) on AWS with GPUs – Learn how you can accelerate time-to-results for your HPC applications by harnessing the power of GPU-based compute instances on AWS.

IoT

September 24, 2018 | 01:00 PM – 01:45 PM PT – Manage Security of Your IoT Devices with AWS IoT Device Defender – Learn how AWS IoT Device Defender can help you manage the security of IoT devices.

September 26, 2018 | 01:00 PM – 02:00 PM PT – Over-the-Air Updates with Amazon FreeRTOS – Learn how to execute over-the-air updates on connected microcontroller-based devices with Amazon FreeRTOS.

Machine Learning

September 17, 2018 | 09:00 AM – 09:45 AM PT – Build Intelligent Applications with Machine Learning on AWS – Learn how to accelerate development of AI applications using machine learning on AWS.

September 18, 2018 | 09:00 AM – 09:45 AM PT – How to Integrate Natural Language Processing and Elasticsearch for Better Analytics – Learn how to process, analyze and visualize data by pairing Amazon Comprehend with Amazon Elasticsearch.

September 20, 2018 | 01:00 PM – 01:45 PM PT – Build, Train and Deploy Machine Learning Models on AWS with Amazon SageMaker – Dive deep into building, training, & deploying machine learning models quickly and easily using Amazon SageMaker.

Management Tools

September 19, 2018 | 01:00 PM – 02:00 PM PT – Automated Windows and Linux Patching – Learn how AWS Systems Manager can help reduce data breach risks across your environment through automated patching.

re:Invent

September 12, 2018 | 08:00 AM – 08:30 AM PT – Episode 5: Deep Dive with Our Community Heroes and Jeff Barr – Get the insider secrets with top recommendations and tips for re:Invent 2018 from AWS community experts.

Security, Identity, & Compliance

September 24, 2018 | 11:00 AM – 12:00 PM PT – Enhanced Security Analytics Using AWS WAF Full Logging – Learn how to use AWS WAF security incidence logs to detect threats.

September 27, 2018 | 01:00 PM – 02:00 PM PT – Threat Response Scenarios Using Amazon GuardDuty – Discover methods for operationalizing your threat detection using Amazon GuardDuty.

Serverless

September 18, 2018 | 01:00 PM – 02:00 PM PT – Best Practices for Building Enterprise Grade APIs with Amazon API Gateway – Learn best practices for building and operating enterprise-grade APIs with Amazon API Gateway.

Storage

September 25, 2018 | 09:00 AM – 10:00 AM PT – Ditch Your NAS! Move to Amazon EFS – Learn how to move your on-premises file storage to Amazon EFS.

September 25, 2018 | 11:00 AM – 12:00 PM PT – Deep Dive on Amazon Elastic File System (EFS): Scalable, Reliable, and Elastic File Storage for the AWS Cloud – Get live demos and learn tips & tricks for optimizing your file storage on EFS.

September 25, 2018 | 01:00 PM – 01:45 PM PT – Integrating File Services to Power Your Media & Entertainment Workloads – Learn how AWS file services deliver high performance shared file storage for media & entertainment workflows.

ProgrammableWeb​Call for Code Global Challenge Asks Devs to Help with Disaster Relief

The Call for Code Global Initiative, which aims to use developers' skills to "drive positive and long-lasting change across the world with their code", is running a challenge that asks developers to build solutions that aid in natural disaster preparedness and relief.

Amazon Web ServicesAmazon AppStream 2.0 – New Application Settings Persistence and a Quick Launch Recap

Amazon AppStream 2.0 gives you access to Windows desktop applications through a web browser. Thousands of AWS customers, including SOLIDWORKS, Siemens, and MathWorks are already using AppStream 2.0 to deliver applications to their customers.

Today I would like to bring you up to date on some recent additions to AppStream 2.0, wrapping up with a closer look at a brand new feature that will automatically save application customizations (preferences, bookmarks, toolbar settings, connection profiles, and the like) and Windows settings between your sessions.

The recent additions to AppStream 2.0 can be divided into four categories:

User Enhancements – Support for time zone, locale, and language input, better copy/paste, and the new application persistence feature.

Admin Improvements – The ability to configure default application settings, control access to some system resources, copy images across AWS regions, establish custom branding, and share images between AWS accounts.

Storage Integration – Support for Microsoft OneDrive for Business and Google Drive for G Suite.

Regional Expansion – AppStream 2.0 recently became available in three additional AWS regions in Europe and Asia.

Let’s take a look at each item and then at application settings persistence….

User Enhancements
In June we gave AppStream 2.0 users control over the time zone, locale, and input methods. Once set, the values apply to future sessions in the same AWS region. This feature (formally known as Regional Settings) must be enabled by the AppStream 2.0 administrator as detailed in Enable Regional Settings for Your AppStream 2.0 Users.

In July we added keyboard shortcuts for copy/paste between your local device and your AppStream 2.0 sessions when using Google Chrome.

Admin Improvements
In February we gave AppStream 2.0 administrators the ability to copy AppStream 2.0 images to other AWS regions, simplifying the process of creating and managing global application deployments (to learn more, visit Tag and Copy an Image).
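
If you script your deployments, a cross-region copy is a single API call; here is a minimal boto3 sketch, where the image names and regions are placeholders rather than values from this post:

import boto3

# Call CopyImage from the source region; the destination region is a parameter.
# Image names and regions below are illustrative placeholders.
appstream = boto3.client("appstream", region_name="us-east-1")

appstream.copy_image(
    SourceImageName="my-base-image",
    DestinationImageName="my-base-image",
    DestinationRegion="eu-west-1",
    DestinationImageDescription="Copy for deployments in Europe",
)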

In March we gave AppStream 2.0 administrators additional control over the user experience, including the ability to customize the logo, color, text, and help links in the application catalog page. Read Add Your Custom Branding to AppStream 2.0 to learn more.

In May we added administrative control over the data that moves to and from the AppStream 2.0 streaming sessions. AppStream 2.0 administrators can control access to file upload, file download, printing, and copy/paste to and from local applications. Read Create AppStream 2.0 Fleets and Stacks to learn more.
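
These controls correspond to the stack's user settings; a rough boto3 sketch follows, where the stack name and the particular actions toggled are illustrative:

import boto3

appstream = boto3.client("appstream")

# Disable file transfer and local printing on an existing stack; clipboard
# actions can be restricted in the same way.
appstream.update_stack(
    Name="my-stack",
    UserSettings=[
        {"Action": "FILE_UPLOAD", "Permission": "DISABLED"},
        {"Action": "FILE_DOWNLOAD", "Permission": "DISABLED"},
        {"Action": "PRINTING_TO_LOCAL_DEVICE", "Permission": "DISABLED"},
    ],
)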

In June we gave AppStream 2.0 administrators the power to configure default application settings (connection profiles, browser settings, and plugins) on behalf of their users. Read Enabling Default OS and Application Settings for Your Users to learn more.

In July we gave AppStream 2.0 administrators the ability to share AppStream 2.0 images between AWS accounts for use in the same AWS Region. To learn more, take a look at the UpdateImagePermissions API and the update-image-permissions command.
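
As a sketch of what sharing an image with another account might look like with boto3 (the image name and account ID are placeholders):

import boto3

appstream = boto3.client("appstream")

# Share an image with another AWS account in the same region, allowing it
# to be used for both fleets and image builders.
appstream.update_image_permissions(
    Name="my-base-image",
    SharedAccountId="123456789012",
    ImagePermissions={"allowFleet": True, "allowImageBuilder": True},
)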

Storage Integration
Both of these launches provide AppStream 2.0 users with additional storage options for the documents that they access, edit, and create:

Launched in June, the Google Drive for G Suite support allows users to access files on a Google Drive from inside of their applications. Read Google Drive for G Suite is now enabled on Amazon AppStream 2.0 to learn how to enable this feature for an AppStream application stack.

Similarly, the Microsoft OneDrive for Business support that was launched in July allows users to access files stored in OneDrive for Business accounts. Read Amazon AppStream 2.0 adds support for OneDrive for Business to learn how to set this up.
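
Both connectors are attached to a stack as storage connectors; here is a rough boto3 sketch, assuming an existing stack and placeholder domain names:

import boto3

appstream = boto3.client("appstream")

# Enable home folders, Google Drive for G Suite, and OneDrive for Business
# on an existing stack; the stack name and domains are placeholders.
appstream.update_stack(
    Name="my-stack",
    StorageConnectors=[
        {"ConnectorType": "HOMEFOLDERS"},
        {"ConnectorType": "GOOGLE_DRIVE", "Domains": ["example.com"]},
        {"ConnectorType": "ONE_DRIVE", "Domains": ["example.onmicrosoft.com"]},
    ],
)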


Regional Expansion
In January we made AppStream 2.0 available in the Asia Pacific (Singapore) and Asia Pacific (Sydney) Regions.

In March we made AppStream 2.0 available in the Europe (Frankfurt) Region.

See the AWS Region Table for the full list of regions where AppStream 2.0 is available.

Application Settings Persistence
With the past out of the way, let’s take a look at today’s new feature, Application Settings Persistence!

As you can see from the launch recap above, AppStream 2.0 already saves several important application and system settings between sessions. Today we are adding support for the elements that make up the Windows Roaming Profile. This includes:

Windows Profile – The contents of C:\users\user_name\appdata.

Windows Profile Folder – The contents of C:\users\user_name.

Windows Registry – The tree of registry entries rooted at HKEY_CURRENT_USER.

This feature must be enabled by the AppStream 2.0 administrator. The contents of the Windows Roaming Profile are stored in an S3 bucket in the administrator’s AWS account, with an initial storage allowance (easily increased) of up to 1 GB per user. The S3 bucket is configured for Server Side Encryption with keys managed by S3. Data moves between AppStream 2.0 and S3 across a connection that is protected by SSL. The administrator can choose to enable S3 versioning to allow recovery from a corrupted profile.
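
Versioning is turned on for the bucket itself, just like any other S3 bucket; for example, with boto3 (the bucket name below is a placeholder for the bucket that AppStream 2.0 creates in the administrator's account):

import boto3

s3 = boto3.client("s3")

# Enable versioning on the settings bucket so a corrupted profile can be
# rolled back to an earlier version. The bucket name is a placeholder.
s3.put_bucket_versioning(
    Bucket="my-appstream-settings-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)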

Application Settings Persistence can be enabled for an existing stack, as long as it is running the latest version of the AppStream 2.0 Agent, and it can also be enabled when creating a new stack.

Putting multiple stacks in the same settings group allows them to share a common set of user settings. The settings are applied when the user logs in, and then persisted back to S3 when they log out.
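
At the API level, a sketch of creating a stack with persistence enabled might look like this with boto3 (the stack and settings-group names are placeholders):

import boto3

appstream = boto3.client("appstream")

# Create a stack with Application Settings Persistence enabled. Stacks that
# share the same SettingsGroup share their persisted user settings.
appstream.create_stack(
    Name="my-stack",
    ApplicationSettings={
        "Enabled": True,
        "SettingsGroup": "shared-settings",
    },
)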

This feature is available now and AppStream 2.0 administrators can enable it today. The only cost is for the S3 storage consumed by the stored profiles, charged at the usual S3 prices.

Jeff;

PS – Follow the AWS Desktop and Application Streaming Blog to make sure that you know about new features as quickly as possible.


ProgrammableWebNACHA Launches Afinis Interoperability Standards for Financial Services

Afinis is a membership-based standards organization that brings together diverse collaborators – through innovative and agile processes – to develop implementable, interoperable and portable standards across operating environments and platforms.
