Jeremy Keith (Adactio) – Regression toward being mean

I highly recommend Remy’s State Of The Gap post—it’s ace. He summarises it like this:

I strongly believe in the concepts behind progressive web apps and even though native hacks (Flash, PhoneGap, etc) will always be ahead, the web, always gets there. Now, today, is an incredibly exciting time to be building on the web.

I agree completely. That might sound odd after I wrote about Regressive Web Apps, but it’s precisely because I’m so excited by the technologies behind progressive web apps that I think it’s vital that we do them justice. As Remy says:

Without HTTPS and without service workers, you can’t add to homescreen. This is an intentionally high bar of entry with damn good reasons.

When the user installs a PWA, it has to work. It’s our job as web developers to provide the most excellent experience for our users.

It has to work.

That’s why I don’t agree with Dion’s metrics for what makes a progressive web app:

If you deliver an experience that only works on mobile is that a PWA? Yes.

I think it’s important to keep quality control high. Being responsive is literally the first item in the list of qualities that help define what a progressive web app is. That’s why I wrote about “regressive” web apps: sites that are supposed to showcase what we can do but instead take a step backwards into the bad old days of separate sites for separate device classes: washingtonpost.com/pwa, m.flipkart.com, lite.5milesapp.com, app.babe.co.id, m.aliexpress.com.

A lot of people on Twitter misinterpreted my post as saying “the current crop of progressive web apps are missing the mark, therefore progressive web apps suck”. What I was hoping to get across was “the current crop of progressive web apps are missing the mark, so let’s make better ones!”

Now, I totally understand that many of these examples are a first stab, a way of testing the waters. I absolutely want to encourage these first attempts and push them further. But I don’t think that waiving the qualifications for progressive web apps helps achieve that. As much as I want to acknowledge the hard work that people have done to create those device-specific examples, I don’t think we should settle for anything less than high-quality progressive web apps that are as much about the web as they are about apps.

Simply put, in this instance, I don’t think good intentions are enough.

Which brings me to the second part of Regressive Web Apps, the bit about Chrome refusing to show the “add to home screen” prompt for sites that want to have their URL still visible when launched from the home screen.

Alex was upset by what I wrote:

if you think the URL is going to get killed on my watch then you aren’t paying any attention whatsoever.

so, your choices are to think that I have a secret plan to kill URLs, or conclude I’m still Team Web.

I’m galled that anyone, particularly you @adactio, would think the former…but contrarianism uber alles?

I am very, very sorry that I upset Alex like this.

But I stand by my criticism of the actions of the Chrome team. Because good intentions are not enough.

I know that Alex is a huge fan of URLs, and of the web. Heck, just about everybody I know that works on Chrome in some capacity is working for the web first and foremost: Alex, Jake, various and sundry Pauls. But that doesn’t mean I’m going to stay quiet when I see the Chrome team do something I think is bad for the web. If anything, it’s precisely because I hold them to a high standard that I’m going to sound the alarm when I see what I consider to be missteps.

I think that good people can make bad decisions with the best of intentions. Usually it involves long-term thinking—something I think is very important. “The ends justify the means” is a way of thinking that can create a lot of immediate pain, even if it means a better future overall. Balancing those concerns is front and centre of the Chromium project:

As browser implementers, we find that there’s often tension between (a) moving the web forward and (b) preserving compatibility. On one hand, the web platform API surface must evolve to stay relevant. On the other hand, the web’s primary strength is its reach, which is largely a function of interoperability.

For example, when Alex talks of the Web Component era as though it were an inevitability, I get nervous. Not for myself, but for the millions of Opera Mini users out there. How do we get to a better future without leaving anyone behind? Or do we sacrifice those people for the greater good? Do the needs of the many outweigh the needs of the few? Do the ends justify the means?

Now, I know for a fact that the end-game that Alex is pursuing with web components—and the extensible web manifesto in general—is a more declarative web: solutions that first get tackled as web components end up landing in browsers. But to get there, the solutions are first created using modern JavaScript that simply doesn’t work everywhere. Is that the price we’re going to have to pay for a better web?

I hope not. I hope we can find ways to have our accessible cake and eat it too. But it will be really, really hard.

Returning to progressive web apps, I was genuinely shocked and appalled at the way that the Chrome team altered the criteria for the “add to home screen” prompt to discourage exposing URLs. I was also surprised at how badly the change was communicated—it was buried in a bug report that five people contributed to before pushing the change. I only found out about it through a conversation with Paul Kinlan. Paul encouraged me to give feedback, and that’s what I did on my website, just like Stuart did on his.

Of course the Chrome team are working on ways of exposing URLs within progressive web apps that are launched from the home screen. Opera are working on it too. But it’s a really tricky problem to solve. It’s not enough to say “we’ll figure it out”. It’s not enough to say “trust us.”

I do trust the people I know working on Chrome. I also trust the people I know at Mozilla, Opera and Microsoft. That doesn’t mean I’m going to let their actions go unquestioned. Good intentions are not enough.

As Alex readily acknowledges, the harder problem (figuring out how to expose URLs) should have been solved first—then the change to the “add to home screen” metrics would be uncontentious. Putting the cart before the horse, discouraging display:browser now, while saying “trust us, we’ll figure it out”, is another example of saying the ends justify the means.

But the stakes are too high here to let this pass. Good intentions are not enough. Knowing that the people working on Chrome (or Firefox, or Opera, or Edge) are good people is not reason enough to passively accept every decision they make.

Alex called me out for not getting in touch with him directly about the Chrome team’s future plans with URLs, but again, that kind of rough consensus to do something is trumped by running code. Also, I did talk to Chrome people—this all came out of a discussion with Paul Kinlan. I don’t know who’s who in the company’s political hierarchy and I don’t think I should need an org chart to give feedback to Google (or Mozilla, or Opera, or Microsoft).

You’ll notice that I didn’t include Apple there. I don’t hold them to the same high standard. As it turns out, I know some very good people at Apple working on WebKit and Safari. As individuals, they care about the web. But as a company, Apple has shown indifference towards web developers. As Remy put it:

Even getting the hint of interest from Apple is a process of dumpster-diving the mailing lists scanning for the smallest hint of interest.

With that in mind, I completely understand Alex’s frustration with my post on “regressive” web apps. Although I intended it as a push towards making better progressive web apps, I can see how it could be taken as confirmation by those who think that progressive web apps aren’t worth investing in. Apple, for example. As it is, they’ll have to be carried kicking and screaming into adding support for Service Workers, manifest files, and other building blocks. From the reaction to my post from at least one WebKit developer on Twitter, not only did I fail to get across just how important the technologies behind progressive web apps are, I may have done more harm than good, giving ammunition to sceptics.

Still, I hope that most people took my words in the right spirit, like Addy:

We should push them to do much better. I’ll file bugs. Per @adactio post, can’t forget the ‘Progressive’ part of PWAs

Seeing that reaction makes me feel good …but seeing Alex’s reaction makes me feel bad. Very bad. I’m genuinely sorry that I made Alex feel that way. It wasn’t my intention but, well …good intentions are not enough.

I’ve been looking back at what I wrote, trying to see it through Alex’s eyes, looking for the parts that could be taken as a personal attack:

Chrome developers have decided that displaying URLs is not “best practice” … To declare that all users of all websites will be confused by seeing a URL is so presumptuous and arrogant that it beggars belief. … Withholding the “add to home screen” prompt like that has a whiff of blackmail about it. … This isn’t the first time that Chrome developers have made a move against the address bar. It’s starting to grind me down.

Some pretty strong words there. I stand by them, but the tone is definitely strident.

When we criticise something—a piece of software, a book, a website, a film, a piece of music—it’s all too easy to forget that there are real people behind it. But that isn’t the case here. I know that there are real people working on Chrome, because I know quite a few of those people. I also know that their intentions are good. That’s not a reason for me to remain silent—that’s a reason for me to speak up.

If I had known that my post was going to upset Alex, would I have still written it? That’s a tough one. On the one hand, this is a topic I care passionately about. I think it’s vital that we don’t compromise on the very things that make the web great. On the other hand, who knows if what I wrote will make the slightest bit of difference? In which case, I got the catharsis of getting it off my chest but at the price of upsetting somebody I respect. That price feels too high.

I love the fact that I can publish whatever I want on my own website. It can be a place for me to be enthusiastic about things that excite me, and a place for me to rant about things that upset me. I estimate that the enthusiastic stuff outnumbers the ranty stuff by about ten to one, but negativity casts a disproportionately large shadow.

I need to get better at tempering my words. Not that I’m going to stop criticising bad decisions when I see them, but I need to make my intentions clearer …because just having good intentions is not enough. Throughout this post, I’ve mentioned repeatedly how much I respect the people I know working on the Chrome team. I should have said that in my original post.

ProgrammableWeb – Netatmo Launches Netatmo Connect, its Ecosystem for Developers

Netatmo, a smart home company, last week announced its ecosystem for third party developers. With Netatmo Connect, developers, product managers and marketing professionals can create relationships between their products, services or apps and Netatmo devices. 

More than 20 business partners and 14,000 developers already use Netatmo APIs through this platform. Netatmo Connect makes possible the kinds of interactions that let the smart home be even smarter.

Norman Walsh (Sun) – Data vs APIs

<article class="essay" id="R.1" lang="en"><header class="essay-titlepage">

Data vs APIs

Volume 19, Issue 14; 30 May 2016

If you can’t have the data, an API is nice. A better API would be better, and sometimes the data would be nice(r).


Why not use the API instead? It has everything.

Robin Berjon

What happened was, for another posting I’m in the middle of writing, I wanted to know “how many W3C specifications have I edited”? There’s really no way to answer that question, but as an approximation, an answer to this question would suffice: “on how many W3C specifications am I credited as an editor?”

The W3C publishes (in RDF) the data that drives their technical reports page. Take your favorite triple store, run this query:

xquery version "1.0-ml";

import module namespace sem = "http://marklogic.com/semantics"
    at "/MarkLogic/semantics.xqy";

declare default function namespace "http://www.w3.org/2005/xpath-functions";

declare option xdmp:mapping "false";

let $rdfxml  := xdmp:document-get("http://www.w3.org/2002/01/tr-automation/tr.rdf")
let $triples := sem:rdf-parse($rdfxml/*)
let $_       := cts:uris() ! xdmp:document-delete(.) (: Danger, Will Robinson :)
return
  sem:rdf-insert($triples)

Followed by this one:

PREFIX rec54: <http://www.w3.org/2001/02pd/rec54#>
PREFIX contact: <http://www.w3.org/2000/10/swap/pim/contact#>

SELECT ?doc ?type
WHERE
{
  ?doc a ?type .
  ?doc rec54:editor ?ed .
  ?ed contact:fullName "Norman Walsh"
}

And the answer is 31. Except that’s not really the answer. That’s just the number of unique specifications, the current set as of today. Some number of Working Drafts preceded most of those. (And each of those was possibly preceded by some number of never-officially-published editorial drafts; there’s no precise answer to my question; I’m only interested in an order of magnitude).

At one time, maybe a decade ago, either tr.rdf contained the whole history of the technical reports page, or there was another RDF version available that did. I asked around; that’s not available anymore. “Use the API, instead.”

So I did. And at this point, I began to construct a rant in my head, a screed possibly. Much wailing and gnashing of teeth about the fact that my straightforward 19 lines of query would have to be replaced by more than 100 lines of Python to be tested and debugged and, with its thousands upon thousands of HTTP requests, run tediously. Run more than once, written carefully (more testing, more debugging) to code around API rate limiting. A script that will not even, as it happens, answer my question; it will only collect the narrow slice of data needed to answer my question. I’ll have to write even more code to get the answer.

This is not that rant.

It’s not a rant for a few reasons.

  1. Primarily because what the W3C has done is not unreasonable. The tr.rdf file is about 1.1M. I estimate that the entire data set would be at least four times that size. (There are about 1,200 specifications and about 5,000 distinct versions.) Fourish megabytes isn’t a very big download with a modern, first-world internet connection, but it’s big enough. You don’t want your browser doing it every time someone wants to know the latest version of a spec.

  2. It’s a nicely designed API, and for some kinds of access, an API is nice.

  3. Much as it would have been easier for me to get the results I wanted from the raw data, it’s only fair to observe that if it was 1.1T of data instead of 1.1M, a link to the file would be substantially less useful. I guess what I really want is for the W3C to store the data in MarkLogic and publish a SPARQL endpoint that I could use. But that’s a whole different kettle of fish.

  4. Finally, this isn’t a rant because if it was, I fear it would appear to be directed at the W3C. The fact that this data exists at all, let alone is published in any form at all, is a testament to the W3C’s reliability, professionalism, and serious concern about the web. Most organizations wouldn’t have had the foresight to collect, preserve, and curate this data. Of those few that had, most wouldn’t have bothered to publish it in any useful form at all, for free, on the web.

So I’m disappointed that I couldn’t just download the RDF. And I’m annoyed that I had to code my way through an API to get the data. But I’m grateful that it was possible to get it at all.

My initial plan was brute force: get all the specs, get all the versions, get all the editors, count the number of specs where I’m credited as an editor. Unfortunately, the data backing the API seems to be incomplete: many versions have no editors.

Backup plan: get all the specs, get all the versions, figure out what specs I’ve edited, count all the versions of the specs I’ve edited.

This query, against the tr.rdf data, answers the question, “what specs have I edited”:

PREFIX rec54: <http://www.w3.org/2001/02pd/rec54#>
PREFIX contact: <http://www.w3.org/2000/10/swap/pim/contact#>
PREFIX doc: <http://www.w3.org/2000/10/swap/pim/doc#>

SELECT ?doc ?type
WHERE
{
  ?version a ?type .
  ?version rec54:editor ?ed .
  ?version doc:versionOf ?doc .
  ?ed contact:fullName "Norman Walsh"
}

There are a few places where the short names have been reused, but I can get the list of short names from the results of that query. Then this Python script will bang on the API until it gets an answer:

import json
import requests

"""
Get stuff from the W3C API
"""


class Specs:
    """Specs"""
    def __init__(self):
        f = open("/home/ndw/.w3capi.json")
        self.headers = json.loads(f.read())
        f.close()

        self.datafile = "/tmp/specs.json"
        try:
            f = open(self.datafile)
            self.data = json.loads(f.read())
            f.close()
        except FileNotFoundError:
            self.data = {}

    def save(self):
        f = open(self.datafile, "w")
        print(json.dumps(self.data, indent=2), file=f)
        f.close()

    def get(self, uri, page):
        params = {
            'items': 1000,
            'page': page
        }

        response = requests.get(uri, headers=self.headers, params=params)

        if response.status_code != 200:
            raise Exception("Error code: {}".format(response.status_code))

        return json.loads(response.text)

    def get_specs(self):
        uri = "https://api.w3.org/specifications"
        page = 1
        done = False

        specs = []
        while not done:
            data = self.get(uri, page)

            for hash in data['_links']['specifications']:
                specs.append(hash['href'])

            done = 'next' not in data['_links']
            page = page + 1

        for key in specs:
            self.data[key] = {}

    def get_versions(self, spec):
        uri = "{}/versions".format(spec)
        page = 1

        data = self.get(uri, page)

        self.data[spec]['versions'] = []
        for version in data['_links']['version-history']:
            self.data[spec]['versions'].append(version['href'])

    def count_versions(self, spec):
        if 'versions' in self.data[spec]:
            return len(self.data[spec]['versions'])
        else:
            return 1  # there must be at least one!


def main():
    """Main"""
    specs = Specs()

    if "https://api.w3.org/specifications/xml" not in specs.data:
        print("Getting specifications")
        specs.get_specs()
        specs.save()

    for spec in specs.data:
        if "versions" not in specs.data[spec]:
            try:
                print("V: {}".format(spec))
                specs.get_versions(spec)
            except Exception:
                specs.save()
                raise

    specs.save()

    shortnames = ['WD-XSLReq', 'html-xml-tf-report', 'leiri',
                  'namespaceState', 'proc-model-req', 'webarch',
                  'xinclude-11-requirements', 'xinclude-11', 'xlink10-ext',
                  'xlink11', 'xml-id', 'xml-link-style', 'xml-proc-profiles',
                  'xpath-datamodel-30', 'xpath-datamodel-31',
                  'xpath-datamodel', 'xpath-functions', 'xproc-template',
                  'xproc-v2-req', 'xproc', 'xproc20-steps', 'xproc20',
                  'xptr-element', 'xptr-framework', 'xptr',
                  'xslt-xquery-serialization']

    count = 0
    for shortname in shortnames:
        spec = "https://api.w3.org/specifications/" + shortname
        count = count + specs.count_versions(spec)

    print(count)


if __name__ == '__main__':
    main()

The answer is 101. Approximately.


ProgrammableWeb – CrowdStrike Launches Falcon Connect With Expanded APIs

CrowdStrike Inc., a cloud-delivered next-generation endpoint protection, threat intelligence and response services company, today announced the addition of a broad set of sophisticated and easy-to-use APIs to the CrowdStrike Falcon Platform, along with new development and integration resources, as part of its Spring release of new solutions and services.

Jeremy Keith (Adactio) – A little progress

I’ve got a fairly simple posting interface for my notes. A small textarea, an optional file upload, some checkboxes for syndicating to Twitter and Flickr, and a submit button.

Notes posting interface

It works fine although sometimes the experience of uploading a file isn’t great, especially if I’m on a slow connection out and about. I’ve been meaning to add some kind of Ajax-y progress type thingy for the file upload, but never quite got around to it. To be honest, I thought it would be a pain.

But then, in his excellent State Of The Gap hit parade of web technologies, Remy included a simple file upload demo. Turns out that all the goodies that have been added to XMLHttpRequest have made this kind of thing pretty easy (and I’m guessing it’ll be easier still once we have fetch).
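As a rough sketch of the technique (not the gist below, and with hypothetical element selectors), the upload object on an XMLHttpRequest emits progress events that can drive a progress element directly:

var form = document.querySelector('form');              // hypothetical form
var progressBar = document.querySelector('progress');   // hypothetical progress element

form.addEventListener('submit', function (event) {
    event.preventDefault();
    var xhr = new XMLHttpRequest();
    xhr.open(form.method, form.action);
    // The upload property fires progress events while data is being sent
    xhr.upload.addEventListener('progress', function (e) {
        if (e.lengthComputable) {
            progressBar.max = e.total;
            progressBar.value = e.loaded;
        }
    });
    xhr.addEventListener('load', function () {
        // Once the POST completes, carry on to wherever the form would have gone
        window.location = form.action;
    });
    xhr.send(new FormData(form));
});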

I’ve made a little script that adds a progress bar to any forms that are POSTing data.

<script src="https://gist.github.com/adactio/f8046bf3d52b5a08c1541a2b2df70bd8.js"></script>

Feel free to use it, adapt it, and improve it. It isn’t using any ES6iness so there are some obvious candidates for improvement there.

It’s working a treat on my little posting interface. Now I can stare at a slowly-growing progress bar when I’m out and about on a slow connection.

Daniel Glazman (Disruptive Innovations) – Abduction alert and France 2

The Préfecture du Rhône issued an abduction alert (alerte enlèvement) this evening at 19:46. Even though the first minutes are obviously crucial in such a case, there was nothing on the France 2 evening news before 20:10. It is now 20:24 and there is still no banner at the bottom of the screen. The phone number to call was not even displayed on screen, only read out by the presenter. Shameful, pathetic, unheard of. A scandal.

Jeremy Keith (Adactio) – Switching to HTTPS on Apache 2.4.7 on Ubuntu 14.04 on Digital Ocean

I’ve been updating my book sites over to HTTPS:

They’re all hosted on the same (virtual) box as adactio.com—Ubuntu 14.04 running Apache 2.4.7 on Digital Ocean. If you’ve got a similar configuration, this might be useful for you.

First off, I’m using Let’s Encrypt. Except I’m not. It’s called Certbot now (I’m not entirely sure why).

I installed the Let’s Encertbot client with this incantation (which, like everything else here, will need root-level access so if none of these work, retry using sudo in front of the commands):

wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto

Seems like a good idea to put that certbot-auto thingy into a directory like /etc:

mv certbot-auto /etc

Rather than have Certbot generate conf files for me, I’m just going to have it generate the certificates. Here’s how I’d generate a certificate for yourdomain.com:

/etc/certbot-auto --apache certonly -d yourdomain.com

The first time you do this, it’ll need to fetch a bunch of dependencies and it’ll ask you for an email address for future reference (should anything ever go screwy). For subsequent domains, the process will be much quicker.

The result of this will be a bunch of generated certificates that live here:

  • /etc/letsencrypt/live/yourdomain.com/cert.pem
  • /etc/letsencrypt/live/yourdomain.com/chain.pem
  • /etc/letsencrypt/live/yourdomain.com/privkey.pem
  • /etc/letsencrypt/live/yourdomain.com/fullchain.pem

Now you’ll need to configure your Apache gubbins. Head on over to…

cd /etc/apache2/sites-available

If you only have one domain on your server, you can just edit default-ssl.conf. I prefer to have separate conf files for each domain.

Time to fire up an incomprehensible text editor.

nano yourdomain.com.conf

There’s a great SSL Configuration Generator from Mozilla to help you figure out what to put in this file. Following the suggested configuration for my server (assuming I want maximum backward-compatibility), here’s what I put in.

<script src="https://gist.github.com/adactio/f0e13a2f8b9f9f084676bb2a901c5c95.js"></script>

Make sure you update the /path/to/yourdomain.com part—you probably want a directory somewhere in /var/www or wherever your website’s files are sitting.

To exit the infernal text editor, hit ctrl and o, press enter in response to the prompt, and then hit ctrl and x.

If the yourdomain.com.conf didn’t previously exist, you’ll need to enable the configuration by running:

a2ensite yourdomain.com

Time to restart Apache. Fingers crossed…

service apache2 restart

If that worked, you should be able to go to https://yourdomain.com and see a lovely shiny padlock in the address bar.

Assuming that worked, everything is awesome! …for 90 days. After that, your certificates will expire and you’ll be left with a broken website.

Not to worry. You can update your certificates at any time. Test for yourself by doing a dry run:

/etc/certbot-auto renew --dry-run

You should see a message saying:

Processing /etc/letsencrypt/renewal/yourdomain.com.conf

And then, after a while:

** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates below have not been saved.)
Congratulations, all renewals succeeded.

You could set yourself a calendar reminder to do the renewal (without the --dry-run bit) every few months. Or you could tell your server’s computer to do it by using a cron job. It’s not nearly as rude as it sounds.

You can fire up and edit your list of cron tasks with this command:

crontab -e

This tells the machine to run the renewal task at quarter past six every evening and log any results:

15 18 * * * /etc/certbot-auto renew --quiet >> /var/log/certbot-renew.log

(Don’t worry: it won’t actually generate new certificates unless the current ones are getting close to expiration.) Leave the crontab editor by doing the ctrl o, enter, ctrl x dance.

Hopefully, there’s nothing more for you to do. I say “hopefully” because I won’t know for sure myself for another 90 days, at which point I’ll find out whether anything’s on fire.

If you have other domains you want to secure, repeat the process by running:

/etc/certbot-auto --apache certonly -d yourotherdomain.com

And then creating/editing /etc/apache2/sites-available/yourotherdomain.com.conf accordingly.

I found these useful when I was going through this process:

That last one is good if you like the warm glow of accomplishment that comes with getting a good grade:

For extra credit, you can run your site through securityheaders.io to harden your headers. Again, not as rude as it sounds.

You know, I probably should have said this at the start of this post, but I should clarify that any advice I’ve given here should be taken with a huge pinch of salt—I have little to no idea what I’m doing. I’m not responsible for any flame-bursting-into that may occur. It’s probably a good idea to back everything up before even starting to do this.

Yeah, I definitely should’ve mentioned that at the start.

Norman Walsh (Sun) – New desktops

<article class="essay" id="R.1" lang="en"><header class="essay-titlepage">

New desktops

Volume 19, Issue 13; 29 May 2016

Not the physical kind, but a little online workspace tinkering.


For many years, my standard desktop environment has been two Emacs windows (exactly overlapping each other) on the left, two shell windows on the right.

<figure class="figure-wrapper" id="R.1.3">
Old layout
Old layout
</figure>

So determined have I been to keep this arrangement, that I’ve refused to use any laptop with a horizontal resolution that wouldn’t support it. Having overlapping windows, especially Emacs and shell windows, even with “focus follows mouse” behavior, is a usability nightmare.

One of my colleagues has a completely different strategy. He has several desktops and keeps applications in “full screen” mode in each one. Switching applications effectively slides workspaces back and forth (he’s using OS X).

I decided to give it a try.

<figure class="figure-wrapper" id="R.1.7">
New layout
New layout
</figure>

The little box you see in the center of that screen is the workspace switcher with the second-from-the-left, top-most workspace selected.

In the upper-left workspace, I have my two overlapping Emacsen. To their right (the workspace you can see behind the little box), I have a shell window with a few tabs, and to the right of that, a web browser. On the lower left, there’s a shell window showing the “tail -f” of a MarkLogic error log. (I use that all the time!) The other four workspaces are currently empty. In each workspace, all of the windows are full-screen.

Where I used to use Alt-Tab to toggle between apps, I now use Ctrl-Alt-arrow keys to navigate between workspaces. On any given workspace, I still use Alt-Tab to toggle between apps. Helpfully, Alt-Tab only shows the applications on that workspace.

There are some advantages to this arrangement. First of all, every application has a big, roomy window. For projecting at a conference or WebExing (or as my aging eyes demand) this means you can bump up the font size quite a bit and still have a useful display. The geometry is also interesting, and I may experiment with different ones. Knowing that the tail of my log is “down” from my main Emacs session is (at least sometimes) faster than Alt-Tabbing around to it.

There are also some disadvantages. The whole desktop sliding effect can be a bit distracting (though if I go looking, I bet I can turn that off). There’s also a down side to the geometry. Some windows have to be literally “further away” than others. Getting from Emacs to a terminal is one “hop”, but the web browser is two hops away. If I switch the terminal and the web browser, then the terminal is two hops. I might just get used to that, or I might try to use shell or eshell mode in Emacs more regularly so that I can move the browser “closer” and rely less on the terminals.

At home, I have three displays: a “primary” one in front of me and a slightly smaller one to its right. The third display is the laptop panel located below the primary display. Generally, I have my old layout Emacsen plus terminals on the primary display, browser to the right, and error log tail on the laptop display.

I’m entirely uncertain how this workspace switching approach is going to fit into the multiple monitor scenario at home. Having three displays sliding about might just be too much. But I’ll see, I guess, when I get home again!


ProgrammableWeb – Daily API RoundUp: Microsoft Computer Vision, Recombee, Vidsource, Twin Prime, Space Bunny, Plus 13 More

Every day, the ProgrammableWeb team is busy, updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWeb – Google Beats Oracle, Java APIs in Android are Fair Use

On Thursday the two-week trial between Oracle and Google ended when a federal jury found that Google's re-use of 37 Java APIs in the development of its Android operating system is protected by "fair use" according to Stephanie

ProgrammableWeb – Twilio's Notify API Reaches Customers Via Preferred Method

Twilio has launched a beta for its new Notify API that allows developers to orchestrate notifications over SMS, push, messaging apps, and more with a single, user-focused API. The goal of the API is to contact customers in the method they want to be reached.

Anne van Kesteren (Opera) – Making the DOM faster

There is a pretty good comment on Hacker News (of all places) by nostrademons explaining why the DOM is not slow. Basically, all the DOM mutation operations are pretty trivial. Inserting and removing nodes, moving them around, setting an attribute, etc. It’s layout that takes a while and layout can be synchronously triggered through APIs such as offsetWidth and getComputedStyle(). So you have to be careful to group all your DOM mutations and only start asking layout questions afterwards.
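Here’s a minimal sketch of the difference, with hypothetical elements: interleaving reads and writes forces a fresh layout on every pass, while grouping the mutations means layout only happens once it’s actually needed.

// Hypothetical elements, purely for illustration
var container = document.querySelector('.container');
var items = Array.prototype.slice.call(container.children);

// Interleaved read/write: every iteration asks a layout question
// (offsetWidth) right after a mutation, forcing a fresh layout each time
items.forEach(function (item) {
    item.style.width = (container.offsetWidth / 2) + 'px';
});

// Grouped: one layout-triggering read, then nothing but cheap mutations
var half = (container.offsetWidth / 2) + 'px';
items.forEach(function (item) {
    item.style.width = half;
});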

(This has been a known pattern in some circles for a long time; tip of my imaginary hat to my former Q42 colleagues. However, I don’t think it’s generally well-known, especially given some of the “DOM is slow” advocacy I have seen of late.)

Still, whenever you invoke insertBefore() or remove(), there is some cost as these JavaScript functions are ultimately backed by C++ (nay Rust) with an IDL translation layer in between that makes sure the C++ doesn’t get to see anything funky. This happens as it’s important that the C++-backed DOM remains in a consistent state for all the other algorithms that run in the browser to not get confused. Research into doing the DOM entirely in JavaScript has halted and in fact would hinder efforts to do layout in parallel, which is being spearheaded by Servo.

Yehuda came up with an idea, based on writing code for Ember.js, which in turn has been inspired by React, to represent these mutation operations somehow and apply them to a DOM in one go. That way, you only do the IDL-dance once and the browser then manipulates the tree in C++ with the many operations you fed it. Basically the inverse of mutation records used by mutation observers. With such a DOM mutation representation, you can make thousands of mutations and only pay the IDL cost once.
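No such API exists yet; purely as an illustration of the shape it might take (the applyMutations entry point and the record format here are invented):

// Today: each of these mutations is a separate call across the
// JavaScript/C++ (IDL) boundary.
var list = document.querySelector('ul');       // hypothetical element
var item = document.createElement('li');
item.textContent = 'hello';
item.setAttribute('class', 'done');
list.appendChild(item);

// The idea: express the same mutations as plain data and apply them in a
// single call, paying the IDL cost once. document.applyMutations() is
// invented here purely to illustrate the shape; no such API exists.
document.applyMutations([
    { op: 'setText',      target: item, value: 'hello' },
    { op: 'setAttribute', target: item, name: 'class', value: 'done' },
    { op: 'append',       parent: list, node: item }
]);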

Having such an API would:

  1. Encourage good practice. By providing a single entry-point to mutating the DOM, developers will be encouraged to group DOM updates together before doing any kind of layout. If more sites are structured this way that will increase their performance.
  2. Improve engine performance. This requires further testing to make sure there is indeed a somewhat non-trivial IDL cost today and that we can reduce it by passing the necessary instructions more efficiently than through method calls.
  3. Potentially enable more parallelism by preparing these DOM updates in a worker, via supporting this DOM mutation representation in workers and making it transferable. That reduces the amount of DOM work done where user interaction needs to take place.

Looking forward to hearing what folks think!

Update: Boris Zbarsky weighs in with some great implementer perspective on the small cost of IDL and the various tradeoffs to consider for a browser-provided API.

Norman Walsh (Sun) – XQuery by default

<article class="essay" id="R.1" lang="en"><header class="essay-titlepage">

XQuery by default

Volume 19, Issue 12; 27 May 2016

Using TamperMonkey to “fix” QConsole.


This posting is only likely to be of interest if you use MarkLogic. I use MarkLogic all the time.

Long ago, when QConsole was first introduced, it was an editing environment for XQuery. In more recent times, support for JavaScript, SPARQL, and SQL has been added. This is great, and QConsole gets better and better in every release. Except for this one thing: it defaults to JavaScript mode and, whilst I’m delighted that the server supports server-side JavaScript, it’s not my everyday bag.

There’s no way to define an initial default language, which is ok. If you change the language, that change persists across sessions; most folks will probably only ever have to change it once. The thing is, I often rebuild the server and start over with a clean install, so I get the default over and over again.

Until today. Up at 4am due to jet lag, I hacked up this TamperMonkey script to automatically switch to XQuery mode:

// ==UserScript==
// @name         Switch to XQuery mode
// @namespace    http://tampermonkey.net/
// @version      0.1
// @description  Effectively, make XQuery the default in QConsole
// @author       Norman Walsh
// @match        http://localhost:8000/qconsole/
// ==/UserScript==

(function() {
    'use strict';
    unsafeWindow.setTimeout(frobMode,1000);
})();

function frobMode() {
    var zwspace ="\u200B";
    var lines = unsafeWindow.jQuery("#query-text-space div.CodeMirror-lines div.CodeMirror-code > div pre");
    var defContent = (lines.length === 3) && (lines[0].innerText === "'use strict'") && (lines[1].innerText === zwspace) && (lines[2].innerText === zwspace);
    if (defContent) {
        unsafeWindow.jQuery("#mode-selection select").val("xquery");
        unsafeWindow.jQuery("#mode-selection select").change();
    }
}

It only switches the mode if you’re on the initial, default page. Even so, it’s a bit crude. In particular, waiting for one second is a hack. The actual QConsole page is populated by AJAX calls to the server. The recommended way to deal with this situation appears to be waitForKeyElements, but I couldn’t find an expression that worked reliably.
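For what it’s worth, a MutationObserver is another way to avoid the fixed delay; this is a rough, untested sketch (not what the script above uses) that reuses the frobMode function:

// Untested alternative to the one-second timeout: watch for the CodeMirror
// editor to appear, then switch modes once and stop observing
var observer = new MutationObserver(function () {
    if (unsafeWindow.jQuery("#query-text-space div.CodeMirror-lines").length > 0) {
        observer.disconnect();
        frobMode();
    }
});
observer.observe(document.documentElement, { childList: true, subtree: true });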

It works for me and I’m probably the only person who will ever use it, and I’ve been up since 4am, so I’m calling it done.


ProgrammableWeb – Daily API RoundUp: SimpleML, PriceJSON, OnTop, Travelbriefing, JsonOdds, Serendipify.Me

Every day, the ProgrammableWeb team is busy, updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

Amazon Web Services – Amazon ElastiCache Update – Export Redis Snapshots to Amazon S3

Amazon ElastiCache supports the popular Memcached and Redis in-memory caching engines. While Memcached is generally used to cache results from a slower, disk-based database, Redis is used as a fast, persistent key-value store. It uses replicas and failover to support high availability, and natively supports the use of structured values.

Today I am going to focus on a helpful new feature that will be of interest to Redis users. You already have the ability to create snapshots of a running Cache Cluster. These snapshots serve as a persistent backup, and can be used to create a new Cache Cluster that is already loaded with data and ready to go. As a reminder, here’s how you create a snapshot of a Cache Cluster:

You can now export your Redis snapshots to an S3 bucket. The bucket must be in the same Region as the snapshot and you need to grant ElastiCache the proper permissions (List, Upload/Delete, and View Permissions) on it. We envision several uses for this feature:

Disaster Recovery – You can copy the snapshot to another environment for safekeeping.

Analysis – You can dissect and analyze the snapshot in order to understand usage patterns.

Seeding – You can use the snapshot to seed a fresh Redis Cache Cluster in another Region.

Exporting a Snapshot
To export a snapshot, simply locate it, select it, and click on Copy Snapshot:

Verify the permissions on the bucket (read Exporting Your Snapshot to learn more):

Then enter a name and select the desired bucket:

ElastiCache will export the snapshot and it will appear in the bucket:

The file is a standard Redis RDB file, and can be used as such.

You can also exercise this same functionality from your own code or via the command line. Your code can call CopySnapshot while specifying the target S3 bucket. Your scripts can use the copy-snapshot command.
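For example, with the AWS SDK for JavaScript the call might look something like this (the names are placeholders; check the SDK reference for the exact parameters):

// Sketch only: snapshot and bucket names are placeholders.
// Supplying TargetBucket is what turns the copy into an export to S3.
var AWS = require('aws-sdk');
var elasticache = new AWS.ElastiCache({ region: 'us-east-1' });

elasticache.copySnapshot({
    SourceSnapshotName: 'my-redis-snapshot',
    TargetSnapshotName: 'my-redis-snapshot-export',
    TargetBucket: 'my-elasticache-exports'   // must be in the same region as the snapshot
}, function (err, data) {
    if (err) { console.error(err); return; }
    console.log('Export started:', data);
});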

This feature is available now and you can start using it today! There’s no charge for the export; you’ll pay the usual S3 storage charges.

Jeff;

 

ProgrammableWeb – How to Create Responsive Image Breakpoints with the Cloudinary API

Even the most modern responsive websites often struggle with selecting image resolutions that best match the various user devices. They typically have to compromise on either the image dimensions or the number of images created, or, even worse, simply use a single image with the largest resolution that may be needed.

ProgrammableWeb – Square Launches Register API for Android

Square has launched a Register API for Android. The release arrives after an iOS version was released in March.

ProgrammableWeb – Realm Launches Version 1.0 Featuring Updated API

Realm, a mobile database provider, has announced the launch of Realm version 1.0 featuring a number of capabilities including fine-grained notifications, support for Apple’s Swift language, and intuitive Java modeling. Realm version 1.0 also includes updates to the Realm API which is available via Realm mobile SDKs.

ProgrammableWeb: APIs – World Weather Online Marine, Sailing, Surfing Weather

The World Weather Online Marine, Sailing, Surfing Weather API integrates forecast data into web services and mobile applications. Callbacks can be made over REST, with XML and JSON formats available.
Date Updated: 2016-05-26

ProgrammableWeb: APIs – World Weather Online Historical or Past Weather

The World Weather Online Historical or Past Weather API integrates previous forecast records into applications. It is available in CSV, JSON, and XML formats.
Date Updated: 2016-05-26

ProgrammableWeb: APIs – Weather Online Local City and Town Weather

The Weather Online Local City and Town Weather API integrates forecast predictions according to location. It is available in CSV, JSON, JSONP, and XML formats.
Date Updated: 2016-05-26

ProgrammableWeb: APIs – Gentics Mesh

The Gentics Mesh API integrates content management into web services and mobile applications. It provides REST and JSON parameters and is accessible via token authentication.
Date Updated: 2016-05-26

ProgrammableWeb: APIs – RentsWatch

The RentsWatch API returns average rent prices of specific areas in the European market by using an OpenStreetMap (OSM) method. Also, it supports statistics of a city, statistics of a location, and ranking of cities by different indicators. This API uses JSON for data exchange, and API Keys for authentication. The firm that develops RentsWatch, Journalism++, provides analysis and data visualization services.
Date Updated: 2016-05-26

ProgrammableWeb: APIs – Twin Prime Data

The Twin Prime Data API provides users with programmatic access to rich client and network data collected by Twin Prime, including data and metrics about end users' mobile app performance and usage. Twin Prime is a service that analyzes real-time network data to help users deliver their content faster to any location, device, and network.
Date Updated: 2016-05-26

ProgrammableWeb: APIs – Recombee

The Recombee API allows developers to access its real-time recommendations cloud service programmatically using a sophisticated query language. Recombee is a recommendations system as a service that uses data mining, flexible query language, and a variety of machine learning algorithms (including collaborative filtering and content-based recommendation) to provide useful recommendations.
Date Updated: 2016-05-26

ProgrammableWeb – How to Ensure App Growth Post Launch

Launching an app is a bit like getting married. So much buildup and anticipation goes into planning a wedding, yet once the event is over, the real hard work begins.

ProgrammableWeb – Facebook Launches Continuous Live Video API

Facebook has launched a new Continuous Live Video API that enables developers to broadcast long and persistent streaming video to the world's largest social network. 

ProgrammableWeb – Google Launches Version 4 of its Safe Browsing API

Google has launched version 4 of its Safe Browsing API. The new release specifically addresses constraints presented by mobile environments without sacrificing levels of protection desktop users have come to enjoy. As expected, the launch of version 4 will start the deprecation process of versions 2 and 3.

ProgrammableWeb: APIs – W3C Payment Request

The W3C Payment Request API integrates payment methods into merchants' applications. Callbacks are available via JSON, and token and system-level credentials are required to authenticate.
Date Updated: 2016-05-25

ProgrammableWeb: APIs – YouTube Mobile Data Plan

The YouTube Mobile Data Plan offers Quality of Experience (QoE) optimization by identifying the user's data plan. This API includes two parts, one is used to establish the user's data plan with an anonymous identifier, and the other allows the application to identify the user's data plan from the mobile network operator. Optimization is achieved by improving data transparency within applications. The Mobile Data Plan API requires OAuth for authentication, and uses JSON for data exchange.
Date Updated: 2016-05-25

ProgrammableWeb: APIs – CampDoc

CampDoc offers an electronic health record system built for resident summer camps and day camps. This system features integrated management of health forms, allergies, and medications and illness/injury tracking. The CampDoc API is used to interact with the health record system and specialized functions such as profiles, users, and registrations. This API is REST based and registration is required to access detailed documentation.
Date Updated: 2016-05-25

ProgrammableWeb: APIs – nullNude

nullNude is an adult content moderation platform that can be used to automatically remove unwanted content from a service. This platform features abstract understanding which is more discriminative than regular models when filtering images. 3 paid plans are available, and a free starter option is available as well. The company behind nullNude, dNeural, is a software engineering firm based in Gdynia, Poland. The nullNude API uses API Keys and API Secret for authentication, and JSON for data exchange.
Date Updated: 2016-05-25

ProgrammableWeb: APIs – Microsoft Cognitive Services Computer Vision

The Microsoft Cognitive Services Computer Vision API analyses images and returns information about them. It can be used to filter mature content or to detect faces in an image. Information such as image description, and dominant and accent colors can also be retrieved. Additionally, this API recognizes celebrities, reads text in images, and generates thumbnails. Currently, a free plan that limits calls to 5000 transactions per month is available. The Computer Vision API uses JSON format for data exchange, and API Keys for authentication.
Date Updated: 2016-05-25

ProgrammableWeb: APIs – SoftGarden Career Websites

The SoftGarden Career Websites API targets ad agencies that aim to integrate job search, job listings and online application to leverage career websites and applications. Job board vendors can use the API available in REST format with OAuth to sell and receive employment postings.
Date Updated: 2016-05-25

ProgrammableWeb: APIs – Zeit Now

The Zeit Now API allows developers to programmatically orchestrate Node.js deployments in the cloud. Zeit is a project designed to make cloud computing as easy and accessible as mobile computing. The entirety of Zeit's command-line deployment tool is made available for users to remix.
Date Updated: 2016-05-25

ProgrammableWeb: APIs – Bibler

The Bibler API provides full-text access to information searches and English translation comparisons of the Holy Scripture. Requests submitted to the RESTful API generate JSON-formatted responses. Developers can freely integrate the Bibler API into their own platforms or use it in pilot programs of other apps, as long as they submit a reasonable number of requests at a time.
Date Updated: 2016-05-25

ProgrammableWeb: APIs – Mutual Funds NAV - India

The Mutual Funds NAV - India API returns mutual funds information from the Association of Mutual Funds of India (AMFI). This API supports searching by name, scheme code or International Securities Identification Number (ISIN), and returns data in JSON format.
Date Updated: 2016-05-25

Jeremy Keith (Adactio) – Regressive Web Apps

There were plenty of talks about building for the web at this year’s Google I/O event. That makes a nice change from previous years when the web barely got a look in and you’d be forgiven for thinking that Google I/O was an event for Android app developers.

This year’s event showed just how big Google is, and how it doesn’t have one party line when it comes to the web and native. At the same time as there were talks on Service Workers and performance for the web, there was also an unveiling of Android Instant Apps—a full-frontal assault on the web. If you thought it was annoying when websites door-slammed you with intrusive prompts to install their app, just wait until they don’t need to ask you anymore.

Peter has looked a bit closer at Android Instant Apps and I think he’s as puzzled as I am. Either they are sandboxed to have similar permission models to the web (in which case, why not just use the web?) or they allow more access to native APIs in which case they’re a security nightmare waiting to happen. I’m guessing it’s probably the former.

Meanwhile, a different part of Google is fighting the web’s corner. The buzzword du jour is Progressive Web Apps, originally defined by Alex as:

  • Responsive
  • Connectivity independent
  • App-like-interactions
  • Fresh
  • Safe
  • Discoverable
  • Re-engageable
  • Installable
  • Linkable

A lot of those points are shared by good native apps, but the first and last points in that list are key features of the web: being responsive and linkable.

Alas many of the current examples of so-called Progressive Web Apps are anything but. Flipkart and The Washington Post have made Progressive Web Apps that are getting lots of good press from Google, but are mobile-only.

Looking at most of the examples of Progressive Web Apps, there’s an even more worrying trend than the return to m-dot subdomains. It looks like most of them are concentrating so hard on the “app” part that they’re forgetting about the “web” bit. That means they’re assuming that modern JavaScript is available everywhere.

Alex pointed to shop.polymer-project.org as an example of a Progressive Web App that is responsive as well as being performant and resilient to network failures. It also requires JavaScript (specifically the Polymer polyfill for web components) to render some text and images in a browser. If you’re using the “wrong” browser—like, say, Opera Mini—you get nothing. That’s not progressive. That’s the opposite of progressive. The end result may feel very “app-like” if you’re using an approved browser, but throwing the users of other web browsers under the bus is the very antithesis of what makes the web great. What does it profit a website to gain app-like features if it loses its soul?

I’m getting very concerned that the success criterion for Progressive Web Apps is changing from “best practices on the web” to “feels like native.” That certainly seems to be how many of the current crop of Progressive Web Apps are approaching the architecture of their sites. I think that’s why the app-shell model is the one that so many people are settling on.

Personally, I’m not a fan of the app-shell model. I feel that it prioritises exactly the wrong stuff—the interface is rendered quickly while the content has to wait. It feels weirdly like a hangover from Appcache. I also notice it being used as a get-out-of-jail-free card, much like the ol’ “Single Page App” descriptor; “Ah, I can’t do progressive enhancement because I’m building an app shell/SPA, you see.”

But whatever. That’s just, like, my opinion, man. Other people can build their app-shelled SPAs and meanwhile I’m free to build websites that work everywhere, and still get to use all the great technologies that power Progressive Web Apps. That’s one of the reasons why I’ve been quite excited about them—all the technologies and methodologies they promote match perfectly with my progressive enhancement approach: responsive design, Service Workers, good performance, and all that good stuff.

I hope we’ll see more examples of Progressive Web Apps that don’t require JavaScript to render content, and don’t throw away responsiveness in favour of a return to device-specific silos. But I’m not holding my breath. People seem to be so caught up in the attempt to get native-like functionality that they’re willing to give up the very things that make the web great.

For example, I’ve seen people use a meta viewport declaration to disable pinch-zooming on their sites. As justification they point to the fact that you can’t pinch-zoom in most native apps, therefore this web-based app should also prohibit that action. The inability to pinch-zoom in native apps is a bug. By also removing that functionality from web products, people are reproducing unnecessary bugs. It feels like a cargo-cult approach to building for the web: slavishly copy whatever native is doing …because everyone knows that native apps are superior to websites, right?

Here’s another example of the cargo-cult imitation of native. In your manifest JSON file, you can declare a display property. You can set it to browser, standalone, or fullscreen. If you set it to standalone or fullscreen then, when the site is launched from the home screen, it won’t display the address bar. If you set the display property to browser, the address bar will be visible on launch. Now, personally I like to expose those kinds of seams:

The idea of “seamlessness” as a desirable trait in what we design is one that bothers me. Technology has seams. By hiding those seams, we may think we are helping the end user, but we are also making a conscious choice to deceive them (or at least restrict what they can do).

Other people disagree. They think it makes more sense to hide the URL. They have a genuine concern that users will be confused by launching a website from the home screen in a browser (presumably because the user’s particular form of amnesia caused them to forget how that icon ended up on their home screen in the first place).

Fair enough. We’ll agree to differ. They can set their display property how they want, and I can set my display property how I want. It’s a big web after all. There’s no one right or wrong way to do this. That’s why there are multiple options for the values.
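For illustration, here’s a minimal manifest (all of the values are invented) that keeps the address bar visible when launched from the home screen:

{
  "name": "Example Site",
  "short_name": "Example",
  "start_url": "/",
  "display": "browser",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}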

Or, at least, that was the situation until recently…

Remember when I wrote about how Chrome on Android will show an “add to home screen” prompt if your Progressive Web App fulfils a few criteria?

  • It is served over HTTPS,
  • it has a manifest JSON file,
  • it has a Service Worker, and
  • the user visits it a few times.

Well, those goalposts have moved. There is now a new criterion:

  • Your manifest file must not contain a display value of browser.

Chrome developers have decided that displaying URLs is not “best practice”. It was filed as a bug.

A bug.

Displaying URLs.

A bug.

I’m somewhat flabbergasted by this. The killer feature of the web—URLs—are being treated as something undesirable because they aren’t part of native apps. That’s not a failure of the web; that’s a failure of native apps.

Now, don’t get me wrong. I’m not saying that everyone should be setting their display property to browser. That would be far too prescriptive. I’m saying that it should be a choice. It should depend on the website. It should depend on the expectations of the users of that particular website. To declare that all users of all websites will be confused by seeing a URL is so presumptuous and arrogant that it beggars belief.

I wouldn’t even have noticed this change of policy if it weren’t for the newly-released Lighthouse tool for testing Progressive Web Apps. The Session gets a good score but under “Best Practices” there was a red mark against the site for having display: browser. Turns out that’s the official party line from Chrome.

Just to clarify: you can have a site that has literally no HTML or turns away entire classes of devices, yet officially follows “best practices” and gets rewarded with an “add to home screen” prompt. But if you have a blazingly fast responsive site that works offline, you get nothing simply because you don’t want to hide URLs from your users:

I want people to be able to copy URLs. I want people to be able to hack URLs. I’m not ashamed of my URLs …I’m downright proud.

Stuart argues that this is a paternal decision:

The app manifest declares properties of the app, but the display property isn’t about the app; it’s about how the app’s developer wants it to be shown. Do they want to proudly declare that this app is on the web and of the web? Then they’ll add the URL bar. Do they want to conceal that this is actually a web app in order to look more like “native” apps? Then they’ll hide the URL bar.

I think there’s something to that, but digging deeper, developers and designers don’t make decisions like that in isolation. They’re generally thinking about what’s best for users. So, yes, absolutely, different apps will have different display properties, but that shouldn’t be down to the belief system of the developer; it should be down to the needs of the users …the specific needs of the specific users of that specific app. For the Chrome team to come down on one side or the other and arbitrarily declare that one decision is “correct” for every single Progressive Web App that is ever going to be built …that’s a political decision. It kinda feels like an abuse of power to me. Withholding the “add to home screen” prompt like that has a whiff of blackmail about it.

The other factors that contribute to the “add to home screen” prompt are pretty uncontroversial:

  • Sites should be served over a secure connection: that’s pretty hard to argue with.
  • Sites should be resilient to network outages: I don’t think anyone is going to say that’s a bad idea.
  • Sites should provide some metadata in a manifest file: okay, sure, it’s certainly not harmful.
  • Sites should obscure their URL …whoa! That feels like a very, very different requirement, one that imposes one particular opinion onto everyone who wants to participate.

This isn’t the first time that Chrome developers have made a move against the address bar. It’s starting to grind me down.

Up until now I’ve been a big fan of Progressive Web Apps. I understood them to be combining the best of the web (responsiveness, linkability) with the best of native (installable, connectivity independent). Now I see that balance shifting towards the native end of the scale at the expense of the web’s best features. I’d love to see that balance restored with a little less emphasis on the “Apps” and a little more emphasis on the “Web.” Now that would be progressive.

Amazon Web ServicesAmazon Elastic Transcoder Update – Support for MPEG-DASH

Amazon Elastic Transcoder converts media files (audio and video) from one format to another. The service is robust, scalable, cost-effective, and easy to use. You simply create a processing pipeline (pointing to a pair of S3 buckets for input and output in the process), and then create transcoding jobs. Each job reads a specific file from the input bucket, transcodes it to the desired format(s) as specified in the job, and then writes the output to the output bucket. You pay for only what you transcode, with price points for Standard Definition (SD) video, High Definition (HD) video, and audio. We launched the service with support for an initial set of transcoding presets (combinations of output formats and relevant settings). Over time, in response to customer demand and changes in encoding technologies, we have added additional presets and formats. For example, we added support for the VP9 Codec earlier this year.

Support for MPEG-DASH
Today we are adding support for transcoding to the MPEG-DASH format. This International Standard format supports high-quality audio and video streaming from HTTP servers, and has the ability to adapt to changes in available network throughput using a technique known as adaptive streaming. It was designed to work well across multiple platforms and at multiple bitrates, simplifying the transcoding process and sidestepping the need to create output in multiple formats.

During the MPEG-DASH transcoding process, the content is transcoded into segmented outputs at different bitrates and a playlist is created that references these outputs. The client (most often a video player) downloads the playlist to initiate playback. It then monitors the effective network bandwidth and latency, and requests video segments as needed. If network conditions change during the playback process, the player will take action, upshifting or downshifting as needed.

You can serve up the transcoded content directly from S3 or you can use Amazon CloudFront to get the content even closer to your users. Either way, you need to create a CORS policy that looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

If you are using CloudFront, you also need to enable the OPTIONS method, allow it to be cached, and whitelist three request headers for the distribution.
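Those are the standard CORS request headers: Origin, Access-Control-Request-Headers, and Access-Control-Request-Method. If you manage the distribution through the CLI or the CloudFront API instead of the console, the relevant fragment of the cache behavior configuration would look something like this (everything unrelated to CORS omitted):

"AllowedMethods": {
  "Quantity": 3,
  "Items": ["GET", "HEAD", "OPTIONS"],
  "CachedMethods": { "Quantity": 3, "Items": ["GET", "HEAD", "OPTIONS"] }
},
"ForwardedValues": {
  "QueryString": false,
  "Cookies": { "Forward": "none" },
  "Headers": {
    "Quantity": 3,
    "Items": ["Origin", "Access-Control-Request-Headers", "Access-Control-Request-Method"]
  }
}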

Transcoding With MPEG-DASH
To make use of the adaptive bitrate feature of MPEG-DASH, you create a single transcoding job and specify multiple outputs, each with a different preset. There are five presets to choose from: four for video and one for audio.

When you use this format, you also need to choose a suitable segment duration (in seconds). A shorter duration produces a larger number of smaller segments and allows the client to adapt to changes more quickly.

You can create a single playlist that contains all of the bitrates, or you can choose the bitrates that are most appropriate for your customers and your content. You can also create your own presets, using an existing one as a starting point.
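Putting the pieces together, a hypothetical job created with the AWS CLI might look like this; the pipeline ID and preset IDs below are placeholders for the pipeline and MPEG-DASH presets you actually use:

# Placeholder pipeline and preset IDs; substitute your own values.
aws elastictranscoder create-job \
  --pipeline-id 1111111111111-abcde1 \
  --input '{"Key": "input/movie.mp4"}' \
  --output-key-prefix "dash/" \
  --outputs '[
    {"Key": "video-2400k", "PresetId": "<dash-video-preset-id>", "SegmentDuration": "5"},
    {"Key": "video-1200k", "PresetId": "<dash-video-preset-id>", "SegmentDuration": "5"},
    {"Key": "audio-128k",  "PresetId": "<dash-audio-preset-id>", "SegmentDuration": "5"}
  ]' \
  --playlists '[
    {"Name": "index", "Format": "MPEG-DASH",
     "OutputKeys": ["video-2400k", "video-1200k", "audio-128k"]}
  ]'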

Available Now
MPEG-DASH support is available now in all Regions where Amazon Elastic Transcoder is available. There is no extra charge for the use of this format (see Elastic Transcoder Pricing to learn more).

Jeff;

 

Amazon Web ServicesAmazon Redshift – Up to 2X Throughput and 10X Vacuuming Performance Improvements

My colleague Maor Kleider wrote today’s guest post!

Jeff;

Amazon Redshift, AWS’s fully managed data warehouse service, makes petabyte-scale data analysis fast, cheap, and simple. Since launch, it has been one of AWS’s fastest growing services, with many thousands of customers across many industries. Enterprises such as NTT DOCOMO, NASDAQ, FINRA, Johnson & Johnson, Hearst, Amgen, and web-scale companies such as Yelp, Foursquare and Yahoo! have made Amazon Redshift a key component of their analytics infrastructure.

In this blog post, we look at performance improvements we’ve made over the last several months to Amazon Redshift, improving throughput by more than 2X and vacuuming performance by 10X.

Column Store
Large scale data warehousing is largely an I/O problem, and Amazon Redshift uses a distributed columnar architecture to minimize and parallelize I/O. In a column-store, each column of a table is stored in its own data block. This reduces data size, since we can choose compression algorithms optimized for each type of column. It also reduces I/O time during queries, because only the columns in the table that are being selected need to be retrieved.
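To make the idea concrete, here is a hypothetical table definition with per-column compression encodings; the table, columns, and encoding choices are illustrative, not a recommendation:

-- Hypothetical table: each column gets an encoding suited to its data.
CREATE TABLE page_views (
    view_id   BIGINT       ENCODE delta,    -- monotonically increasing ids
    view_date DATE         ENCODE delta32k, -- slowly changing dates
    url       VARCHAR(256) ENCODE lzo,      -- free-form text
    country   CHAR(2)      ENCODE bytedict  -- low-cardinality values
);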

However, while a column-store is very efficient at reading data, it is less efficient than a row-store at loading and committing data, particularly for small data sets. In patch 1.0.1012 (December 17, 2015), we released a significant improvement to our I/O and commit logic. This helped with small data loads and queries using temporary tables. While the improvements are workload-dependent, we estimate the typical customer saw a 35% improvement in overall throughput.

Regarding this feature, Naeem Ali, Director of Software Development, Data Science at Cablevision, told us:

Following the release of the I/O and commit logic enhancement, we saw a 2X performance improvement on a wide variety of workloads. The more complex the queries, the higher the performance improvement.

Improved Query Processing
In addition to enhancing the I/O and commit logic for Amazon Redshift, we released an improvement to the memory allocation for query processing in patch 1.0.1056 (May 17, 2016), increasing overall throughput by up to 60% (as measured on the standard TPC-DS benchmark at 3 TB), depending on the workload and the number of queries that spill from memory to disk. The query throughput improvement increases with the number of concurrent queries, as less data is spilled from memory to disk, reducing required I/O.

Taken together, these two improvements should double performance for customer workloads where a portion of the workload contains complex queries that spill to disk or cause temporary tables to be created.

Better Vacuuming
Amazon Redshift uses multi-version concurrency control to reduce contention between readers and writers to a table. Like PostgreSQL, it does this by marking old versions of data as deleted and new versions as inserted, using the transaction ID as a marker. This allows readers to build a snapshot of the data they are allowed to see and traverse the table without locking. One issue with this approach is that the system becomes slower over time, requiring a vacuum command to reclaim the space. This command reclaims the space from deleted rows and ensures that new data added to the table is placed in the right sorted order.
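For example, a periodic maintenance pass on a heavily updated table (the table name is a placeholder) might look like this:

-- Reclaim space from deleted rows and restore sort order.
VACUUM FULL event_log;
-- Refresh planner statistics afterwards.
ANALYZE event_log;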

We are releasing a significant performance improvement to vacuum in patch 1.0.1056, available starting May 17, 2016. Customers previewing the feature have seen dramatic improvements both in vacuum performance and in overall system throughput, as vacuum requires fewer resources.

Ari Miller, a Principal Software Engineer at TripAdvisor, told me:

We estimate that the vacuum operation on a 15TB table went about 10X faster with the recent patch, ultimately improving overall query performance.

You can query the VERSION function to verify that you are running at the desired patch level.
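For example, from any SQL client connected to the cluster:

-- Returns the cluster version string, including the current patch level.
SELECT version();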

Available Now
Unlike on-premises data warehousing solutions, there are no license or maintenance fees for these improvements, and no work is required on your part to obtain them. They simply show up as part of the automated patching process during your maintenance window.

Maor Kleider, Senior Product Manager, Amazon Redshift

 

Amazon Web ServicesEC2 Instance Console Screenshot

When our users move existing machine images to the cloud for use on Amazon EC2, they occasionally encounter issues with drivers, boot parameters, system configuration settings, and in-progress software updates. These issues can cause the instance to become unreachable via RDP (for Windows) or SSH (for Linux) and can be difficult to diagnose. On a traditional system, the physical console often contains log messages or other clues that can be used to identify and understand what’s going on.

In order to provide you with additional visibility into the state of your instances, we now offer the ability to generate and capture screenshots of the instance console. You can generate screenshots while the instance is running or after it has crashed.

You can generate a screenshot directly from the EC2 console (the instance must be using HVM virtualization), and the feature works for both Linux and Windows instances.

You can also create screenshots using the CLI (aws ec2 get-console-screenshot) or the EC2 API (GetConsoleScreenshot).
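For example, a hypothetical CLI invocation (the instance ID is a placeholder) that captures the screenshot and decodes the base64-encoded image data returned by the API:

# Capture the console screenshot and decode the base64-encoded JPEG.
# The instance ID below is a placeholder.
aws ec2 get-console-screenshot \
  --instance-id i-1234567890abcdef0 \
  --query ImageData \
  --output text | base64 --decode > console.jpg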

Available Now
This feature is available today in the US East (Northern Virginia), US West (Oregon), US West (Northern California), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and South America (São Paulo) Regions. There are no costs associated with it.

Jeff;

 

ProgrammableWebTwilio Unveils Cellular Communications Platform for Developers

Today at the SIGNAL conference, Twilio unveiled Twilio Programmable Wireless, a new cellular communications platform for developers built in partnership with T-Mobile. There are billions of connected devices in use today and building communications applications for connected devices can be difficult.

ProgrammableWebTwitter Tweaks 140 Character Calculation

Today, Twitter announced a number of changes aimed at creating richer public conversations by eliminating elements that currently count towards the 140 character limit. Certain elements that previously ate into the 140 character cap (e.g. media attachments, @names that auto-populate when you hit reply to a Tweet, URLs at the end of Tweets, etc.) will no longer count toward the limit. Twitter will roll out the changes in the coming months.

Shelley Powers (Burningbird)Learning Node, 2nd Edition is now live

Learning Node 2nd cover

Learning Node, 2nd Edition is now in production and should be hitting the streets within a few weeks. We had a bit of excitement when Node 6.0 was rolled out, just as we entered production. However, this edition of the book was specifically designed to accommodate Node’s rather energetic release schedule, and the book survived with only minimal changes.

In this edition, I focused heavily on the Node core API, rather than third-party modules. I figured the book audience either consists of front-end developers working with JavaScript in the browser, or server-side developers who have worked with other tools. In either case, the audience wants to know how to work with Node…not this module or that. Node, itself.

My one trip into the fanciful was the chapter on Node in other environments. In this chapter, I had a chance to introduce the reader to Microsoft’s new ChakraCore for Node, as well as using Node with Arduino and Raspberry Pi, and with the Internet of Things (IoT). I figured by Chapter 12, we all deserved a special treat.

The book’s Table of Contents:

Preface
1. The Node Environment
2. Node Building Blocks: the Global Objects, Events, and Node’s Asynchronous Nature
3. Basics of Node Modules and Npm
4. Interactive Node with REPL and More on the Console
5. Node and the Web
6. Node and the Local System
7. Networking, Sockets, and Security
8. Child Processes
9. Node and ES6
10. Full-stack Node Development
11. Node in Development and Production
12. Node in New Environments

A more detailed TOC is available at O’Reilly.

I had a good crew at O’Reilly on the book, and an exceptionally good tech reviewer in Ethan Brown.

ProgrammableWebDaily API RoundUP: NASA, Lyft, OAuth.io, Nudge, Neatly.io, MiaRec

Every day, the ProgrammableWeb team is busy, updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

Norman Walsh (Sun)Balisage 2016


Balisage 2016

Volume 19, Issue 11; 24 May 2016

Come hang out with all the markup geeks! Or, less flippantly, come discuss the hard problems and the deep questions with a broad array of engineers, librarians, philosophers, linguists, researchers, scholars, in-the-trenches practitioners, assorted polymaths, and interesting people of all stripes.


We are at the very beginning of time for the human race. It is not unreasonable that we grapple with problems. But there are tens of thousands of years in the future. Our responsibility is to do what we can, learn what we can, improve the solutions, and pass them on.

Richard Feynman

I spent the first few days of last week holed up in Rockville, MD, poring over the submissions that would become the Balisage 2016 Program. As ever, there were a lot of good papers.

I don’t think I can summarize the whole week (1–5 August in Bethesda, MD) any more eloquently than our esteemed chair, so I won’t try. I’ll just reproduce her words:

The 2016 program includes papers discussing reducing ambiguity in linked-open-data annotations, the visualization of XSLT execution patterns, automatic recognition of grant- and funding-related information in scientific papers, construction of an interactive interface to assist cybersecurity analysts, rules for graceful extension and customization of standard vocabularies, case studies of agile schema development, a report on XML encoding of subtitles for video, an extension of XPath to file systems, handling soft hyphens in historical texts, an automated validity checker for formatted pages, one no-angle-brackets editing interface for scholars of German family names and another for scholars of Roman legal history, and a survey of non-XML markup such as Markdown.


If you’re interested in markup or, especially this year, in how it can be used to provide stable, scalable, and sustainable infrastructures for modern applications, this is the conference you should attend. If you just want to hang out with incredibly smart people doing incredibly diverse and interesting things, this is also the place.

At the end of the day, this isn’t a conference about XML or SGML or JSON or XSLT or JavaScript or any other specific technology (though, by the same token, it’s also about all of them), it’s a conference about the power of declarative interfaces, open information, reuse, and data independence.

Looking forward to it!


ProgrammableWeb: APIsJsonOdds

The JsonOdds API allows developers to integrate odds data for MLB, NBA, NCAA Basketball, NCAA Football, NFL, NHL, MMA, soccer, and tennis into their own websites and applications. JsonOdds is a sports betting odds service designed specifically to provide developers with the data they need to create their own applications.
Date Updated: 2016-05-24

ProgrammableWeb: APIsSimpleML Automated Machine Learning

SimpleML is a fully-automated data science platform that finds the most appropriate machine learning algorithm for data, optimizes its hyper-parameters using Bayesian Optimization, and generates a machine learning model that can be used offline. Generated models can be exported for offline evaluation to different programming languages such as C, Java, Python, Javascript, Ruby and PHP. In addition lightweight models can be generated for Arduino, Raspberry Pi and other microcontrollers.
Date Updated: 2016-05-24

Amazon Web ServicesNew AWS Quick Start Reference Deployment – Standardized Architecture for PCI DSS

If you build an application that processes credit card data, you need to conform to PCI DSS (Payment Card Industry Data Security Standard). Adherence to the standard means that you need to meet control objectives for your network, protect cardholder data, implement strong access controls, and more.

In order to help AWS customers to build systems that conform to PCI DSS, we are releasing a new Quick Start Reference Deployment. The new Standardized Architecture for PCI DSS on the AWS Cloud (PDF or HTML) includes an AWS CloudFormation template that deploys a standardized environment that falls in scope for PCI DSS compliance (version 3.1).

The template describes a stack that deploys a multi-tiered Linux-based web application in about 30 minutes. It makes use of child templates, and can be customized as desired. It launches a pair of Virtual Private Clouds (Management and Production) and can accommodate a third VPC for development.

The template sets up the IAM items (policies, groups, roles, and instance profiles), S3 buckets (encrypted web content, logging, and backup), a Bastion host for troubleshooting and administration, an encrypted RDS database instance running in multiple Availability Zones, and a logging / monitoring / alerting package that makes use of AWS CloudTrail, Amazon CloudWatch, and AWS Config Rules. The architecture supports a wide variety of AWS best practices (all of which are detailed in the document) including use of multiple Availability Zones, isolation using public and private subnets, load balancing, auto scaling, and more.

You can use the template to set up an environment that you can use for learning, as a prototype, or as the basis for your own template.
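If you prefer the command line to the console, launching the stack might look something like this; the stack name, template URL, and parameters below are placeholders for the values given in the Quick Start guide:

# Placeholder values; use the template URL and parameters from the Quick Start guide.
aws cloudformation create-stack \
  --stack-name pci-dss-quickstart \
  --template-url https://s3.amazonaws.com/example-bucket/templates/main.template \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=KeyPairName,ParameterValue=my-key-pair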

The Quick Start also includes a Security Controls Reference. This document maps the security controls called out by PCI DSS to the relevant architecture decisions, features, and configurations.

Jeff;

PS – Check out our other AWS Enterprise Accelerator Quick Starts!

 

 

ProgrammableWeb5 Considerations Before Starting That WebRTC Project

As with any deployment, adding WebRTC to your environment requires careful consideration to ensure the execution produces the capabilities and user experience that you expect. In this post on No Jitter, Amir Zmora discusses five considerations before beginning a WebRTC project.

ProgrammableWeb: APIsOnTop Notification

The OnTop Notification API allows developers to get notifications on their phones that keep them informed about their apps. It can be used to get notified of new app content that requires review, report severe crashes or bugs, receive push notifications from internet-connected devices, and perform simple app analytics and engagement studies.
Date Updated: 2016-05-23

ProgrammableWeb: APIsUAE Car Registration

UAE Car Registration API aids the real-time retrieval of registration details along with technical data and functionality characteristics of vehicles bearing UAE number plates. Instead of prodding automobile sellers for vehicle descriptions, users can use the SOAP-based API to access crucial government-sourced automobile information including: model type, year of manufacture, year of registration, engine size, vehicle identification number, gross weight, displacement, and 50 other descriptive data fields. The API supports both XML and JSON formats for requests and responses. Its coverage extends across all seven emirates of the UAE: Abu Dhabi, Ajman, Dubai, Fujairah, Ras Al Khaimah, Sharjah, and Umm Al Quwain.
Date Updated: 2016-05-23

ProgrammableWeb: APIsPriceJSON

The PriceJSON API returns detailed pricing offers for over 340 million products sold through Amazon.com. It provides product price, shipping price, buy box, seller name, and seller rating. The API is RESTful, returns JSON, and includes a free monthly allowance of API calls.
Date Updated: 2016-05-23

Daniel Glazman (Disruptive Innovations)CSS Variables in BlueGriffon

I guess the title says it all :-) Click on the thumbnail to enlarge it.

CSS Variables in BlueGriffon

ProgrammableWebStitch Labs Announces New API

Stitch Labs, an inventory control platform, recently announced the launch of its API to help retailers innovate and streamline their operational infrastructure. This technical flexibility is critical for retailers to be successful in today's dynamic eCommerce environment full of competition and rising consumer expectations.

Amazon Web ServicesArduino Web Editor and Cloud Platform – Powered by AWS

Last night I spoke with Luca Cipriani from Arduino to learn more about the new AWS-powered Arduino Web Editor and Arduino Cloud Platform offerings. Luca was en route to the Bay Area Maker Faire and we had just a few minutes to speak, but that was enough time for me to learn a bit about what they have built.

If you have ever used an Arduino, you know that there are several steps involved. First you need to connect the board to your PC’s serial port using a special cable (you can also use Wi-Fi if you have the appropriate add-on “shield”), ensure that the port is properly configured, and establish basic communication. Then you need to install, configure, and launch your development environment, make sure that it can talk to your Arduino, tell it which make and model of Arduino that you are using, and select the libraries that you want to call from your code. With all of that taken care of, you are ready to write code, compile it, and then download it to the board for debugging and testing.

Arduino Code Editor
Luca told me that the Arduino Code Editor was designed to simplify and streamline the setup and development process. The editor runs within your browser and is hosted on AWS (although we did not have time to get into the details, I understand that they made good use of AWS Lambda and several other AWS services).

You can write and modify your code, save it to the cloud and optionally share it with your colleagues and/or friends. The editor can also detect your board (using a small native plugin) and configure itself accordingly; it even makes sure that you can only write code using libraries that are compatible with your board. All of your code is compiled in the cloud and then downloaded to your board for execution.

To see what the editor looks like, take a look at Sneak Peek on the New, Web-Based Arduino Create.

Arduino Cloud Platform
Because Arduinos are small, easy to program, and consume very little power, they work well in IoT (Internet of Things) applications. Even better, it is easy to connect them to all sorts of sensors, displays, and actuators so that they can collect data and effect changes.

The new Arduino Cloud Platform is designed to simplify the task of building IoT applications that make use of Arduino technology. Connected devices will be able to connect to the Internet, upload information derived from sensors, and effect changes upon command from the cloud. Building upon the functionality provided by AWS IoT, this new platform will allow devices to communicate with the Internet and with each other. While the final details are still under wraps, I believe that this will pave the way for sensors to activate Lambda functions and for Lambda functions to take control of displays and actuators.

I look forward to learning more about this platform as the details become available!

Jeff;

 

ProgrammableWebNest Labs Expands Works with Nest API

Nest Labs, which Google acquired for $3.2 billion in January 2014, launched Works with Nest to ensure that its devices can play nicely with devices made by third party manufacturers. Since its launch, numerous companies, including Jawbone, Philips, Whirlpool, Mercedes-Benz and Logitech, have built Works with Nest integrations.

ProgrammableWebDaily API RoundUp: Microsoft Bing, RevTwo, Stride.ai, Orbit.ai, Helioviewer

Every day, the ProgrammableWeb team is busy, updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

Amazon Web ServicesAWS Accelerator for Citrix – Migrate or Deploy XenApp & XenDesktop to the Cloud

If you are running Citrix XenApp, XenDesktop and/or NetScaler on-premises  and are interested in moving to the AWS Cloud, I have a really interesting offer for you!

In cooperation with our friends at Citrix (an Advanced APN Technology Partner), we have assembled an AWS Accelerator to help you to plan and execute a successful trial migration while using your existing licenses. The migration process makes use of the new Citrix Lifecycle Management (CLM) tool. CLM includes a set of proven migration blueprints that will help you to move your existing deployment to AWS. You can also deploy the XenApp and XenDesktop Service using Citrix Cloud, and tap CLM to manage your AWS-based resources.

Here’s the Deal
The AWS Accelerator lets you conduct a 25-user trial migration / proof of concept over a 60 day period. During that time you can use CLM to deploy XenApp, XenDesktop, and NetScaler on AWS per the reference architecture and a set of best practices. We will provide you with AWS Credit ($5000) and Citrix will provide you with access to CLM. A select group of joint AWS and Citrix launch partners will deliver the trials with the backing and support of technical and services teams from both companies.

Getting Started
Here’s what you need to do to get started:

  1. Contact your AWS (email us) or Citrix account team and ask to join the AWS Accelerator.
  2. Submit your request in order to be considered for Amazon EC2 credits and a trial of Citrix CLM.
  3. Create an AWS account if you don’t already have one.

After you do this, follow the steps in the Citrix blueprint (Deploy the XenApp and XenDesktop Proof of Concept blueprint with NetScaler to AWS) to build your proof-of-concept environment.

Multiple AWS Partners are ready, willing, and able to help you to work through the blueprint and to help you to tailor it to the needs of your organization. The AWS Accelerator Launch Services Partners include Accenture, Booz Allen Hamilton, CloudNation, Cloudreach, Connectria, Equinix (EPS Cloud), REAN Cloud, and SSI-Net. Our Launch Direct Connect partner is Level 3.

Learn More at Synergy
AWS will be sponsoring Citrix Synergy next week in Las Vegas and will be at booth #770. Citrix will also be teaching a hands-on lab (SYN618) based on the AWS Accelerator program on Monday, May 23rd at 8 AM. If you are interested in learning more, please sign up for the hands-on lab or stop by the booth and say hello to my colleagues!

Jeff;

 
