Daniel Glazman (Disruptive Innovations)Lyonnaise des Eaux

Last September, I posted a rant about Lyonnaise des Eaux, which had just landed me in an unbelievable mess. I am posting a second one today, for the same reason. Let me summarize the whole affair chronologically:

  • On 31 December 2013 I moved into my current home, a nice little house with a small garden. I naturally opened a water supply contract in my name, after taking care to check my meter number correctly. My home is part of a small development of four houses.
  • In early June 2014, a Danish family moved into one of the other houses. Since they were unfamiliar with how French public services work (or fail to work), I gave them some pointers and recommended that they carefully test their meter before giving its number to the Lyonnaise.
  • In September 2014, I noticed that I had not received a single bill from Lyonnaise des Eaux since June; logging into my online account, I discovered that it had simply been closed. It took me four or five calls to the Lyonnaise to find out that they had unilaterally terminated my contract because my Danish neighbour had given the wrong meter number, even though I had never notified the Lyonnaise of any departure, they had never notified me that the contract was being closed, and they had never contacted me about anything at all between January and September.
  • The Lyonnaise promised an immediate fix and sent a technician on site, who duly checked the meter numbers and took the readings.
  • Last Friday: not having paid much attention to my water bills since then, I needed a proof of residence. I went back to my online account, only to discover once again that my contract was now listed as "inactive". I immediately called the Lyonnaise back, of course.
  • I then learned that nothing had been done last September apart from the readings. The wrong meter had been left assigned to my Danish neighbour, and no meter had been reassigned to me. I grumbled a bit, because I am tired of wasting time fixing the Lyonnaise's incompetence. They apologized, called the problem an "unforgivable failure", and promised an immediate fix, a payment plan for the catch-up bill, a goodwill gesture, and a call from a sales manager.
  • I went back to my online account this morning, where my contract is still inactive... So I called the Lyonnaise for the nth time. While the contract has indeed been reactivated in my name, it has been attached not to my account but, once again, to my Danish neighbour's. Contrary to what had been promised, a catch-up bill has already been issued, with no possibility of a payment plan and no goodwill gesture. The amount of that bill is completely opaque and looks overstated, but no breakdown is provided.
  • I could neither see nor attach the new contract because not only had it been attached to my Danish neighbour, the account notification had been sent to him and not to me...
  • Having once again demanded a call from a sales manager, I was finally reached late this morning. A goodwill gesture has been made and a payment plan accepted. But since the Lyonnaise, despite my initial requests, took none of this into account yesterday or last Friday, the bills issued over the past 48 hours have to be binned while we wait for new ones.
  • On his side, of course, my Danish neighbour is due a refund for 13 months of consumption on my meter and a bill for 13 months of consumption on his own meter.
  • A second neighbour shows a consumption of exactly 0 cubic metres over a full year for a family of five, even though her account lists the correct meter number and the readings were taken...

For ten months, Lyonnaise des Eaux has been unable to sort out a situation created, admittedly, by my neighbour's initial mistake, but above all sustained by their unilateral termination of my contract and their complete lack of communication with both my Danish neighbour and me. The few corrective steps that were supposed to happen in September 2014 were never carried out. The letters promised at the time were never sent. I am left with 13 months of water consumption to settle because of what can only be called their incompetence. If I had not raised the alarm myself and lost several hours on the phone and testing meters, this situation could have dragged on for a very long time and led to monumental catch-up bills. My Danish neighbour is moving out at the end of August, so a new family will probably move in soon. I can only hope that Lyonnaise des Eaux's Customer Service (more of a Customer Disservice as things stand) will finally manage such a change correctly...

It so happens that Lyonnaise des Eaux has a monopoly in the area where I live. That is a real shame. Otherwise I would drop them on the spot...

Anne van Kesteren (Opera)Update on standardizing shadow DOM and custom elements

There has been revived interest in standardizing shadow DOM and custom elements across all browsers. To that end we had a bunch of discussion online, met in April to discuss shadow DOM, and met earlier this month to discuss custom elements. There is agreement around shadow DOM now. host.attachShadow() will give you a ShadowRoot instance. And <slot> elements can be used to populate the shadow tree with children from the host. The shadow DOM specification will remain largely unchanged otherwise. Hayato is working on updates.
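
As a concrete illustration, here is a minimal sketch of the agreed API; the { mode: 'open' } option and the element names are assumptions on my part, since the post only mentions attachShadow() and <slot>:

    // Create a host element whose children ("light DOM") will be projected.
    const host = document.createElement('div');
    host.innerHTML = '<span>projected from the host</span>';
    document.body.appendChild(host);

    // host.attachShadow() returns a ShadowRoot instance.
    const shadowRoot = host.attachShadow({ mode: 'open' });

    // A <slot> element populates the shadow tree with the host's children.
    shadowRoot.innerHTML = '<p>shadow content before the slot</p><slot></slot>';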

This is great, we can start implementing these changes in Gecko and ship them. Other browsers plan on doing the same.

Custom elements, however, are somewhat less settled. Here are some of the pain points:

  • Can we execute JavaScript at creation-time? This would explain how builtin elements are created and is fully deterministic. It does complicate a number of algorithms in the DOM (ranges/editing) that are already very hard to reason about and have been the cause of security bugs.

    The counter argument is that those algorithms are likely already harder than specified due to mutation events, several focus events that fire synchronously (!), and beforeunload. The counter argument to that is that maybe we can “fix” those cases and that adding new scenarios where JavaScript can run synchronously does not help lower the complexity tax.

  • Do we need upgrades? With some reservation, there seemed to be agreement that custom elements definitions loaded post-parse-time were an important facet to support. That is, the parser creates instances and these are later "upgraded" with some JavaScript into their local-name matching custom element. Retaining object identity here is important as experiments show that unrelated libraries might grab references before "upgrading" happens.

  • Can we have upgrades and be fully deterministic? This was the question that killed custom elements being done anytime soon. Once you accept upgrades, you accept that instances are created at a different time to when they get initialized. In this time difference observable changes happen to the tree and the wider world of DOM. If developers start relying on this time difference for their own custom elements (and feedback from Polymer indicated they all did), they end up creating elements that cannot be reused, break when created through document.createElement(), etc.

During a break in the meeting, Maciej came up with various hacks that would meet the determinism and upgrade requirements, but none of them seemed workable on closer scrutiny, although an attempt will still be made. That, and figuring out whether JavaScript needs to run during DOM operations, will be our next set of steps. Hopefully with some more research a clearer answer for custom elements will emerge.
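
To make the upgrade scenario from the list above concrete, here is a minimal sketch. It assumes a definition API along the lines of the customElements.define() call that browsers later converged on, which was not settled at the time of this post:

    // The parser (or any script) creates an instance before a definition exists.
    const widget = document.createElement('my-widget');
    document.body.appendChild(widget);

    // An unrelated library grabs a reference to the not-yet-upgraded element.
    const grabbedEarly = widget;

    // Later, the definition arrives and the existing instance is upgraded in place.
    customElements.define('my-widget', class extends HTMLElement {
      connectedCallback() {
        this.textContent = 'upgraded';
      }
    });

    // Object identity is retained across the upgrade: the early reference is the
    // same object as the now-initialized element in the document.
    console.assert(grabbedEarly === document.querySelector('my-widget'));

Anything that happens between creation and upgrade is exactly the window in which observable, non-deterministic behaviour can creep in, which is the crux of the third point above.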

Amazon Web ServicesNow Available – Amazon Aurora

We announced Amazon Aurora last year at AWS re:Invent (see Amazon Aurora – New Cost-Effective MySQL-Compatible Database Engine for Amazon for more info).  With storage replicated both within and across three Availability Zones, along with an update model driven by quorum writes, Amazon Aurora is designed to deliver high performance and 99.99% availability while easily and efficiently scaling to up to 64 TB of storage.

In the nine months since that announcement, a host of AWS customers have been putting Amazon Aurora through its paces.  As they tested a wide variety of table configurations, access patterns, and queries on Amazon Aurora, they provided us with the feedback that we needed to have in order to fine-tune the service. Along the way, they verified that each Amazon Aurora instance is able to deliver on our performance target of up to 100,000 writes and 500,000 reads per second, along with a price to performance ratio that is 5 times better than previously available.

Now Available
Today I am happy to announce that Amazon Aurora is now available for use by all AWS customers, in three AWS regions. During the testing period we added some important features that will simplify your migration to Amazon Aurora. Since my original blog post provided a good introduction to many of the features and benefits of the core product, I’ll focus on the new features today.

Zero-Downtime Migration
If you are already using Amazon RDS for MySQL and want to migrate to Amazon Aurora, you can do a zero-downtime migration by taking advantage of Amazon Aurora’s new features. I will summarize the process here, but I do advise you to read the reference material below and to do a practice run first! Immediately after you migrate, you will begin to benefit from Amazon Aurora’s high throughput, security, and low cost. You will be in a position to spend less time thinking about the ins and outs of database scaling and administration, and more time working on your application code.

If the database is active, start by enabling binary logging in the instance’s DB parameter group (see MySQL Database Log Files to learn how to do this). In certain cases, you may want to consider creating an RDS Read Replica and using it as the data source for the migration and replication (check out Replication with Amazon Aurora to learn more).
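
If you prefer to script those prerequisite steps instead of using the console, here is a rough sketch with the AWS SDK for JavaScript; the region, parameter group, and instance names are placeholders, and the reference material above remains the authoritative procedure:

    var AWS = require('aws-sdk');
    var rds = new AWS.RDS({ region: 'us-east-1' });

    // Enable binary logging by setting binlog_format in the DB parameter group.
    rds.modifyDBParameterGroup({
      DBParameterGroupName: 'my-mysql-params',   // placeholder parameter group
      Parameters: [{
        ParameterName: 'binlog_format',
        ParameterValue: 'MIXED',
        ApplyMethod: 'immediate'
      }]
    }, function (err) {
      if (err) { return console.error(err); }
      console.log('binlog_format updated');
    });

    // Optionally create a Read Replica to serve as the migration/replication source.
    rds.createDBInstanceReadReplica({
      DBInstanceIdentifier: 'mydb-replica',      // placeholder replica name
      SourceDBInstanceIdentifier: 'mydb'         // placeholder source instance
    }, function (err) {
      if (err) { return console.error(err); }
      console.log('Read Replica creation started');
    });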

Open up the RDS Console, select your existing database instance, and choose Migrate Database from the Instance Actions menu:

Fill in the form (in most cases you need do nothing more than choose the DB Instance Class) and click on the Migrate button:

Aurora will create a new DB instance and proceed with the migration:

A little while later (a coffee break might be appropriate, depending on the size of your database), the Amazon Aurora instance will be available:

Now, assuming that the source database was actively changing while you were creating the Amazon Aurora instance, replicate those changes to the new instance using the mysql.rds_set_external_master command, and then update your application to use the new Aurora endpoint!

Metrics Galore
Each Amazon Aurora instance reports a plethora of metrics to Amazon CloudWatch. You can view these from the Console and you can, as usual, set alarms and take actions as needed:




Easy and Fast Replication
Each Amazon Aurora instance can have up to 15 replicas, each of which adds additional read capacity. You can create a replica with a couple of clicks:

Due to Amazon Aurora’s unique storage architecture, replication lag is extremely low, typically between 10 ms and 20 ms.

5x Performance
When we first announced Amazon Aurora we expected to deliver a service that offered at least 4 times the price-performance of existing solutions. Now that we are ready to ship, I am happy to report that we’ve exceeded this goal, and that Amazon Aurora can deliver 5x the price-performance of a traditional relational database when run on the same class of hardware.

In general, this does not mean that individual queries will run 5x as fast as before (although Amazon Aurora’s fast, SSD-based storage certainly speeds things up). Instead, it means that Amazon Aurora is able to handle far more concurrent queries (both read and write) than other products. Amazon Aurora’s unique, highly parallelized access to storage reduces contention for stored data and allows it to process queries in a highly efficient fashion.

From our Partners
Members of the AWS Partner Network (APN) have been working to test their offerings and to gain operational and architectural experience with Amazon Aurora. Here’s what I know about already:

  • Business Intelligence – Tableau, Zoomdata, and Looker.
  • Data Integration – Talend and Attunity.
  • Query and Monitoring – Webyog, Toad,  and Navicat.
  • SI and Consulting – 8K Miles, 2nd Watch, and Nordcloud.

Ready to Roll
Our customers and partners have put Amazon Aurora to the test and it is now ready for your production workloads. We are launching in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) regions, and will expand to others over time.

Pricing works like this:

  • Database Instances – You pay by the hour for the primary instance and any replicas. Instances are available in 5 sizes, with 2 to 32 vCPUs and 15.25 to 244 GiB of memory. You can also use Reserved Instances to save money on your steady-state database workloads.
  • Storage – You pay $0.10 per GB per month for storage, based on the actual number of bytes of storage consumed by your database, sampled hourly. For this price you get a total of six copies of your data, two copies in each of three Availability Zones.
  • I/O – You pay $0.20 for every million I/O requests that your database makes.

See the Amazon Aurora Pricing page for more information.
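
As a rough, purely hypothetical illustration of how the three dimensions combine into a monthly estimate (the instance-hour rate below is made up; use the pricing page for real numbers):

    var hoursPerMonth = 730;
    var instanceHourly = 0.29;     // hypothetical on-demand rate for one instance
    var storageGB = 200;           // average GB actually consumed during the month
    var ioMillions = 50;           // millions of I/O requests during the month

    var monthly = hoursPerMonth * instanceHourly   // database instance hours
                + storageGB * 0.10                 // storage at $0.10 per GB-month
                + ioMillions * 0.20;               // I/O at $0.20 per million requests

    console.log('Estimated monthly cost: $' + monthly.toFixed(2));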

Go For It
To learn more, visit the Amazon Aurora page and read the Amazon Aurora Documentation. You can also attend the upcoming Amazon Aurora Webinar to learn more and to see Aurora in action.

Jeff;

ProgrammableWeb3taps, Craigslist Settle Scraping Legal Battle

3taps, a “one-stop data shop for developers,” has ended its long-running legal battle with Craigslist over its scraping of Craigslist listings.

ProgrammableWebDaily API RoundUp: Amazon Machine Learning, Google Maps Android

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWebGoogle Terminating Access to Unpublished Autocomplete API

Google will restrict access to an unofficial, unpublished Autocomplete API that developers have used to integrate autocomplete search functionality within their own applications.

The API was developed by Google "as a complement to Search," and Google says that it "never intended that it would exist disconnected from the purpose of anticipating user search queries."

ProgrammableWebHow APIs can Improve Communication Across a Decoupled Project

In his post on Lullabot's blog, Mateu Aguiló Bosch discussed the benefits of building from an API in a decoupled project.

ProgrammableWebEddystone APIs Bring Beacon Tech to Chrome Browser

Google recently pushed out an update to its Chrome browser for the iPhone and iPad and added several key functions. Chrome 44 for iOS is the first version to add support for the Physical Web. It's all powered by beacon technology and APIs.

ProgrammableWebBugzilla Releases Version 5.0, Updates API

Bugzilla, a bug tracking and reporting system, has announced the release of Bugzilla 5.0. It marks the first update in more than two years and comes with significant changes from the previous version (4.4). The most significant changes include the WebServices API, bug comment tags and membership checks.

Amazon Web ServicesAWS Week in Review – July 20, 2015

Let’s take a quick look at what happened in AWS-land last week. If you find these summaries useful, or if you have ideas for additional types of content, please feel free to leave a comment.

Monday, July 20
Tuesday, July 21
Wednesday, July 22
Thursday, July 23
Friday, July 24

New & Notable Open Source

New Customer Stories

New SlideShare Content

New YouTube Videos

New Marketplace Applications

Upcoming Events

Upcoming Events at the AWS Loft (San Francisco)

Upcoming Events at the AWS Loft (New York)

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Jeff;

Jeremy Keith (Adactio)On The Verge

Quite a few people have been linking to an article on The Verge with the inflammatory title The Mobile web sucks. In it, Nilay Patel heaps blame upon mobile browsers, Safari in particular:

But man, the web browsers on phones are terrible. They are an abomination of bad user experience, poor performance, and overall disdain for the open web that kicked off the modern tech revolution.

Les Orchard says what we’re all thinking in his detailed response The Verge’s web sucks:

Calling out browser makers for the performance of sites like his? That’s a bit much.

Nilay does acknowledge that the Verge could do better:

Now, I happen to work at a media company, and I happen to run a website that can be bloated and slow. Some of this is our fault: The Verge is ultra-complicated, we have huge images, and we serve ads from our own direct sales and a variety of programmatic networks.

But still, it sounds like the buck is being passed along. The performance issues are being treated as Somebody Else’s Problem …ad networks, trackers, etc.

The developers at Vox Media take a different, and in my opinion, more correct view. They’re declaring performance bankruptcy:

I mean, let’s cut to the chase here… our sites are friggin’ slow, okay!

But I worry about how they can possibly reconcile their desire for a faster website with a culture that accepts enormously bloated ads and trackers as the inevitable price of doing business on the web:

I’m hearing an awful lot of false dichotomies here: either you can have a performant website or you have a business model based on advertising. Here’s another false dichotomy:

If the message coming down from above is that performance concerns and business concerns are fundamentally at odds, then I just don’t know how the developers are ever going to create a culture of performance (which is a real shame, because they sound like a great bunch). It’s a particularly bizarre false dichotomy to be foisting when you consider that all the evidence points to performance as being a key differentiator when it comes to making moolah.

It’s funny, but I take almost the opposite view that Nilay puts forth in his original article. Instead of thinking “Oh, why won’t these awful browsers improve to be better at delivering our websites?”, I tend to think “Oh, why won’t these awful websites improve to be better at taking advantage of our browsers?” After all, it doesn’t seem like that long ago that web browsers on mobile really were awful; incapable of rendering the “real” web, instead only able to deal with WAP.

As Maciej says in his magnificent presentation Web Design: The First 100 Years:

As soon as a system shows signs of performance, developers will add enough abstraction to make it borderline unusable. Software forever remains at the limits of what people will put up with. Developers and designers together create overweight systems in hopes that the hardware will catch up in time and cover their mistakes.

We complained for years that browsers couldn’t do layout and javascript consistently. As soon as that got fixed, we got busy writing libraries that reimplemented the browser within itself, only slower.

I fear that if Nilay got his wish and mobile browsers made a quantum leap in performance tomorrow, the result would be even more bloated JavaScript for even more ads and trackers on websites like The Verge.

If anything, browser makers might have to take more drastic steps to route around the damage of bloated websites with invasive tracking.

We’ve been here before. When JavaScript first landed in web browsers, it was quickly adopted for three primary use cases:

  1. swapping out images when the user moused over a link,
  2. doing really bad client-side form validation, and
  3. spawning pop-up windows.

The first use case was so popular, it was moved from a procedural language (JavaScript) to a declarative language (CSS). The second use case is still with us today. The third use case was solved by browsers. They added a preference to block unwanted pop-ups.

Tracking and advertising scripts are today’s equivalent of pop-up windows. There are already plenty of tools out there to route around their damage: Ghostery, Adblock Plus, etc., along with tools like Instapaper, Readability, and Pocket.

I’m sure that business owners felt the same way about pop-up ads back in the late ’90s. Just the price of doing business. Shrug shoulders. Just the way things are. Nothing we can do to change that.

For such a young, supposedly-innovative industry, I’m often amazed at what people choose to treat as immovable, unchangeable, carved-in-stone issues. Bloated, invasive ad tracking isn’t a law of nature. It’s a choice. We can choose to change.

Every bloated advertising and tracking script on a website was added by a person. What if that person refused? I guess that person would be fired and another person would be told to add the script. What if that person refused? What if we had a web developer picket line that we collectively refused to cross?

That’s an unrealistic, drastic suggestion. But the way that the web is being destroyed by our collective culpability calls for drastic measures.

By the way, the pop-up ad was first created by Ethan Zuckerman. He has since apologised. What will you be apologising for in decades to come?

ProgrammableWeb: APIsActive Popular Activity Search

The Active Popular Activity Search API provides data related to endurance sports, team sports, youth camps, tennis leagues, parks & recreation, fitness, classes, outdoor adventure, and business events. The API requires a key, returns JSON, and provides a list of the most popular activities in the world. ACTIVE.com is an online community revolving around people and the sport and recreational activities that they like to do.
Date Updated: 2015-07-27

ProgrammableWeb: APIsActive Campground

With the Active Campground API, developers can create applications that promote the campgrounds available through ReserveAmerica.com. The site offers trip planning guides, hunting & fishing licenses, and camping gear. Examples of data that users can access include campgrounds with horses in the state of California and campgrounds in the state of Maine that allow pets. The API is read-only and requires an API key for authentication. To test the API, visit http://developer.active.com/io-docs - ACTIVE.com is an online community revolving around people and the sport and recreational activities that they like to do.
Date Updated: 2015-07-27

ProgrammableWeb: APIsActive Campsite Search

The Active Campsite Search API provides access to data on parks, campgrounds, and campsites located in the United States and Canada. Search filters include RV friendly, electricity, hunting, fishing, arrival date, and length of stay. The REST API returns JSON and XML and requires a key for access. ACTIVE.com is an online community revolving around people and the sport and recreational activities that they like to do.
Date Updated: 2015-07-27

ProgrammableWeb: APIsMicrosoft Health Cloud

Preview Microsoft's Health Cloud API and leverage data from Microsoft Health in your own apps. The API uses HTTP GET methods and JSON data types with OAuth2 for authentication. Register for a developer account to receive your Client ID and Secret. Use the Microsoft Health Cloud API to access user data like heart rate, step counts, or distance, and activity data like run, bike, guided workout, or sleep activity. See the project documentation for full method descriptions and instructions on getting started.
Date Updated: 2015-07-27

ProgrammableWeb: APIsBugsnag Errors

The Bugsnag Errors REST API allows developers to access and integrate the errors functionality of Bugsnag with other applications. The API methods include listing account and project errors, retrieving error details, and managing errors. Bugsnag is a tool that detects and diagnoses crashes in web and mobile applications.
Date Updated: 2015-07-27

ProgrammableWeb: APIsBugsnag Projects

The Bugsnag Projects REST API allows developers to access and integrate the projects functionality of Bugsnag with other applications. The API methods include listing account projects, retrieving project details, and managing projects. Bugsnag is a tool that detects and diagnoses crashes in web and mobile applications.
Date Updated: 2015-07-27

ProgrammableWeb: APIsBugsnag Accounts

The Bugsnag Accounts REST API allows developers to access and integrate the accounts functionality of Bugsnag with other applications. The API methods include listing accounts, retrieving account details, and authenticating accounts. Bugsnag is a tool that detects and diagnoses crashes in web and mobile applications.
Date Updated: 2015-07-27

ProgrammableWebMixpanel Unveils Codeless Mobile Analytics

Mobile analytics provider Mixpanel has released a solution for “point & click analytics” that eliminates the need for companies to write code to implement and update analytics in their apps.

ProgrammableWebEFI Launches Fiery API for Digital Print Management

EFI, a digital imaging solution provider, has launched the Fiery API to enable print service providers to develop customized solutions that streamline digital print operations. The API allows service providers to integrate digital print operations with existing and third-party IT systems. The integration allows for smooth data sharing and eliminates multiple manual steps traditionally involved in printing projects.

ProgrammableWebDynatrace Adds Support for Docker API to APM Tools

Among developers, Docker containers are clearly all the rage these days. But it may very well turn out to be the Docker API that winds up saving IT organizations from a management crisis that is starting to build inside IT operations teams.

ProgrammableWebApple Yanks Nest From Stores in Favor of Fussy HomeKit

Apple stirring up some controversy? Pshaw, never! OK, except for today and, yes, earlier this week, too. Here are the latest reasons Apple has come under fire from consumers and developers alike.

Amazon Web ServicesElastic MapReduce Release 4.0.0 With Updated Applications Now Available

Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects, such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Since the service first launched in 2009 (Announcing Amazon Elastic MapReduce), we have added comprehensive console support and many, many features. Some of the most recent features include:

Today we are announcing Amazon EMR release 4.0.0, which brings many changes to the platform. This release includes updated versions of the Hadoop ecosystem applications and Spark that are available to install on your cluster, and it improves the application configuration experience. As part of this release we also adjusted some of the ports and paths to bring them into better alignment with several Hadoop and Spark standards and conventions. Unlike other AWS services, which do not have discrete releases and are frequently updated behind the scenes, EMR has versioned releases so that you can write programs and scripts that depend on features found only in a particular EMR release, or on a version of an application found in a particular EMR release.

If you are currently using AMI version 2.x or 3.x, read the EMR Release Guide to learn how to migrate to 4.0.0.

Application Updates
EMR users have access to a number of applications from the Hadoop ecosystem. This version of EMR features the following updates:

  • Hadoop 2.6.0 – This version of Hadoop includes a variety of general functionality and usability improvements.
  • Hive 1.0 – This version of Hive includes performance enhancements, additional SQL support, and some new security features.
  • Pig 0.14 – This version of Pig features a new ORCStorage class, predicate pushdown for better performance, bug fixes, and more.
  • Spark 1.4.1 – This release of Spark includes a binding for SparkR and the new Dataframe API, plus many smaller features and bug fixes.

Quick Cluster Creation in Console
You can now create an EMR cluster from the Console using the Quick cluster configuration experience:

Improved Application Configuration Editing
In Amazon EMR AMI versions 2.x and 3.x, bootstrap actions were primarily used to configure applications on your cluster. With Amazon EMR release 4.0.0, we have improved the configuration experience by providing a direct method to edit the default configurations for applications when creating your cluster. We have added the ability to pass a configuration object which contains a list of the configuration files to edit and the settings in those files to be changed. You can create a configuration object and reference it from the CLI, the EMR API, or from the Console. You can store the configuration information locally or in Amazon Simple Storage Service (S3) and supply a reference to it (if you are using the Console, click on Go to advanced options when you create your cluster in order to specify configuration values or to use a configuration file):
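
For example, a create-cluster call with a configuration object might look roughly like the sketch below, using the AWS SDK for JavaScript; the cluster name, sizing, and the core-site setting are placeholders rather than recommendations:

    var AWS = require('aws-sdk');
    var emr = new AWS.EMR({ region: 'us-east-1' });

    emr.runJobFlow({
      Name: 'my-emr-400-cluster',                 // placeholder name
      ReleaseLabel: 'emr-4.0.0',
      Applications: [{ Name: 'Hadoop' }, { Name: 'Spark' }],
      Instances: {
        MasterInstanceType: 'm3.xlarge',
        SlaveInstanceType: 'm3.xlarge',
        InstanceCount: 3,
        KeepJobFlowAliveWhenNoSteps: true
      },
      JobFlowRole: 'EMR_EC2_DefaultRole',
      ServiceRole: 'EMR_DefaultRole',
      // The configuration object: the file (classification) to edit and the
      // settings within that file to change.
      Configurations: [{
        Classification: 'core-site',
        Properties: { 'io.file.buffer.size': '65536' }   // placeholder setting
      }]
    }, function (err, data) {
      if (err) { return console.error(err); }
      console.log('Cluster starting:', data.JobFlowId);
    });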

To learn more, read about Configuring Applications.

New Packaging System / Standard Ports & Paths
Our release packaging system is now based on Apache Bigtop. This will allow us to add new applications to EMR even more quickly.

Also, we have moved most ports and paths on EMR release 4.0.0 to open source standards. For more information about these changes read Differences Introduced in 4.x.

Additional EMR Configuration Options for Spark
The EMR team asked me to share a couple of tech tips with you:

Spark on YARN has the ability to dynamically scale the number of executors used for a Spark application. You still need to set the memory (spark.executor.memory) and cores (spark.executor.cores) used for an executor in spark-defaults, but YARN will automatically allocate the number of executors to the Spark application as needed. To enable dynamic allocation of executors, set spark.dynamicAllocation.enabled to true in the spark-defaults configuration file. Additionally, the Spark shuffle service is enabled by default in Amazon EMR, so you do not need to enable it yourself.

You can configure your executors to utilize the maximum resources possible on each node in your cluster by setting the maximizeResourceAllocation option to true when creating your cluster. You can do this by adding the property to the “spark” classification in your configuration object. This option calculates the maximum compute and memory resources available to an executor on a node in the core node group and sets the corresponding spark-defaults settings with this information. It also sets the number of executors, by setting spark.executor.instances to the initial number of core nodes specified when you create the cluster. Note, however, that you cannot use this setting and also enable dynamic allocation of executors.
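
Expressed as configuration objects, the two approaches described above might look like this; the executor sizing values are placeholders, and the two options should not be combined:

    // Option 1: dynamic allocation of executors via the spark-defaults classification.
    var dynamicAllocation = [{
      Classification: 'spark-defaults',
      Properties: {
        'spark.dynamicAllocation.enabled': 'true',
        'spark.executor.memory': '4g',   // placeholder sizing
        'spark.executor.cores': '2'      // placeholder sizing
      }
    }];

    // Option 2: let EMR size executors to each node via the "spark" classification.
    // Do not enable dynamic allocation at the same time.
    var maximizeResources = [{
      Classification: 'spark',
      Properties: { 'maximizeResourceAllocation': 'true' }
    }];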

To learn more about these options, read Configure Spark.

Available Now
All of the features listed above are available now and you can start using them today.

If you are new to large-scale data processing and EMR, take a look at our Getting Started with Amazon EMR page. You’ll find a new tutorial video, along with information about training and professional services, all aimed at getting you up and running quickly and efficiently.

Jeff;

 

ProgrammableWebExtract Spatial Relationships in Data with the Mapsense Context API

Mapsense, a startup specializing in tools and services that understand big location data sets, has announced the availability of the Mapsense Context API, which allows users to extract spatial relationships in their data.

ProgrammableWeb: APIsBugsnag Events

The Bugsnag Events REST API allows developers to access and integrate the event functionality of Bugsnag with other applications. The API methods include listing project events, retrieving event details, and managing events. Bugsnag is a tool that detects and diagnoses crashes in web and mobile applications.
Date Updated: 2015-07-24

ProgrammableWeb: APIsBugsnag Users

The Bugsnag Users REST API allows developers to access and integrate the users functionality of Bugsnag with other applications. The API methods include listing account users, retrieving user details, and managing users. Bugsnag is a tool that detects and diagnoses crashes in web and mobile applications.
Date Updated: 2015-07-24

ProgrammableWeb: APIsAmazon Machine Learning

The Amazon Machine Learning REST API allows developers to build applications based on Amazon Machine Learning models that find patterns in data. Some example uses of this API are applications for fraud detection, forecasting demand, targeted marketing, and click prediction.
Date Updated: 2015-07-24

ProgrammableWeb: APIsUnofficial Marvel Stream

The Unofficial Marvel Stream project provides access to Marvel API data in paginated form. Developers can integrate information related to characters or comics into their applications. This work, created by Matt DesLauriers, is available on GitHub, where users can find installation instructions, examples, usage notes, events, and instructions for running tests.
Date Updated: 2015-07-24

ProgrammableWeb: APIsBurning Soul Moon

The Moon API is part of an independent project by Burning Soul, a computer and technology organization. The API allows developers to obtain data about the moon, such as its current age in days, illumination, stage, distance from the earth, and distance from the sun. Use this API to integrate scientific information into educational apps. Example output, structure, and usage information are available.
Date Updated: 2015-07-24

ProgrammableWeb: APIsBurning Soul WhoIs

The WhoIs API is part of an independent project by Burning Soul. It fetches data related to domain status, the owner's name and e-mail, and DNS server information. The API only presents content provided by the server, which means errors may be returned because of privacy protection.
Date Updated: 2015-07-24

ProgrammableWeb: APIsBurning Soul GeoIP

The GeoIP API is part of an independent project by Burning Soul. It provides access to a database for identifying a location by IP address. The API responds according to the current remote address and returns JSON output when users pass additional IP information via the URL.
Date Updated: 2015-07-24

ProgrammableWeb: APIsBurning Soul QRCode

The QRCode API is part of an independent project by Burning Soul. The API converts data to a QR code and stores the output on the server as a static file. Codes can be generated by script or by calling the image type with the data in the URL. Parameters include the data to convert to a QR code, the size of the output image, and the error correction of the data.
Date Updated: 2015-07-24

ProgrammableWeb: APIsJustVisual Face Detection/Expression

The JustVisual Face Detection/Expression API provides access to face recognition features. By integrating with this API, users can find faces inside a query image and get back the coordinates where the faces were found. The API can also classify the expression once a face has been identified.
Date Updated: 2015-07-24

ProgrammableWeb: APIsJustVisual Adoptable Pet

The JustVisual Adoptable Pet API offers access to image recognition capabilities, in this case to identify adoptable pets. A sample GET request to the pets API is available. Every time a user uploads a picture of a pet, the results list organizations such as PetFinder.com and AdoptaPet.com that have that particular pet ready for adoption. Besides images, responses include imageURL, title, description, pageURL, and breed.
Date Updated: 2015-07-24

ProgrammableWeb: APIsGoogle Nearby

The Google Nearby API allows applications to discover nearby devices, connect to them, and exchange messages. As Google suggests, this API could be useful for collaboration (using a whiteboard), local multiplayer gaming (using a network), and multi-screen gaming (using a device such as Android TV). It lets developers communicate with nearby devices in real time. This API is associated with Nearby Connections; Google will release Nearby Messages soon, now that Google Play Services 7.8 is available. More info at https://developers.google.com/nearby/
Date Updated: 2015-07-24

ProgrammableWeb: APIsGitbook

The Gitbook API allows users to list their books, get details about a book, get details about an author, proofread a text, spellcheck a list of words, and access the OPDS catalog. Gitbook is a publishing toolchain designed to help take users' works from ideas to finished books. Users can publish their finished books using Git or GitHub and sell them on all main marketplaces at the price they want.
Date Updated: 2015-07-24

ProgrammableWebAplos API Integrates NonProfit and Church Data with Aplos Software

Aplos Software unveiled today its API, which will allow software providers who serve nonprofits and churches to connect to Aplos’ online nonprofit software and securely share data. Specifically, the API is designed to connect with software providers that offer donor management software, church management software, online donation platforms, or payroll software.

ProgrammableWebGoogle Releases Abelana, a Reference Implementation for gRPC Services

Earlier this year, we covered gRPC, an open source HTTP/2-based framework from Google to create remote procedure services that enable a more efficient communication mechanism between mobile devices and server-side applications.

ProgrammableWebDaily API RoundUp: BeaconsInSpace, Crowd Valley, Tropo, Unofficial Instructables

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

Amazon Web ServicesJoining a Linux Instance to a Simple AD (AWS Directory Service)

If you are tasked with providing and managing user logins to a fleet of Amazon Elastic Compute Cloud (EC2) instances running Linux, I have some good news for you!

You can now join these instances to an AWS Directory Service Simple AD directory and manage credentials for your user logins using standard Active Directory tools and techniques. Your users will be able to log in to all of the instances in the domain using the same set of credentials. You can exercise additional control by creating directory groups.

We have published complete, step-by-step instructions to help you get started. You’ll need to be running a recent version of the Amazon Linux AMI, Red Hat Enterprise Linux, Ubuntu Server, or CentOS on EC2 instances that reside within an Amazon Virtual Private Cloud, and you’ll need to have an AWS Directory Service Simple AD therein.

You simply create a DHCP Options Set for the VPC and point it at the directory, install and configure a Kerberos client, join the instance to the domain, and reboot it. After you have done this, you can SSH to it and log in using an identity from the directory. The documentation also shows you how to log in using domain credentials, add domain administrators to the sudoers list, and limit access to members of specific groups.
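
Here is a sketch of that very first step (creating a DHCP options set that points the VPC at the directory) using the AWS SDK for JavaScript; the domain name, DNS addresses, and VPC ID are placeholders, and the step-by-step instructions remain the authoritative guide:

    var AWS = require('aws-sdk');
    var ec2 = new AWS.EC2({ region: 'us-east-1' });

    // Create a DHCP options set that points instances at the Simple AD directory.
    ec2.createDhcpOptions({
      DhcpConfigurations: [
        { Key: 'domain-name', Values: ['corp.example.com'] },               // placeholder domain
        { Key: 'domain-name-servers', Values: ['10.0.0.10', '10.0.1.10'] }  // directory DNS servers
      ]
    }, function (err, data) {
      if (err) { return console.error(err); }

      // Associate the new options set with the VPC that holds the instances.
      ec2.associateDhcpOptions({
        DhcpOptionsId: data.DhcpOptions.DhcpOptionsId,
        VpcId: 'vpc-12345678'                                               // placeholder VPC
      }, function (err2) {
        if (err2) { return console.error(err2); }
        console.log('DHCP options associated with the VPC');
      });
    });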

Jeff;

ProgrammableWebSighthound Releases Sighthound Cloud Computer Vision API

Sighthound, Inc. today announced the release of Sighthound Cloud, a service that enables software developers to detect and classify people and faces in still images by means of a simple REST API call.

In tests against publicly available databases, Sighthound Cloud proved more accurate than other leading cloud services in detecting faces, and in addition can find people whether or not faces are visible.

ProgrammableWebGoogle Formally Removes Files API

Two years after Google deprecated the Files API, the company has announced that it will formally remove the API on Aug. 5. The Files API enabled apps to read and write blobs to both the App Engine Blobstore storage system and Google Cloud Storage.

ProgrammableWebAkana Adds Business Analytics to API Management Platform

Looking to provide more insight into the business processes swirling around its API management platform, Akana today released an update to Envision that adds business analytics capabilities to the operational analytics that platform already provides.

Amazon Web ServicesNew Amazon CloudWatch Action – Reboot EC2 Instance

Amazon CloudWatch monitors your cloud resources and applications, including Amazon Elastic Compute Cloud (EC2) instances. You can track cloud, system, and application metrics, see them in graphical form, and arrange to be notified (via a CloudWatch alarm) if they cross a threshold value that you specify. You can also stop, terminate, or recover an EC2 instance when an alarm is triggered (see my blog post, Amazon CloudWatch – Alarm Actions for more information on alarm actions).

New Action – Reboot Instance
Today we are giving you a fourth action. You can now arrange to reboot an EC2 instance when a CloudWatch alarm is triggered. Because you can track and alarm on cloud, system, and application metrics, this new action gives you a lot of flexibility.

You could reboot an instance if an instance status check fails repeatedly. Perhaps the instance has run out of memory due to a runaway application or service that is leaking memory. Rebooting the instance is a quick and easy way to remedy this situation; you can easily set this up using the new alarm action. In contrast to the existing recovery action, which is specific to a handful of EBS-backed instance types and is applicable only when the instance state is considered impaired, this action is available on all instance types and is effective regardless of the instance state.

If you are using the CloudWatch API or the AWS Command Line Interface (CLI) to track application metrics, you can reboot an instance if the application repeatedly fails to respond as expected. Perhaps a process has gotten stuck or an application server has lost its way. In many cases, hitting the (virtual) reset switch is a clean and simple way to get things back on track.

Creating an Alarm
Let’s walk through the process of creating an alarm that will reboot one of my instances if the CPU Utilization remains above 90% for an extended period of time. I simply locate the instance in the AWS Management Console, focus my attention on the Alarm Status column, and click on the icon:

Then I click on Take the action, choose Reboot this instance, and set the parameters (90% or more CPU Utilization for 15 minutes in this example):

If necessary, the console will ask me to confirm the creation of an IAM role as part of this step (this is a new feature):

The role will have permission to call the “Describe” functions in the CloudWatch and EC2 APIs. It also has permission to reboot, stop, and terminate instances.

I click on Create Alarm and I am all set!
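
If you would rather script the alarm than click through the console, a rough equivalent with the AWS SDK for JavaScript might look like this; the alarm name and instance ID are placeholders, and the IAM permissions from the console step still apply:

    var AWS = require('aws-sdk');
    var cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' });

    cloudwatch.putMetricAlarm({
      AlarmName: 'reboot-on-high-cpu',             // placeholder name
      Namespace: 'AWS/EC2',
      MetricName: 'CPUUtilization',
      Dimensions: [{ Name: 'InstanceId', Value: 'i-0abc1234' }],   // placeholder instance
      Statistic: 'Average',
      Period: 300,                                 // five-minute periods
      EvaluationPeriods: 3,                        // 15 minutes total, as in the walkthrough
      Threshold: 90,
      ComparisonOperator: 'GreaterThanOrEqualToThreshold',
      // The reboot action uses an "automate" ARN, like the existing
      // stop, terminate, and recover actions.
      AlarmActions: ['arn:aws:automate:us-east-1:ec2:reboot']
    }, function (err) {
      if (err) { return console.error(err); }
      console.log('Alarm created');
    });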

This feature is available now and you can start using it today in all public AWS regions.

Jeff;

ProgrammableWebGoogle Cloud Storage Nearline Goes to General Availability

Earlier this year, Google Cloud Storage Nearline disrupted cloud cold storage services by providing near-real-time access to stored data. This was generally not the norm in cold storage services such as Amazon Glacier, where the access time could be in hours. The cost and low-latency features made it attractive for organizations to consider the service.

ProgrammableWebAPI Craft Conference Starts July 27 in Detroit

After a sellout event last year, there is still room for API developers with a commitment to best practice design to attend the API Craft unconference July 27-29 in Detroit.

To commence proceedings, a preconference meetup will be led by Lorinda Brandon, focusing on Swagger.

Matt Webb (Schulze & Webb)What's new on Machine Supply

Okay! So since I first tweeted about Machine Supply 48 hours ago, there have been 46 books recommended by people-who-aren't-me! In case you missed it, here's where I explain what Machine Supply is.

And, for example, here's my recommendation for Wild Life.

I have also earned Amazon affiliate fees totalling - drumroll - $3.30.

$$$

To celebrate I've added a simple way to see new recommendations -- once you're signed in, there's a "What's new" page which lists the 15 most recent.

tbh I'm not totally happy with the functionality, but it'll do as a start.

ProgrammableWebCogito API Finance Brings Finance Industry Content Analysis

Expert System (EXSY.MI), the leader in multilingual semantic intelligence technology for the effective management of unstructured information, today announced Cogito API Finance, the first, ready-to-deploy and fully configured solution specifically designed to help financial industry developers and IT professionals speed their search and content analysis application deployments, avoiding months of engineering and design time.

ProgrammableWeb: APIsUnofficial Instructables

The Unofficial Instructables API is available on Mashape. Instructables is a DIY platform that allows users to publish online guides. This resource is not officially supported by Instructables; it aims to provide access to data such as categories (for example, technology and food), lists in the form of metadata, and details that display the content of an individual Instructable. See more of the Unofficial API by Adam Watters at http://www.instructables.com/id/Using-the-Instructables-Unofficial-API/
Date Updated: 2015-07-23

ProgrammableWeb: APIsCrowd Valley

The Crowd Valley REST API provides developers with access to integrate the functionality of Crowd Valley with other applications and to create new applications. Some example API methods include retrieving organization information, retrieving investment information, and managing user information. Crowd Valley provides financial information and backend support for creating digital financial and investment platforms.
Date Updated: 2015-07-23

ProgrammableWebHow to turn your predictive models into APIs Using Domino

This blog post is about deploying analytical models (e.g., predictors, recommenders, classifiers) as REST APIs. I will explain why this is useful, and show you how to do this without a team of full-stack developers/engineers. All you need are the R/Python models you develop and a Domino Data Lab account.

ProgrammableWebDaily API RoundUp: littleBits, Demographics Pro, FreightAPI, 7digital

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWebIBM Extends Outreach Program to Open Source Community

IBM today extended its outreach program to the open source community by releasing 50 projects to the open source community. It also unfurled developerWorks Open, a community site for open source developers that is modeled on an existing developerWorks online community through which IBM shares access to emerging code with its broader developer community.

ProgrammableWebHow to Leverage Machine Learning via Predictive APIs

There’s no doubt that being able to predict how your clients will act can give you an edge over the competition. With predictive APIs, the technology is now in place for organizations to realize such an advantage.

ProgrammableWebHow Predictive APIs Simplify Machine Learning

App developers are always looking for ways to make the lives of their users easier and for ways to introduce innovative features that help users save time. For this reason, Machine Learning (ML) has been increasingly popular in app development. Classical examples include spam filtering, priority filtering, smart tagging, and product recommendations.

ProgrammableWebGoogle Cloud Bigtable Announces Go API

Google announced the beta of Cloud Bigtable in May. Cloud Bigtable is a fully managed NoSQL service that is scalable and able to handle large amounts of data. It is powered by Bigtable, the service behind popular Google services such as Gmail, YouTube and more.

ProgrammableWebMicrosoft Dynamics CRM Web API Claims Long Term Viability

Microsoft recently announced a new Web API for its Dynamics CRM offering. The initial announcement of the Microsoft Dynamics CRM Web API introduced a preview version so developers can get their hands dirty with the new API.

ProgrammableWebSmartsheet Publicly Releases API 2.0, Deprecates 1.1

Smartsheet has announced the public availability of the Smartsheet API 2.0. New features with the 2.0 launch include multipart upload, bulk-insert/bulk-update, and pagination of results. Smartsheet expects these new features to improve efficiency when interacting with Smartsheet data.

Matt Webb (Schulze & Webb)Machine Supply

I read a bunch of books -- here are the books I read in 2008 which was a particularly good year. Some books are comfort blankets (Red Mars, Kim Stanley Robinson), some are like the best hikes: a steady workout on the muscles accompanied by epiphany after epiphany after epiphany (Philosophy & Simulation, Manuel DeLanda). Ursula le Guin makes me forget where I am. Three Men in a Boat (Jerome K. Jerome) makes me laugh out loud, and was the first book recommended to me by Angela. We're now married. So.

Last week I was having a beer with Ben and Tom (literally everyone in this industry is called Ben or Tom or Matt), swapping sci-fi recommendations. It wasn't for finding new books, or at least not exclusively -- knowing what books someone loves is to know a person. I read 104 books in 2008, that was tough going. In the maybe 70 reading years I have available - mod a life-extending singularity cascading its way into reality - I could read a maximum 7,280 books. At all, ever. There are 6,000 books published every day. Knowing what books someone loves is to know their perspective and their journey, to have something special in common, to share a language.

I heard once that geeks come in two flavours: those who read A Thousand Plateaus; those who read Godel, Escher, Bach.

I'm ATP through and through. It changed my life. Here's chapter 1 as a PDF, I used to keep it printed by the door to give out to Jehovah's Witnesses. It's a philosophy roller coaster, a call to arms. Didn't get on with GEB.

I'm Starship Troopers not Dune, The Beatles not the Stones.

Recommendations

Anyway, I like to collect book recommendations. Sometimes I even read the books. At conferences, for years, I've asked people for their 3 recommendations.

Not favourites. Not the books they think I ought to read. Just 3 recommendations, whatever's on their mind. I try to find a board and some post-its and get people to share. Here are some recommendations from Design Engaged in 2004 where I met so many friends for the first time. Here's Matt Jones' version of the same question from Foo in 2014 -- I wasn't there, but touchingly the board is titled "The Matt Webb question: What 3 books should I read this year?" Thank you! I'll be at Foo in a couple of weeks, let's do the same session.

I love to share my recommendations with other people. Here are the books I read in April and May 2015.

So I made a website.

Machine Supply

At Machine Supply I can make a book recommendation by pasting in an Amazon link and writing a short paragraph. Then when I share a link to that (on my blog or on Twitter), my reason comes joined together with two Amazon links... one to the US site and one to the UK site. That's always been a niggle for me, to bundle those things together, to make a recommendation which is easy to share.

I'm classing this as a hobby, which means I'm trying to make the kind of website that I'd use. I'm not a hugely early adopter generally. I don't spend much time kicking the tyres of online services, I need encouragement to keep using things because I'm enormously forgetful, and I'm hugely sceptical about putting words I write into other people's databases rather than plain text on my own laptop.

All of which means -- that's what I'm making. A website to make it easy for me to share book recommendations. Here's my recommendation for The Peripheral (William Gibson), and here it is again as it appears on Twitter.

What was amazing -- and honestly what I hoped would happen, and what I'll make sure the site encourages to happen, but didn't know whether it would happen or not - what was amazing is that a few friends tried out Machine Supply when I tweeted about it yesterday.

And already I've seen @blech recommend Spacesuit: Fashioning Apollo. (Now bought on Amazon.) And @chrbutler recommended The Book of Strange New Things - which I also love - and by the way mentioned four other books, one of which is a deeply loved favourite of mine, and the other three I hadn't heard of. So those are now on my books-to-check-out list.

What next?

As it says on the front page, Current status: Pre-pre-alpha, hobby. Links will break. Cities will fall.

I've got a hobby! Haven't had one of those in a while.

Have a play. Let me know if anything breaks. My aim is to make a handy, finely-tuned little crystal. Any and all ideas welcome.

Machine Supply is over here.

Footnotes
