ProgrammableWeb – 7 Things to Consider When Designing a Travel API

Leading travel companies are increasingly broadening their distribution channels by building platforms that enable partners and customers to connect and access their inventory and sales systems via application programming interfaces (APIs). APIs are the tools powering many of today’s mobile and Web applications.

Anne van Kesteren (Opera) – Web platform security boundaries

What are the various security boundaries the platform offers? I have an idea, but I’m not completely sure whether it is exhaustive:

  • Origins: scheme, host, and port, or a unique identifier. Used by most platform features.
  • Origin groups: all origins whose scheme and host's registrable domain are the same (or scheme and host if host is not a domain, or just origin, if origin is a unique identifier). document.domain has forced this upon us.
  • Schemeless origin groups: all origins whose host's registrable domain are the same (or host if host is not a domain, or just origin, if origin is a unique identifier). Cookies are the worst.

There is also the HTTP cache, which leaks everywhere, but is far less reliable.
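The three boundaries can be sketched as key functions over URLs. A rough Python illustration (the registrable-domain check here is a naive last-two-labels stand-in for a real Public Suffix List lookup, and opaque/unique origins are not modeled):

```python
from urllib.parse import urlsplit

def naive_registrable_domain(host: str) -> str:
    # Real implementations must consult the Public Suffix List;
    # taking the last two labels is only a rough approximation.
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

def origin(url: str) -> tuple:
    # scheme, host, and port -- the boundary most features use
    parts = urlsplit(url)
    return (parts.scheme, parts.hostname, parts.port)

def origin_group(url: str) -> tuple:
    # scheme + registrable domain: the document.domain boundary
    parts = urlsplit(url)
    return (parts.scheme, naive_registrable_domain(parts.hostname))

def schemeless_origin_group(url: str) -> tuple:
    # registrable domain only: the cookie boundary
    parts = urlsplit(url)
    return (naive_registrable_domain(parts.hostname),)
```

Two same-site subdomains fall into the same origin group but different origins, and cookies ignore the scheme entirely, which is why the three boundaries diverge.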

ProgrammableWeb – Daily API RoundUp: CoachHire, Tierion, Alerta, BetterWorks, ChargeIO, Bucky Box

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWeb – Twitter Fabric Adds Deep Linking Capabilities with Branch Integration

Fabric, Twitter's mobile development platform, has added deep linking capabilities thanks to a newly launched integration with Branch, a deep linking and attribution tool.

ProgrammableWeb – Red Hat to Acquire 3scale

Red Hat recently announced that it has signed a definitive agreement to acquire 3scale, a provider of application programming interface (API) management technology.

ProgrammableWeb – BetterWorks Announces API to Connect Workplace Data Points

BetterWorks, the creator of enterprise software for managing strategic plans, collaborative goals, and ongoing business performance, has announced an API dedicated to company goals. The BetterWorks API connects data points from across a company and displays them in the context of the company's larger business goals.

Daniel Glazman (Disruptive Innovations) – Implementing Media Queries in an editor

You're happy as a programmer? You think you know the Web? You're a browser implementor? You think dealing with Media Queries is easy? Do the following first:

Given a totally arbitrary html document with arbitrary stylesheets and arbitrary media constraints, write an algorithm that gives all h1 a red foreground color when the viewport size is between min and max, where min is a value in pixels (0 indicating no min-width in the media query...) and max is a value in pixels or infinite (indicating no max-width in the media query). You can't use inline styles, of course. !important can be used ONLY if it's the only way of adding that style to the document and it's impossible otherwise. Oh, and you have to handle the case where some stylesheets are remote, so you're not allowed to modify them because you could not reserialize them :-)

What's hard? Eh:

  • media attributes on stylesheet owner node
  • badly ordered MQs
  • intersecting (but not contained) media queries
  • default styles between MQs
  • remote stylesheets
  • such fun dealing with the CSS OM, where you can't insert a rule before or after another rule but have to deal with rule indices...
  • etc.
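To illustrate just the interval reasoning involved (a rough Python sketch, not the BlueGriffon implementation), a media query's width constraints can be modeled as a closed interval, and the "intersecting (but not contained)" case above becomes an overlap test:

```python
import math

class WidthQuery:
    """A media query constraining viewport width: min_px <= width <= max_px."""

    def __init__(self, min_px=0, max_px=math.inf):
        # min_px == 0 means "no min-width", max_px == inf means "no max-width"
        self.min_px, self.max_px = min_px, max_px

    def matches(self, width):
        # Does a given viewport width satisfy this query?
        return self.min_px <= width <= self.max_px

    def intersects(self, other):
        # True when some viewport width satisfies both queries --
        # the "intersecting (but not contained)" case from the list above.
        return self.min_px <= other.max_px and other.min_px <= self.max_px

    def contains(self, other):
        # True when every width matching `other` also matches `self`.
        return self.min_px <= other.min_px and other.max_px <= self.max_px
```

For example, (max-width: 600px) and (min-width: 400px) intersect, yet neither contains the other, so a rule written for one range can silently apply in part of the other.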

My implementation in BlueGriffon is almost ready. Have fun...

ProgrammableWeb: APIs – BetterWorks

The BetterWorks API allows developers to integrate enterprise goal-setting software into their applications. BetterWorks provides a single platform from which users can manage their strategic plans, collaborative goals, and ongoing performance conversations. BetterWorks uses S.M.A.R.T. (Specific, Measurable, Attainable, Relevant, Time-bound) goals and OKRs (Objectives and Key Results) to help users formulate and achieve their business goals.
Date Updated: 2016-06-23

ProgrammableWeb: APIs – CrowdStrike Falcon Intelligence

The Falcon Intelligence API provides real-time information about new adversary groups, indicators, and news. Two plans are available: Standard delivers new threat information, while Premium focuses on uninterrupted business operations. The platform identifies unknown threats using signature matching, static analysis, and machine learning. Additionally, CrowdStrike offers protection against both malware-based and malware-free attacks. Developers need to register to access the API documentation. CrowdStrike is an Irvine-based network security firm.
Date Updated: 2016-06-23

ProgrammableWeb: APIs – CrowdStrike Falcon Streaming

The Falcon Streaming API provides a constant stream of information for real-time threat detection and prevention. The platform identifies unknown threats using signature matching, static analysis, and machine learning. Additionally, CrowdStrike offers protection against both malware-based and malware-free attacks. Developers need to register to access the API documentation. CrowdStrike is an Irvine-based network security firm.
Date Updated: 2016-06-23

ProgrammableWeb: APIs – StudentConnect

The StudentConnect API allows websites to validate the personal profiles of students that sign up for products that are restricted to, or targeted at, students enrolled in academic institutions. The API provides an integrative interface for filtering and retrieving the personal details of students from the databases of their respective colleges. Its provider, the Student Money Saver, is a vendor of proprietary solutions for building student databases for targeted marketing. Contact the API provider for its technical documentation.
Date Updated: 2016-06-23

ProgrammableWeb – Developers' Choices Are Few When it Comes to Web Search APIs

This is part one of a two-part article covering the current state of web search APIs. Please be sure to check out part two. 

ProgrammableWeb – Google vs. Microsoft Bing Search APIs: A Detailed Comparison

This is part two of a two-part article covering the current state of Web search APIs. Please be sure to check out part one. 

ProgrammableWeb: APIs – Alerta

Alerta is a notifications platform that consolidates alerts from the same environment to avoid duplicates, so that only the most recent ones are shown. Notifications can also carry multiple custom attributes and be tagged for monitoring purposes. The following alert sources are supported: Syslog, SNMP, Nagios, Zabbix, and Sensu. The Alerta API uses JSON for data exchange and API keys for authentication.
Date Updated: 2016-06-22

ProgrammableWeb: APIs – minimesos

minimesos is an experimentation and testing environment for the Mesos cluster manager. It can create clusters, run assertions against them through the API, and destroy them once the tests are done. This is a REST-based API that exchanges data in JSON format. Container Solutions is an Amsterdam-based software consultancy that offers R&D and training services.
Date Updated: 2016-06-22

ProgrammableWeb: APIs – Bucky Box

The Bucky Box API simplifies the integration of shopping carts into third-party sites. Users can also embed it as a turnkey system for channeling new customers and orders into the Bucky Box ecommerce platform for food distribution. The API is fully released and well documented, and requires authorized access through an API key. Developers must contact the API provider for instructions on purchasing API keys.
Date Updated: 2016-06-22

ProgrammableWeb: APIs – virtualQ

The virtualQ API is a waiting service for voice calls. Integrable into most systems, virtualQ reduces hold times: callers receive a notice via text, web, or callback when a call agent is available. Callers can also leave feedback after every call or place a request via the mobile app.
Date Updated: 2016-06-22

ProgrammableWeb: APIs – Roq.ad Cross-Device User Identification

The Roq.ad Cross-Device User Identification API lets users submit a list of device identifiers and get back a list of identifiers that belong to the same users as those on the input list. Roq.ad's technology can match devices to people and people to households. This service complies with Germany's consumer privacy laws.
Date Updated: 2016-06-22

ProgrammableWeb: APIs – Manufaktura Controls

The Manufaktura Controls API provides users with methods for displaying musical notation on many .NET platforms, including ASP.NET MVC, WinForms, WPF, Silverlight, and Universal Apps. Manufaktura Controls is a company that develops software components for visualizing and processing music. Their software is designed specifically for use by libraries, archives, musical institutions, and IT companies.
Date Updated: 2016-06-22

ProgrammableWeb: APIs – Digital Bible Platform

The Digital Bible Platform API is a free service that lets users access Bible text, audio, and video programmatically. The Digital Bible Platform is a large repository of rich Biblical content. Users can access the Bible in more than 800 languages, get premium Bible audio, and get Bible videos designed for deaf viewers.
Date Updated: 2016-06-22

ProgrammableWeb: APIs – Apizee ApiRTC

The Apizee ApiRTC API is a platform that integrates WebRTC technology and adds real-time text, audio, and video capabilities to third-party applications. The platform features visual collaboration, plugin-free web-based communication, and presence-based text messaging. The ApiRTC API exchanges information in JSON format. Apizee is a French SaaS firm that offers video conferencing, enterprise collaboration, and telemedicine deployment streamlining.
Date Updated: 2016-06-22

Amazon Web Services – New – Cross-Account Copying of Encrypted EBS Snapshots

AWS already supports the use of encrypted Amazon Elastic Block Store (EBS) volumes and snapshots, with keys stored in and managed by AWS Key Management Service (KMS). It also supports sharing EBS snapshots with other AWS accounts so that they can be used to create new volumes. Today we are joining these features to give you the ability to copy encrypted EBS snapshots between accounts, with the flexibility to move between AWS regions as you do so.

This announcement builds on three important AWS best practices:

  1. Take regular backups of your EBS volumes.
  2. Use multiple AWS accounts, one per environment (dev, test, staging, and prod).
  3. Encrypt stored data (data at rest), including backups.

Encrypted EBS Volumes & Snapshots
As a review, you can create an encryption key using the IAM Console:

And you can create an encrypted EBS volume by specifying an encryption key (you must use a custom key if you want to copy a snapshot to another account):

Then you can create an encrypted snapshot from the volume:

As you can see, I have already enabled the longer volume and snapshot IDs for my AWS account (read They’re Here – Longer EBS and Storage Gateway Resource IDs Now Available for more information).

Cross-Account Copying
None of what I have shown you so far is new. Let’s move on to the new part! To create a copy of the encrypted EBS snapshot in another account you need to complete four simple steps:

  1. Share the custom key associated with the snapshot with the target account.
  2. Share the encrypted EBS snapshot with the target account.
  3. In the context of the target account, locate the shared snapshot and make a copy of it.
  4. Use the newly created copy to create a new volume.
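For readers who script this, steps 2 and 3 map onto two EC2 API calls. A minimal sketch of the boto3 parameters (the snapshot ID, account ID, and key below are placeholders, and step 1, sharing the custom key, is done separately via the key's policy in the IAM Console as shown above):

```python
# Placeholders throughout: these are not real resources.

def share_snapshot_params(snapshot_id, target_account_id):
    # Step 2, run in the source account:
    #   ec2.modify_snapshot_attribute(**share_snapshot_params(...))
    return {
        "SnapshotId": snapshot_id,
        "Attribute": "createVolumePermission",
        "OperationType": "add",
        "UserIds": [target_account_id],
    }

def copy_snapshot_params(source_region, snapshot_id, target_kms_key_id):
    # Step 3, run in the target account:
    #   ec2.copy_snapshot(**copy_snapshot_params(...))
    # Supplying a new key re-encrypts the data during the copy,
    # isolating the two accounts from each other.
    return {
        "SourceRegion": source_region,
        "SourceSnapshotId": snapshot_id,
        "Encrypted": True,
        "KmsKeyId": target_kms_key_id,
    }
```

The console flow below does exactly the same thing; the parameter dicts are just the programmatic equivalent.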

You will need the target account number in order to perform the first two steps. Here’s how you share the custom key with the target account from within the IAM Console:

Then you share the encrypted EBS snapshot. Select it and click on Modify Permissions:

Enter the target account number again and click on Save:

Note that you cannot share the encrypted snapshots publicly.

Before going any further I should say a bit about permissions! Here’s what you need to know in order to set up your policies and/or roles:

Source Account – The IAM user or role in the source account needs to be able to call the ModifySnapshotAttribute function and to perform the DescribeKey and ReEncrypt operations on the key associated with the original snapshot.

Target Account – The IAM user or role in the target account needs to be able to perform the DescribeKey, CreateGrant, and Decrypt operations on the key associated with the original snapshot. The user or role must also be able to perform the CreateGrant, Encrypt, Decrypt, DescribeKey, and GenerateDataKeyWithoutPlaintext operations on the key associated with the call to CopySnapshot.

With that out of the way, let’s copy the snapshot…

Switch to the target account, visit the Snapshots tab, and click on Private Snapshots. Locate the shared snapshot via its Snapshot ID (the name is stored as a tag and is not copied), select it, and choose the Copy action:

Select an encryption key for the copy of the snapshot and create the copy (here I am copying my snapshot to the Asia Pacific (Tokyo) Region):

Using a new key for the copy provides an additional level of isolation between the two accounts. As part of the copy operation, the data will be re-encrypted using the new key.

Available Now
This feature is available in all AWS Regions where AWS Key Management Service (KMS) is available. It is designed for use with data & root volumes and works with all volume types, but cannot be used to share encrypted AMIs at this time. You can use the snapshot to create an encrypted boot volume by copying the snapshot and then registering it as a new image.

Jeff;

ProgrammableWeb – Daily API RoundUp: FCC, Pretio, Backstitch, EasySendy

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWeb – Airbnb Explores Shift from Marketplace to Travel Platform through API Strategy

An API may be in Airbnb's future. This week, at the Cannes Lions International Festival of Creativity, Airbnb CEO Brian Chesky told Adweek that he envisions the company moving from a marketplace towards a platform. With over 60 million users across the globe, Airbnb is ripe for data partnerships and expanding its services beyond simply booking a room.

ProgrammableWeb – Why Your API's End-Usage Context Matters To Great Developer Experiences

As the focus on APIs grows, we spend a lot of time optimizing the Developer Experience by focusing on delivering robust documentation, easily accessible and unthrottled sandboxes, various support channels, and community events. As the industry matures, developers have come to expect all of these things, and companies recognize that they are an important investment on the road to API adoption.

ProgrammableWeb – Why Your API's End-Usage Context Matters To Great API Design

One of the challenges with API design and strategy is focusing on the complexities of designing a user experience for your API. In most product development roadmaps, the focus is on B2C or B2B and the ideas behind those delivery models are well understood. For the last few years in the API industry, our focus has been on B2D where we’ve been challenging ourselves to design a great developer experience based on usage models for an API.

ProgrammableWeb – How the Right Technical Writer Can Take API Docs from Good to Great

When RESTful APIs first hit the radar, one of the benefits often cited was that the APIs could be easily understood from the WADL alone. As the industry evolves and APIs become more central to the software development process, both providers and consumers have started to recognize the need for better, more easily consumed documentation.

ProgrammableWeb: APIs – FCC Consumer Help Center Complaint Data

The FCC Consumer Help Center Complaint Data API provides access to consumer complaint data. It is available in JSON, XML, and CSV formats with a token. Documentation can be found at the link provided by Socrata.com.
Date Updated: 2016-06-21

ProgrammableWeb – Udacity Announces New Google Maps APIs Course

A recent post on the Geo Developers Blog announced a new course designed to teach developers the best practices when using Google’s web APIs.

Amazon Web Services – Guest Post – Zynga Gets in the Game with Amazon Aurora

Long-time AWS customer Zynga is making great use of Amazon Aurora and other AWS database services. In today’s guest post you can learn about how they use Amazon Aurora to accommodate spikes in their workload. This post was written by Chris Broglie of Zynga.

Jeff;

Zynga has long operated various database technologies, ranging from simple in-memory caches like Memcached, to persistent NoSQL stores like Redis and Membase, to traditional relational databases like MySQL. We loved the features these technologies offered, but running them at scale required lots of manual time to recover from instance failure and to script and monitor mundane but critical jobs like backup and recovery. As we migrated from our own private cloud to AWS in 2015, one of the main objectives was to reduce the operational burden on our engineers by embracing the many managed services AWS offered.

We’re now using Amazon DynamoDB and Amazon ElastiCache (Memcached and Redis) widely in place of their self-managed equivalents. Now, engineers are able to focus on application code instead of managing database tiers, and we’ve improved our recovery times from instance failure (spoiler alert: machines are better at this than humans). But the one component missing here was MySQL. We loved the automation Amazon RDS for MySQL offers, but it relies on general-purpose Amazon Elastic Block Store (EBS) volumes for storage. Being able to dynamically allocate durable storage is great, but you trade off having to send traffic over the network, and traditional databases suffer from this additional latency. Our testing showed that the performance of RDS for MySQL just couldn’t compare to what we could obtain with i2 instances and their local (though ephemeral) SSDs. Provisioned IOPS narrow the gap, but they cost more. For these reasons, we used self-managed i2 instances wherever we had really strict performance requirements.

However, for one new service we were developing during our migration, we decided to take a measured bet on Amazon Aurora. Aurora is a MySQL-compatible relational database offered through Amazon RDS. Aurora was only in preview when we started writing the service, but it was expected to become generally available before production, and we knew we could always fall back to running MySQL on our own i2 instances. We were naturally apprehensive of any new technology, but we had to see for ourselves if Aurora could deliver on its claims of exceeding the performance of MySQL on local SSDs, while still using network storage and providing all the automation of a managed service like RDS. And after 8 months of production, Aurora has been nothing short of impressive. While our workload is fairly modest – the busiest instance is an r3.2xl handling ~9k selects/second during peak for a 150 GB data set – we love that so far Aurora has delivered the necessary performance without any of the operational overhead of running MySQL.

An example of what this kind of automation has enabled for us was an ops incident where a change in traffic patterns resulted in a huge load spike on one of our Aurora instances. Thankfully, the instance was able to keep serving traffic despite 100% CPU usage, but we needed even more throughput. With Aurora we were able to scale up the reader to an instance that was 4x larger, failover to it, and then watch it handle 4x the traffic, all with just a few clicks in the RDS console. And days later after we released a patch to prevent the incident from recurring, we were able to scale back down to smaller instances using the same procedure. Before Aurora we would have had to either get a DBA online to manually provision, replicate, and failover to a larger instance, or try to ship a code hotfix to reduce the load on the database. Manual changes are always slower and riskier, so Aurora’s automation is a great addition to our ops toolbox, and in this case it led to a resolution measured in minutes rather than hours.

Most of the automation we’re enjoying has long been standard for RDS, but using Aurora has delivered the automation of RDS along with the performance of self-managed i2 instances. Aurora is now our first choice for new services using relational databases.

Chris Broglie, Architect (Zynga)

Amazon Web Services – New – AWS Marketplace for the U.S. Intelligence Community

AWS built and now runs a private cloud for the United States Intelligence Community.

In order to better meet the needs of this unique community, we have set up an AWS Marketplace designed specifically for them. Much like the existing AWS Marketplace, this new marketplace makes it easy to discover, buy, and deploy software packages and applications, with a focus on products in the Big Data, Analytics, Cloud Transition Support, DevOps, Geospatial, Information Assurance, and Security categories.

Selling directly to the Intelligence Community can be a burdensome process that limits the Intelligence Community’s options when purchasing software. Our goal is to give the Intelligence Community as broad a selection of software as possible, so we are working to help our AWS Marketplace sellers through the onboarding process so that the Intelligence Community can benefit from use of their software.

If you are an AWS Marketplace seller and have products in one of the categories above, listing your product in the AWS Marketplace for the Intelligence Community has some important benefits to your ISV or Authorized Reseller business:

Access – You get to reach a new market that may not have been visible or accessible to you.

Efficiency – You get to bypass the contract negotiation that is otherwise a prerequisite to selling to the US government. With no contract negotiation to contend with, you’ll have less business overhead.

To the greatest extent possible, we hope to make the products in the AWS Marketplace also available in the AWS Marketplace for the U.S. Intelligence Community. In order to get there, we are going to need your help!

Come on Board
Completing the steps necessary to make products available in the AWS Marketplace for the U.S. Intelligence Community can be challenging due to security and implementation requirements. Fortunately, the AWS team is here to help; here are the important steps:

  1. Have your company and your products listed commercially in AWS Marketplace if they are not already there.
  2. File for FOCI (Foreign Ownership, Control and Influence) approval and sign the AWS Marketplace IC Marketplace Publisher Addendum.
  3. Ensure your product will work in the Commercial Cloud Services (C2S) environment. This includes ensuring that your software does not make any calls out to the public internet.
  4. Work with AWS to publish your software on the AWS Marketplace for the U.S. Intelligence Community. You will be able to take advantage of your existing knowledge of AWS and your existing packaging tools and processes that you use to prepare each release of your product for use in AWS Marketplace.

Again, we are here to help! After completing step 1, email us (icmp@amazon.com). We’ll help with the paperwork and the security and do our best to get you going as quickly as possible. To learn more about this process, read my colleague Kate Miller’s new post, AWS Marketplace for the Intelligence Community, on the AWS Partner Network Blog.

Jeff;

ProgrammableWeb: APIs – ilandcloud

The ilandcloud REST API lets users configure multiple virtual machines across several data centers, protect data while managing disaster recovery options via access to iland's console and cloud infrastructure, and simplify their data footprint while monitoring resources. iland provides enterprise cloud hosting and other services.
Date Updated: 2016-06-20

ProgrammableWeb: APIs – Apizee Server

Apizee is a platform that integrates WebRTC technology, and adds real-time text, audio, and video capabilities to third party applications. This platform features visual collaboration, plugin-free web based communication, and presence-based text messaging. The Server API is used to obtain and delete recorded video streams, and exchanges information in JSON format. Apizee is a French SaaS firm that offers video-conferencing, enterprise collaboration, and telemedicine deployment streamlining.
Date Updated: 2016-06-20

ProgrammableWeb: APIs – Web Speech Specification

The Web Speech Specification API integrates text-to-speech features into the JavaScript language. Resources include speech recognition, speech grammars, and speech synthesis.
Date Updated: 2016-06-20

ProgrammableWeb: APIs – ečv Slovakia Car Registration

The ečv Slovakia Car Registration API provides car registration information for a given license plate number. The returned information includes make and model, engine size, registration year, horsepower, color, and VIN. The API is SOAP 1.1 based, returns information in JSON and XML formats, and requires API keys for authentication.
Date Updated: 2016-06-20

Daniel Glazman (Disruptive Innovations) – Why he shouldn't have stopped using CSS

This article is a comment on this one. I found it via a tweet from Korben, of course. It ruffled my feathers somewhat, of course, so I need to give its author an answer via this blog.

Selectors

The article's author levels three criticisms at CSS selectors:

  1. The definition of a style associated with a selector can be redefined elsewhere
  2. If several styles are associated with a selector, the last ones defined in the CSS always take priority
  3. Someone can break a component's styles simply by not knowing that a selector is used elsewhere

The least one can say is that I was stunned reading this. A few lines above, the author was drawing a comparison with JavaScript. So let's go through his three grievances...

For number 1, he complains that if (foo) { a = 1; } ... if (foo) { a = 2; } is simply possible. Bwarf.

In case number 2, he complains that in if (foo) { a = 1; a = 2; ... } the ... will see the variable a holding the value 2.

In case number 3, yes, granted, I know of no language in which someone who makes changes without knowing the context will never make a mess...

Specificity

The "sheer madness" of !important, and I'm the first to admit it didn't make my life easier when implementing BlueGriffon, is nevertheless what has allowed boatloads of bookmarklets and snippets injected into totally arbitrary pages to achieve a guaranteed visual result. The author complains not only about specificity and how it is computed, but also about the very possibility of a contextual basis for applying a rule. He is partly right, and long ago I myself proposed a CSS Editing Profile restricting the selectors allowed in such a profile, for easier manipulation. But where he is wrong is that zillions of professional sites built on components absolutely need complex selectors and this specificity calculation...
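The specificity arithmetic being complained about is easy to state, even if real engines handle far more cases. A deliberately naive Python sketch for basic selectors (no :not(), no pseudo-elements, no combinator edge cases; the (ids, classes/attributes/pseudo-classes, types) triple is compared lexicographically):

```python
import re

def naive_specificity(selector: str) -> tuple:
    """Very naive (a, b, c) specificity for simple selectors only.
    Real engines handle :not(), ::pseudo-elements, namespaces, etc."""
    # a: count of ID selectors
    a = len(re.findall(r"#[\w-]+", selector))
    # b: classes, attribute selectors, and pseudo-classes
    b = len(re.findall(r"\.[\w-]+|\[[^\]]*\]|:[\w-]+", selector))
    # c: type selectors (a bare element name at the start of a compound)
    c = len(re.findall(r"(?:^|[\s>+~])([a-zA-Z][\w-]*)", selector))
    return (a, b, c)
```

Tuple comparison gives the ordering: one ID outweighs any number of classes, one class outweighs any number of type selectors.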

Regressions

Reading this section, I let out a loud "now he's really exaggerating"... Yes, in any interpreted or compiled language, changing something somewhere without taking the rest into account can have negative side effects. His example is exactly what happens when you derive a class: an addition to the base class shows up in the derived class. Oh my, how terrible... Let's be serious for a second, please.

Style prioritization

Here the author has clearly not understood something fundamental about Web browsers, the DOM, and how CSS is applied. There are only two possible choices: either you use document tree order, or you use the stylesheets' cascade rules. From the DOM's point of view, class="red blue" and class="blue red" are strictly equivalent, and there is no guarantee, I repeat no guarantee, that browsers preserve that order in their DOMTokenList.
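A toy model (nothing like a real cascade, but it makes the point) shows that at equal specificity the winner is determined by stylesheet order, and reordering the class tokens changes nothing:

```python
def cascaded_value(rules, class_tokens):
    """Toy cascade: `rules` is a list of (class_name, value) pairs in
    stylesheet order. At equal specificity the last matching rule wins,
    regardless of token order in the element's class attribute."""
    winner = None
    for class_name, value in rules:  # iterate in stylesheet order
        if class_name in class_tokens:
            winner = value  # later rules override earlier ones
    return winner
```

Whether the element says class="red blue" or class="blue red", the rule that appears later in the stylesheet supplies the value.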

The future of CSS

Back to the author's JS comparison. Roughly: if line 1 is var a = 1 and line 2 is alert(a), the author complains that inserting var a = 2 between the two lines will display the value 2 and not 1... As arguments go, that is clearly inadmissible (in the sense of not acceptable).

The BEM methodology

A band-aid on a wooden leg... Nothing actually changes, but you pile on indentation that bloats the file, hinders its editing and manipulation, and is in no way machine-understandable, since none of it is preserved by the CSS Object Model.

His alternative proposal

I coughed my lungs out of my body reading this section. It is an unmaintainable, verbose, error-prone horror.

In conclusion...

Yes, CSS has birth defects. I freely admit it. And even adult-onset defects, when I see some of the nastiness the Shadow DOM wants us to put into CSS Selectors. But his proposal is a steamroller to crush a fly, an over-engineered contraption of rarely equaled magnitude.

Overall I disagree entirely with bloodyowl, who too easily forgets the immense benefits we have drawn from everything he decries in his article. Hundreds of things would be impossible without all of it. So yes, fine, the Cascade is a bit far-fetched. But you don't catch flies with vinegar, and if the whole world has adopted CSS (including the publishing world, which came from solutions radically different from the Web's), it is because it is good and it works.

In short, no, CSS is not "a horribly dangerous language". But yes, if you let anyone make additions any which way to an existing corpus, the result can be a disaster. The same goes for a programming language, a book, a thesis, mechanical engineering, anywhere. So there.

Shelley Powers (Burningbird) – This Week with the Clinton Email Industry

The Freedom of Information Act was never intended to be a jobs program for lawyers.

Following up on my previous stories regarding the FOIA lawsuit related to the Clinton emails: earlier this month Judicial Watch deposed Karin Lang, Director of Executive Secretariat Staff at State, and Ambassador Stephen Mull, currently lead coordinator for the implementation of the Iran nuclear deal for the US.

With Ambassador Mull, we learned that he really can’t remember an email sent in 2011 related to Clinton’s Blackberry. I don’t know why not. Can’t most of us remember every email we sent five years ago?

With Director Lang, we discovered it was the viral photo of Secretary Clinton in sunglasses that sparked a discussion about Clinton’s email, but we don’t know when the discussion occurred, or with whom. She also confirmed that none of the prior Secretaries of State had a government email address, so Secretary Clinton not having one was not unusual.

In addition, in a flurry of filings demanded by Judge Emmet Sullivan, Bryan Pagliano’s lawyer filed a copy of Pagliano’s limited immunity agreement with the DOJ, as well as an argument for him being able to plead the Fifth in a civil lawsuit. The immunity agreement was filed under seal, meaning only the Judge can see it.

To paraphrase Pagliano’s lawyer, pleading the Fifth in a civil lawsuit is not only allowed, but an accepted practice if the witness had concerns about future action related to the topic at hand. Since we already know the FBI is investigating Clinton’s email server—in some regard—the lawyer asserted that Pagliano’s concerns were reasonable.

Judicial Watch filed motions disagreeing with keeping the immunity agreement under seal, as well as Pagliano having the right to plead the Fifth.

The DOJ also filed a motion about keeping the immunity agreement under seal, as it is associated with an ongoing investigation. Pagliano’s lawyers filed a motion concurring with the DOJ. They also gently reminded Judge Sullivan that the only issues pending are whether Pagliano’s deposition is videotaped and whether the DOJ immunity agreement is kept sealed. Pagliano’s right to invoke the Fifth is without question, contrary to Judicial Watch’s attempts to compel Pagliano’s testimony.

Judge Sullivan agreed, for the most part, with Pagliano. He denied Pagliano’s request not to videotape the deposition, probably because all of the videotapes are being kept confidential. But he granted Pagliano’s request to keep the immunity letter under seal. That Pagliano can plead the Fifth is a given.

Now, all of that’s behind Door Number One.

Behind Door Number Two…Another Judicial Watch Lawsuit Against State

I noticed that Judicial Watch’s filings for this case have a sort of breathless quality to them. And no wonder. While it was busy filing motions in the Honorable Judge Emmet Sullivan’s court, it was also filing motions for another FOIA lawsuit against State in another court, under the Honorable Judge Royce Lamberth.

In that case, which is based on an original FOIA request for information related to Benghazi talking points, State is pushing back harder against Judicial Watch’s demand for discovery, because Judicial Watch got too greedy in trying to set the discovery parameters:

Now, for the first time, in its proposed reply, Judicial Watch attempts to justify these discovery requests about not just the search for records responsive to this narrow FOIA request, which sought documents within the Office of the Secretary regarding certain talking points about the Benghazi attacks, but for all searches conducted for emails related to the Benghazi attacks. Plaintiff improperly seeks discovery on topics far beyond the scope of its FOIA request, including but not limited to searches for records for the Accountability Review Board, searches in response to congressional inquiries, in preparation of Secretary Clinton’s testimony before Congress, and searches for records responsive to other much broader FOIA requests. The attempt is far too late. Notably, even this belated attempt fails to offer any actual explanation as to the need for discovery ranging far beyond the searches conducted in response to the FOIA request at issue here. Judicial Watch simply asserts, without additional explanation or the necessary attestations, that discovery about unrelated searches “go[es] to the heart” of the Court’s Order.

I believe that “go[es] to the heart” is equivalent to, “We wants it, Precious”.

But Wait…There’s More

The two lawsuits I just described aren’t the only lawsuits Judicial Watch has going related to FOIA requests. According to information in the FOIA Project, and data I pulled from PACER (the federal court system database), Judicial Watch has filed nineteen FOIA lawsuits since January 1. This is in addition to prior year lawsuits still being litigated, like the two I just mentioned. From what I’ve been able to discover, Judicial Watch has at least 17 active FOIA lawsuits in the District of Columbia federal court; the vast majority are related to the Clinton emails.

They must be on a first-name basis with everyone in the court. Perhaps the Judicial Watch lawyers join the federal court employees in a weekly poker game.

Judicial Watch isn’t the only organization filing these lawsuits. According to one of the motions filed by State in the Lamberth court case, there are currently sixty FOIA lawsuits pending in court related to the Clinton emails.

Sixty. That’s enough for an entire industry made up of lawyers, legal assistants, law clerks, and FOIA researchers. Let’s hope we never have another former cabinet member run for President: the government couldn’t afford it.

Generations of Workers For One FOIA Request

The Republican National Committee has filed at least seven FOIA lawsuits related to Clinton or the Clinton emails. State has worked with the RNC to meet the demands in most of the lawsuits. In one, though, State asked to have the case dismissed because, according to it, meeting the demand would take generations of workers.

In this particular request, the RNC asked for all emails, to and from, for Cheryl Mills, Jacob Sullivan, Patrick Kennedy, and Bryan Pagliano. Even after the search was limited, the government discovered the result would be a burden:

Even after applying the search terms and date limits (to the extent possible given technological limitations), there remained approximately 450,000 pages of documents that are potentially responsive to the Mills, Sullivan, and Kennedy requests. To be more specific, there are about 100,000 pages potentially responsive to the Mills request, 200,000 pages potentially responsive to the Sullivan request, and 150,000 pages potentially responsive to the Kennedy request. Moreover, the State Department considers the documents responsive to these requests to be complex because they include classified documents and interagency communications that could have to be referred to other agencies for their review.  Given the Department’s current FOIA workload and the complexity of these documents, it can process about 500 pages a month, meaning it would take approximately 16-and-2/3 years to complete the review of the Mills documents, 33-and-1/3 years to finish the review of the Sullivan documents, and 25 years to wrap up the review of the Kennedy documents – or 75 years in total (without considering the requests for the Pagliano records).
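The department’s numbers check out; a quick back-of-the-envelope verification (my own sketch, not part of the filing):

```python
# State says it can review about 500 pages a month.
PAGES_PER_MONTH = 500

# Potentially responsive pages per request, from the filing.
requests = {"Mills": 100_000, "Sullivan": 200_000, "Kennedy": 150_000}

for name, pages in requests.items():
    years = pages / PAGES_PER_MONTH / 12
    print(f"{name}: {years:.1f} years")

total_years = sum(requests.values()) / PAGES_PER_MONTH / 12
print(f"Total: {total_years:.0f} years")  # Total: 75 years
```

At 500 pages a month, 450,000 pages really is 900 months of work.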

Can you imagine having a job whose sole purpose is to process these email requests?

“Hey Sally, how was work yesterday?”

“Pretty good. We had four redactions.”

“Four! Wow, must have been exciting.”

“Yeah, we all went out for a beer after work to celebrate.”

At least Judicial Watch is a pro when it comes to FOIA requests. It knows to keep requests sized so they’re not rejected outright as being a burden. Still, in my opinion, and backed by data, Judicial Watch is the organization putting the most demand on State and other agencies. Its requests are smaller, but it files new ones on a frequent basis, barely pauses for the agencies involved to process the requests, and then files a lawsuit demanding a response.

How much does this all cost?

Agencies must maintain employees who respond to FOIA requests. The State Department has had to hire at least 50 new employees just to handle the increased number of FOIA requests. At the end of 2015, it had 21,759 FOIA requests still pending. This, on top of the 20,000+ FOIA requests it expects to get this year, all under a 15% budget cut from Congress.

In addition, every FOIA lawsuit takes time and money, both in the courts, and in the Department of Justice, which defends the lawsuits.

Most people probably expect these costs. What they may not expect is that the government agencies may also have to foot the bill for the lawyers and legal costs of the FOIA lawsuit plaintiffs.

President George Bush signed the Open Government Act, which amended the FOIA. Among the new additions were provisions making it easier for FOIA lawsuit plaintiffs to obtain legal fees when they “substantially prevail” over the government agency. In addition, a provision also changed the funds for such fees, so that they now came directly out of the agency’s operating budget.

Even without the amendments, organizations could win legal fees for cases against government agencies. In 2004, in a lawsuit against the Department of Commerce, Judicial Watch was awarded close to $900,000. It was only on appeal that some of the award was reversed, because the Judge had awarded Judicial Watch fees for its discovery disputes with third parties who were outside of the DOC’s control.

Discovery disputes like the one related to Bryan Pagliano.

Checking the Department of Justice records for closed FOIA cases in 2015, for the most part legal fees are not awarded. However, the government agencies still footed the bill for over $2 million in attorney fees and court costs.

The costs associated with FOIA litigation aren’t primarily in the attorney fees, though. They’re in the court’s time, the DOJ’s time, and the agencies’ time spent making additional or expanded FOIA searches. For instance, in 2015, decisions were rendered in 36 Judicial Watch cases, but only one had court and attorney fees awarded.

Keeping Lawyers Gainfully Employed

Judicial Watch isn’t the only organization filing FOIA lawsuits but it is, by far, the most active. From every indication, this is all the organization does.

It discovers a tidbit of information, or hears of something in the newspaper, and then files multiple FOIA requests. In most cases, the agencies respond. If they don’t respond in 2-4 months, though, Judicial Watch files a lawsuit. And why not? It has a staff of lawyers, and it only costs $400.00 to file a lawsuit.

Since the majority of information it seeks is related to Democratic leaders and/or causes, Judicial Watch uses the results of its efforts as fundraisers in the conservative community. And it ensures a steady stream of support by how it presents the data it finds.

As an example, the latest Judicial Watch release was related to a lawsuit seeking documents under the FOIA regarding waivers to access web email for officials in the Department of Homeland Security. Judicial Watch presents the data in the worst possible light:

Jeh Johnson and top officials at Homeland Security put the nation’s security at risk by using personal email despite significant security issues,” said Judicial Watch President Tom Fitton. “And we know now security rules were bent and broken to allow many these top Homeland officials to use ‘personal’ emails to conduct government business. This new Obama administration email scandal is just getting started. If the waivers were appropriate, then they wouldn’t have been dropped like a hot potato as soon as they were discovered by the media.

When you look through the emails, though, you realize that personal email access wasn’t a nefarious plot to skirt open records laws, or undermine the security of our nation. It’s just people wanting to access their personal email via web application, because they can’t use their smartphones while on the job.

A mistake in judgement, perhaps. End of the world? Nope.

All of this—the never-ending FOIA requests and multitudes of related lawsuits, in addition to fishing-expedition discovery—is perfectly legal. It may even seem to be a goodness… except the agencies are so tied up responding to organizations like Judicial Watch that other requests, from individuals or smaller organizations without lawyers permanently ensconced at the DC court, end up waiting months, perhaps even years, for a response. And we can’t afford to file a lawsuit in order to ensure our requests go to the top of the heap.

I currently have one request in to the DOJ for a lawsuit completely unrelated to Clinton’s emails. I did receive an acknowledgement of my request. However, I would be surprised if I receive the documents I’m after before next year. And it’s not because the DOJ is being a slacker. It’s because of organizations that have turned the FOIA into a money machine. Organizations, like Judicial Watch.

ProgrammableWebWells Fargo Cites New API as Screen Scraping Countermeasure

Despite a prior resistance to sharing banking data for reasons that included overloading the data servers, Wells Fargo recently announced that it has created an API to make business bank account data available in accounting software Xero.

ProgrammableWebDaily API RoundUp: Beam, TM Forum, Skype Bot, CleverTap, Radiant Media Player, Microsoft

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWebHow FHIR APIs Can Benefit the Health of a Population

The ever-growing use of data in organizations is exposing new opportunities built on more reliable information, as seen in the continued progress of HIT (Health Information Technology). In this article on HIT Consultant, Erica Garvin discusses a major issue in HIT and the impact of FHIR on population health.

Micah Dubinko (Yahoo!)Antennas and photons

In the previous article I described how antennae work in terms of EM waves. But EM isn’t exactly a wave. Quantum aspects require modeling as particles. Photons. But I can’t really figure out how a photon traveling through space gets converted into an electron current in a wire. There are some cases where treating EM as waves really seems simpler.

But there are probably places where considering it as particles matters too. Like, maybe, the EM drive. Multiple independent tests have confirmed that this device, simply by bouncing microwaves around inside a specially-shaped resonator, produces thrust. Huh?

A new paper suggests a theoretical model in which this doesn’t violate Newton’s 3rd law. But the explanation involves paired-up, out-of-phase photons. Are there existing technologies or experiments where this phenomenon takes place?

I wonder if it’s analogous in any way to how electrons pair up to manifest superconductivity… Would love to hear from the Physics crowd. Add your comment below.


ProgrammableWeb: APIsEasySendy Subscribers

Customize and deliver large email campaigns using multiple SMTP relay servers with this JSON-based API. It allows detailed customization of EasySendy email subscriber lists and can integrate with various SMTP relay services of your choice.
Date Updated: 2016-06-17

ProgrammableWeb: APIsEasySendy Pro

Send targeted emails and monitor subscribers by segment while sending personalized emails to segmented lists. This JSON- and XML-based API allows you to customize and deliver large email campaigns using multiple SMTP relay servers of your choice.
Date Updated: 2016-06-17

ProgrammableWeb: APIsInspectlet

Inspectlet is a service for analyzing users with heatmaps and screen capture while recording actual visitor sessions. Session recordings can then be searched and retrieved online using this JSON-based API; searches can be refined by name, country, tags, IP, browser, device, and more.
Date Updated: 2016-06-17

ProgrammableWeb: APIsTV Control Specification

The TV Control Specification API integrates television control into the Web browser. It defines the TVManager, TVTuner, TVSource, TVChannel, TVMediaStream, and TVBroadcastedEvent interfaces, exposed through the navigator object.
Date Updated: 2016-06-17

ProgrammableWeb: APIsApplied Informatics HaVOC Health Vocabulary

HaVOC is a REST-based API offering access to every medical terminology registered in the UMLS (Unified Medical Language System); it is used to bring health and biomedical vocabularies into health applications. The API features class-based queries, autosuggestions for disease names and symptoms, synonyms, and abbreviations. JSON is used for data exchange, and API keys are required for authentication. Applied Informatics is a New York-based firm that provides holistic technology solutions and services.
Date Updated: 2016-06-17

ProgrammableWeb: APIsVoicePing Enterprise Communication

The VoicePing Enterprise Communication API allows developers to integrate communication solutions into their enterprise applications. The API includes methods for alerting workers with audio notifications, getting worker location information, and dynamically grouping workers. VoicePing's communication solutions were designed for companies whose workers are distributed in the field. It allows workers to communicate one-on-one or in a group as well as text or send photos.
Date Updated: 2016-06-17

ProgrammableWebSnapchat Unveils Ads API

Snapchat, which recently surpassed Twitter in daily usage, has unveiled an API that allows advertisers to purchase Snapchat ads programmatically and at scale. Part of a broader launch of new ad offerings, Snapchat's Ads API could help the company grow its ad revenue and pave the way for an IPO.

ProgrammableWebFog Creek Releases HyperDev Developer PlayGround

Fog Creek Software, a software development company known for products such as Trello, Stack Overflow, Kiln, and FogBugz, has announced the release of HyperDev (beta), a developer playground for building full-stack Web applications.

Cameron MollApps are dying

Wait, which ‘apps’ are we talking about?

Lately there’s been considerable debate about the future of native apps, ranging from the cooling of downloads to the dubious utility of instant apps and assumptions about progressive web apps as heir apparent to native apps.

It’s anyone’s guess what the next few years will yield, but it brings to mind pg. 91 from a book I wrote in 2007. While most of the book became technologically irrelevant not more than a year after publishing, there’s one argument that has withstood nearly a decade of digital disruption:

The success of the web, as we know it today, is largely due to one piece of software: the browser. I can access nearly any website, application (including email) … with that one browser.

To assume users will be satisfied downloading an app¹ for every site they frequent, or for every content provider they associate themselves with, is to assume users have adequate storage space on their devices and that they are willing to pay the costs, both data and time, to download these apps.

In all likelihood, most users will probably download an app for a couple of their favorite products, but beyond that, a browser will be — or should be — sufficient for interacting with web content.

Proud papa of that prediction, though I don’t dare assume it will withstand another decade of disruption.

Yet I’m very intrigued by the future of progressive web apps (or PWA for short) as a further manifestation of what I predicted. The term was coined by Alex Russell over dinner in June 2015, but only recently has it gained respectable traction in the media.

All signs point to progressive web apps as having some serious potential to eliminate the need for native apps and return the usage throne to browsers.²

Consider Patagonia. They’ve bid farewell to their iPhone app, claiming the Patagonia website is beautiful and functional in all mobile web browsers. “You may delete [our native app] from your device.”

Brash move or rash decision? Either way, native apps are dead to Patagonia.

Less controversial is Snapdrop, a shining example of a progressive web app. It’s like Apple’s AirDrop but through any browser, any device on the same network. Type snapdrop.net in the URL bar of any browser and share files with any other device connected to the service within your network. No app needed.

Unlike AirDrop, Snapdrop seems to work every time.

Progressive unity

Over the past couple years I’ve made the rounds at numerous conferences pitching the idea of Unified Design. In a nutshell, Unified Design presents a functionally and aesthetically cohesive product experience across endless screens and platforms, regardless of where the experience starts, continues, and ends. Think of adding a product to your Amazon cart at work with a desktop browser and finishing checkout in bed using the Amazon smartphone app. It just works.

The need for Unified Design has been amplified by the growing disconnect between a product’s native app and its web app (or website) counterpart. Often the two are functionally and aesthetically disparate and, in some cases, dysfunctional.

Progressive web apps are, at least in theory, inherently unified. There is no native app, m-dot URL, or separate database to speak of. It just works. In any browser and on any device. In theory. Of course, browsers often choke on theory, despite their best efforts to be ‘progressive’ in the traditional sense of the word.

Vive la app

In truth, I don’t anticipate native apps will die off anytime soon. But I’m warming to the idea that they may be less relevant to the future of the web, and I reaffirm that “a browser will be — or should be — sufficient for interacting with web content.”

Progressive web apps are poised to be remarkably relevant to the future of the web. Let’s not screw it up.

¹ In 2007 the word “app” didn’t really exist. Instead we had terms like “smart client” and “thin client”. I’ve replaced instances of these terms in the excerpt quoted here with a term more familiar to today’s readers, i.e. “app”.

² Let’s be realistic. Is there any chance Candy Crush will be a progressive web app anytime soon? Highly debatable. How about Snapchat as a progressive web app? Also debatable, but far more plausible. In fact, it would be nearly impossible to argue why Snapchat shouldn’t be available within the browser.

Additional reading

Norman Walsh (Sun)Non Standard

<article class="essay" id="R.1" lang="en">

Non Standard

Volume 19, Issue 17; 16 Jun 2016

Being a few reflections on twenty years in markup standards.

If you think of standardization as the best that you know today, but which is to be improved tomorrow; you get somewhere.

Henry Ford

I’m a markup geek. I like markup. I abandoned word processors the minute I found Script. I was so determined to stay away from word processors that when running Script on the mainframe became irksome, I tried (naively!) to write my own Script processor for the PC. I abandoned Script when I found TeX, TeX when I found SGML, and SGML when XML came along.

I started learning about SGML in the mid-nineties. The earliest record of standards participation that I can find is in the agenda for the June 27-29 Davenport Group meeting in 1994:

Tuesday, June 28

 9:00 - 10:00 Continental Breakfast
10:00 - 12:00 DocBook Issues (see attachment 1)
12:00 -  1:00 Catered Lunch
 1:00 -  3:00 Presentation by O'Reilly & Assoc.
              Lenny Muellner and Norm Walsh
              DocBook -> gtroff Project
 3:00 -  3:30 Break
 3:30 -  4:30 Open Discussion Period (see attachment 2)

The Davenport Group was the consortium that managed DocBook maintenance before SGML Open (which became OASIS). So I’ve been doing markup standards for more than twenty years.

There are lots of reasons to be engaged in the standards process:

  • To achieve interoperability. The whole point of a standard is to make it possible for different systems to interoperate. Some economic environments foster standardization and some don’t.

  • To demonstrate thought leadership. Or to be perceived to be demonstrating it, anyway. Standards participation as a marketing exercise.

  • To influence the outcomes. Ideally, the only goal is to produce the best standard. In practice, corporations want the standard to match their existing implementations and they’ll endeavor to achieve that goal more or less subtly depending on the circumstances.

  • Idealism. Corporations want control. Individuals want freedom. We tell ourselves that interoperability reduces control and increases freedom.

  • To get there first. If you’re working on something that does X and X is being standardized, working on the standard helps you get to standard X first. Even if the standard is being developed in the open, first hand experience in the conversations about the standard can be valuable.

  • Camaraderie. For years, all of my best friends have been folks that I’ve met doing standards. The XML community is absolutely chock full of smart, interesting, wonderful people. Statistically, way over the average, I think. There are few things quite as intellectually enjoyable as getting together with smart folks trying to solve a hard problem.

I’ve done it for all of those reasons at one time or another, mostly at the W3C.

I’m listed as an editor on 31 published documents at the W3C. If you include all the working drafts that led up to those 31 specifications, the number is closer to 100, almost all of them about XML. For the bargain price of $495, The Gartner Group will sell you the Hype Cycle for XML Technologies circa 2006. My summary is free: it was a fun ride.

And what did we end up with (in 2016)? Well…

JSON

JSON is clearly not XML, though you’ll find lots of folks who claim it’s better for lots of things. Some of them are right about some things. You have to squint awfully hard to view JSON as markup, but I’ll grant that it is, of a sort. Simpler, by some measures, than XML, but evidently still too complicated.
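To make the comparison concrete, here is the same small record in both notations, handled with Python’s standard library (my illustration, not Walsh’s):

```python
import json
import xml.etree.ElementTree as ET

# The same record, once as JSON and once as XML.
json_doc = '{"title": "Non Standard", "volume": 19, "issue": 17}'
xml_doc = "<article><title>Non Standard</title><volume>19</volume><issue>17</issue></article>"

record = json.loads(json_doc)
assert record["volume"] == 19  # JSON delivers a typed value (an int) for free.

root = ET.fromstring(xml_doc)
assert root.find("volume").text == "19"  # XML delivers text; typing is your problem.
```

The flip side, of course, is everything JSON leaves out: mixed content, attributes, comments, and processing instructions.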

HTML

It’s the clear winner and still champion of angle bracket markup languages. It’s not XML either. It has an XML serialization but that hardly matters. If you’re going to usefully parse HTML, you’re going to do it with a bespoke parser that turns any sequence of characters into an HTML document. That’s of real value in some environments.
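Python’s built-in html.parser shows the same spirit on a small scale: it emits events for whatever character soup you feed it instead of rejecting it (a sketch for illustration; it is error-tolerant but does not implement the full HTML5 tree-construction algorithm):

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Record every start tag encountered, however broken the input."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

# Unclosed elements everywhere; no exception is raised.
collector = TagCollector()
collector.feed("<p>hello <b>world <i>again")
print(collector.tags)  # ['p', 'b', 'i']
```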

DITA

In the technical documentation space that I’ve cared most about for all these years, DITA appears to be the dominant player today. Marketing muscle and the promise of reduced costs make it an easy sell to managers. It’s also not XML. [Oh! You’re going to get hate mail about that —ed] Like HTML, it has an XML serialization, but that’s a convenience as much as anything else. It absolutely isn’t idiomatic XML. If you don’t believe me, consider the enormously rich, complex, and subtle DITA semantics embodied in Hyperdocument Authoring Link Management Using Git and XQuery in Service of an Abstract Hyperdocument Management Model Applied to DITA Hyperdocuments. Or, more directly, simply consider how the semantics of every DITA element depend on the complicated interpretation of a structured attribute value.

I’m not saying this is bad or unnecessary for the task it’s attempting to solve. I’m just saying a general purpose, vocabulary neutral XML tool isn’t going to provide very reliable insights into a DITA document.

DocBook, JATS, TEI, etc…

Finally, somewhere down in the more-or-less long tail, we come to idiomatic XML vocabularies. Each has a community of users, a potentially enormous corpus of documents, and will survive happily into the future no matter what prognosticators say about the death of XML. They’ll switch to something else when something better comes along; HTML5 with class attributes ain’t [expletive deleted —ed] it.

Standardization gave us parsers, not used for the most widely deployed markup languages; a couple of grammar-based schema languages, a widely implemented one that relatively few people like and a marginally implemented one that most people do; at least one very nice rule-based schema language; several processing languages that really are better than everything else for markup; a pipeline language that’s too little, too late, and too complicated; a standard vocabulary for linking with only minority adoption; a fragment identifier framework; and a transclusion processor. Plus vocabularies of course, and maybe other things I’m forgetting at the moment.

All of the recent standards efforts that I’ve been involved in have been refinements of existing work: very interesting and potentially powerful refinements, but under development by smaller and smaller groups. It took me more months than I am willing to count to find a second implementor for XInclude 1.1, despite the fact that it offers a small, useful, compatible enhancement. Since October of last year, I think the design of a potential XProc 2.0 has become dramatically more interesting, but any efforts to continue its development will have to come to terms with whether or not it is possible to achieve two implementations.

At the end of the day, I have come to accept the unpalatable truth that there are fewer and fewer organizations interested in continued development of XML standards, and a tiny minority of overworked volunteers attempting to accomplish them. For several years, I’ve pressed on with the goal of “finishing one or two remaining things,” but I also have to accept that there will always be “one more thing.” It’s the nature of engineering that there’s always room for improvement.

I just can’t convince myself that it’s the best use of my time anymore and my employer, despite being XML to the core, isn’t really wringing a lot of value out of my efforts either. With sorrow, regret, and angst, I have reached the conclusion that it is time for me to stop.

Standards, that is, not XML.

XML has a long future ahead and I look forward to doing my part to advance the state of the art: teaching and presenting at conferences and workshops, implementing new tools and products, and helping people and organizations leverage XML for real value.

I have some optimism that I can turn the few hours a week that I have of late spent spread too thin across several fronts, fretting and feeling guilty about my failure to complete one or another set of action items, into productive time to work on my own projects, including “JAFPL”: Just Another F…ine Pipeline Language, my own ideas for an XProc successor.

Maybe someday I’ll be persuaded to work on standards again. I really did enjoy it and I really did meet some wonderful folks. Good luck to you all! And see you at Balisage and XML Summer School and beyond. My vacation plans collided with XML London this year, but I’m looking forward to it next year (and XML Prague and XML Amsterdam and all the rest).

</article>

ProgrammableWebDaily API RoundUp: Tiltify, Walgreens, SmartRecruiters, ApplicantStack, Ravelin, Egnyte

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWebGoogle Finalizes Android N APIs and SDK

Google today began distributing the Android N Final SDK. Google says developer feedback helped it wrap up the new APIs included in Android N, which are now available via the latest SDK. Joining the final SDK and APIs is Android N Developer Preview 4. Developers now have all they need to put together their Android N apps. 

ProgrammableWebAutodesk Announces New Forge Platform APIs and Forge Fund Investments

Autodesk, a leading 3D design, engineering, and entertainment software provider, announced at Forge DevCon 2016 several updates to the Forge platform, including the release of three new APIs: the Data Management API,

ProgrammableWeb: APIsStattleship

The Stattleship API is designed to provide brands with the data they need to connect with sports fans via social media. Stattleship's data scientists help determine which sports, teams, and players a given brand's customers are passionate about. They can then automate the creation of social media assets used to engage with customers before, during, and after a game.
Date Updated: 2016-06-15

Jeremy Keith (Adactio)Amsterdam Brighton Amsterdam

I’m about to have a crazy few days that will see me bouncing between Brighton and Amsterdam.

It starts tomorrow. I’m flying to Amsterdam in the morning and speaking at this Icons event in the afternoon about digital preservation and long-term thinking.

Then, the next morning, I’ll be opening up the inaugural HTML Special, which is a new addition to the CSS Day conference. Each talk on Thursday will cover one HTML element. I am honoured to be speaking about the A element. Here’s the talk description:

The world exploded into a whirling network of kinships, where everything pointed to everything else, everything explained everything else…

Enquire within upon everything.

I’ve been working all out to get this talk done and I finally wrapped it up today. Right now, I feel pretty happy with it, but I bet I’ll change that opinion in the next 48 hours. I’m pretty sure that this will be one of those talks that people will either love or hate, kind of like my 2008 dConstruct talk, The System Of The World.

After CSS Day, I’ll be heading back to Brighton on Saturday, June 18th to play a Salter Cane gig in The Greys pub. If you’re around, you should definitely come along—not only is it free, but there will be some excellent support courtesy of Jon London, and Lucas and King.

Then, the next morning, I’ll be speaking at DrupalCamp Brighton, opening up day two of the event. I won’t be able to stick around long afterwards though, because I need to skidaddle to the airport to go back to Amsterdam!

Google are having their Progressive Web App Dev Summit there on Monday and Tuesday. I’ll be moderating a panel on the second day, so I’ll need to pay close attention to all the talks. I’ll be grilling representatives from Google, Samsung, Opera, Microsoft, and Mozilla. Considering my recent rants about some very bad decisions on the part of Google’s Chrome team, it’s very brave of them to ask me to be there, much less moderate a panel in public.

You can still register for the event, by the way. Like the Salter Cane gig, it’s free. But if you can’t make it along, I’d still like to know what you think I should be asking the panelists about.

Got a burning question for browser/device makers? Write it down, post it somewhere on the web with a link back to this post, and then send me a web mention (there’s a form for you to paste in the URL at the bottom of this post).
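The web mention flow described above follows the Webmention protocol: you publish a page that links to the target post, then POST both URLs (form-encoded) to the target site's advertised webmention endpoint. Here is a minimal sketch in Python; the endpoint and URLs are hypothetical placeholders, and in practice you discover the real endpoint from the target page's HTTP `Link` header or an HTML `<link rel="webmention">` tag.

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_webmention_request(endpoint, source, target):
    """Build the POST request that notifies `target` that `source` links to it."""
    body = urlencode({"source": source, "target": target}).encode("ascii")
    return Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

# All three URLs below are hypothetical examples:
req = build_webmention_request(
    "https://example.com/webmention",        # endpoint discovered from the target
    "https://example.net/my-reply",          # your post linking back
    "https://example.com/some-post",         # the post you are responding to
)
# urllib.request.urlopen(req) would actually send it; omitted to keep the sketch inert.
```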

ProgrammableWebApple Announces iOS 10, macOS Sierra; Updates watchOS, tvOS

As part of Apple’s 2016 Worldwide Developers Conference keynote, the company unveiled the latest iterations of its four major operating system platforms: macOS ‘Sierra’ (the successor to OS X), iOS 10, tvOS, and watchOS 3, each packed with a plethora of new features and improvements.

Thomas Vander Wal (InfoCloud)Adaptive Teams Needs [Flickr]

vanderwal posted a photo:

Adaptive Teams Needs

Tools for teams have a variety of needs that separate them from other group platforms. These capabilities can live in the service itself, be integrated, or be loosely connected, but they need to be there in some manner to meet teams’ needs as they work and shift.

ProgrammableWebHow API First Design Could Have Avoided These Failures

When I speak at conferences and meetups about some of my experiences consulting and advising on APIs, the most instructive story I can tell dates back to when I contracted as an engineer (many, many years ago) to build an API from the ground up.

That story goes something like this:

Daniel Glazman (Disruptive Innovations)Synology meets Sony Vaio in my Hall of Worst

All in all, I am quite satisfied with all the hardware I have bought over the years. Computers, disks, memory, etc. All that geekery is reasonably good, and epic failures are rare. And even when an epic failure happens, the manufacturer is usually smart enough to issue a recall and a free replacement, even if the two-year period after purchase is over (e.g. Apple with the graphics card failure that hit so many older MBPs a while ago). On only two occasions did a manufacturer show pretty bad behaviour:

  1. the first case was years ago, when my Sony Vaio laptop was hit by the infamous "inverted trackpad" bug. In short, Sony saved a three-cent expense by not isolating the trackpad's electrical system from the metal case of its Vaio line, and the trackpad would invert the x axis, sometimes the y axis, sometimes both... Totally unusable, of course. Millions of Sony laptops were hit by the issue. I had to shout on the phone to obtain, a few months after purchasing the most expensive Vaio at that time, a replacement for my computer. That cured me of Sony computers forever.

  2. and the most recent case, happening right now, is my Synology DS412+ NAS server. In short, it has been plagued by a zillion issues, all severe, since roughly day 50. But every time I was ready to send the NAS back to Synology, support would ask me to try something (or give that hint on their fora) that led to a normal reboot of the unit despite the motherboard warnings. On reboot, the warnings would go away... A few days ago, my DS412+ stopped working, with a forever-blinking blue LED. Motherboard failure, again and again and again.
    The Web is full of people explaining how bad the DS*12 series is, and the reports of motherboard failures are absolutely everywhere. They even issued a press release but never contacted customers, eh.
    Unlike Apple, Synology refused today to replace my motherboard. And it took a week, plus tweets added to my emails, to eventually obtain that response from them. To someone else, they proposed a new motherboard for $395, so 80% of the unit's price (yes, expensive!). They can rot in hell, of course.
    So with Synology, it's crappy support for buggy motherboards sold at an extremely high price. If you're lucky with your motherboard, you will never see a problem. If, like me, you bought a unit that shipped buggy, Synology will refuse responsibility after the end of the warranty period. And during it, you'll need to ship the unit back at your own expense.
    Synology's DSM is cool, and their units are cool. But they're not reliable enough, and the way Synology treats customers who paid a premium price is a total shame, not in line with 2016 industry standards, and most probably not with French law either.
    Conclusion: avoid Synology as much as you can. You've been warned.


Updated: .  Michael(tm) Smith <mike@w3.org>