Amazon Web Services: Amazon SNS - Now With Enhanced Support for iOS 8

Amazon Simple Notification Service (SNS) is a fast, flexible, managed push service that makes it simple and cost-effective to push notifications to Apple, Google, Fire OS, and Windows devices, as well as Android devices in China with Baidu Cloud Push.

With yesterday's launch of iOS 8, Apple has introduced new features that enable some rich new use cases for mobile push notifications — interactive push notifications, third-party widgets, and larger (2 KB) payloads. With these new features, you can message and engage your customers even when your app is not currently active.

Today, we are pleased to announce support for the new mobile push capabilities introduced with iOS 8. We are publishing a new iOS 8 Sample App that demonstrates how these new features can be implemented with SNS, and we have also implemented support for the larger 2 KB payloads.

Interactive Push Notifications and Widgets
The new iOS 8 Interactive Push Notifications let apps present custom options in the push banner. These custom options allow users to interact with the app without moving it to the foreground, resulting in a less intrusive and more fluid experience.

Widgets are miniature versions of your mobile app that users can dock in the Notification Center. Interactive notifications and widgets are great, quick ways to extend your core app's experience throughout the day.

Both interactive push notifications and widgets can be implemented by adding additional parameters to your push notifications. You can use SNS platform-specific payloads to specify those additional parameters for your iOS customers.

Suppose you have the following message payload (yes, this is JSON within JSON):

{"APNS":"{\"aps\":{\"alert\":\"MESSAGE\",\"category\":\"MESSAGE_CATEGORY\"} }"}

Your iOS app can set up a notification category using the following code:

UIMutableUserNotificationCategory *messageCategory = [[UIMutableUserNotificationCategory alloc] init];
messageCategory.identifier = @"MESSAGE_CATEGORY";

[messageCategory setActions:@[readAction, ignoreAction, deleteAction]
  forContext:UIUserNotificationActionContextDefault];
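The readAction, ignoreAction, and deleteAction objects referenced above are notification actions; their full definitions are in the sample app. As a rough sketch of one such action plus the registration step (the identifier and title here are illustrative, not taken from the sample app):

UIMutableUserNotificationAction *readAction = [[UIMutableUserNotificationAction alloc] init];
readAction.identifier = @"READ_ACTION";  // illustrative identifier
readAction.title = @"Read";              // button label shown in the banner
readAction.activationMode = UIUserNotificationActivationModeForeground;

// Register the category (and the notification types you need) with the system.
UIUserNotificationSettings *settings =
    [UIUserNotificationSettings settingsForTypes:(UIUserNotificationTypeAlert |
                                                  UIUserNotificationTypeBadge |
                                                  UIUserNotificationTypeSound)
                                      categories:[NSSet setWithObject:messageCategory]];
[[UIApplication sharedApplication] registerUserNotificationSettings:settings];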

Larger (2 KB) Payloads
In the past, the Apple Push Notification service (APNs) would only accept payloads of up to 256 bytes. As of yesterday's launch, APNs accepts payloads as large as 2 KB. This change was introduced along with iOS 8, but 2 KB payloads will also work for devices running older versions of iOS. You can use the larger payloads to deliver richer messages and instructions to your app, which is particularly useful for silent notifications that are delivered to your app in the background. SNS also supports 2 KB APNs payloads beginning today.
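For example, the extra room makes it practical to piggyback application data on a silent notification. A sketch of what such an SNS payload might look like (the content-available flag is the standard APNs mechanism for background delivery; the sync key and its contents are invented for illustration):

{"APNS":"{\"aps\":{\"content-available\":1},\"sync\":{\"item_ids\":[101,102,103]}}"}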

Ready Now
Our customers tell us that one major benefit of SNS is that they no longer have to keep up with changes to the APIs of each messaging platform. The SNS enhancements introduced today are an example of new features that quickly become available to our customers without added server-side development.

To learn more about getting started with SNS Mobile Push, and to download our new iOS 8 sample app (which includes the full version of the code example mentioned above), simply click here.

-- Jeff;

Amazon Web Services: The AWS Loft Will Return on October 1st

As I promised earlier this year, the AWS Pop-up Loft is reopening on Wednesday, October 1st in San Francisco with a full calendar of events designed to help developers, architects, and entrepreneurs learn about and make use of AWS.

Come to the AWS Loft and meet 1:1 with an AWS technical expert, learn about AWS in detailed product sessions, and gain hands-on experience through our instructor-led Technical Bootcamps and our self-paced hands-on labs. Take a look at the Schedule of Events to learn more about what we have planned.

Hours and Location
The AWS Loft will be open Monday through Friday, 10 AM to 6 PM, with special evening events that will run until 8 PM. It is located at 925 Market Street in San Francisco.

Special Events
We are also setting up a series of events with AWS-powered startups and partners from the San Francisco area. The list is still being finalized but already includes cool companies like Runscope (Automated Testing for APIs and Backend Services), NPM (Node Package Manager), Circle CI (Continuous Integration and Deployment), Librato (Metrics, Monitoring, and Alerts), CoTap (Secure Mobile Messaging for Businesses), and Heroku (Cloud Application Platform).

A Little Help From Our Friends
AWS and Intel share a passion for innovation, along with a track record of helping startups to be successful. Intel will demonstrate the latest technologies at the AWS Loft, including products that support the Internet of Things and the newest Xeon processors. They will also host several talks.

The folks at Chef are also joining forces with the AWS Loft and will be bringing their DevOps expertise to the AWS Loft through hosted sessions and a training curriculum. You'll be able to learn about the Chef product — an automation platform for deploying and configuring IT infrastructure and applications in the data center and in the Cloud.

Watch This!
In order to get a taste for the variety of activities and the level of excitement you'll find at the AWS Loft, watch this short video:

Come Say Hello
I will be visiting and speaking at the AWS Loft in late October and hope to see and talk to you there!

-- Jeff;

WHATWG blog: Rolling out TLS and HSTS

All whatwg.org and html5.org domains, including subdomains, are now available over TLS. We are also enabling HSTS, though this is not done everywhere just yet. If you find mixed content issues, be sure to let us know or provide a pull request on GitHub.

Update: TLS and HSTS are now deployed everywhere on both domains. We also submitted the domains to the HSTS preload list.

Norman Walsh (Sun): The short-form week of 8–14 Sep 2014

<article class="essay" id="content" lang="en">

The week in review, 140 characters at a time. This week, 5 messages in 11 conversations. (With 4 favorites.)

This document was created automatically from my archive of my Twitter stream. Due to limitations in the Twitter API and occasional glitches in my archiving system, it may not be 100% complete.

In a conversation that started on Wednesday at 04:40am

Capsule review of the BA 787: least comfortable coach seat I have had in a long time. All 9+ hours of it..—@ndw
@ndw Sympathies. 9hrs too!—@dpawson

Wednesday at 05:17am

FAV
With ubiquitous online wearables, any pretense that the web is something we contribute to goes away. We are tracked and we consume—@Pinboard

Wednesday at 07:46am

@j4 // FIXME: This can't possibly be right—@ndw

Wednesday at 02:12pm

FAV
Yes I suppose it's odd that atheists sometimes say Jesus, and Oh My God. But odder than the religious saying "let's be reasonable"?—@hughlaurie

Wednesday at 03:57pm

RT @EFF: Hate that little spinning wheel? Without net neutrality, you'd have to get used to it. Take action https://t.co/KCZOuRp6ve #Intern—@ndw

Wednesday at 09:22pm

Alrighty folks what a developer gotta do to get 900 followers? Gotta catch up with @ndw. I know, he's like the EF Hutton of XML, but still!—@peteaven

Thursday at 02:47pm

@doriantaylor hmmm, from my feed, I guess @mnot rfc4229 and @ndw rfc3151.txt—@gridinoc

Saturday at 05:42am

@xmlgrrl @xmlsummerschool @laurendw Welcome to this side of the pond! See you tomorrow!—@ndw

Saturday at 02:44pm

FAV
Think some tribe just prefer random fairy story instead of make learn actual fact.—@PlioceneBloke

Saturday at 03:40pm

FAV
English are pragmatic people, but are notorious for aversion to change, particularly when stationery is involved. http://t.co/aAqHkYx2q7 —@trieloff

Saturday at 03:47pm

RT @mnot: It would be cool if they painted the outlines and names of streets above on the ceilings of metro/tube/subway stations to help yo…—@ndw

ProgrammableWeb APIs: Atosho Channel Partner Search

Atosho is an e-commerce company based in Copenhagen. Along with its Retailer API, Atosho offers a Channel Partner Search API that helps developers map between the XML and JSON results of an API call.
Date Updated: 2014-09-18

ProgrammableWeb APIs: Mailee.me

Mailee.me is an e-mail marketing service that offers monitoring of e-mail recipients. The company aims to provide consumer insights with the goal of creating targeted marketing campaigns. Developers can integrate Mailee.me into any online application through a REST API that requires an access key. The main benefit of this API for e-mail marketing is keeping contacts in sync. On the site, developers can review tutorials. They can also access a code library available in Java, PHP, Ruby, Python, Objective-C, Windows 8, and .NET.
Date Updated: 2014-09-18

ProgrammableWeb APIs: Above.com

Above.com is a Trellian company that allows users to monetize web domains, either by selling or buying them. The main value of this API is the ability to integrate with platforms for domain investing. The API supports the XML format and requires an API key. On the website, developers can find resources such as API queries, authentication, secure connections, error codes, usage limits, and code examples.
Date Updated: 2014-09-18

ProgrammableWeb APIs: Rich Citations

PLOS Labs's Rich Citations API automatically collects rich citation information from any PLOS article. It is accessible through a web API, allowing developers and researchers to use rich citations from PLOS articles to build their own tools and databases.
Date Updated: 2014-09-18

Amazon Web Services: Consistent View for Elastic MapReduce's File System

Many AWS developers are using Amazon Elastic MapReduce (a managed Hadoop service) to quickly and cost-effectively build applications that process vast amounts of data. The EMR File System (EMRFS) allows AWS customers to use Amazon Simple Storage Service (S3) as a durable and cost-effective data store that is independent of the memory and compute resources of any particular cluster. It also allows multiple EMR clusters to process the same data set. This file system is accessed via the s3:// scheme.

Because S3 is designed for eventual consistency, if one application creates an S3 object it may take a short time (typically measured in tens or hundreds of milliseconds) before it is visible in a LIST operation. This small window can sometimes lead to inconsistent results when the output files produced by one MapReduce job are used as the input of another job.

Today we are making EMRFS even more powerful with the addition of a consistent view of the files stored in Amazon Simple Storage Service (S3). If you enable this feature, you can be confident that all of your files will be processed as intended when you run a chained series of MapReduce jobs. This is not a replacement file system. Instead, it extends the existing file system with mechanisms that are designed to detect and react to inconsistencies. The detection and recovery process includes a retry mechanism. After it reaches a configurable limit on the number of retries (the retries give S3 time to return what EMRFS expects to see in the consistent view), it will either raise an exception or log the issue and continue; the choice is yours.

The EMRFS consistent view creates and uses metadata in an Amazon DynamoDB table to maintain a consistent view of your S3 objects. This table tracks certain operations but does not hold any of your data. The information in the table is used to confirm that the results returned from an S3 LIST operation are as expected, thereby allowing EMRFS to check list consistency and read-after-write consistency.

Enabling the Consistent View
This feature is not enabled by default. You can, however, enable it when you create a new Elastic MapReduce cluster from the command line, the Elastic MapReduce API, or the Elastic MapReduce Console. Here are the options that are available to you when you use the console:

As you can see, you can also enable S3 server-side encryption for EMRFS.

Here's how you enable the consistent view from the command line when you create a new EMR cluster:

$ aws emr create-cluster --name TestCluster --ami-version 3.2.1 \
  --instance-type m3.xlarge --instance-count 3 \
  --emrfs Consistent=True --ec2-attributes KeyName=YOURKEYNAME

Important Details
In general, once enabled, this feature will enforce consistency with no action on your part. For example, it will create, populate, and update the DynamoDB table as needed. It will not, however, delete the table (it has no way to know when it is safe to do so). You can delete the table through the DynamoDB console or you can add a final cleanup step to the last job in your processing pipeline, as sketched below.
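For example, a cleanup step could remove the table with the DynamoDB CLI. This assumes the default metadata table name; check your cluster's EMRFS configuration before deleting anything:

$ aws dynamodb delete-table --table-name EmrFSMetadata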

You can also sync a folder to load it into the consistent view. This is useful for adding folders to the view that were not written by EMRFS, or for manually syncing a folder that is managed by EMRFS. You can log in to the master node of your cluster and run the emrfs command like this:

$ emrfs sync s3://bucket/folder

There is no charge for this feature, but you will pay the usual hourly charge for the data stored in the DynamoDB table (the first 100 MB of storage, along with a modest level of provisioned read and write capacity, is available at no charge as part of the AWS Free Usage Tier). By default, the table is provisioned for 500 read capacity units and 100 write capacity units. As I noted earlier, you are responsible for deleting the table when you no longer need it.

Be Consistent
This feature is available now and you can start using it today!

-- Jeff;

WHATWG blog: The Future of the Web: My Vision (May 1, 2012)

Like probably many others who read this blog, I am a web design enthusiast, web standards advocate, and web designer by trade. I have been working with HTML since the early 2000s, and have enjoyed it ever since.

Over the years, the web has evolved around me. I have watched it grow and adapt. And now, as a newly started professional web designer, I wish to contribute.

From this week forward, I will be writing and sharing both opinions and tutorials about the web, where it’s gone, and most importantly, where it’s going.




Article 1: Websites and Sectioning
Part 1: The Basics


Warning: This article discusses the topic of semantics.

As a researcher by hobby, I often find myself reading through various encyclopedic websites. Whether it be a wiki or a single-purpose website devoted specifically to that aim, I spend countless hours of my time using them.

With the new work on HTML to semantically mark up a website, I am rather excited to see what the future may hold for such informational websites. The concepts written in the specifications and drafts are very intriguing, and will hopefully someday improve the semantic richness of the web. Combine this with the ability of future screen readers, search engines, and other tools to extract such data, and the possibilities are nearly endless.

However, as I have read around, I have noticed that not all things are as clear as they should be. Although it has been several years since the formerly-labeled HTML5 specification came to light, I can still see these arguments floating about.

In this specific case, I am referring to the use of the semantic elements such as <article>, <section>, and so forth, as well as their relationship to the <h1>-<h6> elements.

Because it seems that many are in disagreement about the matter, I felt I should share my opinions as to what they mean, and how they could be used.

Below, you will see the method I have devised for sectioning out content.


(Note: All information is of my personal opinion, and may not reflect the views of other web designers, the WHATWG, or the W3C.)


Firstly, let us imagine the idea of a web page, likely encyclopedic in content. This page has a single focus, a single section, and a single paragraph.

At the very basic, it would be marked up as follows:


<article>
<section>
<p>Hello, World!</p>
</section>
</article>

Figure 1: Output of Example 1

As we can see, this bare minimum design utilizes three elements: <article>, <section>, and <p>. These elements, as can be semantically understood, represent the start of the article, the internal section, and the paragraph within the section.

Fairly simple, right?

Well, let’s take this a step further. What if you were to want to add a title to the article?

This is how it would be done.


<article>
<header><h1>Hello, World!</h1></header>
<section>
<p>Hello, World!</p>
</section>
</article>

Figure 2: Output of Example 2

Now, we see the addition of two new elements: <header> and <h1>. The <header> element designates this region of the document as the header of the article. The <h1> element designates this line of text as the title, or heading of the article.

Still, this seems simple, doesn’t it?

For our next step, let’s say that we wish to increase the scope of this article, from Hello, World! to Hello, World! and Foobar.


<article>
<header><h1>Hello, World! and Foobar</h1></header>
<section>
<h1>Hello, World!</h1>
<p>Hello, World!</p>
</section>
<section>
<h1>Foobar</h1>
<p>Foo</p>
<p>Bar</p>
</section>
</article>

Figure 3: Output of Example 3

Now we have an article which is both titled, and has two titled sections, each containing a heading within an <h1> element. We also have the article itself, headed within both an <h1> and <header> element.

This concept, though simplistic, is easy to read by humans, and holds semantic value to machines and scripts.


In conclusion, this is the way that I view the new method of sectioning content in HTML. Using this method, we come up with a quick, easy method to divide a document, and even a website, into logical sections which can be easily read by both humans and machines.

Next time, we will be discussing part two of this topic: Styling.

Until then,

-Christopher Bright


Amazon Web Services: Amazon AppStream Now Supports Chrome Browser and Chromebooks

As you might know from reading my earlier posts (Amazon AppStream - Deliver Streaming Applications from the Cloud and Amazon AppStream - Now Available to All Developers), Amazon AppStream gives you the power to build complex applications that run from simple devices, unconstrained by the compute power, storage, or graphical rendering capabilities of the device. As an example of what AppStream can do, read about the Eve Online Character Creator (pictured at right).

Today we are extending AppStream with support for desktop Chrome browsers (Windows and Mac OS X) and Chromebooks. Developers of CAD, 3D modeling, medical imaging, and other types of applications can now build compelling, graphically intense applications that run on an even wider variety of desktops (Linux, Mac OS X, and Microsoft Windows) and mobile devices (Fire OS, Chromebooks, Android, and iOS). Even better, AppStream's cloud-based application hosting model obviates the need for large downloads, complex installation processes, and sophisticated graphical hardware on the client side. Developers can take advantage of GPU-powered rendering in the cloud and use other AWS services to host their application's backend in a cost-effective yet fully scalable fashion.

Getting Started With AppStream
The AppStream Chrome SDK (available via the AppStream Downloads page) contains the documentation and tools that you need to have in order to build AppStream-compatible applications. It also includes the AppStream Chrome Application. You can use it as-is to view and interact with AppStream streaming applications, or you can customize it (using HTML, JavaScript, and CSS) with custom launch parameters.

The AppStream Chrome Application runs on Chrome OS version 37 and higher, on Chrome desktop browsers for Windows, Mac OS X, and Linux, and on Chromebooks. Chrome mobile and other HTML5 web browsers are not currently supported. The application is available in the Chrome Web Store (visit the AppStream Chrome App) and can be launched via chrome://apps.

The AppStream SDK is available at no charge. As detailed on the AppStream Pricing page, you also have access to up to 20 hours of streaming per month for 12 months as part of the AWS Free Tier. You will also have to register for a Chrome Developer Account at a cost of $5 (paid to Google, not to AWS).

-- Jeff;

Anne van Kesteren (Opera): TLS: issues with StartSSL

On the upside, they offer certificates that are free for non-commercial usage and their support helps out quickly. If you have a more complicated domain setup (e.g. whatwg.org requires both *.whatwg.org and *.spec.whatwg.org, html5.org requires *.html5.org), you have to get validated for USD 60. That validation then allows you to issue as many non-commercial certificates as you want for domains you own, for about a year. If you pay, the certificates last for two years; otherwise, one.

Now the downside:

  • Revoking a certificate costs USD 25, even though it should be easy to automate.
  • Does not allow setting the Common Name on a certificate. E.g. if you have both example.com and sub.example.com it will likely pick the latter, even though that might not at all be the most common name.
  • Provided intermediate certificate uses SHA-1 rather than SHA-256.
  • The user interface is really quite bad and unforgiving. You have to follow the steps carefully and not make mistakes.
  • They spelled my name with an uppercase “v”. Very sad. This ends up on every certificate. Not sure how to resolve this.

Anne van Kesteren (Opera): TLS: deploy HSTS

HSTS (HTTP Strict Transport Security) is a policy delivered through an HTTP header, over an encrypted connection. It indicates that a domain is only to be accessed over TLS going forward. If after the policy is installed a domain is fetched using http://example.com/, the user agent is required to fetch https://example.com/ instead. This prevents sslstrip attacks.

The solution for initial incoming fetches using http:// is to permanently redirect those to https://. This initial fetch is still susceptible to the sslstrip attack, but all future fetches will not be. To prevent this attack even for initial fetches, Google is experimenting with an HSTS preload list (annevankesteren.nl is awaiting review). Hopefully longer term we can figure out a decentralized solution.

HSTS can also help with attacks on subdomains, and if you really want to protect your online presence you need to enable this. It is somewhat bad protocol design that this is an opt-in rather than an opt-out, but so be it. E.g. an active network attacker could trick a user into visiting secure.example.com and present a different site there through DNS spoofing, possibly even stealing cookies. This is bad, and therefore deploying HSTS without includeSubDomains needs very careful consideration and is almost always a bad idea.

Deploying TLS without HSTS is just irresponsible at this point, but fortunately it is rather easy to enable by adding a simple header. E.g. through .htaccess on Apache:

Header set Strict-Transport-Security "max-age=31415926; includeSubDomains" env=HTTPS

Redirecting from http:// to https:// can be done through .htaccess as well, though you might also be able to configure this at a higher level (see HTTP to HTTPS), depending on your hosting setup (e.g. DreamHost has distinct configuration for non-TLS and TLS hosting):

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]

Anne van Kesteren (Opera): TLS: next steps

In TLS: first steps I outlined the need for everyone to start using TLS. And how this can be free for non-commercial use. I got validated through StartSSL within a week and although I have my issues with them, issuing certificates worked fine and slightly more domains use TLS now than did last week. Yay!

The way this works is that I have a private key from which I generate a certificate request that I hand over to the CA (certificate authority). The CA then issues me a certificate if everything is in order. I then install the private key and the certificate on my server and enable TLS hosting. I did not want to pay for a unique IP address so I am using the SNI TLS extension. This will cause limited breakage in some older clients that will fade out over time. (I also had to install an intermediate certificate. This is sometimes required and indicated by the CA.)
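For the curious, the key and certificate request steps look roughly like this with OpenSSL (a 2048-bit RSA key; the file names are placeholders):

openssl genrsa -out example.com.key 2048
openssl req -new -key example.com.key -out example.com.csr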

I encourage everyone reading this to enable TLS, deploy HSTS, and start pondering how we can improve this system. E.g. since a domain registrar already has to validate a potential domain owner to some extent, perhaps they could issue a certificate along with the domain for free? That leaves the private key, but some certificate authorities already offer to generate those. And although that is not ideal, it would be better than not having TLS at all.

Anne van Kesteren (Opera): TLS: issues with DreamHost

There are numerous issues with TLS on DreamHost in a shared hosting setup unfortunately.

  • Does not support TLS 1.2.
  • Does not support forward secrecy.
  • Has SSL 3.0 enabled (even when no unique IP is acquired for the domain and SNI is required, which SSL 3.0 does not support).
  • Uses RC4 which has known weaknesses and fails on initial attempt in Internet Explorer 11.
  • Provides no convenient way to add certificates. When a certificate is valid for a domain and its subdomains (e.g. both html5.org and *.html5.org), you have to configure it for each of them separately. For html5.org I have twelve domain entries in my control panel. Having to copy and paste the key, certificate, and intermediate certificate, as well as clicking through several screens for each of those entries, is very time consuming. The situation for whatwg.org is even worse.
  • Has no built-in support for HSTS and therefore requires configuration for each domain.

To be honest, if I did not have this legacy and started looking for a solution today for twenty to thirty domains and as many subdomains, I would likely not pick DreamHost and would try to figure out setting up something myself on a VPS. DreamHost is great value, but it does not offer perfection and that is irksome.

Reportedly DreamHost will be improving this as they switch their server OS. Once they do I will update this post.

Daniel Glazman (Disruptive Innovations): A day of kicks up the backside going to waste

It's official: I regret not having big enough boots to plant a few of these where they belong... I live in a small new development of four houses. We have been the first occupants there since December 31st. The first neighbors arrived in May, the others in July. The first to arrive after us supplied the wrong PDL (electricity network access point), and EDF, without missing a beat and without any notification, closed our electricity contract... I'll spare you the ensuing chaos. Then the occupants of the third house turned up and they too got it wrong, this time with the water meter, ahem. And Lyonnaise des Eaux likewise, without missing a beat and without any notification, closed our water supply contract. Granted, the electricity and water were never actually cut off, but what a shambles. Thank goodness nobody took over our gas contract!!! Mind you, I am not saying my neighbors are at fault at all; the culprits are Lyonnaise des Eaux and EDF, who never checked that I, the holder of the meter or PDL, had actually left the premises, who never notified me of the unilateral closure of my contract, and who ask their future customers for information that is far too complicated or inaccessible for them. By the way, both Lyonnaise and EDF had my email, my postal address, and my mobile phone number with which to contact me... Pa-the-tic.

What is interesting is the why of it all. In the old days, back before the start of the third millennium, a technician would come out to check the meters, find the right one, and so on. Errors of this kind were therefore rare. Today, the average customer is asked to supply their PDL (electricity), PCE (gas), or water meter number. Most people have no idea what these are, are often unable to locate the equipment in question, and even when it can be located, the inspection hatch has sometimes been jammed shut by the last technician to visit. In short, the utilities save on in-person visits, count on errors being cheap, and above all shift the problems onto the customer's shoulders... I dare not imagine how long it would take merely to detect the problem if a mess like this landed on an elderly person.

In my case, the cost of the absence of verification procedures at Lyonnaise and EDF is an hour and a half of telephone conversations with the various parties, plus having to be at home (and therefore not at my office...) all afternoon this coming Friday. The "goodwill gestures" offered by Lyonnaise des Eaux and EDF come nowhere near compensating for the four hours of work lost this way.

So darn it. And that's me staying polite.

Anne van Kesteren (Opera): TLS: first steps

Given Pervasive Monitoring Is an Attack and the proposal to require TLS for new platform features I was ready to invest some time. Overall TLS advocacy is increasing. On IRC someone quipped “TLS is the new utf-8”.

TLS helps with protecting credentials, what content is being viewed, the path and query components of a URL, and provides an assurance you are actually getting content from the given domain rather than an attacker. Of course, if the given domain is compromised or talks to other domains in the backend without encryption, you are none the wiser. That relies on the user trusting the domain owner.

TLS used to not have an equivalent to the Host header, requiring a unique IP address for each domain you wanted to have TLS on. With the Server Name Indication (SNI) extension, TLS has become virtually free. The catch is that older Android and Internet Explorer on Windows XP will not be able to get to your domain. (Just as Internet Explorer 2 cannot get to your domain now as it likely requires a Host header.) Python 2 is being patched.

Mathias already has TLS deployed and asked if html5.org could follow. I use DreamHost. DreamHost supports "Secure Hosting", though unfortunately it does not use it much itself yet and has ample room for improvement. Their support team claims they are working on it as they are transitioning to a new OS for their servers.

DreamHost is also rather expensive for certificates. StartSSL is not, so I created an account there. Unfortunately html5.org has a number of subdomains, so I had to pay USD 60 to get myself validated. That way I get access to more elaborate certificates. Lacking proof of my mobile phone number, I am now waiting up to ten business days for a letter with a key to arrive that will get me said validation. Otherwise StartSSL has been excellent in support so far, and although the UI leaves much to be desired, it is workable.

Another problem remaining with DreamHost (shared hosting) is that databases are on a different domain and the connection to it is not encrypted. Now they claim to have safety measures in place and generally only allow internal access to those databases, but that could be better still. (See To Celebrate Spying on Google Users, the NSA Drew a Smiley Face.)

Amazon Web Services: AWS Week in Review - September 6, 2014

Let's take a quick look at what happened in AWS-land last week:

Monday, September 8
Tuesday, September 9
Wednesday, September 10
Thursday, September 11
Friday, September 12

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

-- Jeff;

Daniel Glazman (Disruptive Innovations): Molly needs you, again!

There are bad Mondays. This is a bad Monday. And this is a bad Monday because I just discovered two messages - among others - posted by our friend Molly Holzschlag (ANC is Absolute Neutrophil Count):

First message

Second message

If you care about our friend Molly and value all that she gave to Web Standards and CSS across all these years, please consider donating again to the fund some of her friends set up a while ago to support her health and daily life expenses. There are no small donations, there are only love messages. Send Molly a love message. Please.

Thank you.

ProgrammableWeb APIs: Hablaa

Hablaa provides crowdsourced document translation services. Documents are split into smaller parts and translated simultaneously by multiple translators who only translate into their native languages. Quality is ensured by a final review by a professional translator. Hablaa's Dictionary Translation API translates words to and from more than 140 languages. The API is currently free to use with no signup required.
Date Updated: 2014-09-15

WHATWG blog: The Future of the Web: My Vision (May 23, 2012)

I apologize for the longer-than-expected wait for this article, but now we may continue.

The article below will pick up where Part 1 left off.




Article 1: Websites and Sectioning
Part 2: Styling


Warning: This article discusses the topic of semantics.

As we covered previously, we now have the basics down of how to properly divide information into sections. This time, we will be looking into the second aspect of sectioning: styling.

When a search engine or screen reader scans a website, it does so by evaluating the markup of the page. Humans, however, view by way of sight. If we were to use absolutely no form of styling, it would be difficult, if not impossible, to logically divide an article into sections.


When we read through a webpage, several common features are used to guide us in logically separating it out:

<figure class="list">
  • Bolded text defining headings.
  • Heading text larger than the content below it.
  • Logical heading levels which decrease in size as it gets deeper.
  • Ocasionally indenting subsections and subheadings within their parents.
</figure>

Web browsers of today will often do much of this work for you, but this is not always enough. Below, I will show you, at its most basic, the heading layout I choose to use when building a website.


<article>
<header><h1>Main Heading</h1></header>
<section class="topsection">
<h1>Top Heading</h1>
<p>Section Content</p>
<section class="subsection">
<h1>Heading</h1>
<p>Section Content</p>
<section class="subsection">
<h1>Heading</h1>
<p>Section Content</p>
<section class="subsection">
<h1>Heading</h1>
<p>Section Content</p>
<section class="subsection">
<h1>Heading</h1>
<p>Section Content</p>
<section class="subsection">
<h1>Heading</h1>
<p>Section Content</p>
</section>
</section>
</section>
</section>
</section>
</section>
</article>

Now obviously, this is more compact than any site would naturally be, and wouldn't use such simplified text. However, this perfectly demonstrates the decreasing levels of the font size of the heading. With each level, it gradually scales downward, starting from nearly 3 times larger than the content at highest, to the same size as the content at lowest.

To achieve this, we use the following CSS code.


header > h1 { font-size: 2.75em; }
section.topsection { font-size: 2em; }
section > section { font-size: 75%; }
section h1 { font-size: inherit; }
section * { font-size: 16px; }
section h1, section p { margin: 0; }

As you can see, the above code generates a much more aesthetically pleasing and far more understandable result than one without styling.


At this point, you are probably asking: How can this actually apply to real life?

Well, below this I will explain.



Use in Practical Design

As mentioned above, the true power of this method is revealed when used in practical application.

In the figure below, I will demonstrate said power by use of the first chapter of the book Alice's Adventures in Wonderland by Lewis Carroll.

Figure: Styling used in Practical Application (the embedded example is no longer available)

In the above sample, you can easily see the logically broken-down structure of the chapter. You have the main article with its heading, the major logical division (the chapter), and two minor logical divisions (the sections). Although the subsections lack heading text, you can still tell that they are separated. Even if you were to remove the row of stars, the spacing alone would be able to divide them.

And now, to show the code that makes it work:

HTML:
<article>
<header>
<hgroup>
<h1>ALICE'S ADVENTURES IN WONDERLAND</h1>
<h2>By Lewis Carroll</h2>
<h3>THE MILLENNIUM FULCRUM EDITION 3.0</h3>
</hgroup>
</header>
<section class="topsection">
<h1>CHAPTER I. Down the Rabbit-Hole</h1>
<section class="subsection">
<p>...</p>
<p>...</p>
<p>...</p>
<p>...</p>
<p>...</p>
</section>
<div class="sep">* * * * * * * * * * * * * * * * * * * *</div>
<section class="subsection">
<p>...</p>
<p>...</p>
<p>...</p>
</section>
</section>
</article>
Note: Text removed to prevent overcluttering.


CSS:
hgroup * { margin: 0; }
hgroup h1 { font-size: 2em; }
hgroup h2 { font-size: 1.5em; }
hgroup h3 { font-size: 1em; }
section { margin-bottom: 2em; }
section h1 { margin: 0.5em 0; }
p { text-indent: 1em; margin: 1em 0; }
p:first-of-type { margin-top: 0; }
.sep { font-weight: bold; margin: 0 1em 2em; }
Note: To show the full extent of the sample, this example uses a multi-tier heading for the article. This will be explained further in Part 3.

And here we have it. Together, the markup and the styling provide a neat, clean look which greatly improves readability. As well, it is written in semantic code which allows computers to properly understand its meaning.


In conclusion of this section, we have now learned the importance of sectioning, as well as how to properly style sections. These two concepts, when applied properly, could help create a much richer and more consistently appealing world wide web in the coming future.

Next time, we will be discussing more advanced aspects of this layout principle, such as multi-tiered headings and asides.

Until then, I hope that you have learned something new from this subject.

-Christopher Bright


Notes

  • Starting from now, if this article does not thoroughly describe something for you, please leave a comment detailing your problem. I will do my best to update it.
  • I will soon be working out a method to rate the usefulness of this article, for future reference.
  • Any comments made will be taken into consideration for the future. I intend to make this beneficial for everyone, so all comments are accepted.
  • Examples will now be made within iframes, due to the possible large size of the content.

References

Alice's Adventures in Wonderland
Written by Lewis Carroll
Converted to ebook by David Widger of Project Gutenberg.
www.gutenberg.org/files/11/11-h/11-h.htm#2HCH0001
All rights go to their respective owners.

Updates

  • I have made a fix to the specific coding of the CSS, condensing it as much as possible, I believe.
  • Although further feedback will be needed to make a final decision, this may be the final entry of this series at this time.

WHATWG blog: CSS Books & CSS Figures

Today we're happy to add two more specs to the WHATWG stable, Books and Figures! These are specifications focused on CSS features. Books provides ways to turn an HTML document into a book, either on screen or on paper. Using Books, authors can style cross-references, footnotes, and most other things needed to present books on screen or paper. Figures is also rooted in traditional publishing: it provides ways to float elements with respect to columns and pages, and describes how to wrap text around them.

Printing was part of the first CSS proposal in 1994, and many of these CSS features have been in use since the first paper book written in HTML and CSS (CSS — Designing for the Web) was published in 2005. Now, a decade later, bestsellers are routinely produced with CSS. There are currently two implementations of these specifications that are able to produce books: AntennaHouse and Prince. We hope that our continued work on these specifications will help existing implementations converge, and also encourage browsers to present web content as pages. Pages can be printed, stored as PDF files, or shown on screens. Many users will prefer pages to scrollbars, and these specifications will help make it happen.

As an example of a feature from Books, consider footnotes. Turning an element into a footnote is easy:

span.footnote { float: footnote } 

This can, with a few more styles, be formatted as:

Body text, with a superscripted footnote saying '[1]' in red, and under the body, a short line delimiting the body from text in a smaller font size, with the same red '[1]' indicating the footnote.
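The "few more styles" might look roughly like this, using the footnote pseudo-elements and page area from the draft (the color and sizes are guesses to approximate the rendering described above):

::footnote-call { color: red; vertical-align: super; font-size: 80%; }
::footnote-marker { color: red; }
@page {
  @footnote { border-top: thin solid black; font-size: 80%; }
}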

In another example, common newspaper and magazine layouts can be achieved with just a 10-line style sheet. Here's what this could look like:

The page with five columns of body text, but with a paragraph in a larger font size spanning the first two columns to the upper left, an image with a caption spanning the last three columns at the top on the right, an image covering the second and third columns at the bottom, and a smaller image with a caption at the bottom right.
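As a sketch of what such a style sheet might look like with the features in the Figures draft (the selectors and measurements are invented for illustration; see the spec for the exact syntax):

body { columns: 5; }
p.lead { float: top; float-reference: page; column-span: 2; font-size: 1.4em; }
figure.hero { float: top; float-reference: page; column-span: 3; }
figure.bottom { float: bottom; float-reference: page; column-span: 2; }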

More screenshots and code examples are available.

Feedback on these specifications, like all our others, is most welcome, either on www-style, on WHATWG's mailing list, or as bugs in the bug database.

The features described by these specifications were previously published as part of a W3C CSS working group document that I wrote. Today we are moving the work to WHATWG, a W3C Community Group.

Ben Buchanan (200ok): Getting started with a Raspberry Pi

Raspberry Pis are awesome little machines. Even with the Aussie Tax™ increasing the price, for about $50 you get a surprisingly capable computer in a tiny form factor.

There are a few gotchas getting up and running, though – particularly if you haven’t used Linux before. For example I didn’t know the right terminology for packages and using apt-get, which made searching for help a bit annoying at first.

This post is basically a list of things I’d have liked to know when I bought my first Pi, just to get up and running faster. Hopefully these tips will save someone some time…

First, choose your Pi

Currently there are three models to choose from: A, B and B+. There’s a comparison chart, but the short version is: if in doubt, get the B+.

While most of the B+ specs are the same as the B, the form factor’s neater; the B+ has more GPIO pins and USB ports; and it resolves some niggles around power. If you can’t get hold of a B+ you’ll probably still be fine with a B, but you might as well get the newer version if you can.

So what about the Model A? It’s the cheapest option but has lower specs – half the RAM, no ethernet and just one USB port. It’s not really a ‘budget Pi’, it’s for projects where power consumption and weight are key considerations. So unless you’re looking for a Pi to attach to your quadcopter, you probably don’t need an A.

In the end I paid a few extra dollars for the red Pi manufactured by Egoman. After all, shouldn't all Pis be red?

Red Raspberry Pi by Egoman

Basic supporting hardware

The full list of hardware depends a little on how you plan to use your Pi, but the basics you need:

  • Decent 4gig+ SD card (for Model A or B) or micro SD card (for Model B+).
    • 4-8gigs is plenty for the OS, just avoid the very cheapest SD cards with lower write speeds.
    • If you’re loading the OS yourself, don’t forget a reader for your SD card.
    • If you don’t want to load the OS yourself, you can buy Pi kits with a preloaded OS (it’s not hard to do though!).
  • Cables for power and display:
    • HDMI cable
    • A power supply. You can power the Pi with a dedicated 5v power supply, but any powered USB port or a good-quality charger will do. I’ve used USB ports on my router, laptop, desktop and TV; and chargers for my phone and kindle.
    • Avoid really cheap chargers or USB ports on cheap power boards, as they tend to fluctuate below their supposed specifications and the Pi will brown out and reboot.
  • There’s no on-board wifi, so you need an ethernet cable or a wifi dongle like the Wi-Pi.
  • USB keyboard and mouse (see considerations below)
  • A case/enclosure for the Pi is optional but worth considering… of course you can just put one together with Lego ;)

If you’re geeky enough to be buying a Pi, you probably have a lot of that stuff already.

Hardware considerations

  • The Model B only has two USB ports, so you run out before you can connect a keyboard, mouse and wifi dongle. It also won’t power high-draw devices like hard drives. Solutions include:
    • Get a keyboard with integrated mouse. I use the Logitech K400r, which was plug and play with Occidentalis. The K400r’s dongle coexists quite happily with a WiPi dongle.
    • Use a powered USB hub to add more ports and provide more power (avoiding brownouts).
  • Be careful about using power off a PC, TV etc – if it’s something that’s going to be turned on and off a lot, you will need to remember to shut down the Pi first. Otherwise you’re doing the equivalent of shutting down your computer by yanking the power cable out of the wall.
  • If you’re going to be shuffling a lot of data on or off the Pi, do yourself a favour and use ethernet.
  • The Pi only outputs HDMI and RCA composite video, which means some old monitors might need an adaptor.

Choosing an OS

Raspberrypi.org has a list of common OS options along with installation guides.

  • Raspbian/NOOBS is the ubiquitous and recommended choice. NOOBS stands for New Out Of Box Software and aims to make the setup process as easy as possible. Note NOOBS is an installer/config manager and Raspbian is the actual OS. Raspbian is Debian Linux optimised for the Pi.
  • If you plan to use the Pi to work with electronics, you may want to look into Occidentalis. Created by Adafruit, it packages up some extra libraries etc for working with electronics.
  • If you plan to use the Pi as a media centre, look into RASPBMC, which is the popular XBMC ported to Pi.

It is worth noting the Pi-friendly Linux distributions are bare bones, optimised for the low-power hardware. This makes them feel a bit less polished than common desktop distros like Ubuntu, so don’t judge the current state of desktop Linux based on the Pi!

Initial setup

The precise details vary between the distributions, but you should ensure you:

  • set a new password (recommended). You can do this any time with passwd at the command line. You can generally ignore the complex password requirement prompt, but do treat the Pi like any other ‘real’ computer or device and protect it accordingly.
  • choose whether the Pi should boot to the command line or the GUI by default
  • decide if you want to enable SSH

If you were prompted for these things during setup, no worries… but if not, try running sudo raspi-config from the command line. This should open a configuration menu.

Your wifi dongle should have its own instructions, but I found many don’t include the commands for listing USB and network devices (a bare-bones configuration sketch follows this list):

  • lsusb – lists attached USB devices
  • iwconfig – lists network devices. Be careful as each network device gets its own persistent name/reference, so if you plug in two different dongles the second will not be accessible at wlan0. Not a big deal, except every tutorial seems to assume you’re doing a fresh setup and says your wifi dongle will be wlan0 rather than getting you to actually check. This caused a serious headache for me as I’d used a borrowed WiPi then bought my own, thinking I could just swap them out.
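For reference, a bare-bones wifi setup on Raspbian of this era went through two files, roughly like this (the SSID and passphrase are placeholders; follow your dongle's own instructions if they differ):

# /etc/network/interfaces (assumes the dongle came up as wlan0)
allow-hotplug wlan0
iface wlan0 inet dhcp
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

# /etc/wpa_supplicant/wpa_supplicant.conf
network={
    ssid="YourNetworkName"
    psk="YourPassphrase"
}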

Don’t be afraid of the command line

It’s almost guaranteed you’ll need to use the command line interface as well as the GUI, even if only to do the initial setup. So, it’s handy to know a few CLI basics:

  • If the Pi doesn’t boot into a GUI, the command startx will usually start the GUI.
  • Once you’ve started the GUI, you might need to run CLI commands – you don’t need to shut down the GUI, just start up a command or console window within the GUI.
  • You manage software/packages using apt-get. This tip is so basic people regularly forget to mention it.
    • apt-get update – update the list of available packages (you need to run this first)
    • apt-cache search thing – search the list of available packages for ‘thing’, eg. “synaptic”
    • apt-get install thing – install ‘thing’
    • Note you’ll probably need to run apt-get commands with sudo, eg. sudo apt-get install thing. This gives the command elevated system privileges required for updates.
  • Many distributions provide a GUI wrapper for apt-get; but if yours does not, you can use apt-get to install one (eg. Synaptic).
  • Not all distros will remind you to update the OS and packages. To upgrade everything, automatically saying ‘yes’ to any prompts, with dependencies managed for you (note this will probably take a while so don’t do it if you’re in a hurry):
    • sudo apt-get update && sudo apt-get -y dist-upgrade

Command Line text editors

  • Help! I’m stuck in some kind of text screen and can’t get out! ... You’re probably in Vi or Vim. Try: q, q [enter], :q [enter], esc :q [enter], ctrl+c [enter] or ctrl+d [enter] … one of them should get you out. Consider this one an ‘achievement unlocked’ moment of learning the command line. Everyone gets stuck in vim when they’re new to the CLI!
  • Even if you didn’t have the “help I’m stuck” moment, you may encounter Vi or Vim pretty early on in your experience. Vi(m)‘s a ubiquitous CLI text editor and many tutorials will tell you to use it to edit configuration files and so forth. It’s worth looking up an intro tutorial. If in doubt between the two, use vim.
  • You may be able to use nano instead; it’s less powerful but relatively user-friendly. Run “nano --version” to see if it’s available. If it’s not, you should be able to install it with apt-get.
  • You can override the default editor, eg. to load nano by default instead of vi.
  • Never ask anyone the difference between vim and emacs, nor which one is better.

General tips

  • You can’t hot-swap USB devices on the Pi A/B. Plug things in before you power up the Pi. Otherwise it will reboot as soon as you try to plug/unplug something.
  • There’s no on/off button, it’s either plugged into power or it’s not…
    • For a clean shutdown, run sudo shutdown -h now before you pull the plug.
    • To reboot, sudo reboot
  • You may need to plug the Pi in to HDMI before booting it, sometimes it won’t recognise HDMI later on.
  • Sharing files over your network isn’t too hard, eg. you can use Samba (you may need to install the samba package on the Pi first). Set up a shared location on your Mac or PC, then connect from the Pi back to the host machine, logging in with the user accounts set up on the host machine (usually your own login). Use the Pi’s File Manager and Go menu; or you can try going straight to smb://computername/
  • If you’re installing things like Nodejs, remember to get distributions for ARM processors (or ideally distros specifically for the Pi). Avoid compiling from source on the Pi if you possibly can, as it tends to be slow as in all-night-to-compile-nodejs slow.

Some useful links

Last thoughts

Hopefully these tips will help get you up and running; but remember that any problem you have with a Pi has probably happened before. Google for help, you will almost certainly find it.

The next challenge is deciding what to do with your Pi… have fun!

Bob DuCharme (Innodata Isogen): A schemaless computer database in 1965

To enable flexible metadata aggregation, among other things.

Figure 3

I've been reading up on America's post-war attempt to keep up the accelerated pace of R&D that began during World War II. This effort led to an infrastructure that made accomplishments such as the moon landing and the Internet possible; it also led to some very dry literature, and I'm mostly interested in what new metadata-related techniques were developed to track and share the products of the research as they led to development.

One dry bit of literature is the proceedings of the 1965 Toward a National Information System: Second Annual National Colloquium On Information Retrieval. The conference was sponsored by the American Documentation Institute, who had a big role in the post-war information sharing work, as well as the University of Pennsylvania's Moore School of Electrical Engineering (where Eckert and Mauchly built ENIAC and its successor EDVAC) and some ACM chapters.

In a chapter on how the North American Aviation company (now part of Boeing) revamped their practices for sharing information among divisions, I came across this description of some very flexible metadata storage:

All bibliographic information contained in both the corporate and divisional Electronic Data Processing (EDP) subsystems is retained permanently on magnetic tape in the form of variable length records containing variable length fields. Each field, with the exception of sort keys, consists of three adjacent field parts: field character count, field identification, and field text (see Figure 3). There are several advantages to this format: it is extremely compact, thereby reducing computer read-write time; it provides for definition and consequent addition of new types of fields of bibliographic information without reformatting extant files; and its flexibility allows conversion of files from other indexing abstracting services.

I especially like that "it provides for definition and consequent addition of new types of fields of bibliographic information without reformatting extant files." This reminds me of one slide in my presentation last month at the Semantic Technology and Business / NoSQL Now! conferences, where my talk was on a track shared by both conferences, about how a key advantage of schemaless NoSQL databases is the ability to add a new value for a new property to a data set with no need for the schema evolution steps that can be so painful in a relational database.

Moore's law has led to less of a reliance on arranging data in tables to allow the efficient retrieval of that data. The various NoSQL options have explored new ways to do this, and it was great to see that one aerospace company was doing it 49 years ago. Of course, retrieving data from magnetic tape is less efficient than modern alternatives, but it was a big step past the use of piles of punched cards, and pretty modern for its time, as you can see from the tape spools in the picture of EDVAC's gleaming successor below. I thought it was cool to see that, although tabular representation of data long predates relational databases (hierarchical and network databases also stored sets of entities as tables, but with much less flexibility), someone had implemented such a flexible model so long ago, especially to represent metadata, with a use case that we often see now with RDF: to allow "conversion of files from other indexing abstracting services"—in other words, to accommodate the aggregation of metadata from other sources that may not have structured their data the same way that yours is structured.

Univac 9400

Univac photo by H. Müller CC-BY-SA-2.5, via Wikimedia Commons


Please add any comments to this Google+ post.

Jeremy Keith (Adactio): Other days, other voices

I think that Mandy’s talk at this year’s dConstruct might be one of the best talks I’ve ever heard at any conference ever. If you haven’t listened to it yet, you really should.

<object data="https://huffduffer.com/flash/player.swf?soundFile=http://dconstruct.s3.amazonaws.com/2014/podcast/dconstruct2014-mandy-brown.mp3" height="24" type="application/x-shockwave-flash" width="290"><param name="movie" value="https://huffduffer.com/flash/player.swf?soundFile=http://dconstruct.s3.amazonaws.com/2014/podcast/dconstruct2014-mandy-brown.mp3"><param name="wmode" value="transparent"><audio controls="controls" preload="none" src="http://dconstruct.s3.amazonaws.com/2014/podcast/dconstruct2014-mandy-brown.mp3">Hypertext as an Agent of Change on Huffduffer</audio></object>

There are no videos from this year’s dConstruct—you kind of had to be there—but Mandy’s talk works astoundingly well as a purely audio experience. In fact, it’s remarkable how powerful many of this year’s talks are as audio pieces. From Warren’s thoughtful opening words to Cory’s fiery closing salvo, these are talks packed so full of ideas that revisiting them really pays off.

That holds true for previous years as well—James Burke’s talk from two years ago really is a must-listen—but there’s something about this year’s presentations that really comes through in the audio recordings.

Then again, I’m something of a sucker for the spoken word. There’s something about having to use the input from one sensory channel—my ears—to create moving images in my mind that often results in a more powerful experience than audio and video together.

We often talk about the internet as a revolutionary new medium, and it is. But it is revolutionary in the way that it collapses geographic and temporal distance; we can have instant access to almost any information from almost anywhere in the world. That’s great, but it doesn’t introduce anything fundamentally new to our perception of the world. Instead, the internet accelerates what was already possible.

Even that acceleration is itself part of a longer technological evolution that began with the telegraph—something that Brian drove home in his talk when he referred to Tom Standage’s excellent book, The Victorian Internet. It’s probably true to say that the telegraph was a more revolutionary technology than the internet.

To find the last technology that may have fundamentally altered how we perceive the world and our place in it, I propose the humble gramophone.

On the face of it, the ability to play back recorded audio doesn’t sound like a particularly startling or world-changing shift in perspective. But as Sarah pointed out in her talk at last year’s dConstruct, the gramophone allowed people to hear, for the first time, the voices of people who aren’t here …including the voices of the dead.

Today we listen to the voices of the dead all the time. We listen to songs being sung by singers long gone. But can you imagine what it must have been like the first time that human beings heard the voices of people who were no longer alive?

There’s something about the power of the human voice—divorced from the moving image—that still gets to me. It’s like slow glass for the soul.

In the final year of her life, Chloe started publishing audio versions of some of her blog posts. I find myself returning to them again and again. I can look at pictures of Chloe, I can re-read her writing, I can even watch video …but there’s something so powerful about just hearing her voice.

I miss her so much.

Jeremy Keith (Adactio)Indie Web Camp UK 2014

Indie Web Camp UK took place here in Brighton right after this year’s dConstruct. I was organising dConstruct. I was also organising Indie Web Camp. This was a problem.

It was a problem because I’m no good at multi-tasking, and I focused all my energy on dConstruct (it more or less dominated my time for the past few months). That meant that something had to give and that something was the organising of Indie Web Camp.

The event itself went perfectly smoothly. All the basics were there: a great venue, a solid internet connection, and a plan of action. But because I was so focused on dConstruct, I didn’t put any time into trying to get the word out about Indie Web Camp. Worse, I didn’t put any time into making sure that a diverse range of people knew about the event.

So in the end, Indie Web Camp UK 2014 was quite a homogenous gathering. That’s a real shame, and it’s my fault. My excuse is that I was busy with all things dConstruct, but that’s just that; an excuse. On the plus side, the effort I put into making dConstruct a diverse event paid off, but I’ll know better in future than to try to organise two back-to-back events. I need to learn to delegate and ask for help.

But I don’t want to cast Indie Web Camp in a totally negative light (I just want to acknowledge how it could have been better). It was actually pretty great. As with previous events, it was remarkably productive. The format—one day of talks followed by one day of hacking—is spot on.

Indie Web Camp UK attendees

I hadn’t planned to originally, but I spent the second day getting adactio.com switched over to https. Just a couple of weeks ago I wrote:

I’m looking forward to switching my website over to https:// but I’m not going to do it until the potential pain level drops.

Well, I’m afraid that potential pain level has not dropped. In fact, I can confirm that getting TLS working is a massive pain in the behind. But on the first day of Indie Web Camp, Tim Retout led a session on security and offered up his expertise for day two. I took full advantage of his generous offer.

With Tim’s help, I was able to get adactio.com all set. If I hadn’t had his help, it probably would’ve taken me days …or I simply would’ve given up. I took plenty of notes so I could document the process. I’ll write it up soon, but alas, it will only be useful to people with the same kind of hosting setup as I have.

By the end of Indie Web Camp, thanks to Tim’s patient assistance, quite a few people had switched on TLS for their sites. The https page on the Indie Web Camp wiki is turning into quite a handy resource.

There was lots of progress in other areas too, particularly with webactions. Some of that progress relates to what I’ve been saying about Web Components. More on that later…

Throw in some Transmat action, location-based hacks, and communication tools; all in all, a very productive weekend.

ProgrammableWeb: APIsBitKonan

BitKonan is a Croatia-based Bitcoin and cryptocurrency trading platform that offers a flat trading rate of 0.29% per transaction. The BitKonan Public API allows access to market data including recent Bitcoin prices, highest buy and sell orders, the order book, and recent transactions. Using an API key, developers may access the private API to retrieve a user's balance, view a specific user's transactions, and perform order maintenance. Requests to the server are made over HTTP and return JSON objects.
Date Updated: 2014-09-12

ProgrammableWeb: APIsDandelion Wikisearch

With this experimental API, users can find Wikipedia pages even if they can't recall the exact title. Dandelion Wikisearch can be useful for developers who work with Internet data and who are interested in adding this service to a particular website or application. The site lists the text and lang parameters as required, and limit, offset, query, and include as optional. JSON responses are available along with code samples. Developers can receive support via e-mail and a community forum.
Date Updated: 2014-09-12

ProgrammableWeb: APIsMyJobHelper

MyJobHelper is a site that allows a user to search through a comprehensive database for specific job listings. The site gives users the ability to filter by zip code and by job title. The MyJobHelper API allows developers to integrate MyJobHelper's extensive database and search capabilities into 3rd-party applications and web presences. In order to gain API access, publishers must submit a company profile to join the MyJobHelper Partner Program. The MyJobHelper.com service is intended for use within the United States. Located at the bottom of the home page is the phrase "Canadian users are forbidden." Canadians beware.
Date Updated: 2014-09-12

ProgrammableWeb: APIsSenfluence Social Media Monitoring

Senfluence was founded in 2007 with the goal of monitoring social media. With the Social Media Monitoring API, developers have access to a demo key that can be used to import results, search articles on Twitter, Facebook, and other social media channels, and monitor on autopilot. The API supports XML and JSON formats. Once developers register, they can choose among several pricing options according to the number of services requested. On the site, users will find an API key, the API syntax, and a sample API response.
Date Updated: 2014-09-12

Amazon Web ServicesNow Hiring: Product Marketing Managers for the AWS Team

Are you interested in a job that lets you combine your technical skills with your marketing savvy and your desire to communicate? If so, the Product Marketing Manager position may be a great fit for you. You'll get to work directly with the teams behind the full range of AWS services, working with them to create high quality marketing deliverables that accurately describe the value propositions for their offerings.

To learn more about this position, take a look at our new video, Product Marketing Opportunities at Amazon Web Services:

In the video, several of my AWS colleagues talk about the Product Marketing Manager role -- what they do and how it benefits our customers. You'll get a peek behind the scenes (and into the hallways) and see what it is like to work on the AWS Team.

We are hiring for multiple positions within this job category. While the specifics will vary from role to role, the job description for Product Marketing Manager - Amazon EC2 should give you a pretty good idea of the responsibilities that you would have and qualifications that we are looking for. To apply, simply email your resume to pmm@amazon.com.

If this is not quite the role for you, don't give up yet! We are doing a lot of hiring right now; check out the AWS Careers page for a full list of open positions. Perhaps one of them is right for you!

-- Jeff;

Amazon Web ServicesSearch and Interact With Your Streaming Data Using the Kinesis Connector to Elasticsearch

My colleague Rahul Patil wrote a guest post to show you how to build an application that loads streaming data from Kinesis into an Elasticsearch cluster in real-time.

-- Jeff;


The Amazon Kinesis team is excited to release the Kinesis connector to Elasticsearch! Using the connector, developers can easily write an application that loads streaming data from Kinesis into an Elasticsearch cluster in real-time and reliably at scale.

Elasticsearch is an open-source search and analytics engine. It indexes structured and unstructured data in real-time. Kibana is Elasticsearch's data visualization engine; it is used by dev-ops and business analysts to set up interactive dashboards. Data in an Elasticsearch cluster can also be accessed programmatically using a RESTful API or application SDKs. You can use the CloudFormation template in our sample to quickly create an Elasticsearch cluster on Amazon Elastic Compute Cloud (EC2), fully managed by Auto Scaling.
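
As a small illustration of that RESTful access, here is a sketch in Java that queries a local cluster; the index name streamdata and the query string are placeholders:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch: query a local Elasticsearch cluster over its REST API.
// 9200 is the default port; "streamdata" is a hypothetical index name.
public class EsQuery {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:9200/streamdata/_search?q=status:error");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // JSON hits from the cluster
            }
        }
    }
}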

Wiring Kinesis, Elasticsearch, and Kibana
Here's a block diagram to help you see how the pieces fit together:

Using the new Kinesis Connector to Elasticsearch, you author an application to consume data from a Kinesis stream and index the data into an Elasticsearch cluster. You can transform, filter, and buffer records before emitting them to Elasticsearch. You can also finely tune Elasticsearch-specific indexing operations to add fields like time to live, version number, type, and id on a per-record basis. The flow of records is illustrated in the diagram below.

Note that you can also run the entire connector pipeline from within your Elasticsearch cluster using River.

Getting Started
Your code has the following duties:

  1. Set application specific configurations.
  2. Create and configure a KinesisConnectorPipeline with a Transformer, a Filter, a Buffer, and an Emitter.
  3. Create a KinesisConnectorExecutor that runs the pipeline continuously.
All the above components come with a default implementation, which can easily be replaced with your custom logic.

Configure the Connector Properties
The sample comes with a .properties file and a configurator. There are many settings and you can leave most of them set to their default values. For example, the following settings will:

  1. Configure the connector to bulk load data into Elasticsearch only after you've collected at least 1000 records.
  2. Use the local Elasticsearch cluster endpoint for testing.

bufferRecordCountLimit = 1000
elasticSearchEndpoint = localhost

Implementing Pipeline Components
In order to wire the Transformer, Filter, Buffer, and Emitter, your code must implement the IKinesisConnectorPipeline interface.

public class ElasticSearchPipeline implements
    IKinesisConnectorPipeline<String, ElasticSearchObject> {

    @Override
    public IEmitter<ElasticSearchObject> getEmitter(
        KinesisConnectorConfiguration configuration) {
        return new ElasticSearchEmitter(configuration);
    }

    @Override
    public IBuffer<String> getBuffer(
        KinesisConnectorConfiguration configuration) {
        return new BasicMemoryBuffer<String>(configuration);
    }

    @Override
    public ITransformerBase<String, ElasticSearchObject> getTransformer(
        KinesisConnectorConfiguration configuration) {
        return new StringToElasticSearchTransformer();
    }

    @Override
    public IFilter<String> getFilter(
        KinesisConnectorConfiguration configuration) {
        return new AllPassFilter<String>();
    }
}

The following snippet implements the abstract factory method, indicating the pipeline you wish to use:

public KinesisConnectorRecordProcessorFactory<String,ElasticSearchObject> 
    getKinesisConnectorRecordProcessorFactory() {
         return new KinesisConnectorRecordProcessorFactory<String, 
             ElasticSearchObject>(new ElasticSearchPipeline(), config);
    }

Defining an Executor
The following snippet defines a pipeline where the incoming Kinesis records are strings and outgoing records are an ElasticSearchObject:

public class ElasticSearchExecutor extends 
    KinesisConnectorExecutor<String,ElasticSearchObject>

The following snippet implements the main method, creates the Executor and starts running it:

public static void main(String[] args) {
    // configFile points at the .properties file described above
    KinesisConnectorExecutor<String, ElasticSearchObject> executor
        = new ElasticSearchExecutor(configFile);
    executor.run();
}

From here, make sure your AWS credentials are provided correctly. Set up the project dependencies using ant setup. To run the app, use ant run and watch it go! All of the code is on GitHub, so you can get started immediately. Please post your questions and suggestions on the Kinesis Forum.

Kinesis Client Library and Kinesis Connector Library
When we launched Kinesis in November of 2013, we also introduced the Kinesis Client Library. You can use the client library to build applications that process streaming data. It handles complex issues such as load-balancing of streaming data and coordination of distributed services, adapting to changes in stream volume, all in a fault-tolerant manner.

We know that many developers want to consume and process incoming streams using a variety of other AWS and non-AWS services. In order to meet this need, we released the Kinesis Connector Library late last year with support for Amazon DynamoDB, Amazon Redshift, and Amazon Simple Storage Service (S3). We then followed that up with a Kinesis Storm Spout and an Amazon EMR connector earlier this year. Today we are expanding the Kinesis Connector Library with support for Elasticsearch.

-- Rahul

Amazon Web ServicesElastiCache T2 Support

As you may already know, Amazon Elastic Compute Cloud (EC2)'s new T2 instance type provides a solid level of baseline performance and the ability to burst above the baseline as needed. As I wrote in my blog post, these instances are ideal for development, testing, and medium-traffic web sites.

Today we are bringing the benefits of the T2 instance type to Amazon ElastiCache. The cache.t2.micro (555 megabytes of RAM), cache.t2.small (1.55 gigabytes of RAM), and cache.t2.medium (3.22 gigabytes of RAM) cache nodes feature the latest Intel Xeon processors running at up to 3.3 GHz. You can launch new cache nodes using the Memcached or Redis engines.

T2 instances are supported only within an Amazon Virtual Private Cloud. The Redis Backup and Restore feature and the Redis AOF are not currently usable with the T2 instances. You can launch them in the usual ways (command line, API, CloudFormation, or Console):
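
For example, here is a minimal sketch using the AWS SDK for Java; the cluster ID and subnet group name are placeholders:

import com.amazonaws.services.elasticache.AmazonElastiCacheClient;
import com.amazonaws.services.elasticache.model.CreateCacheClusterRequest;

// Sketch: launch a single cache.t2.micro Redis node inside a VPC.
// Uses the default credential chain; the names below are placeholders.
public class LaunchT2CacheNode {
    public static void main(String[] args) {
        AmazonElastiCacheClient client = new AmazonElastiCacheClient();
        CreateCacheClusterRequest request = new CreateCacheClusterRequest()
            .withCacheClusterId("my-t2-cache")
            .withCacheNodeType("cache.t2.micro")
            .withEngine("redis")
            .withNumCacheNodes(1)
            .withCacheSubnetGroupName("my-vpc-subnet-group"); // T2 requires a VPC
        client.createCacheCluster(request);
    }
}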

Pricing and Availability
Pricing for T2 cache nodes starts at $0.008 per hour for Three Year Heavy Utilization Reserved Cache Nodes and $0.017 per hour for On-Demand Cache Nodes (see the ElastiCache Pricing page for more information). As part of the AWS Free Tier, eligible AWS users have access to a cache.t2.micro node for 750 hours per month at no charge.

The new cache nodes are available now in all AWS Regions except AWS GovCloud (US), and you can start using them today!

-- Jeff;

Jeremy Keith (Adactio)Web Components

The Extensible Web Summit is taking place in Berlin today because Web Components are that important. I wish I could be there, but I’ll make do with the live notes, the IRC channel, and the octothorpe tag.

I have conflicting feelings about Web Components. I am simultaneously very excited and very nervous. That’s probably a good sign.

Here’s what I wrote after the last TAG meetup in London:

This really is a radically new and different way of adding features to browsers. In theory, it shifts the balance of power much more to developers (who currently have to hack together everything using JavaScript). If it works, it will be A Good Thing and result in expanding HTML’s vocabulary with genuinely useful features. I fear there may be a rocky transition to this new way of thinking, and I worry about backwards compatibility, but I can’t help but admire the audacity of the plan.

And here’s what I wrote after the Edge conference:

If Web Components work out, and we get a kind of emergent semantics of UI widgets, it’ll be a huge leap forward for the web. But if we end up with a Tower of Babel, things could get very messy indeed. We’ll probably get both at once.

To explain…

The exciting thing about Web Components is that they give developers as much power as browser makers.

The frightening thing about Web Components is that they give developers as much power as browser makers.

When browser makers—and other contributors to web standards—team up to hammer out new features in HTML, they have design principles to guide them …at least in theory. First and foremost—because this is the web, not some fly-by-night “platform”—is the issue of compatibility:

Support existing content

Degrade gracefully

You can see those principles at work with newly-minted elements like canvas, audio, and video, where fallback content can be placed between the opening and closing tags so that older user agents aren’t left high and dry (which, in turn, encourages developers to start using these features long before they’re universally supported).

You can see those principles at work in the design of datalist.

You can see those principles at work in the design of new form features which make use of the fact that browsers treat unknown input types as type="text" (again, encouraging developers to start using the new input long before they’re supported in every browser).

When developers are creating new Web Components, they could apply that same amount of thought and care; Chris Scott has demonstrated just such a pattern. Switching to Web Components does not mean abandoning progressive enhancement. If anything they provide the opportunity to create whole new levels of experience.

Web developers could ensure that their Web Components degrade gracefully in older browsers that don’t support Web Components (and no, “just polyfill it” is not a sustainable solution) or, for that matter, situations where JavaScript—for whatever reason—is not available.

Web developers could ensure that their Web Components are accessible, using appropriate ARIA properties.

But I fear that Sturgeon’s Law is going to dominate Web Components. The comparison that’s often cited for Web Components is the creation of jQuery plug-ins. And let’s face it, 90% of jQuery plug-ins are crap.

This wouldn’t matter so much if developers were only shooting themselves in the foot, but because of the wonderful spirit of sharing on the web, we might well end up shooting others in the foot too:

  1. I make something (to solve a problem).
  2. I’m excited about it.
  3. I share it.
  4. Others copy and paste what I’ve made.

Most of the time, that’s absolutely fantastic. But if the copying and pasting happens without critical appraisal, a lot of questionable decisions can get propagated very quickly.

To give you an example…

When Apple introduced the iPhone, it provided a mechanism to specify that a web page shouldn’t be displayed in a zoomed-out view. That mechanism, which Apple pulled out of their ass without going through any kind of standardisation process, was to use the meta element with a name of “viewport”:

<meta name="viewport" content="...">

The content attribute of a meta element takes a comma-separated list of values (think of name="keywords": you provide a comma-separated list of keywords). But an early tutorial about the viewport value provided code that showed values separated with semicolons (like CSS declarations). People copied and pasted that code (which actually did work in Mobile Safari), and so now every browser must support that usage:

Many other mobile browsers now support this tag, although it is not part of any web standard. Apple’s documentation does a good job explaining how web developers can use this tag, but we had to do some detective work to figure out exactly how to implement it in Fennec. For example, Safari’s documentation says the content is a “comma-delimited list,” but existing browsers and web pages use any mix of commas, semicolons, and spaces as separators.
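
For what it’s worth, here’s a sketch (in Java, and emphatically not any browser’s actual code) of the kind of lenient parsing this history forces: any mix of commas, semicolons, and whitespace has to be accepted as a separator.

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the lenient parsing browsers ended up needing: accept any
// mix of commas, semicolons, and whitespace between viewport pairs.
public class ViewportParser {
    static Map<String, String> parse(String content) {
        Map<String, String> directives = new LinkedHashMap<>();
        for (String pair : content.trim().split("[,;\\s]+")) {
            String[] kv = pair.split("=", 2);
            if (kv.length == 2) {
                directives.put(kv[0], kv[1]);
            }
        }
        return directives;
    }

    public static void main(String[] args) {
        // Both forms were found in the wild; both must parse the same.
        System.out.println(parse("width=device-width, initial-scale=1"));
        System.out.println(parse("width=device-width; initial-scale=1"));
    }
}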

Anyway, that’s just one illustration of how code gets shared, copied and pasted. It’s especially crucial during the introduction of a new technology to try to make sure that the code getting passed around is of a high quality.

I feel kind of bad saying this because the introductory phase of any new technology should be a time to say “Hey, go crazy! Try stuff out! See what works and what doesn’t!” but because Web Components are so powerful I think that mindset could end up doing a lot of damage.

Web developers have been given powerful features in the past. Vendor prefixes in CSS were a powerful feature that allowed browsers to push the boundaries of CSS without creating a Tower of Babel of proprietary properties. But because developers just copied and pasted code, browser makers are now having to support prefixes that were originally scoped to different rendering engines. That’s not the fault of the browser makers. That’s the fault of web developers.

With Web Components, we are being given a lot of rope. We can either hang ourselves with it, or we can make awesome …rope …structures …out of rope this analogy really isn’t working.

I’m not suggesting we have some kind of central authority that gets to sit in judgement on which Web Components pass muster (although Addy’s FIRST principles are a great starting point). Instead I think a web of trust will emerge.

If I see a Web Component published by somebody at Paciello Group, I can be pretty sure that it will be accessible. Likewise, if Christian publishes a Web Component, it’s a good bet that it will use progressive enhancement. And if any of the superhumans at Filament Group share a Web Component, it’s bound to be accessible, performant, and well thought-out.

Because—as is so often the case on the web—it’s not really about technologies at all. It’s about people.

And it’s precisely because it’s about people that I’m so excited about Web Components …and simultaneously so nervous about Web Components.

ProgrammableWeb: APIsSenfluence Sentiment & Influence

Senfluence is formed from two words: sentiment and influence. The goals of this API are to understand sentiment and to analyze the popularity of websites in 13 languages. When users are redirected to the demo link, they will first see the benefits, which include obtaining sentiment analysis based on positive, negative, and neutral feedback and scoring documents based on keywords. Other uses of the API include adding scores to developers' tools and analyzing non-social-media web documents. This API supports the JSON format. On the site, developers will find an API key, the API syntax for JSON, and a sample API response.
Date Updated: 2014-09-11

ProgrammableWeb: APIsPrint From Windows Phone

Print From Windows Phone offers an API in SOAP format that can be used to send documents to print using the print spooler software. Users need to install the software on the PC in order to connect to the destination printer, either via Wi-Fi or USB. The site shows 3 steps to use Print From Windows Phone: create an account to get access to 10 MB, download the required software, and use the e-mail address on the website to print from a Windows phone. To test the service, developers can review two sample SOAP exchanges with their respective requests and responses.
Date Updated: 2014-09-11

ProgrammableWeb: APIsSMSBump

SMSBump is a multi-channel messaging service that is capable of delivering messages to more than 200 countries and networks. It can send SMS, VMS, USSD, MMS, and WhatsApp messages. The SMSBump API allows users to send any of these kinds of messages and to check their account balances programmatically.
Date Updated: 2014-09-11

ProgrammableWeb: APIsNetBulkSMS

NetBulkSMS is a Nigerian bulk SMS provider that can deliver messages to more than 210 countries around the world, and instant message delivery is guaranteed. The NetBulkSMS API enables developers to send single and bulk SMS from their websites by integrating them with the NetBulkSMS messaging gateway.
Date Updated: 2014-09-11

ProgrammableWeb: APIsExpress SMS

Express SMS is a platform that allows users to send and receive text messages from their applications, websites, or systems. The platform provides international coverage along with intelligent routing, load distribution, and dynamic capacity management. Express SMS's messaging gateway can handle up to 1500 messages per second.
Date Updated: 2014-09-11

Norman Walsh (Sun)The short-form week of 1–7 Sep 2014


The week in review, 140 characters at a time. This week, 8 messages in 10 conversations. (With 4 favorites.)

This document was created automatically from my archive of my Twitter stream. Due to limitations in the Twitter API and occasional glitches in my archiving system, it may not be 100% complete.

Monday at 06:42pm

FAV
Weren't the Garleks all destroyed in the Thyme War? http://t.co/XMb5NCJh8L —@Fizzygrrl

Tuesday at 02:00pm

We're only 48 days away from early voting, and ALL Texas voters need to know about the new ID requirements to vote http://t.co/LY05q8dnWD —@ndw

In a conversation that started on Wednesday at 12:37am

@eliasisrael "No."—@ndw
@ndw So, I’m not crazy then. Good to know.—@eliasisrael
@eliasisrael You may or may not be crazy, I'm unqualified to judge. But that isn't evidence to support the assertion.—@ndw

Wednesday at 05:08pm

RT @OccupyWallStNYC: The new world order. http://t.co/HHYZBMzl52 —@ndw

In a conversation that started on Wednesday at 06:36pm

"Indecision is a bitch." The point half way between "stop" and "go" on a bicycle is falling over.—@ndw
@ndw Especially dramatic if you have bike shoes that fasten to the pedals.—@cramerdw
@cramerdw I do. Got caught on the bike trail's short, steep rise to Stratford. Was going to stop. Car stopped. Was going to go. Fell over.—@ndw

Wednesday at 06:38pm

Wednesday at 11:54pm

RT @billmaher: Just since Aug 4, 19 people have been beheaded in Saudi Arabia, most for non-violent crimes. But they have oil, so those are…—@ndw

Thursday at 09:51pm

FAV
Remember when Twitter was just a protocol and there was a flourishing ecosystem of independent clients atop it? Me too. It was AWESOME.—@dhh

Friday at 03:04pm

FAV
At the end of the day, life should ask us, ‘Do you want to save the changes?’—@BiIIMurray

Saturday at 12:07am

@pgor I use ack every day.—@ndw

Norman Walsh (Sun)The short-form week of 25–31 Aug 2014


The week in review, 140 characters at a time. This week, 10 messages in 16 conversations. (With 5 favorites.)

This document was created automatically from my archive of my Twitter stream. Due to limitations in the Twitter API and occasional glitches in my archiving system, it may not be 100% complete.

Monday at 09:42am

In a conversation that started on Monday at 01:42pm

What's the state of the art in universal remote controls these days? (TV, DVD, Roku, etc.)—@ndw
@ndw There's a Harmony for every price point. Lots of device support.—@willseth
@ndw 83 per household seems pretty typical. I bought 3 items, same make. No inter-operation whatsoever.—@dpawson
@ndw I still love my Harmony One.—@bphogan
@bphogan That one seems to be discontinued, and way more expensive than I recall.—@ndw

Monday at 01:45pm

Screaming nutjobs quoting my nick again. I wonder who they really meant? No, I don't. I really don't care.—@ndw

Tuesday at 01:05pm

If I'd known that an airline power adapter was unavailable for the Thinkpad W530, I would probably have bought it anyway. Probably.—@ndw

In a conversation that started on Wednesday at 10:15am

"An instructor at a shooting range died after a 9yr old girl shot him in the head with the Uzi he was showing her how to use." #wordsfailme —@ndw
.@ndw NRA answer will probably be “To protect kids and adults, we need smaller guns, designed for kids!”—@olivierjeulin

Wednesday at 01:40pm

XML Summer School starts in just over two weeks! Great material! Great instructors! Register now if you haven't! http://t.co/jSpYJ1YXgt —@ndw

Saturday at 12:59pm

FAV
A turn-based strategy wargame where you start in the middle of an unwinnable conflict and have to figure out how to de-escalate into peace.—@wileywiggins

Saturday at 02:31pm

FAV
Dear @twitter : I already have plenty of "engagement," don't force your engagement-boosting ploys on me. Not my fault your growth is slow.—@chort0

Saturday at 03:10pm

Saturday at 05:39pm

RT @jwz: "The things that will last on the internet are not owned." Inessential: My blog isn't part of a system where its... http://t.co/Sj… —@ndw

In a conversation that started on Saturday at 05:43pm

@robinberjon it's a good ques. and I'm not disparaging it, but I fear anyone who doesn't understand "don't be a jerk" is prob. gonna be one.—@ndw
@ndw It can also be good for people regularly at the wrong end of jerking that we've got their backs.—@robinberjon
@ndw That came out all sorts of wrong, but you know what I mean!—@robinberjon
@ndw There is truth there, but it can be useful to have a policy you can point to to tell jerks why they're being kicked.—@robinberjon

Saturday at 05:48pm

@mathling Mmmmmm!—@ndw

Saturday at 06:13pm

FAV
Fun fact: If you like puzzles, Legos, cooking, baking, sewing, or generally making things, you’ll probably like coding, too. #tmyk —@rockbot

Saturday at 06:54pm

@sayveiga Looking awfully tasty!—@ndw

Saturday at 06:55pm

FAV
you can tell that I'm not smart because I keep doing whatever I think is the right thing instead of what would get me a bunch of money—@garybernhardt

Sunday at 07:12am

XML Stars, the journal is out! http://t.co/lNS6ggKpyZ Stories via @xmlgrrl @JeniT @ndw —@dominixml

Norman Walsh (Sun)The short-form week of 18–24 Aug 2014


The week in review, 140 characters at a time. This week, 5 messages in 7 conversations. (With 3 favorites.)

This document was created automatically from my archive of my Twitter stream. Due to limitations in the Twitter API and occasional glitches in my archiving system, it may not be 100% complete.

In a conversation that started on Sunday at 08:02pm

Would a GitHub repo of AsciiDocs form a reasonable approximation of a wiki? With pull requests barring spam? #cantbeanoriginalthought —@ndw
@ndw See ikiwiki for starters, I'd guess?—@tapoueh
@ndw I like the idea. We use it for http://t.co/ZQbk1rZq5GG. See http://t.co/rig2sH0BfX & https://t.co/PQlwGmImnt plus deployment via CI.—@mojavelinux
@ndw GitHub’s wiki (git repo under the hood) allows AsciiDoc syntax, via #asciidoctor see e.g. https://t.co/08lI82lWzj—@rdeltour
@rdeltour Yes, but I assume an open GitHub wiki gets spammed into oblivion pretty quickly. The formality of pull requests might help.—@ndw
@ndw you can restrict the list of editors, but yeah PR don’t work with wikis. I agree that off-wiki adoc + PR can be a nice alternative!—@rdeltour
@ndw also, fwiw you need a gh account to edit an open wiki, not sure if spammers are there (… yet)—@rdeltour

In a conversation that started on Sunday at 09:33pm

@peteaven Geek. (Oh, come on, you *asked* for it.)—@ndw
@ndw Ha! Well played sir.—@peteaven

In a conversation that started on Wednesday at 06:12pm

Laugh at the git/github/Travis n00b. https://t.co/XYJHUCtATV—@ndw
Solution posted in a comment. I knew it had to be easy.—@ndw
@ndw alternatively Travis CI just announced it’s now possible to set variables via the repo setting pane :) http://t.co/KWQ7QGDgrr —@rdeltour

Thursday at 01:52pm

FAV
Be careful not to fill the Vessyl with soylent, as the Matrix is not properly set up to simulate a shame singularity—@Pinboard

Thursday at 02:21pm

FAV
Schemas are an impediment to moving fast the same way street lights are. Useless if you are alone, critical when there's a 100 of you.—@squarecog

Friday at 05:11am

#XProc 2.0 spec draft will be edited on github, built with Travis CI. “freaking magical” as @ndw says! http://t.co/mrnzmpwNNw —@rdeltour

Saturday at 12:27am


Norman Walsh (Sun)The short-form week of 11–17 Aug 2014


The week in review, 140 characters at a time. This week, 21 messages in 15 conversations. (With 3 favorites.)

This document was created automatically from my archive of my Twitter stream. Due to limitations in the Twitter API and occasional glitches in my archiving system, it may not be 100% complete.

In a conversation that started on Wednesday at 08:49pm

Have (almost surely) decided the next release of my DocBook toolchain (XSLT 2.0/HTML5) will require XProc, abandon XSL FO.—@ndw
@ndw I've been out of the XML loop for a year or so - I am curious - why abandon XSL-FO?—@EileenOttawa
@EileenOttawa Working group disbanded. Locus of interest/effort clearly CSS.—@ndw
@ndw Woah! Interesting. I have indeed been under a rock! Thanks much—@EileenOttawa
@ndw How will you get to PDF from XML? @EileenOttawa —@webbr
@webbr @EileenOttawa DocBook XML → "for-print" HTML → HTML+CSS formatter → PDF.—@ndw
@ndw @EileenOttawa Thank you for the details. What's the tool in the chain that gets you to PDF from HTML+CSS?—@webbr
@webbr @EileenOttawa I've tested both PrinceXML (oh, the irony) and AntennaHouse's formatter. There are probably others and surely will be.—@ndw
@ndw @EileenOttawa Do you think that there will soon be a free tool for that link in the chain? At this point, I rely on fop to get to PDF.—@webbr
@webbr @EileenOttawa I expect so. Might be already. Something that rivals fop shouldn't be too difficult.—@ndw
“@ndw: @webbr @EileenOttawa DocBook XML → "for-print" HTML → HTML+CSS formatter → PDF.” @CMaudry sounds familiar? ;)—@wohnjalker
@wohnjalker @ndw @webbr @EileenOttawa The only (enterprise ready) HTML+CSS processor I heard of is @AntennaInfo. Any open source alt?—@CMaudry
@CMaudry @ndw @webbr @EileenOttawa @AntennaInfo how about printing to PDF from your web browser? ;-P—@wohnjalker
@ndw So docbook without print output. Something about lead balloons?—@dpawson
@dpawson No, print output w/XHTML+CSS. Web output won't produce good print, but developer energy is in CSS paged media (vs FO certainly).—@ndw
@ndw Hope you're patient Norm.—@dpawson
@dpawson @ndw at this very moment I'm writing tests for GCPM :)—@dauwhe
@dauwhe @ndw http://t.co/2VgrJha4nKK? Guesses when it might get to PR?—@dpawson
@dpawson @ndw Hoping for last call this autumn, then we'll see.—@dauwhe
[big shift] MT @ndw: Have (almost surely) decided next release of my DocBook toolchain (XSLT 2.0/HTML5) will require XProc, abandon XSL FO—@jeffsonstein
@ndw *Abandon* FO?!? /runs away screaming—@sgmlguru
@sgmlguru It's an experiment. I could be persuaded it's bad, but bet CSS paged media better than fop any day now. Commercial soln already.—@ndw
@ndw It's bad. ;-)—@sgmlguru
@sgmlguru Because?—@ndw
@ndw Every time you use CSS for print formatting, a kitten dies. (In other words, I'm *used to* FO.)—@sgmlguru

In a conversation that started on Monday at 07:06pm

Dang it. I have a toolchain that still relies on FO. I can't just rip it out, I have to make it work in the new workflow. #everythingishard —@ndw
@ndw When you can swap out FO and drop in XSLT+CSS, that is the time to drop FO Norm? You know it makes sense :-)—@dpawson
@ndw lots of people still use FO, you know—@laurendw

In a conversation that started on Friday at 10:58am

That thing where your employer forces you to use Exchange and you wonder if it would be easier to find a different employer.—@ndw
@ndw You wonder? I think it's pretty clear.—@dethe
@ndw @CrazyJoeGallo corollary: your employer makes you use a Mac ...—@pjstadig
Emacs → Exim → SMTP → DavMail → HTTP → Exchange OWS → whatever unholy rites Exchange Server performs. I. WIN.—@ndw
@ndw I spent a bit of the afternoon configuring my version of this at home. Thanks! Knowing it was possible made all the difference.—@mathling
@mathling @ndw Someone should start a NoExchange movement!—@alexmilowski
@alexmilowski @ndw "What do we want? NoExchange! When do we want it? Now!"—@mathling
@mathling @ndw people still think Exchange is enterprise software even at an enterprise software company?—@alexmilowski
@ndw @shelleypowers It's always easier to change employers than use Exchange. And maybe Word.—@gleneivey
@ndw @davemarq If your server allows it you should be able to fetch mail using *NIX tools and avoid much drama.—@polkabecky
@ndw “If you can’t change your organization, you can still change your organization.” — @tedneward —@bsletten
@bsletten @ndw Actually that was Alistair Cockburn: "If you can't change your corporate culture, change your corporate culture". FYI—@tedneward
@bsletten @ndw I only wish I could be that eloquent. :-)—@tedneward
@tedneward @ndw Well, I was willing to give you the benefit of the doubt. So, there’s that.—@bsletten
@bsletten @ndw Which I appreciate. :-)—@tedneward
@ndw @WhatTheBit at least it is not Lotus Notes.—@askbal
@askbal @ndw @WhatTheBit …or GroupWorse™—@gimsieke

Saturday at 05:01pm

FAV
RP NPR + CIA = Credible Disinformation #Snowden #NRP #RecordedFuture http://t.co/axErY15Z6u —@patrickDurusau

Saturday at 06:55pm

All that pre-upgrade day work I did to get data copied was onto incorrectly partitioned disks. #fail #sigh #stillcopying —@ndw

Saturday at 11:08pm

RT @zephoria: Beyond race in America & police militarization, @zeynep argues that #Ferguson is also a net neutrality issue: http://t.co/0Mq… —@ndw

Sunday at 07:13am

XML Stars, the journal is out! http://t.co/rotnE9SrJr Stories via @ndw —@dominixml

In a conversation that started on Sunday at 03:19pm

It is clear that my server is painfully I/O bound. I hope I can get away without fixing that for a while as I have no idea how to.—@ndw
@ndw if you have some batch io processes on there ionice can make a huge dent—@gcarothers
@gcarothers I don't think that's it. I've got four SATA drives hanging off a single controller and I just don't think it's up to the task.—@ndw

Sunday at 06:39pm

FAV
There's technical debt, then there's technical subprime mortgages with exploding balloon payments.—@markimbriaco

In a conversation that started on Sunday at 08:02pm

Would a GitHub repo of AsciiDocs form a reasonable approximation of a wiki? With pull requests barring spam? #cantbeanoriginalthought —@ndw
@ndw See ikiwiki for starters, I'd guess?—@tapoueh
@ndw I like the idea. We use it for http://t.co/ZQbk1rZq5GG. See http://t.co/rig2sH0BfX & https://t.co/PQlwGmImnt plus deployment via CI.—@mojavelinux
@ndw GitHub’s wiki (git repo under the hood) allows AsciiDoc syntax, via #asciidoctor see e.g. https://t.co/08lI82lWzj—@rdeltour
@rdeltour Yes, but I assume an open GitHub wiki gets spammed into oblivion pretty quickly. The formality of pull requests might help.—@ndw
@ndw you can restrict the list of editors, but yeah PR don’t work with wikis. I agree that off-wiki adoc + PR can be a nice alternative!—@rdeltour
@ndw also, fwiw you need a gh account to edit an open wiki, not sure if spammers are there (… yet)—@rdeltour

Sunday at 08:34pm

Upgrade complete. All systems back to normal. Well, as normal as things get around here. http://t.co/nhuQu8OPkN —@ndw

Sunday at 08:35pm

In other news, I moved the XProc test suite to a github repo and released DocBook XSL 2.0 stylesheets 2.0.4-b2. And other stuff. #go #me —@ndw

Sunday at 09:27pm

@jgarber @kplawver @ficly Sorry to see it go. I posted a few things (pseudonymously) and have occasionally imagined sequels.—@ndw

In a conversation that started on Sunday at 09:33pm

@peteaven Geek. (Oh, come on, you *asked* for it.)—@ndw
@ndw Ha! Well played sir.—@peteaven

Sunday at 10:51pm

FAV
63 light years from Earth there's a planet where it rains glass sideways, in 7000km/h winds—@SciencePorn

Amazon Web ServicesKick-Start Your Cloud Storage Project With the Riverbed SteelStore Gateway

Many AWS customers begin their journey to the cloud by implementing a backup and recovery discipline. Because the cloud can provide any desired amount of durable storage that is both secured and cost-effective, organizations of all shapes and sizes are using it to support robust backup and recovery models that eliminate the need for on-premises infrastructure.

Our friends at Riverbed have launched an exclusive promotion for AWS customers. This promotion is designed to help qualified enterprise, mid-market, and SMB customers in North America to kick-start their cloud-storage projects by applying for up to 8 TB of free Amazon Simple Storage Service (S3) usage for six months.

If you qualify for the promotion, you will be invited to download the Riverbed SteelStore™ software appliance (you will also receive enough AWS credits to allow you to store 8 TB of data per month for six months). With advanced compression, deduplication, network acceleration and encryption features, SteelStore will provide you with enterprise-class levels of performance, availability, data security, and data durability. All data is encrypted using AES-256 before leaving your premises; this gives you protection in transit and at rest. SteelStore intelligently caches up to 2 TB of recent backups locally for rapid restoration.

The SteelStore appliance is easy to implement! You can be up and running in a matter of minutes with the implementation guide, getting started guide, and user guide that you will receive as part of your download. The appliance is compatible with over 85% of the backup products on the market, including solutions from CA, CommVault, Dell, EMC, HP, IBM, Symantec, and Veeam.

To learn more or to apply for this exclusive promotion, click here!

-- Jeff;

ProgrammableWeb: APIsMyUSA

To start application development, users can create an account with MyUSA. The next step is to register the application. App creators should keep in mind that until the app is mature, it will be public for all MyUSA users. The main goal of this API is to enable applications that can improve the lives of Americans in the United States. To review some examples, the site displays the Benefits.gov mockup on GitHub, where users can study the integration between Benefits.gov and MyUSA. In addition, the website offers a link to OAuth 2.0; when developers click this link, they will see a wide variety of implementations, such as server and client libraries in Java, PHP, and Python.
Date Updated: 2014-09-09

Amazon Web ServicesUse AWS OpsWorks & Ruby to Build and Scale Simple Workflow Applications

From time to time, one of my blog posts will describe a way to make use of two AWS products or services together. Today I am going to go one better and show you how to bring the following trio of items into play simultaneously:

  • AWS OpsWorks
  • Amazon Simple Workflow Service (SWF)
  • The AWS Flow Framework for Ruby

All Together Now
With today's launch, it is now even easier for you to build, host, and scale SWF applications in Ruby. A new, dedicated layer in OpsWorks simplifies the deployment of workflows and activities written in the AWS Flow Framework for Ruby. By combining AWS OpsWorks and SWF, you can easily set up a worker fleet that runs in the cloud, scales automatically, and makes use of advanced Amazon Elastic Compute Cloud (EC2) features.

This new layer is accessible from the AWS Management Console. As part of this launch, we are also releasing a new command-line utility called the runner. You can use this utility to test your workflow locally before pushing it to the cloud. The runner uses information provided in a new, JSON-based configuration file to register workflow and activity types, and start the workers.

Console Support
A Ruby Flow layer can be added to any OpsWorks stack that is running version 11.10 (or newer) of Chef. Simply add a new layer by choosing AWS Flow (Ruby) from the menu:

You can customize the layer if necessary (the defaults will work fine for most applications):

The layer will be created immediately and will include four Chef recipes that are specific to Ruby Flow (the recipes are available on GitHub):

The Runner
As part of today's release we are including a new command-line utility, aws-flow-ruby, also known as the runner. This utility is used by AWS OpsWorks to run your workflow code. You can also use it to test your SWF applications locally before you push them to the cloud.

The runner is configured using a JSON file that looks like this:

{
  "domains": [{
    "name": "BookingSample"
  }],

  "workflow_workers": [{
    "task_list": "workflow_tasklist"
  }],

  "activity_workers": [{
    "task_list": "activity_tasklist"
  }]
}

Go With the Flow
The new Ruby Flow layer type is available now and you can start using it today. To learn more about it, take a look at the new OpsWorks section of the AWS Flow Framework for Ruby User Guide.

-- Jeff;

Amazon Web ServicesAWS Week in Review - September 1, 2014

Let's take a quick look at what happened in AWS-land last week:

Monday, September 1
  • We celebrated Labor Day in the US, and launched nothing!
Tuesday, September 2
Wednesday, September 3
Thursday, September 4
Friday, September 5

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

-- Jeff;

Jeremy Keith (Adactio)dConstruct 2014

dConstruct is all done for another year. Every year I feel sort of dazed in the few days after the conference—I spend so much time and energy preparing for this event looming in my future, that it always feels surreal when it’s suddenly in the past.

But this year I feel particularly dazed. A little numb. Slightly shellshocked even.

This year’s dConstruct was …heavy. Sure, there were some laughs (belly laughs, even) but overall it was a more serious event than previous years. The word that I heard the most from people afterwards was “important”. It was an important event.

Here’s the thing: if I’m going to organise a conference in 2014 and give it the theme of “Living With The Network”, and then invite the most thoughtful, informed, eloquent speakers I can think of …well, I knew it wasn’t going to be rainbows and unicorns.

If you were there, you know what I mean. If you weren’t there, it probably sounds like it wasn’t much fun. To be honest, “fun” wasn’t the highest thing on the agenda this year. But that feels right. And even though it wasn’t a laugh-fest, it was immensely enjoyable …if, like me, you enjoy having your brain slapped around.

I’m going to need some time to process and unpack everything that was squeezed into the day. Fortunately—thanks to Drew’s typical Herculean efforts—I can do that by listening to the audio, which is already available!

Slap the RSS feed in your generic MP3 listening device of choice and soak up the tsunami of thoughts, ideas, and provocations that the speakers delivered.

Oh boy, did the speakers ever deliver!

Photos: Warren Ellis, Georgina Voss, Clare Reddington, Aaron Straup Cope, Brian Suda, Mandy Brown, Anab Jain, Tom Scott, and Cory Doctorow at dConstruct

Listen, it’s very nice that people come along to dConstruct each year and settle into the Brighton Dome to listen to these talks, but the harsh truth is that I didn’t choose the speakers for anyone else but myself. I know that’s very selfish, but it’s true. By lucky coincidence, the speakers I want to see turn out to deliver the best damn talks on the planet.

That said, as impressed as I was by the speakers, I was equally impressed by the audience. They were not spoon-fed. They had to contribute their time, attention, and grey matter to “get” those talks. And they did. For that, I am immensely grateful. Thank you.

I’m not going to go through all the talks one by one. I couldn’t do them justice. What was wonderful was to see the emerging themes, ideas, and references that crossed over from speaker to speaker: thoughts on history, responsibility, power, control, and the future.

And yes, there was definitely a grim undercurrent to some of those ideas about the future. But there was also hope. More than one speaker pointed out that the future is ours to write. And the emphasis on history highlighted that our present moment in time—and our future trajectory—is all part of an ongoing amazing collective narrative.

But it’s precisely because the future is ours to write that this year’s dConstruct hammered home our collective responsibility. This year’s dConstruct was a grown-up, necessarily serious event that shined a light on our current point in history …and maybe, just maybe, provided some potential paths for the future.

ProgrammableWeb: APIsS3Bubble

S3Bubble is a cloud storage and media streaming service that syncs with an Amazon Web Services API. S3Bubble states that it is a revenue-generating service through storing, selling, and streaming media. Currently in production, the API is meant to give users and developers more control in implementing the S3Bubble plugins.
Date Updated: 2014-09-08

ProgrammableWeb: APIsUSA.gov American Job Center Resource

USA.gov offers the American Job Center Resource API, which is ideal for developers who aim to help job seekers find government employment, training, and education. Users can find this API on ONet, a site that displays an interactive demo with REST services, responses, and resources. The American Job Center API supports HTTP and XML formats. For more information about this API, developers can contact the staff via email; they can also connect on social media to gauge audience preferences.
Date Updated: 2014-09-08

ProgrammableWeb: APIsVacation Labs Tours & Activities

Vacation Labs is a marketing and technology platform for tour operators. It allows tour and activity operators to manage all of their content, rates, inventory, and bookings on a cloud-based SaaS platform and to distribute their products to various marketplaces via APIs. Using this API, marketplaces can get a unified feed of all tours and activities that are being managed on the Vacation Labs platform. Individual tour operators can also use these APIs to build custom tools around the platform.
Date Updated: 2014-09-08

ProgrammableWeb: APIsIntlexc

The Intlexc API provides users with access to a cryptocurrency trading platform, using both public and authenticated methods. Users have the option to create their own private API keys or create public ones for their accounts. Depending on users' needs, the Intlexc API provides different methods and an in-depth discussion of how to use its service. Intlexc is open to all users, individuals and businesses alike.
Date Updated: 2014-09-08

ProgrammableWeb: APIsPalth

The Palth Public API provides users with access to the cryptocurrency trading platform, where they can use different 'get' methods to retrieve information such as market summaries, trades, and orders. There is a standard fee of 0.15% for all trades, unless the user comes with a referral. Individuals and businesses can register with Palth Exchange.
Date Updated: 2014-09-08

ProgrammableWeb: APIs250ok

250ok is a software company that specializes in email delivery. 250ok provides businesses and organizations with the insight necessary to identify and prevent potential risks and issues with delivering messages to their recipients. The 250ok API gives its customers the freedom to share and use its application data however they want. In addition to being able to read data and interact with the application using POST/PUT, 250ok customers can also choose the desired output format (JSON, XML, CSV, or serialized) for their data.
Date Updated: 2014-09-08

ProgrammableWeb: APIsCoin-Swap

Coin-Swap is a cryptocurrency exchange platform that provides users with a secure environment where they can trade Dogecoin. The Coin-Swap API provides its members with public methods to get market information (general market summary, trades, graph data) as well as private methods for more individualized operations, such as checking open orders, checking account balances, and creating buy/sell orders.
Date Updated: 2014-09-08

Daniel Glazman (Disruptive Innovations)Valtrier

Someone just passed me an excerpt from Valérie Trierweiler's incendiary book. It describes the trip to Washington a few days after the 2012 presidential election:

« Apart from Laurent Fabius, you don't need to be an expert to see that most of the new ministers are not up to the job. I am appalled by what I hear. I watch them in silence, wondering how this or that one could have been made a minister. Balance between party currents, balance of the sexes, regional or party balance. Few are there on merit. It is blindingly obvious to the former political journalist I still am deep down. The press criticizes their amateurism. If I were still on the political desk at Match, would I write anything different? But I keep quiet. »

Merci pour ce moment ("Thank you for this moment"), Valérie Trierweiler, Éditions des Arènes, 2014

I think this passage is even more devastating than the comments on François Hollande's private personality. If troubles fly in squadrons, everyone already knows that politicians' betrayed wives could form a whole air force... In any case, few people would have bet a kopeck on her relationship with Hollande after the tweet affair.

If the rest of the book is of the same vintage (which I don't know), then yes, Valérie Trierweiler's book easily moves beyond biography, memoir, cauterization, or even personal revenge to become a grenade with the pin pulled. I therefore expect many more excerpts to be amply commented on in the coming days. Wednesday's Canard will be worth its weight in peanuts, in my opinion... I don't understand how she could publish such a document. Either she knew perfectly well the effect her book would have, in which case this woman needs help, or she didn't, which is staggering. Either way, it's distressing.

(Nota bene: comments are closed on this post; I have better things to do than hunt trolls.)

Amazon Web ServicesFive More EC2 Instance Types for AWS GovCloud (US)

AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move sensitive workloads into the cloud. Today we are enhancing this Region with the addition of five more EC2 instance types. Instances of these types can be launched directly or through Auto Scaling groups.

Let's take a look at the newly available instance types and review the use cases for each one.

HS1 - High Storage Density & Sequential I/O
EC2's HS1 instances provide very high sequential read and write performance per instance, higher storage density than any other EC2 instance type, and the lowest cost per GB of storage. These instances are ideal for data warehousing, Hadoop/MapReduce applications, and parallel file systems. To learn more about this instance type, read my blog post, The New EC2 High Storage Instance Family.

C3 - High Compute Capacity
The C3 instances are ideal for applications that benefit from a higher ratio of compute capacity to memory (in comparison to the General Purpose instances), and are recommended for high performance web servers and other scale-out, compute-intensive applications. To learn more about the C3 instances, read A New Generation of EC2 Instances for Compute-Intensive Workloads.

R3 - Memory Optimized
R3 instances are the latest generation of memory-optimized instances. We recommend them for high performance databases, distributed memory caches, in-memory analytics, genome assembly and analysis, and larger deployments of Microsoft SharePoint and other enterprise applications. The R3 instances support Hardware Virtualization (HVM) Amazon Machine Images (AMIs) only. My recent post, Now Available - New Memory-Optimized EC2 Instances, contains more information.

I2 - High Storage & Random I/O
EC2's I2 instances provide very fast, SSD-backed instance storage optimized for very high random I/O performance, delivering high IOPS at a low cost. You can use I2 instances for transactional systems and high performance NoSQL databases like Cassandra and MongoDB. Like the R3 instances, the I2 instances currently support Hardware Virtualization (HVM) Amazon Machine Images (AMIs) only. I described these instances in considerable detail last year in Amazon EC2's New I2 Instance Type - Available Now!.

T2 - Economical Base + Full-Core Burst
Finally, the T2 instances are built around a processing allocation model that provides a generous, assured baseline of processing power, coupled with the ability to automatically and transparently scale up to a full core when you need more. Bursting is based on "CPU Credits" that you accumulate during quiet periods and spend when things get busy. You can provision an instance of modest size and cost and still keep more than enough capacity in reserve to handle peak demand. To learn more about these instances, read my recent blog post, New Low Cost EC2 Instances with Burstable Performance.
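Here is a back-of-the-envelope illustration of that model in Python. The figures are the launch-time numbers for a t2.micro (6 credits earned per hour, one credit equaling one minute of a full core); the sketch is simplified and ignores both the credits consumed by baseline usage itself and the cap on how many credits an instance can bank:

# Simplified illustration of the T2 CPU credit model (t2.micro figures;
# check current documentation before relying on them).
CREDITS_PER_HOUR = 6              # a t2.micro earns 6 credits per hour
FULL_CORE_MINUTES_PER_CREDIT = 1  # one credit buys one minute of a full core

def credits_after(quiet_hours, burst_minutes, starting_balance=0):
    """Credits remaining after a quiet period followed by a full-core burst."""
    earned = quiet_hours * CREDITS_PER_HOUR
    spent = burst_minutes / FULL_CORE_MINUTES_PER_CREDIT
    return starting_balance + earned - spent

# Eight quiet hours bank 48 credits: enough for a 48-minute full-core burst.
print(credits_after(quiet_hours=8, burst_minutes=48))  # -> 0.0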

Available Now
These instance types are available now to everyone who uses AWS GovCloud (US). Visit the AWS GovCloud (US) EC2 Pricing Page to learn more.

-- Jeff;

ProgrammableWeb: APIsIBM Watson

The IBM Watson Developer Cloud is currently available to a select group of partner developers who are building “Powered by Watson” applications. These developers are exploring how Watson's cognitive capabilities can enhance their businesses. You can follow the Watson developer site for the latest news, technical how-to guides, API docs, and tools; request access to the Watson Developer Cloud and Watson APIs when they become publicly available; or apply to participate in the IBM Watson Ecosystem program.
Date Updated: 2014-09-05

ProgrammableWeb: APIsNuPIC

NuPIC is an open source project written in Python and C++ that implements Numenta's Cortical Learning Algorithm (CLA), which has three principal properties: sparse distributed representations, temporal inference, and online learning. The NuPIC API allows developers to work with the raw algorithms, string together multiple regions (including hierarchies), and utilize other platform functions.
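To make the first of those properties concrete, here is a toy Python illustration of sparse distributed representations. It demonstrates the underlying idea (two random SDRs barely overlap, while an SDR always overlaps fully with itself), not the actual NuPIC API:

# Toy illustration of sparse distributed representations (SDRs): long
# binary vectors with only a small fraction of bits active, compared by
# the overlap of their active bits.
import numpy as np

N, ACTIVE = 2048, 40  # 2048-bit code with ~2% of bits active
rng = np.random.default_rng(42)

def random_sdr():
    """Return a random SDR as a boolean vector with ACTIVE bits set."""
    sdr = np.zeros(N, dtype=bool)
    sdr[rng.choice(N, size=ACTIVE, replace=False)] = True
    return sdr

a, b = random_sdr(), random_sdr()
print("overlap of two random SDRs:", int((a & b).sum()))    # typically 0-3 bits
print("overlap of an SDR with itself:", int((a & a).sum())) # always 40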
Date Updated: 2014-09-05

Amazon Web ServicesLaunch Your Startup at AWS re:Invent

Sitting here at my desk in Seattle, I am surrounded by colleagues who are working non-stop to make this year's AWS re:Invent conference the best one yet! I get to hear all about the keynotes, sessions, and the big party without even leaving my desk.

In 2013, five exciting startups had the opportunity to launch at re:Invent in a session emceed by Amazon CTO Werner Vogels. Representatives from Koality, CardFlight, Runscope, SportXast, and Nitrous.IO each presented for five minutes and fielded Werner's questions for another minute.

The tradition will continue in 2014. If your AWS-powered startup is currently in stealth mode, or if you are already out in the open and ready to announce a major feature on stage with Werner, I would like to invite you to apply to launch at re:Invent.

For consideration, please email the following information to awsstartuplaunch@amazon.com (keep your response to 500 words or less):

      Company Overview - Tell us your company name, location, website URL, and give us some information about your core product or service.
      Launch Details - Tell us what you plan to launch or announce.
      Target Audience - Describe the target market and audience for your product or service (businesses, consumers, teachers, students, etc.).
      AWS Usage - List the AWS services that you use.
      Team Background - Include some background information on you and on your team.
      Exclusive Offer - Tell us about an exclusive offer that you can make to re:Invent attendees.

You'll need to get the information to us before 5:00 PM PT on September 30th. You've got a month, and we can't wait to hear from you!

-- Jeff;
