ProgrammableWeb: Daily API RoundUp: Helcim, Passable, Threat Stack, Plus Amazon Kinesis, Google Kubernetes SDKs

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWeb: Chrome 51 Beta Adds Credential Management API Support

Chrome 51 Beta (the latest version of Chrome) supports the Credential Management API, a standards-track API proposed by the W3C that grants developers programmatic access to a browser's credential manager and streamlines the sign-in process.

ProgrammableWeb: Survey Reveals How Huge Opportunities Remain to Digitally Transform With APIs

With 2016 well underway, MuleSoft, the company behind the Anypoint Platform for API Management, decided to take a survey of IT decision makers (ITDMs) from across the globe to find out how things are going with respect to various organizations' plans for digital transformation.

Amazon Web Services: GE Oil & Gas – Digital Transformation in the Cloud

GE Oil & Gas is a relatively young division of General Electric, the product of a series of acquisitions the parent company began making in the late 1980s. Today GE Oil & Gas is pioneering the digital transformation of the company. In the guest post below, Ben Cabanas, the CTO of GE Transportation and formerly the cloud architect for GE Oil & Gas, talks about some of the key steps involved in a major enterprise cloud migration, the theme of his recent presentation at the 2016 AWS Summit in Sydney, Australia.

You may also want to learn more about Enterprise Cloud Computing with AWS.

Jeff;

Challenges and Transformation
GE Oil & Gas is at the forefront of GE’s digital transformation, a key strategy for the company going forward. The division is also operating at a time when the industry is facing enormous competitive and cost challenges, so embracing technological innovation is essential. As GE CIO Jim Fowler has noted, today’s industrial companies have to become digital innovators to thrive.

Moving to the cloud is a central part of this transformation for GE. Of course, that’s easier said than done for a large enterprise division of our size, global reach, and role in the industry. GE Oil & Gas has more than 45,000 employees working across 11 different regions and seven research centers. About 85 percent of the world’s offshore oil rigs use our drilling systems, and we spend $5 billion annually on energy-related research and development—work that benefits the entire industry. To support all of that work, GE Oil & Gas has about 900 applications, part of a far larger portfolio of about 9,000 apps used across GE. A lot of those apps may have 100 users or fewer, but are still vital to the business, so it’s a huge undertaking to move them to the cloud.

Our cloud journey started in late 2013 with a couple of goals. We wanted to improve productivity in our shop floors and manufacturing operations. We sought to build applications and solutions that could reduce downtime and improve operations. Most importantly, we wanted to cut costs while improving the speed and agility of our IT processes and infrastructure.

Iterative Steps
Working with AWS Professional Services and Sogeti, we launched the cloud initiative in 2013 with a highly iterative approach. In the beginning, we didn’t know what we didn’t know, and had to learn agile as well as how to move apps to the cloud. We took steps that, in retrospect, were crucial in supporting later success and accelerated cloud adoption. For example, we sent more than 50 employees to Seattle for training and immersion in AWS technologies so we could keep critical technical IP in-house. We built foundational services on AWS, such as monitoring, backup, DNS, and SSO automation that, after a year or so, fostered the operational maturity to speed the cloud journey. In the process, we discovered that by using AWS, we can build things at a much faster pace than what we could ever accomplish doing it internally.

Moving to AWS has delivered both cost and operational benefits to GE Oil & Gas.

We architected for resilience, and strove to automate as much as possible to reduce touch times. Because automation was an overriding consideration, we created a “bot army” that is aligned with loosely coupled microservices to support continuous development without sacrificing corporate governance and security practices. We built in security at every layer with smart designs that could insulate and protect GE in the cloud, and set out to measure as much as we could—TCO, benchmarks, KPIs, and business outcomes. We also tagged everything for greater accountability and to understand the architecture and business value of the applications in the portfolio.

Moving Forward
All of these efforts are now starting to pay off. To date, we’ve realized a 52 percent reduction in TCO. That stems from a number of factors, including the bot-enabled automation, a push for self-service, dynamic storage allocation, using lower-cost VMs when possible, shutting off compute instances when they’re not needed, and moving from Oracle to Amazon Aurora. Ultimately, these savings are a byproduct of doing the right thing.

The other big return we’ve seen so far is an increase in productivity. With more resilient, cloud-enabled applications and a focus on self-service capability, we’re getting close to a “NoOps” environment, one where we can move away from “DevOps” and “ArchOps,” and all the other “ops,” using automation and orchestration to scale effectively without needing an army of people. We’ve also seen a 50 percent reduction in “tickets” and a 98 percent reduction in impactful business outages and incidents—an unexpected benefit that is as valuable as the cost savings.

For large organizations, the cloud journey is an extended process. But we’re seeing clear benefits and, from the emerging metrics, can draw a few conclusions. NoOps is our future, and automation is essential for speed and agility—although robust monitoring and automation require investments of skill, time, and money. People with the right skills sets and passion are a must, and it’s important to have plenty of good talent in-house. It’s essential to partner with business leaders and application owners in the organization to minimize friction and resistance to what is a major business transition. And we’ve found AWS to be a valuable service provider. AWS has helped move a business that was grounded in legacy IT to an organization that is far more agile and cost efficient in a transformation that is adding value to our business and to our people.

— Ben Cabanas, Chief Technology Officer, GE Transportation


ProgrammableWeb: How Utility and Ecosystem APIs Are Different

In 2011, Eventbrite had a single API integration (with MailChimp), but began developing an API program to drive developer activity in the hope that it would drive more business. In this guest post on the Catchy Agency blog, Eventbrite Product Manager Mitch Colleran discusses how they went about building that program, and some of the things they learned along the way.

Amazon Web Services: Register Now – AWS DevDay in San Francisco

I am a firm believer in the value of continuing education. These days, the half-life of knowledge on any particular technical topic seems to be less than a year. Put another way, once you stop learning, your knowledge base will be just about obsolete within 2 or 3 years!

In order to make sure that you stay on top of your field, you need to decide to learn something new every week. Continuous learning will leave you in a great position to capitalize on the latest and greatest languages, tools, and technologies. By committing to a career marked by lifelong learning, you can be sure that your skills will remain relevant in the face of all of this change.

Keeping all of this in mind, I am happy to be able to announce that we will be holding an AWS DevDay in San Francisco on June 21st. The day will be packed with technical sessions, live demos, and hands-on workshops, all focused on some of today’s hottest and most relevant topics. If you attend the AWS DevDay, you will also have the opportunity to meet and speak with AWS engineers and to network with the AWS technical community.

Here are the tracks:

  • Serverless – Build and run applications without having to provision, manage, or scale infrastructure. We will demonstrate how you can build a range of applications from data processing systems to mobile backends to web applications.
  • Containers – Package your application’s code, configurations, and dependencies into easy-to-use building blocks. Learn how to run Docker-enabled applications on AWS.
  • IoT – Get the most out of connecting IoT devices to the cloud with AWS. We will highlight best practices using the cloud for IoT applications, connecting devices with AWS IoT, and using AWS endpoints.
  • Mobile – When developing mobile apps, you want to focus on the activities that make your app great and not the heavy lifting required to build, manage, and scale the backend infrastructure. We will demonstrate how AWS helps you easily develop and test your mobile apps and scale to millions of users.

We will also be running a series of hands-on workshops that day:

  • Zombie Apocalypse Workshop: Building Serverless Microservices.
  • Develop a Snapchat Clone on AWS.
  • Connecting to AWS IoT.

Registration and Location
There’s no charge for this event, but space is limited and you need to register quickly in order to attend.

All sessions will take place at the AMC Metreon at 135 4th Street in San Francisco.

Jeff;


Amazon Web Services: Hot Startups on AWS – April 2016 – Robinhood, Dubsmash, Sharethrough

Continuing with our focus on hot AWS-powered startups (see Hot Startups on AWS – March 2016 for more info), this month I would like to tell you about:

  • Robinhood – Free stock trading to democratize access to financial markets.
  • Dubsmash – Bringing joy to communication through video.
  • Sharethrough – An all-in-one native advertising platform.

Robinhood
The founders of Robinhood graduated from Stanford and then moved to New York to build trading platforms for some of the largest financial institutions in the world. After seeing that these institutions charged investors up to $10 to place trades that cost almost nothing, they moved back to California with the goal of democratizing access to the markets and empowering personal investors.

Starting with the idea that a technology-driven brokerage could operate with significantly less overhead than a traditional firm, they built a self-serve service that allows customers to sign up in less than 4 minutes. To date, their customers have transacted over 3 billion dollars while saving over $100 million in commissions.

After a lot of positive pre-launch publicity, Robinhood debuted with a waiting list of nearly a million people. Needless to say, they had to pay attention to scale from the very beginning. Using 18 distinct AWS services, an initial team of just two DevOps people built the entire system. They use AWS Identity and Access Management (IAM) to regulate access to services and to data, simplifying their all-important compliance efforts. The Robinhood data science team uses Amazon Redshift to help identify possible instances of fraud and money laundering. Next on the list is international expansion, with plans to make use of multiple AWS Regions.

Dubsmash
The founders of Dubsmash had previously worked together to create several video-powered applications. As the cameras in smartphones continued to improve, they saw an opportunity to create a platform that would empower people to express themselves visually. Starting simple, they built their first prototype in a couple of hours. The functionality was minimal: play a sound, select a sound, record a video, and share. The initial response was positive and they set out to build the actual product.

The resulting product, Dubsmash, allows users to combine video with popular sound bites and to share the videos online – with a focus on modern messaging apps. The founders began working on the app in the summer of 2014 and launched the first version the following November. Within a week it reached the top spot in the German App Store. As often happens, early Dubsmash users have put the app to use in intriguing and unanticipated ways. For example, Eric Bruce uses Dubsmash to create entertaining videos of him and his young son Jack to share with Priscilla (Eric’s wife / Jack’s mother) (read Watch A Father and His Baby Son Adorably Master Dubsmash to learn more).

Dubsmash uses Amazon Simple Storage Service (S3) for video storage, with content served up through Amazon CloudFront.  They have successfully scaled up from their MVP and now handle requests from millions of users. To learn more about their journey, read their blog post, How to Serve Millions of Mobile Clients with a Single Core Server.
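As a rough illustration of that storage-and-serving split (not Dubsmash's actual code), a boto3 sketch might upload a clip to S3 and then hand clients a CloudFront URL; the bucket name and distribution domain below are placeholders.

    import boto3

    # Upload the rendered clip to S3; CloudFront then serves it from the edge.
    # Bucket name and CloudFront domain are invented for illustration.
    s3 = boto3.client("s3")
    s3.upload_file(
        "dub-12345.mp4",
        "example-dubsmash-videos",          # placeholder bucket
        "videos/dub-12345.mp4",
        ExtraArgs={"ContentType": "video/mp4"},
    )

    # The CloudFront distribution is assumed to use the bucket as its origin.
    cdn_url = "https://d1234example.cloudfront.net/videos/dub-12345.mp4"
    print("Serve this URL to clients:", cdn_url)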

Sharethrough
Way back in 2008, a pair of Stanford graduate students were studying the concept of virality and wanted to create ads that would deserve your attention rather than simply stealing it. They created Sharethrough, an all-in-one native advertising platform for publishers, app developers, and advertisers. Today the company employs more than 170 people and serves over 3 billion native ad impressions per month.

Sharethrough includes a mobile-first content-driven platform designed to engage users with quality content that is integrated into the sites where it resides. This allows publishers to run premium ads and to maintain a high-quality user experience. They recently launched an AI-powered guide that helps to maximize the effectiveness of ad headlines.

Sharethrough’s infrastructure is hosted on AWS, where they make use of over a dozen high-bandwidth services, including Kinesis and Dynamo, to match the scale of the technical challenges they face. Relying on AWS allows them to focus on their infrastructure-as-code approach, utilizing tools like Packer and Terraform for provisioning, configuration, and deployment. Read their blog post (Ops-ing with Packer and Terraform) to learn more.

Jeff;


Daniel Glazman (Disruptive Innovations): BlueGriffon officially recommended by the French Government

TL;DR: BlueGriffon is now officially recommended as the HTML editor for the French Administration in its effort to rely on and promote Free Software!

I am very happy to report that BlueGriffon, my cross-platform Wysiwyg Web editor, is officially recommended by the Socle Interministériel de Logiciels Libres for 2016! You will find the official list of recommended software here (PDF document).

ProgrammableWeb: APIs: BNC Bitcoin Liquid EOD

Brave New Coin (BNC) provides research and data services that offer trading insight to developers and companies. Its services include Bitcoin price and charts, historical data, and blockchain consulting. The Bitcoin Liquid EOD API connects an application with daily OHLCV data for the BNC Bitcoin Liquid Index based on a midnight UTC close. This REST API uses JSON data format, and requires API Keys for authentication.
Date Updated: 2016-05-02

ProgrammableWeb: APIs: Passable

Passable API is a free, lightweight, RESTful web service that developers can implement to help prevent password hacking and breaches. Developers send the hash of a password along with the hashing algorithm used, and receive either 0 (insecure password) or 1 (secure password) in response. Connecting to the API can be even simpler than implementing a password check at sign-up: just send the request; if the answer is 1, the user may sign up, and if it is 0, ask the user to choose another password. Passable API is currently in beta.
Date Updated: 2016-05-02
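For illustration, here is a minimal Python sketch of the flow the description outlines; the endpoint URL and parameter names are hypothetical, since the entry does not document the actual request format.

    import hashlib
    import requests

    # Hypothetical endpoint and parameter names -- only the 0/1 response
    # contract comes from the description above.
    PASSABLE_URL = "https://api.passable.example/check"

    def password_is_secure(password: str) -> bool:
        digest = hashlib.sha256(password.encode("utf-8")).hexdigest()
        resp = requests.get(PASSABLE_URL, params={"hash": digest, "algorithm": "sha256"})
        resp.raise_for_status()
        # Per the description: 0 means insecure, 1 means secure.
        return resp.text.strip() == "1"

    # At sign-up: accept the password only if the service answers 1.
    if password_is_secure("correct horse battery staple"):
        print("OK to sign up")
    else:
        print("Ask the user to choose another password")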

Anne van Kesteren (Opera): HTML components

Hayato left a rather flattering review comment to my pull request for integrating shadow tree event dispatch into the DOM Standard. It made me reflect upon all the effort that came before us with regards to adding components to DOM and HTML. It has been a nearly two-decade journey to get to a point where all browsers are willing to implement, and then ship. It is not quite a glacial pace, but you can see why folks say that about standards.

What I think was the first proposal was simply titled HTML Components, better known as HTC, a technology by the Microsoft Internet Explorer team. Then came XBL, a technology developed at Netscape in 2000 by Dave Hyatt (now at Apple) and published in early 2001. In some form that variant of XBL still lives on in Firefox today, although at this point it is considered technical debt.

In 2004 we got sXBL and in 2006 XBL 2.0, the latter largely driven by Ian Hickson with design input from Dave Hyatt. sXBL had various design disputes that could not be resolved among the participants. Selectors versus XPath was a big one. Though even with XBL 2.0 the lesson that namespaces are an unnecessary evil for rather tightly coupled languages was not yet learned. A late half-hearted revision of XBL 2.0 did drop most of the XML aspects, but by that time interest had waned.

There was another multi-year gap and then from 2011 onwards the Google Chrome team put effort into a different, more API-y approach towards HTML components. This was rather contentious initially, but after recent compromises with regards to encapsulation, constructors for custom elements, and moving from selectors to an even more simplistic model (basically strings), this seems to be the winning formula. A lot of it is now part of the DOM Standard and we also started updating the HTML Standard to account for shadow trees, e.g., making sure script elements execute.

Hopefully implementations will follow soon, and then widespread usage, cementing it for a long time to come.

ProgrammableWeb: DuraSpace Aims to Formalize Fedora RESTful API into Testable Specification

DuraSpace, a provider of several open source digital repository and storage platforms, has announced that the Fedora community is currently working on the initial phases of drafting a Fedora RESTful API specification. Fedora currently has a stable, RESTful API and an associated reference implementation.

Shelley Powers (Burningbird): 2016 Election: Why I’m Supporting Clinton

When I voted in the Missouri Presidential Primary, my choice was Hillary Clinton.

I have watched this woman fight the good fight for decades. When Bill Clinton was elected President and appointed her the chair of the task force to create a plan for reforming health care, I was delighted. Not only did we hope to finally bring about health care reform, but we saw a First Lady given a position commensurate with her capabilities. No more flowers and china…real work.

She served as chair of the committee that devised the plan. She testified for days in Congress in support of the plan. She traveled around the country talking about healthcare reform. She worked tirelessly on a plan that would have, among other things, mandated that employers provide health insurance for their employees, ensured that no one could be denied coverage, and guaranteed that lower-income people would not have to pay a dime.

I also watched as Republicans vilified the plan, with a little help from the insurance companies. But what was harder to watch was the Democrats, with their incessant demands to have their own plans considered instead. President Clinton’s Democratic support fragmented to the point that we lost our first, best effort at healthcare reform. Democratic Senator Patrick Moynihan went so far as to declare there was no health care crisis, as he pushed his own agenda. Representative McDermott and Senator Wellstone pushed a single-payer plan that hadn’t a chance in hell of succeeding, with only 4 additional Senators and 90 House Members in support—including an independent by the name of Bernie Sanders.

Thank goodness President Obama didn’t have as many difficulties with the Democratic Congress as Clinton had, or we’d still be hearing that we don’t have a health care crisis, we have a health insurance crisis.

I have health insurance for the first time in years because of the Affordable Care Act. It may not be perfect, it may not be ideal, but when you’ve sweated over the prospect of being financially ruined because you get sick, well, perfection is in the eye of the beholder.

Hillary Clinton has promised to continue support for the Affordable Care Act, and improve on it, as she can, and as Congress allows. It’s a realistic promise that builds on what we have. It takes into consideration the very real makeup of the Congress that will exist in 2017: a House still controlled by a rabid bunch of extremist Republicans, and possibly, only possibly, a Democratic-controlled Senate.

More importantly, she sees the ACA as a stake in the ground to tie new reform to, rather than just digging everything up and starting over.

What Clinton promised on healthcare is what she can, as President, accomplish. There are no fireworks, no talk of revolution—none of the sexy populism and grandiose schemes that seem to be the byword of this election. It would be so easy to promise the moon, along with everyone else, and then backtrack later by claiming Congress is too difficult and the Republicans have too much control. No, she’s quietly confirmed what she knows she can deliver: no more, no less.

In none of the issues listed at the Clinton web site do we see promises that can’t be met. She doesn’t talk about “working towards” goals; she talks about actual, real-world efforts that we can see, and judge, ahead of time, and also hold her accountable for. That’s not glamorous, but it is gutsy.

She isn’t promising to free half the prison population, because she knows most prisons are state prisons and a President has no control over them. But Clinton can work to reduce the reliance on mandatory minimum sentences. She can also work with the Justice Department to ensure equal protection for all under our country’s judicial systems—just like Obama is doing now in places like Ferguson, MO.

Clinton isn’t promising to eliminate all student debt; doing so would take an act of Congress. Instead, she wants to work towards refinancing existing loans so former students have more favorable terms, and to ensure that no one ever pays more than 10 percent of their income. She’s not promising free college for all, but easier access to tuition assistance by expanding on the existing Pell Grant system. Free college tuition for all, even if it were a good idea, would not only take an act of Congress, but would also require support from the leadership in all of the states.

Considering that Republican governors have been cutting state funds to colleges in almost all states they control, I doubt that we’ll see them gladly accept the fact that they have to provide even more funds.

Clinton supports the President’s DREAM Act, and she’ll work towards immigration reform. She promises to help families as much as a President can. No magic wand approach here. No mention of walls, either.

No President has the capability of breaking up the banks, but Clinton has promised to strictly enforce rules against them. And work to strengthen the existing rules, and close loopholes—all actions an Executive can take if Congress resists any other effort.

Clinton won’t ban all fracking, because no President can ban all fracking.  Only an act of Congress can enforce a fracking ban.

But Clinton has promised to phase out fracking on public land, as well as more strictly enforce safety and environmental regulations. The important thing to remember, though, is that we can’t phase out fracking by dumping ourselves back into a dependency on coal and coal mining. We don’t have an infrastructure in place where we can immediately replace non-renewable energy sources with renewable ones. It’s a complex problem with a lot of dependencies, none of which easily fits into a sound bite. But then, I’m not sure that voting for someone because of their sound bites is representative of good governance.

I have no doubts, though, that Clinton will honor the Paris Agreement, work to have Congress ratify it, and even expand on it, if possible. And that she’ll be a fervent proponent for both solar and wind energy.

Hillary Clinton is also capable of holding her own under pressure and attack. We’ve seen the Republicans mount a campaign to tear Clinton down that exceeds any other in modern history. The only person they’ve gone after more virulently is Obama. He’s only been the target for eight years—she’s been a target for decades.

Case in point is the infamous emails. Clinton established her own professionally maintained email server rather than use a popular email service like AOL (Colin Powell’s favorite), and the next thing we know, according to our Republican friends, the fate of the country is undermined.

Partisan hyperbole aside, Clinton scrupulously maintained a set of her emails from when she was Secretary of State, turning them over when requested, and asking for all of the emails to be published so that people could see she had nothing to hide. No other person in the White House Administration has had every email they’ve ever received or sent scrutinized at the level that Clinton’s emails have been. They have passed through the Intelligence Community filter, and when you consider the agencies involved, it’s almost unbelievable that any of them managed to survive relatively intact. Since the last of the emails have been delivered, we now know, for a fact, that none were classified at the time they were sent or received. We also know that emails Powell received in his private email have suffered the same retro-classification, symptomatic more of inter-agency squabbling than a real threat to national security.

Yet here is the Republican National Committee, now suing the State Department for even more, trying to get emails from people connected to Hillary Clinton, many in private organizations; all in a desperate attempt to keep the manufactured scandal alive. It would be funny if it didn’t cost taxpayers millions of dollars.

Thankfully, most of us see this for the desperate and despicable act it is. Clinton will survive it, as she’s survived so many of these contrived controversies. She survived the emails, she survived the Benghazi never-ending committees, which Republicans foolishly admitted were created to undermine her Presidential candidacy.

There are even those who would hold Clinton accountable for the actions of her husband when he was President, as if she’s nothing more than a faint echo of him. You’d think we’d be beyond considering women to be nothing more than appendages of their mates, but evidently not when it suits certain agendas.

Bluntly, in my opinion, Hillary Clinton will be a better President than Bill. She’s more experienced now than he was when he got the job. She’s also more open to hearing new ideas, more aware of all the factors in play that can cause havoc. As practical as she is, she’s also more idealistic—a little more empathetic to what’s happening to the average person.

I’ve also seen Clinton exercise her considerable intellect, her shrewdness, and her team-building skills in support of the United States and our current, much loved President, Barack Obama. Team building doesn’t factor into our discussions about being a good President, but it’s an essential element. FDR managed to pull positive results out of disaster because he was able to swear in the majority of his cabinet the very day he was inaugurated. He had the complete backing, not only of Congress, but also most state governments. To hit the ground running, he knew he had to have a good team in place.

Over the years, Clinton has helped raise funds for Democratic candidates in Congress and down ballot races. She knows, as President, she can’t hope to bring about change all on her own. She needs a solid team behind her.

(We all need more Democrats in Congress if we hope to save this country—something we seem to forget every odd year election. We even need some independents. Among those who have received campaign funds directly from Clinton’s PAC is none other than Bernie Sanders, in his Senatorial race.)

The most recent attack against Clinton is that she’s not publishing transcripts for all her speeches given when she was a private citizen—something that’s never been asked of any Presidential candidate. Why do people want the transcripts? So they can cherry pick through the text, taking bits and pieces out of context in order to misrepresent who she is and what she believes. They did the same thing with her emails.

This isn’t transparency…this is fresh fodder for the anti-Clinton machine, necessary because no matter how much she’s been bashed, she’s survived. The only difference now is that she’s being bashed from the left—an act that has caused deep fissures among those who have long fought in solidarity.

I don’t fault those on the left who don’t support Clinton because they believe Bernie Sanders has a better plan for the country. We each have our own interests and beliefs. I do, however, fault those on the left who repeat the GOP talking points, because they’re so obsessed over Sanders winning that they’ll burn anything in his path—and that includes the future of the country, because they’ll toss this next election to the GOP rather than work with the rest of us to keep what would be a complete disaster from happening.

Bernie or Bust is a shout from the privileged class, because they’re not a minority, or gay, or poor, or a woman who demands the freedom of choice, or Muslim, or an immigrant. They can call for a revolution, safe in the knowledge that they have nothing to lose.

The rest of us live in the real world. We know what’s at stake. I don’t believe Bernie Sanders has a chance for the nomination, but if he were to get it, I would support him.

I feel confident, though, that the Democratic nominee will be Hillary Clinton. She’s our best choice, as President, and our best chance at winning the election. She’ll be a good leader, building on what Obama started, but also adding her own personal touch, and achievements.

We’ll progress under a Hillary Clinton presidency, and isn’t that what being a progressive is all about?

Ms. Squirrel is hiding in a tree until this is all over.


ProgrammableWeb: 7 Deadly Sins of a Microservices Architecture

At the end of 2014, Tareq Abedrabbo, CTO of OpenCredo, published a post titled “The Seven Deadly Sins of Microservices” in which he identified seven common anti-patterns of developing in a microservices architecture. In January 2016, Daniel Bryant from Voxxed published a Redux version of the post, updated to combine his experiences with the original post.

ProgrammableWeb: Daily API RoundUp: Tozny, eSign Genie, PersistIQ, Slurplick, Gophish, Simple Movies API

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWeb: EveryMatrix Updates Web API to Include Native Mobile Support

EveryMatrix announced the upcoming release of its new Web API. The updated API allows the platform to serve mobile content. Accordingly, users can supply gaming content to native mobile apps.

Anne van Kesteren (Opera): Network effects affecting standards

In the context of another WebKit issue around URL parsing, Alexey pointed me to WebKit bug 116887. It handily demonstrates why the web needs the URL Standard. The server reacts differently to %7B than it does to {. It expects the latter, despite the IETF STD not even mentioning that code point.
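For illustration only, the disagreement comes down to this percent-encoding round trip, sketched here with Python's urllib; a conforming client may send %7B where the server in question expects the raw { byte.

    from urllib.parse import quote, unquote

    # "{" is not an unreserved character, so an encoder may escape it...
    print(quote("{"))      # -> %7B
    # ...while the server in the WebKit bug only recognises the literal form.
    print(unquote("%7B"))  # -> {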

Partly to blame here are the browsers. In the early days code shipped without much quality assurance and many features got added in a short period of time. While standards evolved there was not much of a feedback loop going on with the browsers. There was no organized testing effort either, so the mismatch grew.

On the other side, you have the standards folks ignoring the browsers. While they did not necessarily partake in the standards debate back then that much, browsers have had an enormous influence on the web. They are the single most important piece of client software out there. They are even kind of a meta-client. They are used to fetch clients that in turn talk to the internet. As an example, the Firefox meta-client can be used to get and use the FastMail email client.

And this kind of dominance means that it does not matter much what standards say, it matters what the most-used clients ship. Since when you are putting together some server software, and have deadlines to make, you typically do not start with reading standards. You figure out what bits you get from the client and operate on that. And that is typically rather simplistic. You would not use a URL parser, but rather various kinds of string manipulation. Dare I say, regular expressions.

This might ring some bells. If it did, that is because this story also applies to HTML parsing, text encodings, cookies, and basically anything that browsers have deployed at scale and developers have made use of. This is why standards are hardly ever finished. Most of them require decades of iteration to get the details right, but as you know that does not mean you cannot start using any of it right now. And getting the details right is important. We need interoperable URL parsing for security, for developers to build upon them without tons of cross-browser workarounds, and to elevate the overall abstraction level at which engineering needs to happen.

ProgrammableWeb: APIs: BNC Digital Currency Exchange Rates

Brave New Coin (BNC) provides research and data services that offer trading insight to developers and companies. Its services include Bitcoin price and charts, historical data, and blockchain consulting. The Digital Currency Exchange Rates API offers over a hundred digital currency exchange rates. This REST API uses JSON data format, and requires API Keys for authentication.
Date Updated: 2016-04-29

ProgrammableWeb: APIs: Threat Stack Webhook

The Threat Stack Webhook API is a cloud-based security solution that enhances the speed and synchrony of apps that resolve cross-platform events. Users can customize its configurations to automatically prioritize the detection of specified security events. The RESTful API sends requests and responses in JSON format, and access requires API Key authentication. The Threat Stack Webhook API can be employed in apps across different sectors, including financial technology, education, health care, software services, and media.
Date Updated: 2016-04-29

Amazon Web Services: They’re Here – Longer EBS and Storage Gateway Resource IDs Now Available

Last November I let you know that we were planning to increase the length of the resource IDs for EC2 instances, reservations, EBS volumes, and snapshots in 2016. Early this year I showed you how to opt in to the new format for EC2 instances and EC2 reservations.

Effective today you can now opt in to the new format for volumes and snapshots for EBS and Storage Gateway.

As I said earlier:

If you build libraries, tools, or applications that make direct calls to the AWS API, now is the time to opt in and to start your testing process! If you store the IDs in memory or in a database, take a close look at fixed-length fields, data structures, schema elements, string operations, and regular expressions. Resources that were created before you opt in will retain their existing short identifiers; be sure that your revised code can still handle them!

You can opt in to the new format using the AWS Management Console, the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, or by calling the ModifyIdFormat API function.

Opting In – Console
To opt in via the Console, simply log in, choose EC2, and click on Resource ID length management:

Then click on Use Longer IDs for the desired resource types:

Note that volume applies to EBS volumes and to Storage Gateway volumes and that snapshot applies to EBS snapshots (both direct and through Storage Gateway).

For information on using the AWS Command Line Interface (CLI) or the AWS Tools for Windows PowerShell, take a look at They’re Here – Longer EC2 Resource IDs Now Available.
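For a programmatic opt-in, a minimal boto3 sketch of the ModifyIdFormat call mentioned above might look like this; the region is a placeholder and credentials are assumed to be configured in the usual way.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # Opt the calling IAM identity in to long IDs for volumes and snapshots.
    for resource in ("volume", "snapshot"):
        ec2.modify_id_format(Resource=resource, UseLongIds=True)

    # Confirm the current settings.
    print(ec2.describe_id_format()["Statuses"])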

Things to Know
Here are a couple of things to keep in mind as you transition to the new resource IDs:

  1. Some of the older versions of the AWS SDKs and CLIs are not compatible with the new format. Visit the Longer EC2 and EBS Resource IDs FAQ for more information on compatibility.
  2. New AWS Regions get longer instance, reservation, volume, and snapshot IDs by default. You can opt out for Regions that launch between now and December 2016.
  3. Starting on April 28, 2016, new accounts in all commercial regions except Beijing (China) and AWS GovCloud (US) will get longer instance and reservation IDs by default, again with the ability to opt out.
Jeff;


Amazon Web Services: Autheos – At the Nexus of Marketing and E-Commerce

In today’s guest post, Leon Mergen, CTO of Autheos, reviews their company history and their move to AWS.

Jeff;

Adding video to a product page on an e-commerce site is perhaps the single most effective way to drive increased sales — studies have shown sales conversion rates can go up by more than two thirds. In addition, product video viewing data fills a gaping hole in a brand’s / supplier’s ability to assess the effectiveness of their online and offline marketing efforts at driving e-commerce sales. We had built an OK product video distribution platform… but we knew we couldn’t scale globally with the technology we were using. So, in September last year, we decided to transition to AWS, and, while doing so built an e-commerce marketing support tool for Brands which, judging by customer response, is a game changer. This is our story.

The Perils of Good Fortune
Autheos was founded in 2012 when the biggest Webshop in Holland and Belgium asked us to turn an existing piece of technology into a video hosting solution that would automatically find and insert product videos into their product sales pages.  A startup rarely finds itself in a better position to start, so we jumped right in and started coding.  Which was, in retrospect, a mistake for two reasons.

For one thing, we grew too fast.  When you have a great client that really wants your product, the natural reaction is to build it as fast as you can.  So, since there wasn’t a team in place, we (too) quickly on-boarded engineers and outsourced several components to remote development shops, which resulted in classic issues of communication problems and technical incompatibilities.

More importantly, however, since we already had an existing piece of technology, we didn’t take the time to think how we would build it if we were starting from scratch.  It seemed like it would be quicker to adapt it to the new requirements.  And kind of like a home-owner who opts for renovation instead of tear-down and rebuild, we had to make all sorts of compromises as a result.

However, thanks to many all-nighters we managed to meet the deadline and launch a platform that allowed brands such as Philips, LEGO, L’Oreal, and Bethesda to upload product videos (commercials, guides, reviews, and so forth) for free and tag them with a product code and language.

The webshops integrated a small piece of javascript code that enabled them to query our video database in real-time with a product code and language, display a custom button if a video was found, and pop up the right video(s) for the product, in the desired language.

Click here to see an example video on Bol.com (the biggest webshop in Benelux); our video is behind the button.

The results: less work for the webshop (no more manual gathering of videos, decoding/encoding, hosting and matching them with the right products) and more sales. Our client convinced its Brands to start uploading their videos, and kickstarted our exponential growth. Soon we had so many Brands using our platform, and so many videos in our database, that nearly all major webshops in Benelux wanted to work with us as well (often pushed to do so by Brands, who didn’t want the hassle of interfacing / integrating with many different webshops).

This might sound great, but remember how we built the product in a rush with legacy code?  After three years of fire-fighting, interspersed with frequent moments of disbelief when we found out that certain features we wanted to offer were impossible due to limitations in our backend, we decided enough was enough… it was time to start over.

A New Beginning with AWS
Our key requirements were that we needed to seamlessly scale globally, log and process all of our data, and provide high performance access to our ever growing database of product videos. Besides this, we needed to make sure we could ship new features and products quickly without impacting wider operations. Oh, and we wanted to be up and running with the new platform in 6 months. Since AWS is the de-facto standard for web applications, choosing it was easy. However, we soon realized that it wasn’t just an easy decision, it was a really smart one too.

Elastic Transcoder was the main reason for us to decide to go with AWS. Before working with ET, we used a custom transcoding service that had been built by an outsourced company in Eastern Europe. Because the service was hosted there on antiquated servers, it suffered from lots of downtime and caused many headaches. Elastic Transcoder allows us to forget about all these problems, and gives us a stable transcoding service that we can scale on demand.

When we moved our application servers to AWS, we also activated Amazon CloudFront. This was a no-brainer for us even though there are many other CDNs available, as CloudFront integrates unbelievably well within AWS. Essentially it just worked. With a few clicks we were able to build a transcoding pipeline that directly uploads its result to CloudFront. We make a single API call, and AWS takes care of the rest, including CDN hosting. It’s really that easy.
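As a rough sketch of that single-call pattern (not Autheos' actual code), a boto3 job submission to Elastic Transcoder might look like this; the pipeline and preset IDs are placeholders for values created beforehand in the console or API.

    import boto3

    et = boto3.client("elastictranscoder", region_name="eu-west-1")  # placeholder region

    # One API call: transcode an uploaded source file into a web-friendly rendition.
    job = et.create_job(
        PipelineId="1111111111111-abcde1",           # placeholder pipeline ID
        Input={"Key": "uploads/product-video.mov"},
        Output={
            "Key": "videos/product-video-720p.mp4",
            "PresetId": "1351620000001-000010",      # placeholder preset ID
        },
    )
    print(job["Job"]["Id"], job["Job"]["Status"])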

As we generate a huge number of log records every day, we had to make sure these were stored in a flexible and scalable environment. A regular PostgreSQL server would have worked, however, this would never have been cost-efficient at our scale. So we started running some prototypes with Amazon Redshift, the PostgreSQL-compatible data warehousing solution from AWS. We set up Kinesis Firehose to stream data from our application servers to Amazon Redshift, writing it out in batches (in essence creating a full ETL process as a service), something that would have taken a major effort with a traditional webhost. Doing this outside of AWS would have taken months; with AWS we managed to set all of this up in three days.
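A minimal sketch of what handing a log record to such a Firehose delivery stream can look like from an application server, assuming the stream and its Redshift destination already exist; the stream name and record fields are invented for illustration.

    import json
    import time
    import boto3

    firehose = boto3.client("firehose", region_name="eu-west-1")  # placeholder region

    record = {
        "event": "video_view",
        "product_code": "ABC-123",
        "language": "nl",
        "ts": int(time.time()),
    }

    # Firehose buffers these records and copies them into Redshift in batches.
    firehose.put_record(
        DeliveryStreamName="video-logs-to-redshift",  # placeholder stream name
        Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
    )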

Managing this data through data mining frameworks was the next big challenge, for which many solutions exist in the market. However, Amazon has great solutions in an integrated platform that enabled us to test and implement rapidly. For batch processing we use Spark, provided by Amazon EMR. For temporarily hooking into data streams – e.g. our monitoring systems – we use AWS Data Pipeline, which gives us access to the stream of data as it is generated by our application servers, comparable to what Apache Kafka would give you.

Everything we use is accessible through an SDK, which allows us to run integration tests effectively in an isolated environment. Instead of having to mock services, or setting up temporary services locally and in our CI environment, we use the AWS SDK to easily create and clean up AWS services. The flexibility and operational effectiveness this brings is incredible, as our whole production environment can be replicated in a programmable setup, in which we can simulate specific experiments. Furthermore, we catch many more problems by actually integrating all services in all automated tests, something you would otherwise only catch during manual testing / staging.
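As a sketch of that create-and-clean-up pattern (not Autheos' actual test code), a pytest fixture could provision a throwaway S3 bucket per test and tear it down afterwards; the names and region are placeholders.

    import uuid
    import boto3
    import pytest

    REGION = "eu-west-1"  # placeholder region

    @pytest.fixture
    def temp_bucket():
        s3 = boto3.client("s3", region_name=REGION)
        name = "integration-test-" + uuid.uuid4().hex[:12]
        s3.create_bucket(
            Bucket=name,
            CreateBucketConfiguration={"LocationConstraint": REGION},
        )
        yield name
        # Tear down: delete whatever the test uploaded, then the bucket itself.
        for obj in s3.list_objects_v2(Bucket=name).get("Contents", []):
            s3.delete_object(Bucket=name, Key=obj["Key"])
        s3.delete_bucket(Bucket=name)

    def test_upload_roundtrip(temp_bucket):
        s3 = boto3.client("s3", region_name=REGION)
        s3.put_object(Bucket=temp_bucket, Key="hello.txt", Body=b"hello")
        body = s3.get_object(Bucket=temp_bucket, Key="hello.txt")["Body"].read()
        assert body == b"hello"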

Through AWS CloudFormation and AWS CodeDeploy we seamlessly built our cloud using templates, and integrated this with our testing systems in order to support our Continuous Deployment setup. We could, of course, have used Chef or Puppet with traditional webhosts, but the key benefit in using the AWS services for this is that we have instant access to a comprehensive ecosystem of tools and features with which we can integrate (and de-integrate) as we go.

Unexpected Bounty
One month in, things were going so smoothly that we did something that we had never done before in the history of the company:  we expanded our goals during a project without pushing out the delivery date.  We always knew that we had data that could be really valuable for Brands, but since our previous infrastructure made it really difficult to access or work with this data, we had basically ignored it.  However, when we had just finished our migration to Redshift, one of our developers read an article about the powerful combination of Redshift and Periscope.  So we decided to prototype an e-commerce data analysis tool.

A smooth connection with our Redshift tables was made almost instantly, and we saw our 500+ million records visualized in a few graphs that the Periscope team prepared for us.  Jaws dropped and our product manager went ahead and built an MVP. A few weeks of SQL courses, IRC spamming and nagging the Periscope support team later, and we had an alpha product.

We have shown this to a dozen major Brands and the response has been all we could hope for… a classic case of the fabled product / market fit. And it would not have happened without AWS.

An example of the dashboard for one of our Founding Partners (a global game development company).

Jackpot
With a state of the art platform, promising new products, and the backend infrastructure to support global viral growth we finally had a company that could attract the attention of professional investors… and within a few weeks of making our new pitch we had closed our first outside investment round.

We’ve come a long way from working with a bare bones transcoding server, to building a scalable infrastructure and best-in-class products that are ready to take over the world!

Our very first transcoding server.

What’s Next?
Driving viral spread globally to increase network effects, we are signing up new Webshops and Brands at a tremendous pace.  We are putting the finishing touches on the first version of our ecommerce data analysis product for Brand marketers, and speccing out additional products and features for Brands and Webshops working with the Autheos Network.  And of course we are looking for amazing team members to help make this happen. If you would like to join us on the next stage of our journey, please look at our website for current openings — and yes, we are looking for DevOps engineers!

And lastly, since this is the Amazon Web Services blog, we can’t resist being cheeky and thus herewith take the opportunity to invite Mr. Bezos to sit down with us to see if we can become the global product video partner for Amazon.  One thing’s for sure: our infrastructure is the best!

— Leon Mergen, CTO – lmergen@autheos.com

Norman Walsh (Sun): Lucky number 13!


Lucky number 13!

Volume 19, Issue 9; 28 Apr 2016

Thirteen years of random, irregular scribblings.


The secret anniversaries of the heart.

H. W. Longfellow

I happened to notice that I turned this thing on thirteen years ago today. A lot has changed since then, but a lot has also stayed the same.

I’m no longer on the W3C TAG, and there’s nothing new about weblogs. The infrastructure is now MarkLogic server running on Amazon Web Services. The latest incarnation uses semantics (again) to drive most of the navigation. The postings are still written in DocBook and transformed into HTML with the DocBook XSLT 2.0 Stylesheets.

In the future, I’m (still) planning to write more. I have a plan to go through the archives and clean up some of the more out-of-date posts. I also have an idea for publishing posts via one of the markdown flavors. We’ll see.

And Tim is still a more interesting read.


ProgrammableWeb: How Government Meddling and Regulation is Impacting the API Economy

Mark Twain once said: “The mania for giving the Government the power to meddle with the private affairs of cities or citizens is likely to cause endless trouble”. If this is true, at the heart of this “endless trouble” is the role of regulation, a topic that has been dividing opinion for many years in nearly every country across the globe.

ProgrammableWeb: Emirates Launches IATA NDC Compliant B2B API

Emirates joins others in the airline industry with the launch of its IATA NDC compliant Emirates Online B2B API. The API allows distribution partners to connect to Emirates' host reservation system.

ProgrammableWeb: Daily API RoundUp: Google Consumer Surveys, Be Like Bill, Vendasta, MediaMath, Reckon One

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWeb: APIs: Exoscale DNS

Exoscale is a cloud service provider based in Switzerland that features SSD servers, object storage, and protection with the Swiss Federal Data Protection Act. Exoscale offers 7 monthly plans with different hardware specifications according to the developer's needs. The DNS REST API is used to program hosted zones and records. This API exchanges information in JSON format, and uses API Keys for authentication.
Date Updated: 2016-04-28

ProgrammableWeb: APIs: Exoscale Apps

Exoscale is a cloud service provider based in Switzerland that features SSD servers, object storage, and protection with the Swiss Federal Data Protection Act. Exoscale offers 7 monthly plans with different hardware specifications according to the developer's needs. The Apps REST API is used to plug continuous integration or other automation tools into a PaaS deployment. This API exchanges information in JSON format, and uses API Keys for authentication.
Date Updated: 2016-04-28

ProgrammableWeb: APIs: Worldbox

Worldbox provides reports about companies around the world. Information such as company credit reports, company profiles, management reports, and legal status is available. The Worldbox API is used to integrate a company index and information retrieval service. This REST API uses JSON for data exchange, and API Keys plus an API Secret for authentication.
Date Updated: 2016-04-28

ProgrammableWeb: APIs: eSign Genie

eSign Genie is an electronic signature service which allows documents to be signed securely. eSign Genie features document collaboration, a template library, and 256-bit encryption. This service is UETA, ESIGN, HIPAA, FINRA, and CFR Part 11 compliant. eSign Genie also offers an API that allows developers to integrate these eSignature features into their applications. This API is REST based, uses OAuth 2 for authentication, and JSON for data exchange.
Date Updated: 2016-04-28

ProgrammableWeb: APIs: Bitwage

Bitwage is an international payroll service that can deliver wages in digital or local currency. For individual wages, it offers Bitcoin payments, bank wire transfers, money savings in currency or commodities, and a refillable debit card. For team wages, it offers the mentioned services plus an API. The Bitwage API can be used to deliver international freelancer payments. This API exchanges data in JSON format, and uses API Keys for authentication.
Date Updated: 2016-04-28

ProgrammableWeb: APIs: PersistIQ

The PersistIQ API integrates lead generation features into web services. It is designed for business application developers, who can interact with the API via JSON and access data with an API Key. PersistIQ is an outbound sales platform.
Date Updated: 2016-04-28

ProgrammableWeb: APIs: Tozny

Tozny is an authentication platform that uses cellphones as unique identifiers. This platform can be used as a strong cryptography layer for financial services or to eliminate the need for passwords within a consumer application. The Tozny REST API is used to integrate authentication capabilities; it returns data in JSON format, and uses API Keys. Tozny is developed by Galois, an R&D company based in Portland, Oregon.
Date Updated: 2016-04-28

ProgrammableWeb: How to Create a Text-to-Speech Audio Broadcast with PubNub and Raspberry Pi

According to Gartner, there will be nearly 21 billion devices connected to the Internet by the year 2020. That’s a lot of devices. While I have no doubt these predictions are reasonably correct, that doesn’t make it any easier to build applications that connect these devices together in useful ways. Fortunately, there are solutions that make this job easier.

ProgrammableWeb: BitcoinAverage Announces Closed Beta of New Full-Featured API

BitcoinAverage, an aggregated Bitcoin price index, announced a closed beta of its latest, full-featured API. The new API is bundled with a new front-end, allows key generation for authenticated endpoints, provides scalability for enterprise needs, and more, while continuing to provide consistent realtime Bitcoin data.

Matt Webb (Schulze & Webb): How my Twitter bot makes personalised animated GIFs

Ben Brown noticed that my bot @5point9billion made him a personalised animated GIF when it tweeted him yesterday (on the occasion of the light that left Earth as he was born passing, at that very moment, the star Iota Pegasi, a little over 38 light years away). And he was curious about how it did that. So:

There's a previous write-up about @5point9billion here. From that post:

My new bot is called @5point9billion which is the number of miles that light travels in a year. The idea is that you follow it, tweet it the date of your birth (e.g. here's my starter tweet), and then it lets you know whenever you reach Aldebaran or wherever.

You get tweets monthly, and then weekly, and for the last couple of days... and then you pass the star. It feels neat, don't ask me why.

Since that write-up, I've also added a website to the bot. In addition to getting the realtime notifications on Twitter, you can sign in on the site and see what stars you've already reached.

Check this out: There's also a public view, with an animation. This is a 3D animated map of all the star systems we can see from Earth, within 100 light years. It sits there and rotates. You can type in your date of birth, and it'll show you what stars you've already reached.

I made this public view as a "kiosk" mode when @5point9billion was exhibiting at the Art of Bots show earlier this month. The stars were laid out on the floor, fanning out from the Sun which was right by the kiosk. Here's a photo. It was good fun to walk out from the Sun till you find the star you've just passed. And then to walk out to about 80 light years and think, hey, most people die around this point, and look at the stars falling just further from you and think, hey, I probably won't reach those. Huh.

The star map is drawn and animated in Javascript and WebGL using three.js which I really like.

And doesn't it look kinda the same as the personalised star map that the bot made for Ben? Yup.

Making animated GIFs

I knew I wanted to tweet out personalised, animated star maps, whenever a bot follower passed a star (there are over 500 followers, and between 2 and 5 of them pass a star each day).

Routes I considered but discarded pretty fast:

  • Generating the star maps offline. For sketching on my Mac, I use a Python drawing package called PlotDevice -- this is what I used to make the first quick-and-dirty star map. I don't like generating graphics offline because I want the ability to tweak and change my mind
  • Drawing the graphics frame by frame using a dedicated package like Cairo. But I already have star maps in Javascript for the browser. I don't like the idea of having two routes to draw the same graphics for different outputs. Feels like a lot of work

This is the rendering pipeline I settled on (a rough code sketch follows the list):

  • The source animation is the same animation I use for the website... it's drawn in Javascript using three.js. It's just a page on my site
  • I already have queues and asynchronous processing on my website. The website is all Python because that's my preferred language, and I have my own Twitter bot framework that I'm gradually building up (this is a whole other story)
  • When a user passes a star, the machine responsible for that task adds a tweet to the send queue, and flags it as requiring media
  • At the appropriate time, the queue runner loads the animation page using PhantomJS which is a web browser that can run headless on the server. It's possible to drive Phantom from Python using Selenium
  • Because the animation is created on demand, and generated just for this tweet, it can include personalised information like today's date, and the name of the user
  • The animation exposes a single Javascript function, step(), that renders the next frame. Phantom has the ability to reach into a page and make Javascript calls
  • Using Phantom, each frame of the animation is generated by calling step(), captured as a screenshot (a PNG) into an in-memory buffer, and then down-sampled to half its original dimensions (this makes the lines sharper)
  • Using images2gif (this is the Python 3 version of the library), the frames are assembled into an animated GIF, and saved as a temporary file
  • The GIF is optimised by shelling out to gifsicle, a command-line tool for that purpose
  • Finally, the media is uploaded to Twitter using Tweepy. Technically Twitter supports animated GIFs up to 5MB, but this is only available using a kind of chunked upload that Tweepy doesn't yet support, so the GIFs have to come in under 3MB. Twitter returns a media ID, which the code associates with the queued tweet in my send queue, and that is posted when its time comes round. (The send queue ticks every 40 seconds, because Twitter rate limits.)
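
A minimal sketch of that pipeline in Python (not the bot's actual code; the page URL, frame count, window size, and credentials are placeholders):

    # Sketch: drive PhantomJS with Selenium, capture frames, assemble an animated GIF.
    # Placeholder values: ANIMATION_URL, N_FRAMES, the window size, and the Twitter keys.
    import io
    import subprocess
    import tempfile

    import tweepy
    from PIL import Image
    from selenium import webdriver
    from images2gif import writeGif  # the Python 3 port of the library

    ANIMATION_URL = "https://example.com/starmap?user=ben&draw=1"  # placeholder
    N_FRAMES = 50                                                  # placeholder

    def render_frames(url, n_frames):
        """Load the animation page headlessly and grab one screenshot per step() call."""
        driver = webdriver.PhantomJS()      # PhantomJS 2 running on the server
        driver.set_window_size(1200, 1200)
        driver.get(url)
        frames = []
        for _ in range(n_frames):
            driver.execute_script("step();")        # advance the animation one frame
            png = driver.get_screenshot_as_png()    # PNG bytes, kept in memory
            img = Image.open(io.BytesIO(png)).convert("RGB")
            w, h = img.size
            # Down-sample to half size to sharpen the lines in the final GIF.
            frames.append(img.resize((w // 2, h // 2), Image.LANCZOS))
        driver.quit()
        return frames

    def build_gif(frames):
        """Assemble the frames into a GIF and squeeze it with gifsicle."""
        tmp = tempfile.NamedTemporaryFile(suffix=".gif", delete=False)
        writeGif(tmp.name, frames, duration=0.08)
        # Optimise in place by shelling out to gifsicle.
        subprocess.check_call(["gifsicle", "-O3", "--batch", tmp.name])
        return tmp.name

    def tweet_gif(status_text, gif_path):
        """Upload the GIF and post the queued tweet with it attached."""
        auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")   # placeholders
        auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
        api = tweepy.API(auth)
        media = api.media_upload(gif_path)   # has to come in under the ~3MB limit
        api.update_status(status=status_text, media_ids=[media.media_id])

    if __name__ == "__main__":
        gif = build_gif(render_frames(ANIMATION_URL, N_FRAMES))
        tweet_gif("You just passed a star!", gif)

In the real bot the upload and the send queue are decoupled, as described above; the sketch just shows the frame-capture, GIF assembly, and upload steps end to end.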

If you're curious, here's the source animation on the website. And here's how it looks in a tweet.

If you want, knock the "draw=1" off the URL -- you'll get a blank page. Then call step() in your browser's Javascript console and see each frame being generated.

There's a wrinkle: Phantom doesn't support WebGL, so the star map animation in three.js had to be re-written to draw directly to canvas... which three.js supports but you have to add custom sprites and a few other things. It gets hairy, and I'm super happy to have worked with @phl on that side of things -- he looked after the Javascript drawing with his amazing code chops.

Another wrinkle: PhantomJS 2 (which this requires) installs on the Mac using Homebrew just fine, but is a pain to build on Ubuntu which is what my server runs. There's a pre-built binary here.

In summary, this is a rendering pipeline which:

  • Fits my web-first approach... there's no separate drawing package just for these animations, so debugging an image is as simple as opening a browser window
  • Minimises the number of moving parts: I've added the ability to create images using Phantom but that's it, there's no separate drawing package or offline rendering
  • Is agile: I can tweak and change last minute

What else am I using this for?

I prototyped this rendering pipeline with another Twitter bot, @tiny_gravity which just does a tiny particle simulation once every 4 hours. Sometimes it's pretty.

This animation doesn't use three.js for drawing, it uses processing.js, but the principle is the same. Again, the animation is just a webpage, so I can tweak the animated GIFs in the same way I tweak the rest of my website and bot behaviour. Here's that animation as a tweet.

One of the things I'm most enjoying about having multiple projects is how they cross-pollinate.

My main side project right now is my bookshop-in-a-vending-machine called Machine Supply. Here it is at Campus, Google's space for entrepreneurs in Shoreditch, London.

It tweets when it sells a book. Because of course it does.

The selection is changed over every Monday, and you'll notice that each of the books has a card on the front (here's a photo) because every book is recommended by a real human made of meat.

These cards and the shelf talkers (the labels which give the item code and the price) are beautifully designed by my new friends at Common Works. But they're a pain to produce: the layout templates are in InDesign (which I don't have), so I have to send an Excel spreadsheet of the new stock over to Sam at Common Works, who puts it into the template and prints it.

My new process comes straight out of the @5point9billion code. The browser is my layout tool.

So Sam moved from InDesign to the web, and here are this week's shelf talkers as HTML. This is part of my admin site; I've temporarily turned off permission checking on this page so you can see it. The template is automatically populated with details from the weekly planogram. (A planogram is the merchandising layout for a set of shelves or a store.)

And here's the exact same page as a PDF. The pipeline is taken from @5point9billion: Phantom is used to grab the webpage, and this time render it to a PDF, complete with vector fonts and graphics. Because it's a PDF, it's super exact -- which it needs to be to print right and fit neatly on the shelf edge.

It's much quicker this way.

My rule for Machine Supply, as a side project, is that it should take the minimum of my time, never feel like an obligation, and I should be able to manage it on the hoof. As a hobby, it should be Default Alive.

So automation is helpful. I like that this mode of generating PDFs can be done without my laptop: I can do everything from my phone, and print wirelessly.

Anyway. You should follow @5point9billion! It's fun, and you get a personalised animated GIF every time you pass a star, generated with the most ludicrous rendering pipeline ever.

Norman Walsh (Sun)Barton Creek, April 2016

<article class="essay" id="R.1" lang="en"><header class="essay-titlepage">

Barton Creek, April 2016

Volume 19, Issue 8; 27 Apr 2016

Barton Creek before the deluge and after the deluge.


One good flood is better than a hundred baskets of manure.

Hindu Proverb

As chance would have it, I strolled along the banks of Barton Creek just a few days before the rains came. It was gentle and bucolic.

<figure class="figure-wrapper" id="R.1.4">
Barton Creek Before
Barton Creek Before
</figure>

Bethan and I waded across it several times. We found a little Stinkpot. I didn’t take many pictures because it didn’t strike me as remarkable.

<figure class="figure-wrapper" id="R.1.6">
Stinkpot (Sternotherus odoratus)
Stinkpot (Sternotherus odoratus)
</figure>

And then it rained. And rained. And rained some more. Places like Houston got seriously, dangerously flooded. To the best of my knowledge there was nothing like that around here. I live up on a hill, so I never felt at risk.

When the sun came back out again, I took another walk along the creek. For comparison, here’s Twin Falls from a year ago:

<figure class="figure-wrapper" id="R.1.9">
Twin Falls, before
Twin Falls, before
</figure>

Here’s an “after” picture from a similar view:

<figure class="figure-wrapper" id="R.1.11">
Twin Falls, after
Twin Falls, after
</figure>

This is a spot where we waded across the creek in water probably never much more than calf high.

<figure class="figure-wrapper" id="R.1.13">
Crossover spot
Crossover spot
</figure>

I don’t think I want to wade in there now!

The area just above Twin Falls is usually mostly dry, except for the two channels of the creek that form the twin falls.

<figure class="figure-wrapper" id="R.1.16">
Above Twin Falls
Above Twin Falls
</figure>

Not dry so much after the rain!

<figure class="figure-wrapper" id="R.1.18">
Twin Falls
Twin Falls
</figure>

Finally, here’s another spot on the creek, just down the road from my apartment. It’s one of the places where the greenbelt trail crosses the creek. In high summer, this crossing is bone dry. Even when the creek is running, it’s generally possible to cross without getting your feet wet.

<figure class="figure-wrapper" id="R.1.20">
Twin Falls
Twin Falls
</figure>

I think you could swim across here, but I didn’t try. (Forgot to bring a waterproof bag for my phone!)

(More pics in the Barton Creek set.)


ProgrammableWeb: APIsGophish

The Gophish API integrates simulated phishing campaign features into applications. Developers can set templates and targets, launch a campaign, and measure results. The API uses JSON and requires Key-based authentication. Gophish is an open source framework, available for download, that is used for phishing simulation training.
Date Updated: 2016-04-27

ProgrammableWeb: APIsProdigi Print

Prodigi is a U.K.-based, on-demand printing service that offers framing, art consultancy, and worldwide fulfillment. The Print API can be used to integrate this international on-demand fulfillment service into e-commerce platforms. With the Print API, developers can also place orders or obtain live order status with tracking information. This REST API responds in JSON format and uses API Keys for authentication.
Date Updated: 2016-04-27

ProgrammableWeb: APIsSimapi

Simapi (Simple Movies API) is a system that allows developers to integrate movie information retrieval into applications. Information such as movie title, movie year, actors, and IMDb movie ID can be obtained in JSON format.
Date Updated: 2016-04-27

ProgrammableWeb: APIsSlurplick

Slurplick is a recommendations platform that extracts information about product images. The platform features machine learning, analytics, and a cloud infrastructure. Slurplick offers an API that functions as a machine learning recommendation system. The API is REST based and uses OAuth 2 for authentication.
Date Updated: 2016-04-27

ProgrammableWeb: APIsIATACodes

IATACodes is a system that provides International Air Transport Association codes for airports, airlines, cities, and aircraft. It also provides information about countries, routes, and time zones. This data can be used for SEO purposes or to improve the functionality of search and meta-search services. Paid users receive real-time flight statistics as well as access to additional languages (fr, de, es, it, ru, th, tr). This REST API uses API Keys for authentication and exchanges information in JSON format.
Date Updated: 2016-04-27

ProgrammableWeb: APIsZalando

Zalando is a fashion platform for the European market that offers business partners and their third-party applications an expansive shopping store with secure transactions. Zalando is based in Germany, and its strategic areas include "Consumer Products, Brand and Merchant Products, Intermediary Products, and Core Capabilities". The Zalando API is REST based, exchanges information in JSON format, and uses OAuth 2 for authentication.
Date Updated: 2016-04-27

Amazon Web ServicesMachine Learning, Recommendation Systems, and Data Analysis at Cloud Academy

In today’s guest post, Alex Casalboni and Giacomo Marinangeli of Cloud Academy discuss the design and development of their new Inspire system.

Jeff;

Our Challenge
Mixing technology and content has been our mission at Cloud Academy since the very early days. We are builders and we love technology, but we also know content is king. Serving our members with the best content and creating smart technology to automate it is what kept us up at night for a long time.

Companies are always fighting for people’s time and attention, and at Cloud Academy we face those same challenges. Our goal is to empower people and help them learn new cloud skills every month, but we kept asking ourselves: “How much content is enough? How can we understand our customers’ goals and help them select the best learning paths?”

With this vision in mind, about six months ago we created a project called Inspire, which focuses on machine learning, recommendation systems, and data analysis. Inspire solves our problem on two fronts. First, we see an incredible opportunity to improve the way we serve our content to our customers. It will allow us to provide better suggestions and create dedicated learning paths based on an individual’s skills, objectives, and industry. Second, Inspire represents an incredible opportunity to improve our operations. We manage content that requires constant updates across multiple platforms, with a continuously growing library of new technologies.

For instance, getting a notification to train on a new EC2 scenario that you’re using in your project can really make a difference in the way you learn new skills. By collecting data across our entire product, such as when you watch a video or when you’re completing an AWS quiz, we can gather that information to feed Inspire. Day by day, it keeps personalising your experience through different channels inside our product. The end result is a unique learning experience that will follow you throughout your entire journey and enable a customized continuous training approach based on your skills, job and goals.

Inspire: Powered by AWS
Inspire is heavily based on machine learning and AI technologies, enabled by our internal team of data scientists and engineers. Technically, this involves several machine learning models, which are trained on the huge amount of collected data. Once the Inspire models are fully trained, they need to be deployed in order to serve new predictions, at scale.

Here the challenge has been designing, deploying and managing a multi-model architecture, capable of storing our datasets, automatically training, updating and A/B testing our machine learning models, and ultimately offering a user-friendly and uniform interface to our website and mobile apps (available for iPhone and Android).

From the very beginning, we decided to focus on high availability and scalability. With this in mind, we designed an (almost) serverless architecture based on AWS Lambda. Every machine learning model we build is trained offline and then deployed as an independent Lambda function.

Given the current maximum execution time of 5 minutes, we still run the training phase on a separate EC2 Spot instance, which reads the dataset from our data warehouse (hosted on Amazon RDS), but we are looking forward to migrating this step to a Lambda function as well.

We are using Amazon API Gateway to manage RESTful resources and API credentials, by mapping each resource to a specific Lambda function.

The overall architecture is logically represented in the diagram below:

Both our website and mobile app can invoke Inspire with simple HTTPS calls through API Gateway. Each Lambda function logically represents a single model and aims at solving a specific problem. In more detail, each Lambda function loads its configuration by downloading the corresponding machine learning model from Amazon S3 (i.e. a serialized representation of it).
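
A minimal sketch of such a per-model function, assuming a Python runtime and a pickled, scikit-learn-style model; the bucket name, key, and event shape are placeholders, not the actual Inspire code:

    # Sketch: one Lambda function per model, loading a serialized model from Amazon S3.
    # Bucket, key, event shape, and the model interface are assumptions for illustration.
    import pickle

    import boto3

    S3_BUCKET = "inspire-models"          # placeholder
    S3_KEY = "recommender/latest.pkl"     # placeholder

    _model = None  # cached across warm invocations of the same Lambda container

    def _load_model():
        """Download and deserialize the model on the first (cold) invocation only."""
        global _model
        if _model is None:
            obj = boto3.client("s3").get_object(Bucket=S3_BUCKET, Key=S3_KEY)
            _model = pickle.loads(obj["Body"].read())
        return _model

    def handler(event, context):
        """Entry point wired to API Gateway; 'features' is whatever the request mapping passes in."""
        features = event.get("features", [])
        score = _load_model().predict([features])[0]
        return {"prediction": float(score)}  # assumes a numeric score

Because the model is cached in the module scope, warm invocations skip the S3 download entirely, which is one way the very low response times described below can be achieved.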

Behind the scenes, and without any impact on scalability or availability, an EC2 instance takes care of periodically updating these S3 objects as the outcome of the offline training phase.

Moreover, we want to A/B test and optimize our machine learning models: this is transparently handled in the Lambda function itself by means of SixPack, an open-source A/B testing framework which uses Redis.

Data Collection Pipeline
As far as data collection is concerned, we use Segment.com as a data hub: with a single API call, it allows us to log events into multiple external integrations, such as Google Analytics, Mixpanel, etc. We also developed our own custom integration (via webhook) in order to persistently store the same data in our AWS-powered data warehouse, based on Amazon RDS.

Every event we send to Segment.com is forwarded to a Lambda function – passing through API Gateway – which takes care of storing real-time data into an SQS queue. We use this queue as a temporary buffer in order to avoid scalability and persistency problems, even during downtime or scheduled maintenance. The Lambda function also verifies the authenticity of the received data thanks to a signature uniquely provided by Segment.com.
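
A minimal sketch of that webhook function, again assuming a Python runtime; the signature header name, the HMAC scheme, and the queue URL are placeholders rather than Segment.com's documented behaviour:

    # Sketch: verify the webhook signature, then buffer the raw event in SQS.
    # Header name, HMAC algorithm, and queue URL are assumptions for illustration.
    import hashlib
    import hmac
    import json
    import os

    import boto3

    QUEUE_URL = os.environ.get("EVENTS_QUEUE_URL", "")        # placeholder
    SHARED_SECRET = os.environ.get("SEGMENT_SHARED_SECRET", "")  # placeholder

    sqs = boto3.client("sqs")

    def handler(event, context):
        body = event.get("body", "")
        signature = event.get("headers", {}).get("x-signature", "")
        expected = hmac.new(SHARED_SECRET.encode(), body.encode(), hashlib.sha1).hexdigest()
        if not hmac.compare_digest(signature, expected):
            return {"statusCode": 403, "body": "bad signature"}
        # Buffer the raw event; the EC2 fleet drains the queue asynchronously.
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)
        return {"statusCode": 200, "body": json.dumps({"queued": True})}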

Once raw data has been written onto the SQS queue, an elastic fleet of EC2 instances reads each individual event – hence removing it from the queue without conflicts – and writes it into our RDS data warehouse, after performing the required data transformations.
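
A minimal sketch of one such worker, assuming Python and boto3; write_to_warehouse() stands in for the real data transformations and the RDS insert:

    # Sketch of one worker in the EC2 fleet that drains the SQS buffer (illustrative only).
    import json

    import boto3

    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/events"  # placeholder

    def write_to_warehouse(record):
        # Placeholder: transform the event and INSERT it into the RDS data warehouse.
        print("stored", record.get("event"))

    def run():
        sqs = boto3.client("sqs")
        while True:
            resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                       MaxNumberOfMessages=10,
                                       WaitTimeSeconds=20)  # long polling
            for msg in resp.get("Messages", []):
                write_to_warehouse(json.loads(msg["Body"]))
                # Deleting the message is what prevents another worker from re-processing it.
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

    if __name__ == "__main__":
        run()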

The serverless architecture we have chosen drastically reduces the costs and problems of our internal operations, besides providing high availability and scalability by default.

Our Lambda functions have a pretty constant average response time – even during load peaks – and the SQS temporary buffer makes sure we have a fairly unlimited time and storage tolerance before any data gets lost.

At the same time, our machine learning models won’t need to scale up in a vertical or distributed fashion since Lambda takes care of horizontal scaling. Currently, they have an incredibly low average response time of 1ms (or less).

We consider Inspire an enabler for everything we do from a product and content perspective, both for our customers and our operations. We’ve worked to make this the core of our technology, so that its contributions can quickly be adapted and integrated by everyone internally. In the near future, it will be able to independently make decisions for our content team while focusing on our customers’ needs. At the end of the day, Inspire really answers our team’s doubts about which content we should prioritize, what works better, and exactly how much of it we need. Our ultimate goal is to improve our customers’ learning experience by making Cloud Academy smarter by building real intelligence.

Join our Webinar
If you would like to learn more about Inspire, please join our April 27th webinar – How we Use AWS for Machine Learning and Data Collection.

Alex Casalboni, Senior Software Engineer, Cloud Academy
Giacomo Marinangeli, CTO, Cloud Academy

PS – Cloud Academy is hiring – check out our open positions!

ProgrammableWebDropbox Retires Sync and Datastore APIs

Last year Dropbox announced a preview of the new Dropbox API v2. The new version simplified the Dropbox developer experience. As part of the move toward a more streamlined platform, Dropbox has announced the deprecation of its Sync and Datastore APIs.

ProgrammableWebWho is the WordPress REST API Really Serving?

One of the most important questions to ask yourself when considering a new business venture is, “What problem does this solve?” Many of the most successful businesses resolve issues for users, or at least make some kind of process easier.

Daniel Glazman (Disruptive Innovations)First things first

While implementing many new features in Postbox, I carefully read (several times) Mark Surman's recent article on Thunderbird's future. I also read Simon Phipps's report twice. The contract offer for a Thunderbird Architect posted by Mozilla is also worth reading:

... Thunderbird is facing a number of technical challenges, including but not limited to:

  • ...
  • The possible future deprecation of XUL, its current user interface technology and XPCOM, its current component technology, by Mozilla
  • ...

In practice, the last line above means for Thunderbird:

  1. rewrite the whole UI and the whole JS layer with it
  2. most probably rewrite the whole SMTP/MIME/POP/IMAP/LDAP/... layer
  3. most probably have a new Add-on layer or, far worse, no more Add-ons

Well, sorry to say, but that's a bit of a « technical challenge »... So yes, that's indeed a « fork in the road », but let's be serious for a second: it's unfortunately this kind of fork; rewriting the app is not a question of if but only a question of when. Unless Thunderbird dies entirely, of course.

Evaluating potential hosts for Thunderbird, and a fortiori choosing one, seems to me rather difficult without first discussing the XUL/XPCOM-less future of the app, i.e. without having in hand the second milestone delivered by the Thunderbird Architect. First things first. I would also be interested in knowing how many people MoCo will dedicate to the deXULXPCOMification of Firefox; that would allow some extrapolation and yield some pretty solid (and probably rather insurmountable...) requirements for TB's host.

Last but not least, and from a more personal point of view, I feel devastated comparing Mark's article with the Mozilla Manifesto.

ProgrammableWeb: APIsMediaMath

The MediaMath API allows developers to integrate reporting features of the company's TerminalOne Marketing Operating System into media marketing applications. The API is REST based and uses an API Key for authentication. MediaMath provides digital marketing services.
Date Updated: 2016-04-26

ProgrammableWeb: APIsVendasta

The Vendasta API integrates sales and marketing features into business applications. The API is available over HTTP with JSON responses, and authentication requires an API Key. Vendasta is an intelligent sales platform for vendors who target small and medium-sized businesses.
Date Updated: 2016-04-26

ProgrammableWeb: APIsOrange Form Filling France

With the Orange Form Filling France API, customers can automatically complete registration forms faster using reliable information from Orange, such as addresses and telephone numbers. The API is REST based, uses JSON over HTTP, and requires OAuth2 authentication. Orange is a global telecommunications company that provides IT and telecommunications services.
Date Updated: 2016-04-26

ProgrammableWeb: APIsOrange Check ID France

By using the Orange Check ID API, customers gain extra security because their identity is double-checked prior to any transaction validation. Users can benefit from reduced fraud, reduced costs, and faster transactions. The API is REST based, uses JSON over HTTP, and requires OAuth2 authentication. Orange is a global telecommunications company that provides IT and telecommunications services.
Date Updated: 2016-04-26

ProgrammableWeb: APIsOrange Authentication France

With the Orange Authentication France API, developers gain new ways to authenticate users. Recognized automatically through the Orange mobile network, users won’t have to remember another password while still benefiting from strong security. The API is REST based, uses JSON over HTTP, and requires OAuth2 authentication. Orange is a global telecommunications company that provides IT and telecommunications services.
Date Updated: 2016-04-26

Norman Walsh (Sun)Places

<article class="essay" id="R.1" lang="en"><header class="essay-titlepage">

Places

Volume 19, Issue 7; 18 Apr 2016; last modified 25 Apr 2016

Keeping track of where you want to go by writing an app. Because that’s what you do, right?


There's no place like 127.0.0.1. (Or ::1, I suppose.)


I collect stuff. Not so much in the physical world, but in the digital world, I accumulate all kinds of data and metadata (not that there’s any clear distinction between those). I geotag my photographs. I create web pages for all my travel itineraries. I record GPS tracks of hikes and other recreational outings. I can tell you every movie I’ve seen since 2003. I’ve scanned hundreds of business cards. [Stop now before they think you’re crazy, —ed]

I’m largely unsatisfied with how all of this information is collected and preserved: several calendars, Google contacts, Emacs Org files, scattered XML and RDF/Linked Data documents, and a bunch of Evernote notebooks. But that’s not what this posting is about.

One of the Evernote notebooks is “Travel - Places to go”: a collection of web clippings, magazine scans, and cryptic notes. I was looking at it the other day. Two thoughts struck me: first, the notes would be a lot more useful if they were on a map, and second, there are a lot of Wikipedia pages in that notebook.

Wikipedia pages. A lot of structured Wikipedia data is available in DBpedia. And thus an idea was born:

<figure class="figure-wrapper" id="R.1.7">
places.nwalsh.com
places.nwalsh.com
</figure>

It’d be easy:

  1. Grab the structured data from DBpedia: geo_coordinates_en.tql.bz2, geo_coordinates_mappingbased_en.tql.bz2, images_en.tql.bz2, instance_types_en.tql.bz2, labels_en.tql.bz2, mldbmirror-config.json, short_abstracts_en.tql.bz2.

  2. Write a couple hundred lines of Perl to de-normalize those files into JSON documents (a rough code sketch also follows this list):

    {
        "uri": "https://en.wikipedia.org/wiki/Eiffel_Tower",
        "id": "wiki-221a0e",
        "type": "Building",
        "image": "http://en.wikipedia.org/wiki/Special:FilePath/Tour_Eiffel_Wikimedia_Commons.jpg",
        "coord": [
            48.858222,
            2.2945
        ],
        "title": "Eiffel Tower",
        "summary": "The Eiffel Tower (/ˈaɪfəl ˈtaʊər/ EYE-fəl TOWR; French: tour Eiffel [tuʁ‿ɛfɛl]
    About this sound listen) is an iron lattice tower located on the Champ de Mars in Paris,
    France. It was named after the engineer Alexandre Gustave Eiffel, whose company designed
    and built the tower."
    }
  3. Upload the roughly million or so JSON documents to MarkLogic and set up a couple of indexes.

  4. Bang out a surprisingly small amount of JavaScript to display an OpenStreetMap map with Leaflet.

  5. Write a few short XQuery modules to search for places within geospatial constraints and maintain document collections (which is how I chose to manage which places you want to see, went to, or want to see again).

  6. Write a little more XQuery and a little more JavaScript to display a popup box for each place.
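
A rough sketch of the de-normalization step, written here in Python rather than the Perl described above; the predicate-to-field mapping, the id scheme, and the one-predicate-per-file simplification are illustrative assumptions, and the real dumps need more care (language tags, datatype literals, the paired geo coordinates, and so on):

    # Sketch: fold several DBpedia .tql (N-Quads) dumps into one JSON document per subject.
    import bz2
    import hashlib
    import json
    import re
    from collections import defaultdict

    QUAD = re.compile(r'^<([^>]+)>\s+<([^>]+)>\s+(.+?)\s+<[^>]*>\s*\.\s*$')

    # dump file -> (predicate we care about, JSON field it feeds); assumptions for illustration
    SOURCES = {
        "labels_en.tql.bz2": ("http://www.w3.org/2000/01/rdf-schema#label", "title"),
        "short_abstracts_en.tql.bz2": ("http://www.w3.org/2000/01/rdf-schema#comment", "summary"),
        "images_en.tql.bz2": ("http://xmlns.com/foaf/0.1/depiction", "image"),
    }

    def object_value(raw):
        """Return a plain string for either a URI object or a quoted literal."""
        if raw.startswith("<"):
            return raw[1:raw.index(">")]
        m = re.match(r'"(.*)"', raw, re.S)
        return m.group(1) if m else raw

    docs = defaultdict(dict)
    for path, (predicate, field) in SOURCES.items():
        with bz2.open(path, mode="rt", encoding="utf-8", errors="replace") as f:
            for line in f:
                m = QUAD.match(line)
                if m and m.group(2) == predicate:
                    docs[m.group(1)][field] = object_value(m.group(3))

    with open("places.jsonl", "w", encoding="utf-8") as out:
        for uri, fields in docs.items():
            fields["uri"] = uri
            fields["id"] = "wiki-" + hashlib.sha1(uri.encode()).hexdigest()[:6]  # guessed scheme
            out.write(json.dumps(fields, ensure_ascii=False) + "\n")

The output is one JSON document per subject URI, in roughly the shape shown in step 2, ready to be loaded into MarkLogic.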

<figure class="figure-wrapper" id="R.1.10">
The Eiffel Tower
The Eiffel Tower
</figure>

It took literally a couple hours, most of which was spent working out the format of, and groveling over, huge .bz2 files. MarkLogic is a goddamn Swiss Army Chainsaw: I didn’t expect it to be difficult, but I was genuinely surprised how quickly it came together. I built a useful, custom geospatial mapping application with all of Wikipedia in a couple of hours!

I’ve since then spent maybe a couple of days total adding a few more features: the ability to add new places, per-user notes, import and export, and geocoding for address searches.

It’d probably take another week or so to polish off the remaining rough edges, and someone with actual design skills to make it look nice, but that’s ok. It totally scratches my itch. Now I just have to figure out how to connect the places to Evernote pages. Hmmm, maybe I should use Trello instead. Ooh, now there’s an idea…


ProgrammableWebDaily API RoundUp: Doarama, PayTraq, Avaza, Plus Tradable, Joysticket SDKs

Every day, the ProgrammableWeb team is busy, updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

Amazon Web ServicesAWS Week in Review – April 18, 2016

Let’s take a quick look at what happened in AWS-land last week:

Monday, April 18
Tuesday, April 19
Wednesday, April 20
Thursday, April 21
Friday, April 22
Saturday, April 23
Sunday, April 24

New & Notable Open Source

New SlideShare Presentations

New Customer Success Stories

New YouTube Videos

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Jeff;

ProgrammableWebApple Warns Developers To Update Watch Apps

Prepare ye thy Watch apps with the latest watchOS SDK, sayeth Apple, lest they be counted unworthy and cast outside the iTunes App Store. Seriously, folks: if you haven't, it's time to upgrade your Apple Watch app.

ProgrammableWebW3C Publishes First Public Working Drafts of Payment Specifications

The W3C Web Payments Working Group has published the first public drafts of payment specifications, which include a Payment Request API, payment method identifiers, and basic card payment. The basic idea behind the specifications is to make payments easier and more secure for end users.
