ProgrammableWeb: Why and How Every Organization With APIs Must React Immediately to the Yahoo! Breach

Last week, it was revealed that over 500 million Yahoo! accounts were breached. At first blush, a great many IT personnel are likely to dismiss the breach as having little or no impact on their organizations. But that is not the case, and it is particularly important for just about any organization -- especially those that are currently providing APIs -- to take action immediately.

Micah Dubinko (Yahoo!): Quantified bronchitis

Lots of people are using Fitbit to get into better shape. But I haven’t heard of anyone using trackers to better measure what happens when you get sick.

As it turns out, I’ve had a nasty case of bronchitis over the last week. This is the sickest I’ve been in a while, and I find it fascinating to look at the data. I hope folks from Fitbit corporate, and other interested individuals will be able to make use of the data.

First off, here’s what shortness of breath and fever do to you: they raise your heart rate.


My normal resting heart rate is in the low 60s. But as you can see here, it hardly dropped below 90, even while I was asleep. Fitbit considers a rate of 89 or above to be the “fat burn” zone, and I spent an incredible 20 hours (less two minutes) in that zone. Interestingly, Fitbit considered my resting heart rate to be only 76 bpm. I think it uses some kind of rolling average over many days.
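Fitbit hasn’t published how it computes the resting figure, but a multi-day rolling average would behave exactly like this: one feverish day barely moves it. Here’s a minimal sketch of that idea (the 7-day window is purely a guess, not Fitbit’s actual algorithm):

```python
# Sketch: a rolling-average resting heart rate, one reading per day.
# Fitbit's real algorithm is undocumented; the 7-day window is a guess.
from collections import deque

def rolling_resting_rate(daily_rates, window=7):
    """Yield the average of the most recent `window` daily resting rates."""
    recent = deque(maxlen=window)
    for rate in daily_rates:
        recent.append(rate)
        yield sum(recent) / len(recent)

# Six normal days in the low 60s, then one feverish day at 90:
rates = [62, 61, 63, 62, 60, 61, 90]
print([round(r, 1) for r in rolling_resting_rate(rates)])
# The final average stays in the 60s despite the 90 bpm day.
```

Under that kind of smoothing, a reported resting rate of 76 after several sick days in a row is exactly what you’d expect.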


Here’s what the resting heart rate looked like:


And here’s the overall time in various zones.



The highest spike is the day we were just talking about. But the days leading up to it showed significantly raised heart rates too. After that, it dropped off fast, but the scale can be deceptive. The data point third from the end still represents more than four hours in the “fat burn” zone, which is a lot for lying in bed.

Next up, step counts:


I muddled through Wednesday not feeling well, but after that, activity crashed the way you’d expect when you’re spending entire days in bed.

Lastly, calories:


The leftmost column is the Wednesday I muddled through, with a burn of 3,004 calories, which is pretty typical. But the next day, despite my step count dropping by 90%, Fitbit recorded a burn of almost 3,500 calories. True, this was with 14-and-a-half hours in the “fat burn” zone, but I don’t think that could make so much of a difference. This has to be a bug. Somehow, my set of physiological conditions triggered some defect in the algorithms that made them badly overestimate my burn.

[From what research I could find, running a fever of 102 does increase your basal metabolic rate, but not by that much. The increase probably falls short of compensating for the decreased physical activity while sick.]
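A commonly cited rule of thumb is that basal metabolic rate rises roughly 7% per degree Fahrenheit of fever; plugging in rough, assumed numbers (not my actual data) shows why the fever alone can’t explain a 3,500-calorie day:

```python
# Back-of-the-envelope check, assuming the commonly cited rule of thumb
# that BMR rises roughly 7% per degree Fahrenheit of fever.
BASE_BMR = 1700          # kcal/day; an assumed typical basal burn, not measured
NORMAL_TEMP = 98.6       # degrees Fahrenheit
fever = 102.0

extra_fraction = 0.07 * (fever - NORMAL_TEMP)   # simple linear approximation
fever_bmr = BASE_BMR * (1 + extra_fraction)
print(round(fever_bmr))
```

That works out to only a few hundred extra calories per day of basal burn, nowhere near enough to offset a 90% drop in activity.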

Friday, the day I had 20 hours in the “fat burn” zone, I recorded a more realistic 2438 calorie burn.

What do you think? If you have fitness tracker data from when you’re sick, I’d love to see it.

Thanks, -m

Amazon Web Services: AWS Answers – Architect More Confidently & Effectively on AWS

After an organization decides to move to the AWS Cloud and to start taking advantage of the benefits that it offers, one of the next steps is to figure out how to properly architect its applications. Having talked to many such organizations, I know that they are looking for best practices and prescriptive design patterns, along with some ready-made solutions and some higher-level strategic guidance.

To this end, I am pleased to share the new AWS Answers page with you:

Designed to provide you with clear answers to your common questions about architecting, building, and running applications on AWS, the page includes categorized guidance on account, configuration & infrastructure management, logging, migration, mobile apps, networking, security, and web applications. The information originates from well-seasoned AWS architects and is presented in Q&A format. Every contributor to the answers presented on this page has spent time working directly with our customers and their answers reflect the hands-on experience that they have accumulated in the process.

Each answer offers prescriptive guidance in the form of a high-level brief or a fully automated solution that you can deploy using AWS CloudFormation, along with a supporting Implementation Guide that you can view online or download in PDF form. Here are a few to whet your appetite:

How can I Deploy Preconfigured Protections Using AWS WAF? – The solution will set up preconfigured AWS WAF rules and custom components, including a honeypot.

How do I Automatically Start and Stop my Amazon EC2 Instances? – The solution will set up the EC2 Scheduler in order to stop EC2 instances that are not in use and start them again when they are needed.
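As I understand it, the solution reads a schedule from instance tags and applies it on a timer; the sketch below only illustrates the kind of start/stop decision involved, and the "start:stop" tag format shown is an invented example, not the solution's actual configuration syntax:

```python
# Minimal sketch of a start/stop decision like the EC2 Scheduler's.
# The "start:stop" tag format and the weekday rule are illustrative
# assumptions, not the solution's real syntax.

def desired_state(tag_value, hour, weekday):
    """Return 'running' or 'stopped' for a tag like '08:18' (start/stop hours)."""
    start, stop = (int(part) for part in tag_value.split(":"))
    if weekday >= 5:                 # Saturday/Sunday: keep the instance stopped
        return "stopped"
    return "running" if start <= hour < stop else "stopped"

print(desired_state("08:18", hour=9, weekday=1))   # running
print(desired_state("08:18", hour=22, weekday=1))  # stopped
# A scheduled function would then apply the result with the EC2 API's
# start_instances / stop_instances calls.
```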

What Should I Include in an Amazon Machine Image? – This brief provides best practices for creating images and introduces three common AMI designs.

How do I Implement VPN Monitoring on AWS? – The solution will deploy a VPN Monitor and automatically record historical data as a custom CloudWatch metric.
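A minimal version of that idea polls the tunnel status and publishes it as a 0/1 datapoint; the metric namespace and names below are invented for illustration, not what the solution actually uses, and the CloudWatch call is shown in a comment rather than executed:

```python
# Sketch: turn a VPN tunnel status into a 0/1 CloudWatch datapoint.
# Namespace and metric names here are invented for illustration.

def tunnel_metric(status: str) -> float:
    """Map a tunnel status string (e.g. from describe_vpn_connections) to 0/1."""
    return 1.0 if status == "UP" else 0.0

# A monitor would then publish the value periodically, e.g. with boto3:
#   boto3.client("cloudwatch").put_metric_data(
#       Namespace="Custom/VPN",
#       MetricData=[{"MetricName": "TunnelState", "Value": tunnel_metric("UP")}],
#   )
print(tunnel_metric("UP"), tunnel_metric("DOWN"))
```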

How do I Share a Single VPN Connection with Multiple VPCs? – This brief helps you minimize the number of remote connections between multiple Amazon VPC networks and your on-premises infrastructure.


Amazon Web Services: AWS Week in Review – September 19, 2016

Eighteen external and internal contributors worked together to create this edition of the AWS Week in Review. If you would like to join the party (with the possibility of a free lunch at re:Invent), please visit the AWS Week in Review on GitHub.


September 19


September 20


September 21


September 22


September 23


September 24


September 25

New & Notable Open Source

  • ecs-refarch-cloudformation is a reference architecture for deploying microservices with Amazon ECS, AWS CloudFormation (YAML), and an Application Load Balancer.
  • rclone syncs files and directories to and from S3 and many other cloud storage providers.
  • Syncany is an open source cloud storage and filesharing application.
  • chalice-transmogrify is an AWS Lambda Python Microservice that transforms arbitrary XML/RSS to JSON.
  • amp-validator is a serverless AMP HTML Validator Microservice for AWS Lambda.
  • ecs-pilot is a simple tool for managing AWS ECS.
  • vman is an object version manager for AWS S3 buckets.
  • aws-codedeploy-linux is a demo of how to use CodeDeploy and CodePipeline with AWS.
  • autospotting is a tool for automatically replacing EC2 instances in AWS AutoScaling groups with compatible instances requested on the EC2 Spot Market.
  • shep is a framework for building APIs using AWS API Gateway and Lambda.

New SlideShare Presentations

New Customer Success Stories

  • NetSeer significantly reduces costs, improves the reliability of its real-time ad-bidding cluster, and delivers 100-millisecond response times using AWS. The company offers online solutions that help advertisers and publishers match search queries and web content to relevant ads. NetSeer runs its bidding cluster on AWS, taking advantage of Amazon EC2 Spot Fleet Instances.
  • New York Public Library migrated its fractured IT environment, which ran on older technology and legacy computing, to a modernized platform on AWS. The New York Public Library has been a provider of free books, information, ideas, and education for more than 17 million patrons a year. Using Amazon EC2, Elastic Load Balancing, Amazon RDS, and Auto Scaling, NYPL is able to build scalable, repeatable systems quickly at a fraction of the cost.
  • MakerBot uses AWS to understand what its customers need, and to go to market faster with new and innovative products. MakerBot is a desktop 3-D printing company with more than 100,000 customers using its 3-D printers. MakerBot uses Matillion ETL for Amazon Redshift to process data from a variety of sources in a fast and cost-effective way.
  • University of Maryland, College Park uses the AWS cloud to create a stable, secure, and modern technical environment for its students and staff while ensuring compliance. The University of Maryland is a public research university located in the city of College Park, Maryland, and is the flagship institution of the University System of Maryland. The university used AWS to migrate all of its data centers to the cloud, and uses Amazon WorkSpaces to give students access to software anytime, anywhere, and on any device.

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Amazon Web Services: Expanding the M4 Instance Type – New M4.16xlarge

EC2’s M4 instances offer a balance of compute, memory, and networking resources and are a good choice for many different types of applications.

We launched the M4 instances last year (read The New M4 Instance Type to learn more) and gave you a choice of five sizes, from large up to 10xlarge. Today we are expanding the range with the introduction of a new m4.16xlarge with 64 vCPUs and 256 GiB of RAM. Here’s the complete set of specs:

Instance Name | vCPU Count | RAM     | Instance Storage | Network Performance | EBS-Optimized
m4.large      | 2          | 8 GiB   | EBS Only         | Moderate            | 450 Mbps
m4.xlarge     | 4          | 16 GiB  | EBS Only         | High                | 750 Mbps
m4.2xlarge    | 8          | 32 GiB  | EBS Only         | High                | 1,000 Mbps
m4.4xlarge    | 16         | 64 GiB  | EBS Only         | High                | 2,000 Mbps
m4.10xlarge   | 40         | 160 GiB | EBS Only         | 10 Gbps             | 4,000 Mbps
m4.16xlarge   | 64         | 256 GiB | EBS Only         | 20 Gbps             | 10,000 Mbps

The new instances are based on Intel Xeon E5-2686 v4 (Broadwell) processors that are optimized specifically for EC2. When used with Elastic Network Adapter (ENA) inside of a placement group, the instances can deliver up to 20 Gbps of low-latency network bandwidth. To learn more about the ENA, read my post, Elastic Network Adapter – High Performance Network Interface for Amazon EC2.

Like the m4.10xlarge, the m4.16xlarge allows you to control the C states to enable higher turbo frequencies when you are using just a few cores. You can also control the P states to lower performance variability (read my extended description in New C4 Instances to learn more about both of these features).

You can purchase On-Demand Instances, Spot Instances, and Reserved Instances; visit the EC2 Pricing page for more information.

Available Now
As part of today’s launch we are also making the M4 instances available in the China (Beijing), South America (São Paulo), and AWS GovCloud (US) regions.


ProgrammableWeb: Voximplant Web SDK 4.0 Features Microsoft Edge Audio Calls Support

Voximplant, a cloud-based voice and video communications platform provider, has announced the release of Voximplant Web SDK 4.0, a beta SDK that developers can use to add voice and video communications capabilities to Web applications. Voximplant Web SDK 4.0 includes a number of key features such as Microsoft Edge audio calls support, the ability to enable video during an existing audio call, modification of audio and video streams via filters, and an H.264 video codec "high priority" setting.

ProgrammableWeb: Amazon API Gateway Updates Aim to Simplify API Development

Amazon API Gateway is a managed service that allows users to develop, host, and monitor API backends on AWS infrastructure. In a move that aims to simplify API development while tightening integration with other AWS services, Amazon has announced several updates to API Gateway.

Amazon Web Services: AWS Hot Startups – September 2016

Tina Barr is back with this month’s hot startups on AWS!


It’s officially fall so warm up that hot cider and check out this month’s great AWS-powered startups:

  • Funding Circle – The leading online marketplace for business loans.
  • Karhoo – A ride comparison app.
  • nearbuy – Connecting customers and local merchants across India.

Funding Circle (UK)
Funding Circle is one of the world’s leading direct lending platforms for business loans, where people and organizations can invest in successful small businesses. The platform was established in 2010 by co-founders Samir Desai, James Meekings, and Andrew Mullinger as a direct response to the noncompetitive lending market that exists in the UK. Funding Circle’s goal was to create the infrastructure – similar to a stock exchange or bond market – where any investor could lend to small businesses. With Funding Circle, individuals, financial institutions, and even governments can lend to creditworthy small businesses using an online direct lending platform. Since its inception, Funding Circle has raised $300M in equity capital from the same investors that backed Facebook, Twitter, and Sky. The platform expanded to the US market in October 2013 and launched across Continental Europe in October 2015.

Funding Circle has given businesses the ability to apply online for loans much faster than they could through traditional routes, due in part to the absence of high-overhead branch costs and legacy IT issues. Their investors include more than 50,000 individuals, the Government-backed British Business Bank, the European Investment Bank, and many local councils and large financial institutions. To date, more than £1.4 billion has been lent through the platform to nearly 16,000 small businesses in the UK alone. Independent experts predict that Funding Circle will see continued strong growth in the UK business lending market over the next decade. The platform has also made a huge impact on the UK economy – boosting it by £2.7 billion, creating up to 40,000 new jobs, and helping to build more than 2,000 new homes.

As a regulated business, Funding Circle needs separate infrastructure in multiple geographies. AWS provides similar services across all of Funding Circle’s territories. They use the full AWS stack from the top, with Amazon Route 53 directing traffic across global Amazon EC2 instances, to data analytics with Amazon Redshift.

Check out this short video to learn more about how Funding Circle works!

Karhoo (New York)
Daniel Ishag, founder and CEO of Karhoo, found himself in a situation many of us have probably been in. He was in a hotel in California using an app to call a cab from one of the big on-demand services. The driver cancelled. Daniel tried three or four different companies and again, they all cancelled. The very next day he was booking a flight when he saw all of the ways in which travel companies clearly presented airline choices for travelers. Daniel realized that there was great potential to translate this to ground transportation – specifically with taxis and licensed private hire. Within 48 hours of this realization, he was on his way to Bombay to prototype the product.

Karhoo is the global cab comparison and booking app that provides passengers with more choices each time they book a ride. By connecting directly to the fleet dispatch system of established black cab, minicab, and executive car operators, the app allows passengers to choose the ride they want, at the right price with no surge pricing. The vendor-neutral platform also gives passengers the ability to pre-book their rides days or months in advance. With over 500,000 cars on the platform, Karhoo is changing the landscape of the on-demand transport industry.

In order to build a scalable business, Karhoo uses AWS to implement many independent integration projects, run an operation that is data-driven, and experiment with tools and technologies without committing to heavy costs. They utilize Amazon S3 for storage and Amazon EC2, Amazon Redshift, and Amazon RDS for operation. Karhoo also uses Amazon EMR, Amazon ElastiCache, and Amazon SES and is looking into future products such as a mobile device testing farm.

Check out Karhoo’s blog to keep up with their latest news!

nearbuy (India)
nearbuy is India’s first hyper-local online platform that gives consumers and local merchants a place to discover and interact with each other. They help consumers find some of the best deals in food, beauty, health, hotels, and more in over 30 cities in India. Here’s how to use them:

  • Explore options and deals at restaurants, spas, gyms, movies, hotels and more around you.
  • Buy easily and securely, using credit/debit cards, net-banking, or wallets.
  • Enjoy the service by simply showing your voucher on the nearbuy app (iOS and Android).

After continuously observing the amount of time people were spending on their mobile phones, six passionate individuals decided to build a product that allowed for all goods and services in India to be purchased online. nearbuy has been able to make the time gap between purchase and consumption almost instant, make experiences more relevant by offering them at the user’s current location, and allow services such as appointments and payments to be made from the app itself. The nearbuy team is currently charting a path to define how services can and will be bought online in India.

nearbuy chose AWS in order to reduce its time to market while aggressively scaling its operations. They leverage Amazon EC2 heavily and were one of the few companies in the region running their entire production load on EC2. The container-based approach has not only helped nearbuy significantly reduce its infrastructure cost, but has also enabled it to implement CI/CD (Continuous Integration / Continuous Deployment), which has dramatically reduced time to ship.

Stay connected to nearbuy by following them at

Tina Barr

Amazon Web Services: Now Available – Amazon Linux AMI 2016.09

My colleague Sean Kelly is part of the team that produces the Amazon Linux AMI. He shared the guest post below in order to introduce you to the newest version!


The Amazon Linux AMI is a supported and maintained Linux image for use on Amazon EC2.

We offer new major versions of the Amazon Linux AMI after a public testing phase that includes one or more Release Candidates. The Release Candidates are announced in the EC2 forum and we welcome feedback on them.

Launching 2016.09 Today
Today we are launching the 2016.09 Amazon Linux AMI, which is supported in all regions and on all current-generation EC2 instance types. The Amazon Linux AMI supports both HVM and PV modes, as well as both EBS-backed and Instance Store-backed AMIs.

You can launch this new version of the AMI in the usual ways. You can also upgrade an existing EC2 instance by running the following commands:

$ sudo yum clean all
$ sudo yum update

And then rebooting the instance.

New Features
The Amazon Linux AMI’s roadmap is driven in large part by customer requests. We’ve added a number of features in this release in response to these requests and to keep our existing feature set up-to-date:

Nginx 1.10 – Based on numerous customer requests, the Amazon Linux AMI 2016.09 repositories include the latest stable Nginx 1.10 release. You can install or upgrade to the latest version with sudo yum install nginx.

PostgreSQL 9.5 – Many customers have asked for PostgreSQL 9.5, and it is now available as a separate package from our other PostgreSQL offerings. PostgreSQL 9.5 is available via sudo yum install postgresql95.

Python 3.5 – Python 3.5, the latest in the Python 3.x series, has been integrated with our existing Python experience and is now available in the Amazon Linux AMI repositories. This includes the associated virtualenv and pip packages, which can be used to install and manage dependencies. The default python version for /usr/bin/python can be managed via alternatives, just like our existing Python packages. Python 3.5 and the associated pip and virtualenv packages can be installed via sudo yum install python35 python35-virtualenv python35-pip.
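Once installed, you can confirm that the 3.5-specific additions are present with a tiny standard-library check; nothing here is specific to the Amazon Linux packaging:

```python
# Quick sanity check of a feature new in Python 3.5 (stdlib only).
import math
import sys

def close_enough(a: float, b: float) -> bool:  # type hints: PEP 484
    """math.isclose was added in Python 3.5."""
    return math.isclose(a, b, rel_tol=1e-9)

assert sys.version_info >= (3, 5)
print(close_enough(0.1 + 0.2, 0.3))  # True, despite floating-point rounding
```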

Amazon SSM Agent – The Amazon SSM Agent allows you to use Run Command in order to configure and run scripts on your EC2 instances and is now available in the Amazon Linux 2016.09 repositories (read Remotely Manage Your Instances to learn more). Install the agent by running sudo yum install amazon-ssm-agent and start it with sudo /sbin/start amazon-ssm-agent.

Learn More
To learn more about all of the new features of the new Amazon Linux AMI, take a look at the release notes.

Sean Kelly, Amazon Linux AMI Team

PS – If you would like to work on future versions of the Amazon Linux AMI, check out our Linux jobs!


ProgrammableWeb: BeyondTrust Announces Password Management API

BeyondTrust, a global information security company dedicated to preventing privilege abuse, has announced a free API that enables users to call stored credentials from its PowerBroker Password Safe.

Amazon Web Services: AWS Pop-up Loft and Innovation Lab in Munich

I’m happy to be able to announce that an AWS Pop-up Loft is opening in Munich on October 26th, with a full calendar of events and a brand-new AWS Innovation Lab, all created with the help of our friends at Intel and Nordcloud. Developers, entrepreneurs, and students come to AWS Lofts around the world to learn, code, collaborate, and ask questions. The Loft will provide developers and architects in Munich with access to local technical resources and expertise that will help them to build robust and successful cloud-powered applications.

Near Munich Königsplatz Station
This loft is located at Brienner Str 49, 80333 in Munich, close to Königsplatz Station and convenient to Stiglmaierplatz. Hours are 10 AM to 6 PM Monday through Friday, with special events in the evening.

During the day, you will have access to the Ask an Architect Bar, daily education sessions, Wi-Fi, a co-working space, coffee, and snacks, all at no charge. There will also be resources to help you to create, run, and grow your startup including educational sessions from local AWS partners, accelerators, and incubators.

Ask an Architect
Step up to the Ask an Architect Bar with your code, architecture diagrams, and your AWS questions at the ready! Simply walk in. You will have access to deep technical expertise and will be able to get guidance on AWS architecture, usage of specific AWS services and features, cost optimization, and more.

AWS Education Sessions
During the day, AWS Solution Architects, Product Managers, and Evangelists will be leading 60-minute educational sessions designed to help you to learn more about specific AWS services and use cases. You can attend these sessions to learn about Serverless Architectures, Mobile & Gaming, Databases, Big Data, Compute & Networking, Architecture, Operations, Security, Machine Learning, and more, all at no charge.

Startup Education Sessions
AWS startup community representatives, incubators, accelerators, startup scene influencers, and hot startup customers running on AWS will share best-practices, entrepreneurial know-how, and lessons learned. Pop in to learn the art of pitching, customer validation & profiling, PR for startups & corporations, and more.

Innovation Lab
The new AWS Innovation Lab is adjacent to the Munich Loft. With over 350 square meters of space, the Lab is designed to be a resource for mid-market and enterprise companies that are ready to grow their business. It will feature interactive demos, videos, and other materials designed to explain the benefits of digital transformation and cloud-powered innovation, with a focus on Big Data, mobile applications, and the fourth industrial revolution (Industry 4.0).

Come in and Say Hello
We look forward to using the Loft to meet and to connect with our customers, and expect that it will be a place that they visit on a regular basis. Please feel free to stop in and say hello to my colleagues at the Munich Loft if you happen to find yourself in the city!


ProgrammableWeb: APIs: W3C Proximity Sensor

The W3C Proximity Sensor API is a specification that defines a sensor interface for monitoring the presence of nearby objects. It is designed to extend the W3C Generic Sensor API to provide proximity level information. This information is reported as the distance from the sensor to the nearest visible surface and is given in centimeters.
Date Updated: 2016-09-27

ProgrammableWeb: APIs: WonderPush Management

WonderPush enables developers to integrate push notifications with browser and mobile applications. The platform supports 350K push notifications per second, geo-targeting, and advanced segmentation. The Management API allows developers to perform administrative tasks such as modifying campaigns, users, and installations. This API exchanges information in JSON format.
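As a sketch of what a JSON request body for such an administrative call might look like, here is a small helper; the field names are guesses for illustration, not WonderPush's documented schema:

```python
# Sketch: build a JSON body for a campaign-modification request.
# The field names are illustrative guesses, not WonderPush's schema.
import json

def campaign_update_payload(campaign_id: str, **changes) -> str:
    """Serialize a hypothetical campaign-update request as JSON."""
    return json.dumps({"campaignId": campaign_id, **changes}, sort_keys=True)

body = campaign_update_payload("c-123", name="Autumn push", paused=True)
print(body)
```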
Date Updated: 2016-09-27

ProgrammableWeb: APIs: SuggestGrid

SuggestGrid is a recommendations platform designed for easy developer integration. It can be used to create targeted advertisements and promotions, show personalized content, and recommend products. SuggestGrid has applications in eCommerce, content publishing, online marketing, and advertising. The SuggestGrid REST API supports POST and GET methods.
Date Updated: 2016-09-27

ProgrammableWeb: APIs: Acuant Web Services

The Acuant Web Services API offers document processing that auto-populates extracted information into an application. Acuant is a Los Angeles-based identity solutions provider. Automotive, eCommerce, healthcare, and any other type of application that may take advantage of identity verification can integrate with Acuant.
Date Updated: 2016-09-27

ProgrammableWeb: Daily API RoundUp: Oracle Cloud Stack Manager, Wapack Labs, ArubaOS, ChatBottle, Knewin

Every day, the ProgrammableWeb team is busy, updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWeb: What's the Difference Between a Monolith and Microservices?

Application architectures have become a widely discussed and often debated topic in technology circles these days. The number of companies building complex applications requiring an architecture that will allow for high scalability, availability, and speed are growing rapidly. Many technology companies including Netflix, Amazon, Google, IBM, and Twitter have built applications that are extremely large and complex.

WHATWG blog: Sunsetting the JavaScript Standard

Back in 2012, the WHATWG set out to document the differences between the ECMAScript 5.1 specification and the compatibility and interoperability requirements for ECMAScript implementations in web browsers.

A specification draft was first published under the name of “Web ECMAScript”, but later renamed to just “JavaScript”. As such, the JavaScript Standard was born.

Our work on the JavaScript Standard consisted of three tasks:

  1. figuring out implementation differences for various non-standard features;
  2. filing browser bugs to get implementations to converge;
  3. and finally writing specification text for the common or most sensible behavior, hoping it would one day be upstreamed to ECMAScript.

That day has come.

Some remaining web compatibility issues are tracked in the repository for the ECMAScript spec, which now redirects to. The rest of the contents of the JavaScript Standard have been upstreamed into ECMAScript, Annex B.

This is good news for everyone. Thanks to the JavaScript Standard, browser behavior has converged, increasing interoperability; non-standard features got well-defined and standardized; and the ECMAScript standard more closely matches reality.


  • The infamous “string HTML methods”: String.prototype.anchor(name), String.prototype.big(), String.prototype.blink(), String.prototype.bold(), String.prototype.fixed(), String.prototype.fontcolor(color), String.prototype.fontsize(size), String.prototype.italics(), String.prototype.link(href), String.prototype.small(), String.prototype.strike(), String.prototype.sub(), and String.prototype.sup(). Browsers implemented these slightly differently in various ways, which in one case led to a security issue (and not just in theory!). It was an uphill battle, but eventually browsers and the ECMAScript spec matched the behavior that the JavaScript Standard had defined.

  • Similarly, ECMAScript now has spec text for String.prototype.substr(start, length).

  • ECMAScript used to require a fixed, heavily outdated, version of the Unicode Standard for determining the set of whitespace characters, and what’s a valid identifier name. The JavaScript Standard required the latest available Unicode version instead. ECMAScript first updated the Unicode version number and later removed the fixed version reference altogether.

  • ECMAScript Annex B, which specifies things like escape and unescape, used to be purely informative and only there “for compatibility with some older ECMAScript programs”. The JavaScript Standard made it normative and required for web browsers. Nowadays, the ECMAScript spec does the same.

  • The JavaScript Standard documented the existence of HTML-like comment syntax (<!-- and -->). As of ECMAScript 2015, Annex B fully defines this syntax.

  • The __defineGetter__, __defineSetter__, __lookupGetter__, and __lookupSetter__ methods on Object.prototype are defined in ECMAScript Annex B, as is __proto__.

So long, JavaScript Standard, and thanks for all the fish!

Shelley Powers (Burningbird): Why Debate Moderators Should Not Fact Check Candidates


Well, the debate happened. Hillary Clinton cleaned Trump’s clock. Trump fell apart. And, by all indications, Lester Holt did a good job.

See? This is the way it’s done:

In both cases, Holt called out Trump’s lies in advance. He called the questions Trump raised about President Barack Obama’s birthplace “false claims” and noted in his question about the Iraq War that Trump had supported the war when it began. Holt then followed up to reiterate contradictions with Trump’s past statements.


When Fox News’ Chris Wallace stated he would not be fact-checking the Presidential candidates during the debate he moderated, people’s heads seemed to spontaneously explode.

It is a given that one of the Presidential candidates this year is known for the sheer number of lies he can state within a surprisingly short period of time. The New York Times clocked him at 31 lies in just the last week. And his ability to tell a lie is only matched by his breathtaking level of ignorance about governance and foreign policy.

Combined, the two should generate, at a minimum, a great deal of misinformation and, at a maximum, some real whoppers.

It is important to fact-check this and every candidate; even the ones we like and adore. But it is not the job of debate moderators to insert themselves into the debate. If they do, the debate can only result in a “You said this”, “I did not”, “Did, too”, “Did not” point and counter-point.  Not only does this disrupt the debate, it makes the debate about the moderator and the candidate(s), rather than just the candidates.

It is the job of the other candidate to fact-check their opponent. It then becomes the moderator’s job to ensure that such fact-checking doesn’t degenerate into a case of “Did, too”, “Did not”, “Did so”, “No you didn’t”, which benefits no one.

Now, what a skilled moderator can do is be aware of a candidate’s proclivity to stretch the truth on various issues, and couch questions in such a way that they short-circuit this tendency. So, instead of asking a candidate a broad, open-ended question such as “Did you support the invasion of Iraq?”, they can ask, “When you were on Howard Stern’s show in 2002, you said you supported the invasion of Iraq. Why did you feel this was a good move at the time?”

It reminds the candidate of previous fact-checking and forces them to respond to a specific event. If the candidate lies at that point, then it’s obvious. It gives them no room to run, no doorway in which to escape.

In addition, though it may not be the moderator’s job to fact-check, it is most definitely the job of the media. Not only should journalists and pundits point out inconsistencies and fabrications, they should do so in whatever immediate manner they can. This can include TV banners, post-debate discussions that include fact-checking, and even annotating the debate transcripts with asides providing additional information.

However, when a journalist is a debate moderator, they stop being a journalist at that point.

A debate moderator’s job is to ask pertinent, relevant questions that elicit information voters need in order to make an informed choice. To enable this they have to ensure that the point/counter-point between the candidates stays on point and is relevant. And they have to ask questions specific to the individual’s qualifications for the position. It becomes, in effect, a job interview. It provides a way for people to compare the candidates on the issues, not their personalities.

This also means that the debate moderator must avoid pandering to whatever is popular, or focusing on past events that have already been talked to death. In other words, they should avoid asking questions about specifics of the individual and focus, instead, on specifics of the job of President of the United States. This may include asking questions about issues related to past events, and using the event as a locus, but the focus shouldn’t be on the past event itself.

Yeah, fat chance on that one.

The image is of turkey vultures. It seemed appropriate to the discussion.

The post Why Debate Moderators Should Not Fact Check Candidates appeared first on Burningbird.

Matt Webb (Schulze & Webb)Upcoming chances to meet in Amsterdam, Berlin, etc

So I'm heading up this startup accelerator for IoT and connected hardware. Applications close 14 November. I've just been in the states seeing how previous programs have run. It's all pretty excellent. More on that later.

Right now I'm in outreach mode. I'm meeting as many startups as possible in order to spread the word, and to get a better sense of what the current challenges and opportunities are.

In return, I'm happy to share my take on the business and product, make connections to potential partners and investors where I can, and answer questions about how this particular accelerator works.

All of this is usually quite ad hoc, but there are a few convenient times coming up:

  • Amsterdam. I'll be at Makerversity in Amsterdam on Friday 7 October. We'll be hanging out and having coffee in the afternoon, sign up here. More focused meetings also possible.
  • Berlin. I'm in Berlin for a few days, and will be running office hours at Betahaus on Friday 14 October. Got an Internet of Things or hardware startup? Sign up to meet.
  • Skype. The problem with coffee meetings is you only spend time with startups who are nearby. So on Wednesday afternoons through October, I'll be at my laptop ready to speak. Choose a time here.

Of course I'm always up for meeting over coffee. Drop me a line if you want to set something up: matt at interconnected dot org

Jeremy Keith (Adactio)Indie Web Camp Brighton 2016

Indie Web Camp Brighton 2016 is done and dusted. It’s hard to believe that it’s already in its fifth(!) year. As with previous years, it was a lot of fun.


The first day—the discussions day—covered a lot of topics. I led a session on service workers, where we brainstormed offline and caching strategies for personal websites.

There was a design session looking at alternatives to simply presenting everything in a stream. Some great ideas came out of that. And there was a session all about bookmarking and linking. That one really got my brain whirring with ideas for the second day—the making/coding day.

I’ve learned from previous Indie Web Camps that a good strategy for the second day is to have two tasks to tackle: one that’s really easy (so you’ve at least got that to demo at the end), and one that’s more ambitious. This time, I put together a list of potential goals, and then ordered them by difficulty. By the end of the day, I managed to get a few of them done.

First off, I added a small bit of code to my bookmarking flow, so that any time I link to something, I send a ping to the Internet Archive to grab a copy of that URL. So here’s a link I bookmarked to one of Remy’s blog posts, and here it is in the Wayback Machine—see how the date of storage matches the date of my link.

The code to do that was pretty straightforward. I needed to hit this endpoint:{url}
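A minimal sketch of that hook might look like the following. I'm assuming the Internet Archive's on-demand snapshot endpoint, https://web.archive.org/save/{url}, and the function names here are my own, not Jeremy's actual code:

```python
from urllib.parse import quote
from urllib.request import urlopen

# Assumed endpoint: the Internet Archive's on-demand snapshot service.
SAVE_ENDPOINT = "https://web.archive.org/save/"

def save_url(url: str) -> str:
    """Build the Wayback Machine save URL for a bookmarked link."""
    return SAVE_ENDPOINT + quote(url, safe=":/")

def ping_archive(url: str) -> int:
    """Ask the Internet Archive to grab a copy of the URL; returns the HTTP status."""
    with urlopen(save_url(url)) as response:
        return response.status

# Called from the bookmarking flow, e.g.:
# ping_archive("https://remysharp.com/some-post")
```

A fire-and-forget GET like this is all the flow needs; the Wayback Machine stores the snapshot at the moment of the request, which is why the storage date matches the bookmark date.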

I also updated my bookmarklet for posting links so that, if I’ve highlighted any text on the page I’m linking to, that text is automatically pasted into the description.

I tweaked my webmentions a bit so that if I receive a webmention that has a type of bookmark-of, that is displayed differently to a comment, or a like, or a share. Here’s an example of Aaron bookmarking one of my articles.

The more ambitious plan was to create an over-arching /tags area for my site. I already have tag-based navigation for my journal and my links:

But until this weekend, I didn’t have the combined view:

I didn’t get around to adding pagination. That’s something I should definitely add, because some of those pages get veeeeery long. But I did spend some time adding sparklines. They can be quite revealing, especially on topics that were hot ten years ago but have faded over time, or topics that have become more and more popular with each year.

All in all, a very productive weekend.

ProgrammableWebResearcher Raises Privacy Concerns Regarding W3C Proximity Sensor API

In June of this year, W3C released the first draft of the Proximity Sensor API based on the Generic Sensor API specification. The W3C Generic Sensor API specification aims to define a framework for exposing sensor data and promote consistency across sensor APIs.

ProgrammableWeb: APIsBotlytics

The Botlytics API is a REST API that provides developers with tools for tracking the messages and conversations that their bots send and receive. It allows for counting, addressing context, tracking with specific queries, and more. The public RAML file for use with the Botlytics API can be found at:
Date Updated: 2016-09-26

Bob DuCharme (Innodata Isogen)Semantic web semantics vs. vector embedding machine learning semantics

It's all semantics.

Home and semantics

When I presented "intro to the semantic web" slides in TopQuadrant product training classes, I described how people talking about "semantics" in the context of semantic web technology mean something specific, but that other claims for computerized semantics (especially, in many cases, "semantic search") were often vague attempts to use the word as a marketing term. Since joining CCRi, though, I've learned plenty about machine learning applications that use semantics to get real work done (often, "semantic search"), and they can do some great things.

Semantic Web semantics

To review the semantic web sense of "semantics": RDF gives us a way to state facts using {subject, predicate, object} triples. RDFS and OWL give us vocabularies to describe the resources referenced in these triples, and the descriptions can record semantics about those resources that let us get more out of the data. Of course, the descriptions themselves are triples, letting us say things like {ex:Employee rdfs:subClassOf ex:Person}, which tells us that any instance of the ex:Employee class is also an instance of ex:Person.

That example indicates some of the semantics of what it means to be an employee, but people familiar with object-oriented development take that ability for granted. OWL can take the recording of semantics well beyond that. For example, because properties themselves are resources, when I say {dm:locatedIn rdf:type owl:TransitiveProperty}, I'm encoding some of the meaning of the dm:locatedIn property in a machine-readable way: I'm saying that it's transitive, so that if {x:resource1 dm:locatedIn x:resource2} and {x:resource2 dm:locatedIn x:resource3}, we can infer that {x:resource1 dm:locatedIn x:resource3}.
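The transitivity rule itself is simple enough to sketch directly. This is a hand-rolled illustration of the inference, not how an OWL reasoner is actually implemented; a real reasoner handles owl:TransitiveProperty declaratively:

```python
def transitive_closure(triples, predicate):
    """Repeatedly apply the transitivity rule for one predicate
    until no new triples can be inferred (a fixpoint)."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        pairs = {(s, o) for (s, p, o) in inferred if p == predicate}
        for (a, b) in pairs:
            for (c, d) in pairs:
                if b == c and (a, predicate, d) not in inferred:
                    inferred.add((a, predicate, d))
                    changed = True
    return inferred

facts = {
    ("x:resource1", "dm:locatedIn", "x:resource2"),
    ("x:resource2", "dm:locatedIn", "x:resource3"),
}
closed = transitive_closure(facts, "dm:locatedIn")
# ("x:resource1", "dm:locatedIn", "x:resource3") is now in the closed set
```

The point is that the inference follows entirely from one machine-readable statement about the property; nothing about buildings or rooms had to be spelled out.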

A tool that understands what owl:TransitiveProperty means will let me get more out of my data. My blog entry Trying Out Blazegraph from earlier this year showed how I took advantage of OWL metadata to query for all the furniture in a particular building even though the dataset had no explicit data about any resources being furniture or any resources being in that building other than some rooms.

This is all built on very explicit semantics: we use triples to say things about resources so that people and applications can understand and do more with those resources. The interesting semantics work in the machine learning world is more about inferring semantic relationships.

Semantics and embedded vector spaces

(All suggestions for corrections to this section are welcome.) Machine learning is essentially the use of data-driven algorithms that perform better as they have more data to work with, "learning" from this additional data. For example, Netflix can make better recommendations to you now than they could ten years ago because the additional accumulated data about what you like to watch and what other people with similar tastes have also watched gives Netflix more to go on when making these recommendations.

The world of distributional semantics shows that analysis of what words appear with what other words, in what order, can tell us a lot about these words and their relationships--if you analyze enough text. Let's say we begin by using a neural network to assign a vector of numbers to each word. This creates a collection of vectors known as a "vector space"; adding vectors to this space is known as "embedding" them. Performing linear algebra on these vectors can provide insight about the relationships between the words that the vectors represent. In the most popular example, the mathematical relationship between the vectors for the words "king" and "queen" is very similar to the relationship between the vectors for "man" and "woman". This diagram from the TensorFlow tutorial Vector Representations of Words shows that other identified relationships include grammatical and geographical ones:

TensorFlow diagram about inferred word relationships
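The arithmetic behind the "king is to queen" example is just vector addition followed by a nearest-neighbor search under cosine similarity. Here is a toy illustration with invented two-dimensional vectors; real embeddings are learned and have hundreds of dimensions, so these numbers are purely for the sketch:

```python
import math

# Invented toy vectors; real word2vec embeddings are learned, not hand-set.
vectors = {
    "man":   [1.0, 0.0],
    "woman": [1.0, 1.0],
    "king":  [2.0, 0.0],
    "queen": [2.0, 1.0],
    "apple": [0.0, 3.0],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def analogy(a, b, c):
    """a is to b as c is to ? -- find the word whose vector is
    nearest (by cosine similarity) to b - a + c."""
    target = [vb - va + vc for va, vb, vc in
              zip(vectors[a], vectors[b], vectors[c])]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))

print(analogy("man", "woman", "king"))  # queen
```

Excluding the query words from the candidate set matters: the vector nearest to b - a + c is usually one of the inputs themselves.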

The popular open source word2vec implementation of this developed at Google includes a script that lets you do analogy queries. (The TensorFlow tutorial mentioned above uses word2vec; another great way to get hands-on experience with word vectors is Radim Rehurek's gensim tutorial.) I installed word2vec on an Ubuntu machine easily enough, started up the script, and it prompted me to enter three words. I entered "king queen father" to ask it "king is to queen as father is to what?" It gave me a list of 40 word-score pairs with these at the top:

     mother    0.698822
    husband    0.553576
     sister    0.552917
        her    0.548955
grandmother    0.529910
       wife    0.526212
    parents    0.512507
   daughter    0.509455

Entering "london england berlin" produced a list that began with this:

   germany     0.522487
   prussia     0.482481
   austria     0.447184
    saxony     0.435668
   bohemia     0.429096
westphalia     0.407746
     italy     0.406134

I entered "run ran walk" in the hope of seeing "walked" but got a list that began like this:

   hooray      0.446358
    rides      0.445045
ninotchka      0.444158
searchers      0.442369
   destry      0.435961

It did a pretty good job with most of these, but obviously not a great job throughout. The past tense of walk is definitely not "hooray", but these inferences were based on a training data set of 96 megabytes, which isn't very large. A Google search on phrases from the text8 input file included with word2vec for this demo shows that it's probably part of a 2006 Wikipedia dump used for text compression tests and other processes that need a non-trivial text collection. More serious applications of word2vec often read much larger Wikipedia subsets as training data, and of course you're not limited to using Wikipedia data: the exploration of other datasets that use a variety of spoken languages and scripts is one of the most interesting aspects of these early days of the use of this technology.

The one-to-one relationships shown in the TensorFlow diagrams above make the inferred relationships look more magical than they are. As you can see from the results of my queries, word2vec finds the words that are closest to what you asked for and lists them with their scores, and you may have several with good scores or none. Your application can just pick the result with the highest score, but you might want to first set an acceptable cutoff value so that you don't take the "hooray" inference too seriously.

On the other hand, if you just pick the single result with the highest score, you might miss some good inferences, because while Berlin is the capital of Germany, it was also the capital of Prussia for over 200 years, so I was happy to see that get the second-highest score there--although, if we put too much faith in a score of 0.482481 (or even of 0.522487) we're going to get some "king queen father" answers that we don't want. Again, a bigger training data set would help there.

If you look at the script itself, you'll see various parameters that you can tweak when creating the vector data. The use of larger training sets is not the only thing that can improve the results above, and machine learning expertise means not only getting to know the algorithms that are available but also learning how to tune parameters like these.

The script is simple enough that I saw I could easily revise it to make it read some other file instead of the text8 one included with it. I set it to read the Summa Theologica, in which St. Thomas Aquinas laid out all the theology of the Catholic Church, as I made grand plans for Big Question analogy queries like "man is to soul as God is to what?" My eventual query results were a lot more like the "run ran walk hooray" results above than anything sensible, with low scores for what it did find. With my text file of the complete Summa Theologica weighing in at 17 megabytes, I was clearly hoping for too much from it. I do have ideas for other input to try and I encourage you to try it for yourself.

An especially exciting thing about the use of embedding vectors to identify potentially previously unknown relationships is that it's not limited to use on text. You can use it with images, video, audio, and any other machine readable data, and at CCRi, we have. (I'm using the marketing "we" here; if you've read this far you're familiar with all of my hands-on experience with embedding vectors.)

Embedding vector space semantics and semantic web semantics

Can there be any connection between these two "semantic" technologies? RDF-based models are designed to take advantage of explicit semantics, and a program like word2vec can infer semantic relationships and make them explicit. Modifications to the scripts included with word2vec could output OWL or SKOS triples that enumerate relationships between identified resources, making a nice contribution to the many systems using SKOS taxonomies and thesauruses. Another possibility is that if you can train a machine learning model with instances (for example, labeled pictures of dogs and cats) that are identified with declared classes in an ontology, then running the model on new data can do classifications that take advantage of the ontology--for example, after identifying new cat and dog pictures, a query for mammals can find them.
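Emitting inferred relationships as triples could be as simple as templating N-Triples. A sketch along those lines follows; the skos:related predicate and the example.org URI scheme are my own assumptions for illustration, not an established mapping:

```python
def skos_related(pairs, base="http://example.org/concept/"):
    """Render inferred word pairs as skos:related statements in N-Triples."""
    lines = []
    for a, b in pairs:
        lines.append(
            "<{base}{a}> "
            "<http://www.w3.org/2004/02/skos/core#related> "
            "<{base}{b}> .".format(base=base, a=a, b=b)
        )
    return "\n".join(lines)

print(skos_related([("berlin", "germany"), ("london", "england")]))
```

A real pipeline would want a cutoff on the similarity scores first, and would map words to existing concept URIs (in DBpedia, say) rather than minting new ones.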

Going the other way, machine learning systems designed around unstructured text can often do even more with structured text, where it's easier to find what you want, and I've learned at CCRi that RDF (if not RDFS or OWL) is much more popular among such applications than I realized. Large taxonomies such as those of the Library of Congress, DBpedia, and Wikidata have lots of synonyms, explicit subclass relationships, and sometimes even definitions, and they can contribute a great deal to these applications.

A well-known success story in combining the two technologies is IBM's Watson. The paper Semantic Technologies in IBM Watson describes the technologies used in Watson and how these technologies formed the basis of a seminar course given at Columbia University; distributional semantics, semantic web technology, and DBpedia all play a role. Frederick Giasson and Mike Bergman's Cognonto also looks like an interesting project to connect machine learning to large collections of triples. I'm sure that other interesting combinations are happening around the world, especially considering the amount of open source software available in both areas.

Please add any comments to this Google+ post.

ProgrammableWebHow Dropbox Scaled and Secured their API

Dropbox veterans Leah Culver and Chris Varenhorst recently sat down with Gordon Wintrob to reveal how they built their APIs from primitive beginnings to handling over 500 billion calls a year.

ProgrammableWebHigh-Tech Bridge Updates its Unified SSL Assessment API

High-Tech Bridge has announced a new release of its free SSL security testing service that companies and organizations can use to test their Web, email, VPN and other SSL/TLS-based services. The new release tests for known vulnerabilities in SSL/TLS implementations (e.g. Heartbleed) and in encryption protocols (e.g. POODLE), as well as checking if an SSL/TLS configuration is compliant with PCI DSS requirements, HIPAA guidance and NIST guidelines.

ProgrammableWebHow Accusoft Migrated From Monolithic Applications to a Microservice Architecture

In July 2015, Accusoft's SaaS Applications team was tasked with integrating the recently acquired edocr application with our existing Prizm Share community for publishing and sharing documents. While this integration offered numerous challenges with data migration, feature parity, and cohesive branding, we are going to focus on the architectural changes that resulted from the project.

ProgrammableWeb: APIsFuck Yeah Markdown

The Fuck Yeah Markdown API allows developers to access and integrate the functionality of Fuck Yeah Markdown with other applications and websites. The main API method is retrieving text or HTML marked text editor documents. Fuck Yeah Markdown provides markdown and editing functionality for text and HTML documents and content.
Date Updated: 2016-09-23

ProgrammableWeb: APIsMicrosoft IIS

The Microsoft IIS REST API allows developers to access and integrate the functionality of Microsoft IIS with other applications and websites. Some example API methods include creating websites, managing websites, and retrieving websites. Microsoft IIS provides web server and website hosting services and functionalities.
Date Updated: 2016-09-23

ProgrammableWeb: APIsRuter Reise

The Reise API offers integration with Ruter's journey planner service. GET methods are used for requests. Ruter is a Norway based transportation firm that provides information about several related services including departures, fares, and route maps. Additionally, it offers real-time data based on how long the bus, tram, metro, train or ferry has traveled since the last registration point.
Date Updated: 2016-09-23

ProgrammableWeb: APIsEmailLabs

The EmailLabs API integrates email analytics with the aim to filter targeted messages and avoid spam. It is available in REST HTTP format and HTTPS protocol with JSON responses.
Date Updated: 2016-09-23

ProgrammableWeb: APIsWapack Labs Cyberwatch

The Wapack Labs Cyberwatch API integrates domain search into applications. It aims to be useful for security purposes, such as monitoring web activity. It is available in JSON and CSV formats and requires an API key. Wapack Labs provides cyberthreat security services.
Date Updated: 2016-09-23

ProgrammableWebDaily API RoundUp: WikiLeaks Hillary Clinton Email Archive, Bitport, AppMonsta, PredictHQ, Bitcoin Block Explorer

Every day, the ProgrammableWeb team is busy, updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWebBox Announces Four New Security and Governance APIs

Box has introduced a set of new security and governance APIs that organizations can use to incorporate content governance and compliance capabilities into applications and workflows. This set of new Box APIs includes a Retention Policy API, Legal Hold Policy API, Watermarking API, and Folder Metadata API.

Amazon Web ServicesAWS Enterprise Support Update – Training Credits, Operations Review, Well-Architected

I often speak to new and potential AWS customers in our EBC (Executive Briefing Center) in Seattle. The vast majority of them have already bought in to the promise of the cloud and are already making plans that involve a combination of  “lifting and shifting” existing applications and building new, cloud-native ones. Because their move to AWS is often part of a larger organizational transformation and modernization, the senior leaders that I talk to want to make sure that their technical team is properly equipped to skillfully design, build, and operate cloud-powered systems.

Many of these customers are taking advantage of the AWS Enterprise Support plan as they move their mission-critical applications to the cloud. Like traditional support plans, this one provides access to technical support people and resources in the event that an issue surfaces. However, unlike traditional plans, it also focuses on helping them to build applications that are robust, cost-effective, easily maintained, and scalable. Our customers tell me that they enjoy the unique combination of hands-on, concierge-quality support and the automated, data-driven recommendations provided to them by AWS Trusted Advisor.

New Enterprise Support Benefits
Today we are making the AWS Enterprise Support Plan even better, adding three new benefits that are available to new and existing plan subscribers at no additional charge:

Training Credits – In conjunction with our training partner qwikLabs, each Enterprise Support customer is entitled to receive 500 qwikLabs training credits annually, along with a 30% discount on additional credits. The qwikLabs courses address a wide range of AWS topics; introductory courses are free and the remainder cost between 1 and 15 credits each (read the course catalog to learn more):

If you have an Enterprise Support plan and would like to gain access to your credits and discounts, please contact your AWS Technical Account Manager (TAM).

Cloud Operations Review – Enterprise Support customers are eligible for a Cloud Operations Review designed to help them to identify gaps in their approach to operating in the cloud. Originating from a set of operational best practices distilled from our experience with a large set of representative customers, this program provides you with a review of your cloud operations and the associated management practices. The program uses a four-pillared approach with a focus on preparing, monitoring, operating, and optimizing cloud-based systems in pursuit of operational excellence.

You can work with your TAM to set up a Cloud Operations Review.

Well-Architected Review – Enterprise Support customers are also eligible for a Well-Architected Review of their mission-critical workloads. While the Cloud Operations Review focuses on people and processes, this review allows our customers to measure their architecture against AWS best practices. Our goal is to help our customers to construct architectures that are secure, reliable, performant, and cost-effective. For more information about our Well-Architected program, read Are You Well-Architected?

ProgrammableWebDwolla Updates API to Support Automated Clearing House's New Same Day Payment Rules

On September 23rd, 2016, a new and important Automated Clearing House rule goes into effect -- one that allows users of the ACH network to opt in to same-day processing of payments that might have otherwise taken two to four days to settle. According to NACHA (the Electronic Payments Association; the administrator of ACH), "There are many uses of ACH payments for which businesses and consumers could benefit from same-day processing.

ProgrammableWeb: APIsOracle Cloud Stack Manager

The Oracle Cloud Stack Manager REST API allows developers to access and integrate the functionality of Oracle Cloud Stack Manager with other applications and websites. Some example API methods include creating stacks, managing stacks, retrieving lists of stacks, and account management. Oracle Cloud Stack Manager provides cloud stack management services and functionalities.
Date Updated: 2016-09-22

ProgrammableWeb: APIsArubaOS

The ArubaOS REST API allows developers to access and integrate the functionality of ArubaOS with other applications. The API allows for controlling wireless and LAN access points. Some example API methods include logging in, using the access points, and controlling the ArubaOS network access points. ArubaOS is a product of Hewlett Packard and is an operating system for wireless local area networks (LANs).
Date Updated: 2016-09-22

ProgrammableWeb: APIsNativefier

The Nativefier API allows developers to turn any site into a native application. Authored by Jia Hao, Nativefier makes it easier to create a desktop application for any website with minimal configuration using a command line tool.
Date Updated: 2016-09-22

ProgrammableWeb: APIsUnofficial Google Trends Python

The Unofficial Google Trends Python API integrates the solutions of analytical reports, keyword suggestions, and hot trends. Parameters return JSON responses.
Date Updated: 2016-09-22

ProgrammableWebHow to Access the Tickspot API with cURL

VIA Studio's Mark Biek recently demonstrated the benefits of using cURL to bypass the typical code writing exercise needed to pull app data. cURL is a utility that's commonly used by developers to interact with Web servers from the command line of an operating system like Unix, Linux, and Mac OS X. Because of how it can send and receive Web requests, it is ready-made for poking around with Web-based APIs.

WHATWG blogDRM and Web security

For a few years now, the W3C has been working on a specification that extends the HTML standard to add a feature that literally, and intentionally, does nothing but limit the potential of the Web. They call this specification "Encrypted Media Extensions" (EME). It's essentially a plug-in mechanism for proprietary DRM modules.

Much has been written on how DRM is bad for users because it prevents fair use, on how it is technically impossible to ever actually implement, on how it's actually a tool for controlling distributors, a purpose for which it is working well (as opposed to being to prevent copyright violations, a purpose for which it isn't working at all), and on how it is literally an anti-accessibility technology (it is designed to make content less accessible, to prevent users from using the content as they see fit, even preventing them from using the content in ways that are otherwise legally permissible, e.g. in the US, for parody or criticism). Much has also been written about the W3C's hypocrisy in supporting DRM, and on how it is a betrayal to all Web users. It is clear that the W3C allowing DRM technologies to be developed at the W3C is just a naked ploy for the W3C to get more (paying) member companies to join. These issues all remain. Let's ignore them for the rest of this post, though.

One of the other problems with DRM is that, since it can't work technically, DRM supporters have managed to get the laws in many jurisdictions changed to make it illegal to even attempt to break DRM. For example, in the US, there's the DMCA clauses 17 U.S.C. § 1201 and 1203: "No person shall circumvent a technological measure that effectively controls access to a work protected under this title", and "Any person injured by a violation of section 1201 or 1202 may bring a civil action in an appropriate United States district court for such violation".

This has led to a chilling effect in the security research community, with scientists avoiding studying anything that might relate to a DRM scheme, lest they be sued. The more technology embeds DRM, therefore, the less secure our technology stack will be, with each DRM-impacted layer getting fewer and fewer eyeballs looking for problems.

We can ill afford a chilling effect on Web browser security research. Browsers are continually attacked. Everyone who uses the Web uses a browser, and everyone would therefore be vulnerable if security research on browsers were to stop.

Since EME introduces DRM to browsers, it introduces this risk.

A proposal was made to avoid this problem. It would simply require each company working on the EME specification to sign an agreement that they would not sue security researchers studying EME. The W3C already requires that members sign a similar agreement relating to patents, so this is a simple extension. Such an agreement wouldn't prevent members from suing for copyright infringement, it wouldn't reduce the influence of content producers over content distributors; all it does is attempt to address this even more critical issue that would lead to a reduction in security research on browsers.

The W3C is refusing to require this. We call on the W3C to change their mind on this. The security of the Web technology stack is critical to the health of the Web as a whole.

- Ian Hickson, Simon Pieters, Anne van Kesteren

ProgrammableWebMcKesson Launches Interoperability Platform to Build FHIR API Apps

McKesson Health Solutions (MHS) recently unveiled the McKesson Intelligence Hub, a new technology platform for enabling interoperability and sharing business intelligence among healthcare apps.

ProgrammableWebHow Defining Some API Standards and Best Practices Might Benefit Enterprises

There is no way to stop or reverse the accelerating pace and influence of data in business life.

ProgrammableWebTwilio Launches Voice Insights API to Monitor WebRTC Performance

On Tuesday Twilio announced the launch of their Voice Insights API, which allows developers to monitor network and device performance during WebRTC calls. With the API, developers can now programmatically adjust their applications in response to varying network and device conditions.

ProgrammableWebJust Because Github Has a GraphQL API Doesn’t Mean You Should Too

Recently there has been a lot of talk about Facebook’s GraphQL specification, and exactly how it transforms the way applications are now able to interact with each other.

Amazon Web ServicesAdditional At-Rest and In-Transit Encryption Options for Amazon EMR

Our customers use Amazon EMR (including Apache Hadoop and the full range of tools that make up the Apache Spark ecosystem) to handle many types of mission-critical big data use cases. For example:

  • Yelp processes over a terabyte of log files and photos every day.
  • Expedia processes streams of clickstream, user interaction, and supply data.
  • FINRA analyzes billions of brokerage transaction records daily.
  • DataXu evaluates 30 trillion ad opportunities monthly.

Because customers like these (see our big data use cases for many others) are processing data that is mission-critical and often sensitive, they need to keep it safe and sound.

We already offer several data encryption options for EMR, including server- and client-side encryption for Amazon S3 with EMRFS and Transparent Data Encryption for HDFS. While these solutions do a good job of protecting data at rest, they do not address data stored in temporary files or data that is in flight, moving between job steps. Each of these encryption options must be individually enabled and configured, making the process of implementing encryption more tedious than it needs to be.

It is time to change this!

New Encryption Support
Today we are launching a new, comprehensive encryption solution for EMR. You can now easily enable at-rest and in-transit encryption for Apache Spark, Apache Tez, and Hadoop MapReduce on EMR.

The at-rest encryption addresses the following types of storage:

  • Data stored in S3 via EMRFS.
  • Data stored in the local file system of each node.
  • Data stored on the cluster using HDFS.

The in-transit encryption makes use of the open-source encryption features native to the following frameworks:

  • Apache Spark
  • Apache Tez
  • Apache Hadoop MapReduce

This new feature can be configured using an Amazon EMR security configuration.  You can create a configuration from the EMR Console, the EMR CLI, or via the EMR API.
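When scripting this with the EMR CLI rather than the console, the security configuration is expressed as a JSON document. Here is a minimal sketch of that shape (key names follow the EMR security-configuration schema; the KMS key ARN and S3 paths are placeholders, not real resources):

```json
{
  "EncryptionConfiguration": {
    "EnableAtRestEncryption": true,
    "EnableInTransitEncryption": true,
    "AtRestEncryptionConfiguration": {
      "S3EncryptionConfiguration": { "EncryptionMode": "SSE-S3" },
      "LocalDiskEncryptionConfiguration": {
        "EncryptionKeyProviderType": "AwsKms",
        "AwsKmsKey": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE"
      }
    },
    "InTransitEncryptionConfiguration": {
      "TLSCertificateConfiguration": {
        "CertificateProviderType": "PEM",
        "S3Object": "s3://my-bucket/my-certs.zip"
      }
    }
  }
}
```

You would pass a file like this to `aws emr create-security-configuration --name mySecConfig --security-configuration file://config.json`, then reference it by name with `--security-configuration mySecConfig` when running `aws emr create-cluster`.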

The EMR Console now includes a list of security configurations:

Click on Create to make a new one:

Enter a name, and then choose the desired mode and type for each aspect of this new feature. Based on the mode or the type, the console will prompt you for additional information.

S3 Encryption:

Local disk encryption:

In-transit encryption:

If you choose PEM as the certificate provider type, you will need to enter the S3 location of a ZIP file that contains the PEM file(s) that you want to use for encryption. If you choose Custom, you will need to enter the S3 location of a JAR file and the class name of the custom certificate provider.

After you make all of your choices and click on Create, your security configuration will appear in the console:

You can then specify the configuration when you create a new EMR Cluster. This feature is available for clusters that are running Amazon EMR release 4.8.0 or 5.0.0. To learn more, read about Amazon EMR Encryption with Security Configurations.



Shelley Powers (Burningbird)Republicans Desperate Attempt to Create a New Clinton Email Scandal


I weep for humanity, I really do.

Now Snopes has a long posting on this story, as if it’s all something incredibly profound. The Hill has decided to double-down on it, like it’s discovered the holy grail. And the House Oversight Committee has ordered Reddit to preserve the posts, even though every single one is already preserved on the Wayback Machine.

Every member of these organizations' IT departments is laughing their heads off right now. Why?

Because all it was, was a simple question asking how to delete an email address in metadata—the only component of Hillary Clinton’s emails that wasn’t relevant then, and isn’t relevant now.


An anonymous Twitter user has cracked a new Clinton email scandal.


A new story in The Hill about the Clinton emails appeared on my radar today. Evidently the House Oversight Committee is, in all seriousness, investigating a Reddit post.

A deleted Reddit post.

This post was dug up out of the archives by an anonymous Twitter account.

Yeah, a deleted, anonymous Reddit account, dug up by an anonymous Twitter account. What passes for Deep Throat in the social media age.

The Reddit post is from a person with a handle of stonetear. Our intrepid reporters at The Hill and their little pundit minions have loosely connected the stonetear handle to Paul Combetta.

Paul Who?

Paul Combetta is an IT specialist currently employed by Platte River Networks. He was involved in the maintenance of the Clinton email server when it moved to PRN. He’s the guy who told the FBI, “Oh sh..” because he didn’t establish a new protocol to only save emails for 60 days, and when he realized it later, deleted the emails.

Remember BleachBit and Trump’s acid wash? Yeah, that IT guy.

Anyway, the recovered Reddit post is asking for tech help:

“Hello all- I may be facing a very interesting situation where I need to strip out a VIP’s (VERY VIP) email address from a bunch of archived email that I have both in a live Exchange mailbox, as well as a PST file,” stonetear wrote. “Basically, they don’t want the VIP’s email address exposed to anyone, and want to be able to either strip out or replace the email address in the to/from fields in all of the emails we want to send out. I am not sure if something like this is possible with PowerShell, or exporting all of the emails to MSG and doing find/replaces with a batch processing program of some sort. Does anyone have experience with something like this, and/or suggestions on how this might be accomplished?”
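For what it’s worth, the chore being asked about there is a few lines of scripting. A hypothetical Python sketch of that kind of batch find/replace over messages exported to plain text (the address, filenames, and .eml export format here are all my assumptions, not anything from the actual emails):

```python
import os

OLD = "vip@example.com"       # hypothetical address to strip out
NEW = "redacted@example.com"  # hypothetical replacement

def scrub_file(path, old=OLD, new=NEW):
    """Replace an email address everywhere it appears in one exported message."""
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        text = f.read()
    if old not in text:
        return False
    with open(path, "w", encoding="utf-8") as f:
        f.write(text.replace(old, new))
    return True

def scrub_tree(root):
    """Walk a directory of exported .eml files; return how many were changed."""
    return sum(
        scrub_file(os.path.join(dirpath, name))
        for dirpath, _, names in os.walk(root)
        for name in names
        if name.endswith(".eml")
    )
```

That is the entire scale of the “scandal”: a find/replace.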

The date on the post is July 23, 2014, the day after the House Benghazi Committee and State reached an agreement on producing Clinton’s emails.

All the little tin hats are just having a field day with this. So much so that I hate to burst their bubble.

But I’m going to burst their bubble.

Whether stonetear is Combetta or not, all this post tells us is that an IT person was trying to strip out an email address from a bunch of emails.

Were they Clinton emails? Probably not, but it doesn’t matter. I can say this because a) Hillary Clinton was no longer using the email address at the time the emails were turned over, b) Clinton’s email address was already known at the time, and c) the Clinton emails published by State all display Clinton’s old email address. The email address wasn’t stripped.

You strip out an email address because you don’t want the public to get access to it. Other than that, there’s no reason to do so.

It made absolutely no difference in the Clinton emails.

That isn’t to say that our friends at Judicial Watch didn’t do their usual misrepresentation of the non-story.

Notice the reference to Delete ‘Very VIP’ Emails. No longer deleting an email address…now it’s deleting whole emails.

Don’t these people have a life?

The post Republicans Desperate Attempt to Create a New Clinton Email Scandal appeared first on Burningbird.

ProgrammableWeb: APIsBlock Explorer Web Socket

The Block Explorer Web Socket API offers real-time Bitcoin transactions, status and block information. The API provides objects for transactions on the Bitcoin block chain. The "Status" event is published in the sync room, and returns current block syncing information. This API is served using
Date Updated: 2016-09-21

ProgrammableWeb: APIsBlock Explorer REST

The Block Explorer REST API offers Bitcoin blockchain information. This allows developers to view real-time information about blocks, addresses, and transactions. Call types include obtaining a block hash by height, address properties, transactions by block and address, and transaction broadcasting.
Date Updated: 2016-09-21
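A hedged sketch of calling the block-hash-by-height endpoint from Python. The base URL and the `block-index` path are assumptions based on Insight-style block explorers; check the Block Explorer documentation for the real values:

```python
import json
from urllib.request import urlopen

# Assumed base URL; verify against the Block Explorer docs.
BASE = "https://blockexplorer.com/api"

def block_index_url(height, base=BASE):
    """Build the URL for the block-hash-by-height lookup."""
    return f"{base}/block-index/{height}"

def block_hash_by_height(height, base=BASE):
    """Fetch the block hash for a given height (makes a network call)."""
    with urlopen(block_index_url(height, base)) as resp:
        return json.load(resp)["blockHash"]
```

Splitting URL construction from the fetch keeps the network-free part easy to test and lets you point the same code at a different explorer instance.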

ProgrammableWeb: APIsHillary Clinton Email Archive

The Hillary Clinton Email Archive API contains the 30,322 emails and email attachments sent to and from Hillary Clinton's private email server while she was Secretary of State. The API uses GET for requests, and JSON arrays for responses. All requests require a Token.
Date Updated: 2016-09-21

ProgrammableWeb: APIsBeMyGuest

BeMyGuest is a booking platform specialized in Asian destinations. The API offers interface options such as locations, supported languages and currencies, product types and categories, and default user settings management. JSON is used for responses, and requests are made over HTTPS. In order to use the API, developers must obtain an API Key.
Date Updated: 2016-09-21

Amazon Web ServicesAPI Gateway Update – New Features Simplify API Development

Amazon API Gateway allows you to quickly and easily build and run application backends that are robust and scalable. With the recent addition of usage plans, you can create an ecosystem of partner developers around your APIs. Let’s review some terminology to start things off:

Endpoint – A URL (provided by API Gateway) that responds to HTTP requests. These requests use HTTP methods such as GET, PUT, and POST.

Resource – A named entity that exists (symbolically) within an endpoint, referred to by a hierarchical path.

Behavior – The action that your code will take in response to an HTTP request on a particular resource, using an HTTP method.

Integration – The API Gateway mapping from the endpoint, resource, and HTTP method to the actual behavior, and back again.

Today we are extending the integration model provided by API Gateway with support for some new features that will make it even easier for you to build new API endpoints and to port existing applications:

Catch-all Path Variables – Instead of specifying individual paths and behaviors for groups of requests that fall within a common path (such as /store/), you can now specify a catch-all route that intercepts all requests to the path and routes them to the same function. For example, a single greedy path (/store/{proxy+}) will intercept requests made to /store/list-products, /store/add-product, and /store/delete-product.

ANY Method – Instead of specifying individual behaviors for each HTTP method (GET, POST, PUT, and so forth) you can now use the catch-all ANY method to define the same integration behavior for all requests.

Lambda Function Integration – A new default mapping template will send the entire request to your Lambda function and then turn the return value into an HTTP response.

HTTP Endpoint Integration – Another new default mapping template will pass the entire request through to your HTTP endpoint and then return the response without modification. This allows you to use API Gateway as an HTTP proxy with very little in the way of setup work.

Let’s dive in!

Catch-all Path Variables
Suppose I am creating a new e-commerce API. I start like this:

And then create the /store resource:

Then I use a catch-all path variable to intercept all requests to any resource within /store (I also had to check Configure as proxy resource):

Because {proxy+} routes requests for sub-resources to the actual resource, it must be used as the final element of the resource path; it does not make sense to use it elsewhere. The {proxy+} can match a path of any depth; the example above would also match /store/us/clothing, /store/us/clothing/children, and so forth.
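In code terms, a proxy-integrated Lambda function receives the matched sub-path and the HTTP method in its event and does the routing itself. A minimal Python sketch (the route names and payloads are illustrative, not from this article):

```python
import json

def lambda_handler(event, context):
    # With a {proxy+} resource, API Gateway puts the matched sub-path in
    # pathParameters["proxy"] and the verb in httpMethod.
    path = (event.get("pathParameters") or {}).get("proxy", "")
    method = event.get("httpMethod", "GET")

    if method == "GET" and path == "list-products":
        status, body = 200, {"products": ["shirt", "hat", "mug"]}
    else:
        status, body = 404, {"error": f"no route for {method} /store/{path}"}

    # Proxy integration expects this response shape: statusCode,
    # headers, and a string body.
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

The same function works unchanged behind the ANY method described below, since it inspects `httpMethod` itself, and its return value is exactly the object the new default mapping turns into an HTTP response.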

The proxy can connect to a Lambda function or an HTTP endpoint:

ANY Method
I no longer need to specify individual behaviors for each HTTP method when I define my resources and the methods on them:

Instead, I can select ANY and use the same integration behavior for all of the methods on the resource:

This is cleaner, simpler, and easier to set up. Your code (the integration point for all of the methods on the resource) can inspect the method name and take an appropriate action.

The ANY method is created automatically when I use a greedy path variable, as shown above. It can also be used for individual resources. You can override the configuration for an individual method (perhaps you want to handle DELETE differently) by simply creating it and changing the settings.

Lambda Function Integration
It is now easier than ever to implement a behavior using a Lambda function. A new, built-in Lambda integration template automatically maps the HTTP request elements (headers, query parameters, and payload) into a form directly consumable by the function. The template also maps the function’s return value (an object with status code, header, and body elements) to a properly structured HTTP response.

Here’s a simple function that I copied from the documentation (you can find it in Lambda Function for Proxy Integration):

I connected it to /store like this:

Then I deployed it (not shown), and tested it out like this:

The function ran as expected; the console displayed the response body, the headers, and the log files for me. Here’s the first part:

Then I hopped over to the Lambda Console and inspected the CloudWatch Logs for my function:

As you can see, line 10 of my function produced the message that I highlighted in yellow.

So, to sum it all up: you can now write Lambda functions that respond to HTTP requests on your API’s resources  without having to spend any time setting up mappings or transformations. In fact, a new addition to the Lambda Console makes this process even easier! You can now configure the API Gateway endpoint as one of the first steps in creating a new Lambda function:

HTTP Endpoint Integration
You can also pass API requests through to an HTTP endpoint running on an EC2 instance or on-premises. Again, you don’t have to spend any time setting up mappings or transformations. Instead, you simply select HTTP for the integration type, click on Use HTTP Proxy integration, and enter the name of your endpoint:

If you specify an HTTP  method of ANY, the method of the incoming request will be passed to the endpoint as-is. Otherwise, the method will be set to the indicated value as part of the call.

Available Now
The features described above are available now and you can start using them today at no extra charge.




ProgrammableWebDaily API RoundUp: Nexmo Voice, Heap, MediaWiki, Vizdum

Every day, the ProgrammableWeb team is busy, updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWebFinicity Announces New ACH Account Verification API

Finicity recently announced the release of its new ACH Account Verification API. The service enables payment and investment app developers to verify account details required to initiate inbound ACH account transfers. Finicity provides financial app developers more than 1,400 direct integrations to help improve new client onboarding and ongoing funds verification sourcing.

ProgrammableWebApplication Architecture for Box Platform Apps

This is the third article of a three-part series on building a custom application with Box Platform.

ProgrammableWebGoogle Releases Final Version Of Android Studio 2.2

Android Studio 2.2 has been available in preview form since May, but this week Google made the final version available to all developers. Android Studio is the integrated development environment that Android app writers use to work their magic in creating smartphone and tablet applications. 

Google says Android Studio 2.2 focuses on improving three key areas: speed, smarts, and Android platform support. How does it do that?

ProgrammableWebYelp Introduces Overhauled Developer Program and New Fusion API

In addition to taking its bug-bounty program public, and revamping its API, Yelp made more developer news today with a brand new Yelp Fusion API, and an overhauled developer portal.

ProgrammableWeb: APIsSharethrough

Sharethrough is an advertising firm based in San Francisco. The Sharethrough platform offers monetization management, engagement analytics, and on-demand native advertisement supply with the Sharethrough Exchange. Native advertising yields twice the visual focus and more consumer attention than its non-native counterparts. Developers need to register to access API documentation.
Date Updated: 2016-09-20
Tags: [field_primary_category], [field_secondary_categories]

