Impala is an open source query tool for Hadoop. You can use familiar SQL-like statements to drive Impala's distributed in-memory query engine, allowing you to quickly and efficiently process large amounts of data. In many cases, Impala is significantly faster than Hive, allowing you to interact with your data in real time. Impala can process data stored in HDFS and HBase tables, work with unstructured data in a variety of formats, and support user-defined functions. It’s a great application for ad hoc queries, and is compatible with many popular Business Intelligence (BI) tools.
Today we are making Impala available as part of Amazon Elastic MapReduce. You can now launch a cluster with Impala preinstalled, load new data or access existing data, and run fast queries using a SQL-like language. Clusters can be launched from the command line using the Elastic MapReduce tools, or from the AWS Management Console:
Note that you must launch your cluster with an AMI that contains Hadoop 2.x (AMI version 3.0.2) if you want to make use of Impala.
I spent some time working through the new Impala tutorial in the Elastic MapReduce Developer Guide. Despite my lack of experience with Hadoop or Impala, I was able to create, load and query a sample data set without too much trouble. The sample data set consists of text files named customers, books, and transactions, each containing randomly generated data items of the appropriate type.
I started out with 1 gigabyte files and then scaled up to 5 gigabytes. Here's what that means in terms of line counts (equivalent to database records):
I was able to import each of these files in less than 2.5 seconds on an 11 node cluster composed of m1.large instances. There's no indexing phase and the tables can be queried immediately after the import. For example, the following memory-intensive query joins three tables:
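To make the shape of such a query concrete, here is a small sketch that builds a three-table join over the sample tables. The table names match the sample data set described above; the column names are assumptions made for illustration, not the tutorial's actual schema, and the impyla client mentioned in the comments is just one way to submit the statement.

```python
# Hypothetical reconstruction of a three-table join over the sample
# data set. Table names match the tutorial; column names are assumed.
def build_join_query(limit=10):
    """Return an Impala SQL statement joining customers, books, and transactions."""
    return (
        "SELECT c.name, b.title, SUM(t.amount) AS total "
        "FROM transactions t "
        "JOIN customers c ON t.customer_id = c.id "
        "JOIN books b ON t.book_id = b.id "
        "GROUP BY c.name, b.title "
        f"ORDER BY total DESC LIMIT {limit}"
    )

if __name__ == "__main__":
    # With the impyla client installed, the statement could be submitted
    # to the cluster's Impala daemon (default port 21050):
    #   from impala.dbapi import connect
    #   cursor = connect(host="master-node-dns", port=21050).cursor()
    #   cursor.execute(build_join_query())
    print(build_join_query())
```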
It ran in a little over 30 seconds on my 11 node cluster:
Because Impala is part of the Hadoop ecosystem, it is easy to scale in order to accommodate ever-growing data sets. You can scale out by adding additional nodes to your Amazon EMR cluster. If you need additional memory per node, you can easily create a new cluster that uses instance types with additional RAM.
Impala is available today and you can start using it now!
As you know, we launched our new compute optimized instance family (C3) a few weeks ago, and wow, are we seeing unprecedented demand across all sizes and all Regions! As one of our product managers just told me, these instances are simply "fast in every dimension." They have a high performance CPU, matched with SSD-based instance storage and EC2's new enhanced networking capabilities, all at a very affordable price.
We believed that this instance type would be popular, but would not have imagined just how popular they've been. The EC2 team took a look back and found that growth in C3 usage to date has been higher than they have seen for any other newly introduced instance type. We're not talking about some small percentage difference here. It took just two weeks for C3 usage to exceed the level that the former fastest-growing instance type achieved in twenty-two weeks! This is why some of you are not getting the C3 capacity you're asking for when you request it.
In the face of this growth, we have enlarged, accelerated, and expedited our orders for additional capacity across all Regions. We are working non-stop to get it in-house, and hope to be back to more normal levels of capacity in the next couple of weeks.
The Deskero API offers access to a multi-channel ticket system for customer service. Square’s Connect API is now available for creating custom data solutions. Plus: Fleksy pairs a new API with an alternative keyboard for apps, TinyPass updates its API and adds a library for Ruby developers, and 9 new APIs.
Deskero’s API Brings Brandable Customer Support with Native Integrations
How can a company go all-out on customer service when there are so many social media outlets to monitor and manage, on top of phone and email? Deskero has the answer: a multi-channel ticket system that has native integrations for each outlet, including Facebook, Linkedin and Twitter. The Deskero API allows third party developers access to its platform of organizing and tagging tickets.
A key ingredient in great customer service is speed. That’s rarely easy in customer service departments that can be chaotic at best. Deskero has streamlined their system to help solve customer problems fast. They have a one-click reply system for fast answers, and the ability to convert any incoming email into a ticket, to name just two features. This is on top of instant chat tickets and post-it notes that ensure nothing is forgotten.
The API documentation lives up to their claims of being organized, featuring clear instructions on everything from configuration and authentication to tickets and knowledge base areas.
Square’s Connect API Opens Up to Support Customized Data
Square, maker of the revolutionary payment method for mobile devices, has opened its Connect API to third-party developers.
As Ken Yeung reports in The Next Web, there are some important limitations:
With the new API, merchants are able to retrieve activity reports for their processed payments, refunds and deposits. Square says that what it will not provide is the ability to accept payments — it’s only for reporting purposes. It’s also limited to pulling data from a single account, so no market research capabilities are available.
Documentation covering endpoint paths, parameters, and the error types, among other features, is provided for merchants and developers who want to connect their data to third party apps.
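As a sketch of what such a reporting call could look like, the snippet below builds a request against a Connect v1 payments endpoint. The path shape follows Square's v1 Connect documentation of the time, but treat it as illustrative; the access token and location ID are placeholders you would obtain from your Square account.

```python
# A sketch of a read-only reporting call against Square's Connect API.
# The /v1/{location_id}/payments path is based on Square's v1 Connect
# docs of the time; token and location ID are placeholders.
API_BASE = "https://connect.squareup.com"

def payments_request(location_id, access_token, begin_time, end_time):
    """Build the URL, query parameters, and headers for a payments report."""
    url = f"{API_BASE}/v1/{location_id}/payments"
    params = {"begin_time": begin_time, "end_time": end_time}
    headers = {
        "Authorization": f"Bearer {access_token}",  # personal access token
        "Accept": "application/json",
    }
    return url, params, headers

if __name__ == "__main__":
    url, params, headers = payments_request(
        "YOUR_LOCATION_ID", "YOUR_ACCESS_TOKEN",
        "2013-12-01T00:00:00Z", "2013-12-31T23:59:59Z")
    # With the requests library:
    #   requests.get(url, params=params, headers=headers)
    print(url)
```

Note that, per the limitations above, a call like this only reads activity; it cannot accept payments.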
API News You Shouldn’t Miss
- Square opens up its Connect API to let merchants create custom solutions with their data
- Fleksy rolls out API, debuts as alternative keyboard in third-party apps
- Tinypass Updates API; Adds Library for Ruby Developers
9 New APIs
Today we had 9 new APIs added to our API directory, including a semantic text analysis service, an automated text labeling tool, a mobile application analytics platform and an application stack add-on provisioning service. Below are more details on each of these new APIs.
Ai Applied Data Miner API: Ai Applied provides technologies and services that help users obtain valuable insights from their texts, social media, and other web data. Ai offers a suite of APIs allowing developers to interact with this data in a variety of ways.
The Ai Applied Data Miner API provides a system for continuously tracking conversations in the news, on Facebook, and on Twitter, using a set of specified keywords. The API delivers structured data on all keyword-relevant messages from specified sources. Data types include source, language, age and gender of author, sentiment, and prominent phrases. The Data Miner API uses several of the other Ai Applied APIs on the backend to deliver this data.
Ai Applied Sentiment Analysis API: Ai Applied provides technologies and services that help users obtain valuable insights from their texts, social media, and other web data. Ai offers a suite of APIs allowing developers to interact with this data in a variety of ways.
The Ai Applied Sentiment Analysis API is able to extract the attitude, opinion, or feeling toward a specified entity in any given text. The API returns a text’s sentiment class and intensity in real-time. The API is able to use predefined sentiment classes (positive/neutral/negative) or custom tailored classes.
Ai Applied Text Label API: Ai Applied provides technologies and services that help users obtain valuable insights from their texts, social media, and other web data. Ai offers a suite of APIs allowing developers to interact with this data in a variety of ways.
The Ai Applied Text Label API extracts meaning from any given text in the form of labels. The API is able to extract labels from texts of any length in several languages.
Applause Analytics API: Applause is an analytics tool that measures mobile app quality and user satisfaction. Applause grades apps across ten attributes, enabling companies to compare their apps version to version and against the competition.
The Applause API provides developer access to the analytics platform. The API is able to search for apps and deliver descriptive or factual information, reviews, and aggregated Applause statistics for a given app.
Broadstack API: Broadstack enables developers to append services to their applications by creating stacks on Broadstack and adding one-click add-ons to them. Broadstack then provides a webhook containing the new configuration and the details required to connect the application to the add-ons.
The Broadstack API is aimed at developers who have provisioned add-ons. It allows them to retrieve a list of stacks that have installed a given add-on, get information on a specific stack that has installed an add-on, and update the config variables on provisioned resources.
Deskero API: Deskero is a brandable customer support desk that comes with native integrations for Twitter, Facebook, Google+, LinkedIn, and YouTube. It uses a multi-channel ticket system, which allows Deskero to accept tickets from a wide range of sources. Once collected, tickets can be organized into areas or groups and tagged with custom labels. Deskero also enables users to establish a knowledge base that customers can browse for solutions to their issues. Deskero’s reporting features include 20 kinds of graphic reports; users can create custom reports if they prefer.
Ghana Stock Exchange API: The Ghana Stock Exchange API allows users to query real-time trading statistics, get in-depth market data and analysis, and authentic company information. It also allows users to quickly and easily integrate the stock exchange’s market data into third-party applications using REST-style calls. The API returns JSON, JSONP, and XML. The service is free to use.
Hummingbird API: Hummingbird is a social site for tracking, sharing, and discovering new anime. Hummingbird’s API can be used to programmatically access information about an anime work, information from a user’s profile, or to manage a particular user’s online anime library.
LuxCloud API: LuxCloud is a channel-centric marketplace built on a Cloud Service Layer (CSL). CSL provides neutrality across the various cloud services’ platforms, allowing sales partners to add one or more cloud services to platform instances. The LuxCloud CSL API provides developers with access to the LuxCloud services platform, and lets customers and resellers use, manage, and create services on the LuxCloud platform and follow a provisioning process. Prospective partners can also view more information here: http://www.luxcloud.com/WHMCS-module/hosted-exchange-mail
Related ProgrammableWeb Resources
It seems at every API conference, there is a new feature being released by the team at OAuth.io. In October, at API Strategy and Practice in San Francisco, OAuth.io released a mobile SDK. Now after APIDays in Paris, OAuth.io has released a ‘code request’ feature to abstract usage tokens in the authentication process. Co-Founder Mehdi Medjaoui spoke with ProgrammableWeb about the service that provides a unified API for any OAuth implementation.
“OAuth is completely fragmented on the web”, Medjaoui told ProgrammableWeb. “There are multiple specs and workflows that are either respected or not, so we decided to make a glue for OAuth. We made a simple JSON configuration that describes any OAuth workflow in a simple way. It’s a straightforward way to make any OAuth into a simple API. And it’s open source. With this as our basis, we have then built up a service that makes all the OAuth flows function easily.”
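To illustrate the idea of describing a provider workflow in "a simple JSON configuration," here is a hypothetical example. The field names are assumptions made for illustration, not OAuth.io's actual configuration schema.

```python
import json

# Purely illustrative: a minimal provider description of the kind
# Medjaoui describes. Field names are assumptions, not OAuth.io's
# actual schema.
provider_config = {
    "provider": "example_service",
    "oauth_version": "2.0",
    "authorize_url": "https://example.com/oauth/authorize",
    "access_token_url": "https://example.com/oauth/token",
    "scope_delimiter": ",",
}

if __name__ == "__main__":
    # The idea: one small JSON document per provider is enough to
    # describe the whole OAuth workflow.
    print(json.dumps(provider_config, indent=2))
```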
Already, there are over 1900 running applications using OAuth.io in their authentication process, and a full breadth of startup developers using the OAuth.io tool. While Medjaoui is pleased that the service is letting developers get on with building new products, one of his greatest achievements is how OAuth.io has been used recently amongst US Federal Government departments. “I’m most proud of how Kin Lane is using OAuth.io for his White House project, so there is authentication on the client side. Now we are making our terms of service comply with US Government standards to enable it to be used on Government servers,” Medjaoui said.
Part of OAuth.io’s appeal amongst developers is how it handles security issues, says Medjaoui. “We have a flow that also goes on the server side, so we don’t store access tokens. In this way, we also become a single point of failure, so we are an OAuth backend, but we are open source: you can have OAuth.io on your own server, for example. And we protect against all known CSRF exploits.”
Related Searches From ProgrammableWeb’s
Directory of More Than 10,000 APIs
“This latest feature… it’s like instead of going through those airport security checkpoints, you get to walk straight through.” Medjaoui pauses for a minute to make sure the analogy holds up. “Oh, and it makes your luggage 10 kilos lighter!”
Developers can trial the service via the OAuth.io developer portal.
Rainforest, a QA solution provider, uses the power of humans to test website quality. The Rainforest API allows developers to programmatically trigger the execution of tests. Rainforest finds that nothing tests a user interface like a good old-fashioned human being. Users therefore set up tests in plain English, and Rainforest takes its testers through the steps of each test.
Co-founder Fred Stevens-Smith explained:
“Why did we build Rainforest? QA sucks. But we all have to do it. Like payments pre-Stripe, QA is a process that every developer hates. Yet for some reason nobody is solving this problem. Every part of our development workflow has been totally reimagined in the past few years. Startups that have taken a design-driven approach and introduced a 10x faster / simpler / cheaper product have dominated. There’s tons of innovation. Except in testing.”
Rainforest is a Y Combinator company. As Stevens-Smith mentioned, Rainforest was built straight out of frustration with the manual QA process. Rainforest has started to see traction as QA has become a given for a website to enjoy success. Zapier uses Rainforest on every single deployment to ensure there is no gap in service.
The Rainforest API uses Amazon Mechanical Turk to dispatch work to people instead of machines. Developers create a series of yes/no questions, referred to as steps, as the parameters of the test. The API call pushes the steps to the human testing base. For more information, visit the API Docs.
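The step structure described above can be sketched as follows. The payload field names, and the endpoint mentioned in the comment, are assumptions made for illustration rather than Rainforest's documented API.

```python
# A sketch of defining a Rainforest-style test as a series of yes/no
# steps. Field names and the endpoint in the comment are assumptions,
# not taken from Rainforest's actual documentation.
def build_test(title, steps):
    """Each step pairs an action for the tester with a yes/no question."""
    return {
        "title": title,
        "steps": [{"action": action, "question": question}
                  for action, question in steps],
    }

signup_test = build_test(
    "Sign-up flow",
    [
        ("Visit the home page and click 'Sign up'.",
         "Do you see a registration form?"),
        ("Fill in the form with any email address and submit it.",
         "Do you see a welcome message?"),
    ],
)

if __name__ == "__main__":
    # With an API token, the test could be pushed to the human testing
    # base, e.g. requests.post(".../tests", json=signup_test, headers=...)
    print(len(signup_test["steps"]))
```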
QA has long remained the boring necessary evil that every developer must look in the eye before deployment. Rainforest looks to take the pain out of QA with simple, plain English tests and an API to scale tests across browsers. Before launching your next app, consider integrating Rainforest.
One of the most efficient routes to market for any developer is through an application store that is used by a lot of potential customers. For that reason there’s naturally a lot of interest in the App Store from Apple, Google Play, or app stores associated with a specific cloud computing platform such as Salesforce.com. Essentially, all of these function as central points of application distribution.
But beyond certifying that applications work, Apple, Google and providers of cloud computing services don’t do much in terms of promoting the adoption of third-party applications. By way of an alternative, a new class of application distributors has emerged in the form of organizations such as AppDirect, which not only showcases cloud applications to potential customers but also helps market them, provide financing, and manage the billing services needed to actually monetize them.
As AppDirect heads into 2014, co-CEO Daniel Saks says the distributor is now reaching enough critical mass to enter the next phase of cloud application adoption. As corporate customers get more comfortable in the cloud, many of them are beginning to move to standardize the custom user interface they use to access cloud applications. Rather than having users move between disparate application experiences, the goal is to allow end users to seamlessly move between “headless application services” delivered via the cloud.
Saks says that AppDirect is moving in the direction of providing the integration capabilities needed to enable those kinds of services. The basic idea is to allow software developers to participate in next-generation cloud application environments where, for example, capabilities such as analytics are delivered as a service.
The end result, says Saks, should be a network effect that acts as a force multiplier when it comes to adoption of application services. Each application service in turn exposes customers to another application service that they previously would probably not even have been aware existed.
It will take a while for this next evolution of cloud application services to fully mature. But once it does, AppDirect wants to be in a position to enable it, which means providing an integration capability that will allow organizations to more easily mix and match application services, Saks says.
All this integration, of course, will be enabled by existing and new API and integration platforms being developed in the cloud. The challenge facing developers at this juncture is figuring out how to go about isolating the back end of their applications from the presentation layer to make sure that as the cloud application ecosystem continues to evolve, their applications remain relevant.
As I promised a few weeks ago, Amazon DynamoDB now supports Global Secondary Indexes. You can now create indexes and perform lookups using attributes other than the item's primary key. With this change, DynamoDB goes beyond the functionality traditionally provided by a key/value store, while retaining the scalability and performance benefits that have made it so popular with our customers.
You can now create up to five Global Secondary Indexes when you create a table, each referencing either a hash key or a hash key and a range key. You can also create up to five Local Secondary Indexes, and you can choose to project some or all of the table's attributes into each of the table’s indexes.
Creating Global Secondary Indexes
The AWS Management Console now allows you to specify any desired Global Secondary Indexes when you create the table:
As part of the table creation process you can also provision throughput for the table and for each of the Global Secondary Indexes:
Local or Global
If you have been following the continued development of DynamoDB, you may recall that we launched Local Secondary Indexes earlier this year. You may be wondering why we support both models, and where each one is appropriate.
Let's quickly review the DynamoDB table model before diving in. Each table has a specified attribute called a hash key. An additional range key attribute can also be specified for the table. The hash key and optional range key attribute(s) define the primary index for the table, and each item is uniquely identified by its hash key and range key (if defined). Items contain an arbitrary number of attribute name-value pairs, constrained only by the maximum item size limit. In the absence of indexes, item lookups require the hash key of the primary index to be specified.
The Local and Global Index models extend the basic indexing functionality provided by DynamoDB. Let’s consider some use cases for each model:
- Local Secondary Indexes are always queried with respect to the table's hash key, combined with the range key specified for that index. In effect (as commenter Stuart Marshall made clear on the preannouncement post), Local Secondary Indexes provide alternate range keys. For example, you could have an Order History table with a hash key of customer id, a primary range key of order date, and a secondary index range key on order destination city. You can use a Local Secondary Index to find all orders delivered to a particular city using a simple query for a given customer id.
- Global Secondary Indexes can be created with a hash key different from the primary index; a single Global Secondary Index hash key can contain items with different primary index hash keys. In the Order History table example, you can create a global index on zip code, so that you can find all orders delivered to a particular zip code across all customers. Global Secondary Indexes allow you to retrieve items based on any desired attribute.
Both Global and Local Secondary Indexes allow multiple items for the same secondary key value.
Local Secondary Indexes support strongly consistent reads, allow projected and non-projected attributes to be retrieved via queries, and share provisioned throughput capacity with the associated table. Local Secondary Indexes also have the additional constraint that the total size of data for a single hash key is currently limited to 10 gigabytes.
Global Secondary Indexes are eventually consistent, allow only projected attributes to be retrieved via queries, and have their own provisioned throughput specified separately from the associated table.
As I noted earlier, each Global Secondary Index has its own provisioned throughput capacity. By combining this feature with the ability to project selected attributes into an index, you can design your table and its indexes to support your application's unique access patterns, while also tuning your costs. If your table is "wide" (lots of attributes) and an interesting and frequently used query requires a small subset of the attributes, consider projecting those attributes into a Global Secondary Index. This will allow the frequently accessed attributes to be fetched without expending read throughput on unnecessary attributes.
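Putting those pieces together, here is a sketch of the Order History table from the examples above, defined with a Global Secondary Index on zip code that projects only the attributes a frequent query needs. The syntax is that of boto3, the current AWS SDK for Python (newer than what existed when this was written); attribute names such as OrderTotal are illustrative.

```python
# Order History table from the examples above, with a GSI on zip code.
# boto3 create_table syntax; attribute names are illustrative.
table_spec = {
    "TableName": "OrderHistory",
    "AttributeDefinitions": [
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "OrderDate", "AttributeType": "S"},
        {"AttributeName": "ZipCode", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "CustomerId", "KeyType": "HASH"},   # hash key
        {"AttributeName": "OrderDate", "KeyType": "RANGE"},   # range key
    ],
    "GlobalSecondaryIndexes": [{
        "IndexName": "ZipCodeIndex",
        "KeySchema": [{"AttributeName": "ZipCode", "KeyType": "HASH"}],
        # Project only the attributes the frequent query needs, so reads
        # against the index don't pay for a "wide" item's other attributes.
        "Projection": {"ProjectionType": "INCLUDE",
                       "NonKeyAttributes": ["OrderTotal"]},
        # Each GSI gets its own provisioned throughput.
        "ProvisionedThroughput": {"ReadCapacityUnits": 5,
                                  "WriteCapacityUnits": 5},
    }],
    "ProvisionedThroughput": {"ReadCapacityUnits": 10,
                              "WriteCapacityUnits": 10},
}

if __name__ == "__main__":
    import boto3  # requires AWS credentials to actually run
    boto3.client("dynamodb").create_table(**table_spec)
```

A Query that specifies IndexName="ZipCodeIndex" would then return orders for a given zip code across all customers, consuming the index's own read capacity rather than the table's.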
This feature is available now and you can start using it today!
There’s nothing quite as adventurous in all of IT as spending months building an application that depends on a third-party API for its success. For the most part, developers usually don’t know for sure how much stress the IT infrastructure supporting any given API can stand. If their application is wildly successful, it could suddenly slow to a crawl when that third-party API gets overwhelmed by requests.
To help developers identify what threshold of requests any given API might be able to withstand, Telerik this week released the latest upgrade to Test Studio, which includes the ability to simulate real-world user loads against a site or a web service.
According to Chris Eyhorn, executive vice president of the Application Lifecycle Management (ALM) division at Telerik, that load-testing capability can be used to model the point at which any given number of users might degrade the performance of a particular API.
In addition to load testing, Test Studio includes the capability to fully record sessions in all browsers, to better facilitate testing of applications that will be accessed by multiple browsers.
To make the Telerik testing tool more financially appealing, Telerik this week also implemented some new pricing options. Developers can now opt to download Test Studio and run it locally, but pay for it on a monthly usage basis. Eyhorn says this not only makes Test Studio more affordable, but it allows developers to only pay for testing software when they are actually using it. Of course, Telerik will continue to offer existing annual and perpetual licensing options. But Eyhorn says this new licensing option should make subscribing to testing tools more appealing to smaller teams of developers that have limited budget resources.
With the rise of agile development, application testing is getting short shrift more often these days. Developers are under more pressure to deliver applications faster, which almost invariably results in testing shortcuts. By making testing tools more accessible, Eyhorn says it’s more likely that developers will avail themselves of a testing process that can eliminate some very costly fixes to production applications later on.
Of course, there’s no shortage of application testing tools these days, inside and out of the cloud. Eyhorn says most developers still prefer to test locally because it allows them to work without always needing an Internet connection, while at the same time doing a better job of actually replicating the end user experience. As for open source tools, Eyhorn says they are not only time-consuming to set up, but they also typically don’t provide as comprehensive a testing environment.
Regardless of the approach taken, testing can be the difference between success and failure, especially in a world where any application is only as good as its weakest API link. The challenge is finding a way of reducing the cost of application testing that already consumes a fair amount of the typical IT budget.
Ducksboard, a real-time data monitoring and visualization platform, will be adding new services to its widgets directory and extending existing services in a matter of weeks. The new integrations will include LinkedIn, Flurry, Realtime Google Analytics and Trello. The company is also working on a brand new interface and is planning a closed beta for the first quarter of 2014. The Ducksboard platform features the Ducksboard API, which allows developers to send their own data, retrieve stored data, and manipulate all objects found in Ducksboard.
Ducksboard is a real-time data monitoring and visualization platform that allows users to gather data from other SaaS applications which is then visualized and displayed on customizable dashboards. The primary audience for Ducksboard is small-to-medium-size businesses (SMBs) that use the platform to track key performance indicators (KPIs), monitor data in real-time, and use the Ducksboard API to import company data and create custom widgets. Although the primary audience is SMBs, the company has recently started moving upmarket to Enterprise sales.
Customized Ducksboard dashboard created using ready-made widgets, data from social media accounts and other data sources.
Big Data has been one of the big trends of the past several years and has led to another, possibly even greater trend, data visualization. Wikipedia describes data visualization as:
“The main goal of data visualization is to communicate information clearly and effectively through graphical means… To convey ideas effectively, both aesthetic form and functionality need to go hand in hand, providing insights into a rather sparse and complex data set by communicating its key-aspects in a more intuitive way.”
Data visualization has been a rapidly growing trend, and with that, the number of data visualization APIs has been increasing at a rapid pace. At the time of this writing, a little over half of the 64 Data Visualization APIs listed in the ProgrammableWeb Directory were added just in the past two years. News stories featuring data visualization are also becoming more frequent. ProgrammableWeb has recently published a number of stories featuring data visualization, including a post about Hunk, the Splunk-analytics-for-Hadoop product, a post about the redesigned NYC Open Data Portal that showcases interactive visualizations, and a post about the integration of Splunk Dashboards with the Regulations.gov API.
The Ducksboard platform not only allows companies to gather and visualize data from other SaaS applications, but also allows them to upload their own data and create custom dashboard widgets. The types of custom data that can be uploaded to Ducksboard include numbers, images, text, and spreadsheets (Google Spreadsheets). Many of Ducksboard’s customers are using the API to upload data themselves and create their own custom visualizations. Jan Urbański, Ducksboard Technical Co-Founder, told ProgrammableWeb:
“Businesses often complement our prebuilt integrations by uploading data themselves through our API; currently over 50% of our inbound traffic is custom data sent by customers. We’re ingesting over 25 GB of new data each day.”
Developers are also using the Ducksboard API to build apps for different devices. Jan Urbański explained to ProgrammableWeb about different types of use cases and described a few examples:
“Apart from sending data, our APIs are being used by digital agencies to automatically create and manage dashboards for their clients. We’ve also seen developers write apps using our APIs for different devices. For example, displaying values from a Ducksboard widget on a Pebble smartwatch or sending photos from an Android phone directly to a Ducksboard panel.”
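For developers curious what "sending data" looks like in practice, here is a sketch of pushing a single value to a custom widget. The push URL shape and the basic-auth scheme in the comment are assumptions rather than details verified against Ducksboard's API reference, so check the real documentation before relying on them.

```python
import json

# Illustrative sketch of pushing a custom value to a Ducksboard widget.
# The push URL shape and auth scheme are assumptions, not verified
# against Ducksboard's API reference.
PUSH_URL = "https://push.ducksboard.com/v/{widget_id}"

def build_push(widget_id, value, timestamp=None):
    """Return the push URL and JSON body for a single data point."""
    payload = {"value": value}
    if timestamp is not None:
        payload["timestamp"] = timestamp  # Unix epoch seconds
    return PUSH_URL.format(widget_id=widget_id), json.dumps(payload)

if __name__ == "__main__":
    url, body = build_push("12345", 42)
    # With requests: requests.post(url, data=body, auth=(API_KEY, "unused"))
    print(url, body)
```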
At the time of this writing, Ducksboard features 306 widgets, which include services such as Basecamp, bitly, Facebook, Foursquare, LinkedIn, SalesForce Desk, Twilio, Twitter, Zendesk and many others. In upcoming weeks, Ducksboard will be releasing new integrations including Jira, LinkedIn, Flurry, Magento, Realtime Google Analytics, SendGrid, and Trello. Note that the upcoming integrations will not come with the same offer as Twilio and Zendesk (a customized board for the service, free for life). However, the new integrations will cover the most important metrics and KPIs.
Jaime Jimenez, Ducksboard Head of Marketing, told ProgrammableWeb that the company is “shifting focus to allowing users to understand their data (both internal and from third party providers).” Ducksboard is working on a new interface that will emphasize the analytics layer and a closed beta is scheduled for Q1 2014.
Developers interested in using the Ducksboard platform and API, can find more information at Ducksboard.com.
By Janet Wagner. Janet is a Data Journalist and Full Stack Developer based in Toledo, Ohio. Her focus revolves around APIs, open data, data visualization, and data-driven journalism. Follow her on Twitter, Google+, and LinkedIn.
Open data—from both government and private sources—has great potential for creating new products, reducing the costs of doing business, and improving people’s lives. But for open data to truly benefit both business and local communities, there are still some questions that will need to be answered.
Two central sticking points at present are how to ensure ongoing supply to government open data sources when this may be affected by outside politics or internal inertia, and how to define a viable business model when open data is the key raw material. By providing a reliable, up-to-date API for monitoring U.S. Court decisions, the CourtListener API team are forging a path that is helping resolve these two major barriers to seizing the open data opportunity.
The CourtListener website is a not-for-profit project managed by the Free Law Project. It collates data from court websites and other sources, aiming to provide a comprehensive database of all court law opinions made in the United States. So far, the database covers all Federal Appeals Courts decisions and is increasingly adding state court decisions. (Some financial barriers prevent full data extraction; for example, the federal district courts charge 10 cents a page, preventing the not-for-profit from extracting decisions from those courts via web scraping.) CourtListener started with the Bulk Data API, which provides downloadable access to the full database in XML format, while the newly released CourtListener REST API includes seven endpoints for querying court decision data.
“With our bulk API, it is a giant XML file that people have been using for a couple of years now. It’s used a lot in research, and we track the number of downloads to get a feel for its use,” the CourtListener co-founders, Mike Lissner and Brian Carver, told ProgrammableWeb.
“With the new REST API, we did a soft release a few weeks ago and we’ve had 3 or 4 [early adopters] working on it. One used it as part of his Y-Combinator pitch for example, it’s been used in conjunction with the State Decoded Project to pull in relevant data, and a developer with the Sunlight Foundation is using our data. In general, the trend for using our data is up, our traffic goes up every week.”
Originally, CourtListener was focused on providing a daily-update service to alert subscribers to new Federal Court decisions, but as the project built a clearer understanding of its potential audience, it realized historical data would be just as important. Lissner and Carver explain: “One of the end user groups we thought of was journalists: you can do a search on a topic within your beat, for example, or you can set up alerts to receive details of particular court decisions. But the daily alerting feature meant we only had court opinions from the date we started going forward; for the trend analysis needed in journalism, for example, we really needed to get the back catalog, so the project became about putting all the historical data online as well. Our hope is that it makes the law more accessible, and more accessible to analyze, amongst not-for-profits.”
In this way, CourtListener is mining available government open data sources and making them more accessible to end users. By building an independent database from the source material, the CourtListener team is also ensuring more reliable access to the data, independent of government support, financial or political.
On data scraping: CourtListener uses some web-scraping tools to collate court decisions and opinions, but has also used a network of volunteers to help clean the data. As with other web-scraping projects, a key barrier can be the lack of scalability in collating and cleaning data. “We use our own CourtListener web-scraping tool, called Juriscraper, that will dish out Python code via a custom library. There aren’t really readymade tools for this type of web-scraping: there are some general problems, but way more specific problems. Sometimes other people have done the heavy lifting in scraping the data, but when we looked at it we had to do things like correct the spelling of the word ‘September’, which for some reason, people tend to spell incorrectly. So there’s a certain point where you can spend an hour coding a solution, or 45 minutes to go through the data and correct each line.”
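Juriscraper is CourtListener's real scraper library; the snippet below is only an illustrative sketch (not Juriscraper code) of the kind of one-off cleanup Lissner and Carver describe, repairing recurring misspellings of "September" in scraped date strings:

```python
import re

# Illustrative sketch (not actual Juriscraper code): scraped court metadata
# often arrives with stray whitespace and recurring typos, such as the
# misspellings of "September" mentioned above.
MONTH_FIXES = {
    "septmber": "September",
    "septemer": "September",
    "sepetember": "September",
}

def clean_date_string(raw: str) -> str:
    """Collapse whitespace and repair common month misspellings."""
    text = re.sub(r"\s+", " ", raw).strip()
    for bad, good in MONTH_FIXES.items():
        text = re.sub(bad, good, text, flags=re.IGNORECASE)
    return text

print(clean_date_string("  3  Septmber 2013 "))  # -> "3 September 2013"
```

Their point stands: 45 minutes of hand-editing can beat an hour of coding for a one-off, but once a typo recurs across thousands of records, a small lookup table like this pays for itself.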
On analytics: For now, the CourtListener team is not heavily invested in monitoring data around how the CourtListener API is being used: “We know there’s a big push towards analytics but from our perspective, we don’t really do much,” Lissner and Carver said. “We throttle at 1,000 hits on an endpoint in an hour, and we monitor general usage patterns. That’s probably where we will let it sit for now.”
On creating the REST API: Lissner and Carver said: “We used the Tastypie toolkit to create the API. It helps you split your data into models and schemas. Tastypie is an extension of Django that can help you create an API in about 20 minutes work. It also let us include a search-powered endpoint.”
Some of CourtListener’s current users include private businesses that mine the data for specific industry verticals. SumoBrain, for example, use the APIs to enhance their search products used by patent attorneys, corporate researchers and inventors.
The State Decoded project – aimed at making legal documents across the States more accessible in API format – also draws on CourtListener.
“We use CourtListener’s API to show site visitors the most prominent court decisions that have cited a given law,” State Decoded Founder Waldo Jaquith told ProgrammableWeb. “When somebody comes to a State Decoded site, and looks up a law, this will provide them with the context that they need in order to understand how that law is actually interpreted by courts. Some laws have even been struck down by courts, but remain on the books because legislators are unwilling to remove them. For these sorts of laws, it’s enormously important to be able to give people immediate access to the relevant court opinion.
“Implementing their API was very easy, and it’s extremely lightweight for folks using The State Decoded—just a few lines of code. CourtListener’s bulk download of court decisions is, necessarily, an enormous file, and some non-trivial computing power would be required to provide the same information that their API provides in a fraction of a second.
“As far as folks visiting State Decoded sites know, there is no API. It’s completely seamless. CourtListener’s API allows people to get all relevant legal information about a single law in one place, without having to pay LexisNexis a subscription fee. That’s very powerful. Nothing like this has been done before.”
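The “few lines of code” Jaquith mentions would look something like the sketch below. The endpoint path and JSON field names here are assumptions for illustration, not CourtListener's documented API; the testable part is just building a query URL and pulling case names out of a sample response:

```python
import json
from urllib.parse import urlencode

BASE = "https://www.courtlistener.com/api/rest"  # path below is assumed

def citing_opinions_url(law: str, limit: int = 5) -> str:
    """Build a search URL for opinions citing a given law (illustrative)."""
    return f"{BASE}/search/?{urlencode({'q': law, 'limit': limit})}"

def case_names(response_json: str) -> list:
    """Pull case names out of a hypothetical JSON search response."""
    data = json.loads(response_json)
    return [hit["case_name"] for hit in data.get("results", [])]

sample = '{"results": [{"case_name": "Smith v. Jones"}]}'
print(citing_opinions_url("Va. Code 18.2-248"))
print(case_names(sample))  # -> ['Smith v. Jones']
```

The heavy lifting (search indexing over the full corpus) stays on CourtListener's side, which is why a State Decoded page can show relevant opinions "in a fraction of a second" without local computing power.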
After unlocking the data, unlocking the potential
“CourtListener is making legal opinions more accessible on a number of fronts,” Raymond Yee, visiting scholar/lecturer in the School of Information at the University of California at Berkeley, told ProgrammableWeb. Yee runs an annual course in open data where students are encouraged to design commercially viable products built off open data sources.
“First, CourtListener provides a single point of access so someone can come to this one site to find decisions from many courts without having to personally hunt them down on myriad websites.
“Second, by providing a single point of access, CourtListener lets users see a larger, unifying context in which individual decisions and courts can fit (getting a feel for the overall structure of the legal system is especially important to the non-specialist public).
“Finally, CourtListener is making all this data available for bulk download as well as through its new API to accommodate a range of data analysis scenarios.
“Everyone in a free society should be able to know and understand the laws that govern that society. CourtListener lowers some of the barriers for that access: financial, intellectual, and computational.”
Yee also believes that access to this legal data via API can go beyond fostering a more aware, participatory civic society.
“I have naive understandings about the American legal system, but given how court decisions (especially at the Supreme Court level) can fundamentally restructure our country and its economic/business life, there must be a lot of money riding on understanding, predicting, and influencing how the courts make decisions. What an opportunity to computationally compare legal decisions across jurisdictions with the CourtListener aggregated data, which is actually also near-real time! I imagine someone would want to put in some seed investments to develop machine learning algorithms based on the CourtListener dataset to uncover latent patterns in the history of Supreme Court decisions. (I wouldn’t be surprised if this has been attempted already, but CourtListener opens that game to many more people.) Concretely, as I’ve heard from Brian Carver, we should even be able to compute something from this data to help us win at FantasySCOTUS!
“On a more prosaic level, there might be business opportunities around building tools to assist jurists to find relevant decisions in their own decision making. For journalists monitoring legal decisions around the country, the alert system could come in handy. Entrepreneurs familiar with the legal system should look immediately at the CourtListener data and API and start daydreaming. “
Waldo Jaquith agrees. He believes the impact CourtListener will have on how future open data projects are approached is enormous: “The open data movement needs less talk and more action. Combining legal codes and court decisions is a patently obvious thing to do. Surely people have envisioned this for decades. What’s different about CourtListener and The State Decoded is that we actually did it. It’s not perfect, it’s not comprehensive, but it exists, and that’s better than anything else that anybody else has done. That’s how we make people aware of the power of tools like the CourtListener API: by implementing those tools, and telling everybody about it. In 24 months, it will seem quaint that this was considered interesting in 2013,” Jaquith said.
The recent release of the CourtListener API demonstrates how APIs are instrumental in unlocking key data sources. From its value in enhancing civil liberties, to providing a powerful resource for journalists and business, to its potential in helping entrepreneurs create new commercial products, the CourtListener API is a good example of what we can expect from open data projects providing access to source materials via API.
Initial interest in MobStac’s new HTML5stac offering is growing among early adopters keen to use the platform to design composable web apps, all built off REST APIs. Ravi Pratap, CTO and Co-Founder of MobStac, spoke with ProgrammableWeb about what HTML5stac will mean for web app developers.
HTML5stac provides a cloud platform aimed at allowing developers to create responsive websites and web services that function across a range of devices. A key focus has been on ‘future proofing’ the websites so developers can compose and re-compose the data and content elements as they scale or widen their device reach. HTML5stac is currently in private beta release, with a more public beta stage planned for 2014, but developers are encouraged to start trialing the platform now.
HTML5stac early adopters
“The initial response has been encouraging and we’re seeing a very healthy level of interest in what HTML5stac has to offer,” Pratap told ProgrammableWeb.
“At this point, we’ve released access to a small number of developers to fine-tune a few elements of the developer portal and documentation, and to also gather feedback to help us plan our public beta launch.
“At this early stage, small teams that have dabbled with building responsive web apps, and are looking to do more complex mobile work for their clients, will get the most value out of HTML5stac. Existing businesses will derive more value from HTML5stac at a later stage, when the platform is more mature and has a more complete mBaaS feature set.
“We’ve also had a high degree of interest from our solution partners, looking to leverage our platform for the mobile solutions they’re currently selling to their clients, which is excellent for us because this was the thesis for our thrust into creating a developer platform.”
On HTML5stac, every piece of content, object and file is accessed by REST APIs. Pratap explains the benefit of this approach:
“The Web thus far has really been a page-centric world, with web pages being the final output of web content management systems. With the advent of mobile apps (Android, iOS, HTML5), the key architectural elements necessary to drive data into and out of these apps have turned out to be RESTful APIs that enable the separation of data from presentation. Unfortunately, existing web systems built over the last decade are woefully inadequate in exposing the APIs necessary to power mobile apps, which is why developers have traditionally had to write tedious amounts of code to retro-fit REST APIs onto legacy systems.
“With MobStac, you get a future-proof architecture on which to build all your mobile apps, instead of picking different ones for different projects. Essentially, this is an mBaaS for HTML5 web apps.”
HTML5stac in demonstration
Pratap points to a demo app that shows how current developers are monetizing products built on the HTML5stac platform.
“The demo app is displaying content from 2 distinct feeds: First, there is a mobile news feed from our customers at intomobile.com. It showcases how any content feed (RSS/XML/JSON) plugged into HTML5stac can be seamlessly transformed into a responsive content experience. Second, we have also added a YouTube video feed of all TED videos.
“What the demo shows is that customers can take our template app and easily plug in content coming from any content management system (Drupal, Joomla, Sharepoint, etc.) and tailor the solution to create a responsive website. Monetization is enabled through ad integration with mobile ad networks and real-time bidding (built into our platform). Developers would merely need to choose, say, Google AdX and configure their settings.
“The compelling thing about building an HTML5stac-based solution is that it enables the possibility of creating seamless, next-generation, fast responsive websites without having to tear down everything and rebuild from scratch. Developers using HTML5stac can now sell sophisticated mobile solutions to their new and existing customers and increase revenue, without having to do all the heavy lifting. This also allows them to execute many more turnkey projects with the same workforce.”
Pratap is hoping HTML5stac will be an ideal platform for developers designing web service interfaces for new SmartTV devices. Worldwide, about one in 10 TVs are internet-connected, but this number is expected to grow to one in four TVs by 2018, with fastest growth expected in international markets like China. Developers who want to provide apps and websites that are displayed on TV screens can use the HTML5stac platform to design their product. Pratap explains the benefits:
“Designing websites to be viewable on SmartTVs is quite a challenge, and we know that existing tools like Bootstrap don’t really factor in really large screens and the necessary UI changes that need to be effected. For example, if one were to adopt a purely CSS-based responsive approach to SmartTV websites, you’d end up with a very poor experience for a number of reasons.
Developers wanting to test HTML5stac during the private beta release are invited to sign up at the MobStac developer portal.
I received a notice from UPS on a package shipped from California. "Train derailment" was all it said.
So I have to backtrack to discover that a coal train derailed in Nevada on Sunday, damaging the tracks. The tracks weren't fixed until Tuesday, which delayed all other trains using these same tracks. My package arrived in Kansas today, where it was most likely unloaded from one of the delayed trains. My package will probably be arriving Monday rather than Friday.
At least I now know my package isn't one of many, strewn across some desolate wilderness somewhere amidst twisted and torn metal—most likely being pawed by bears or wooly men.
My colleague Abhishek Singh sent along a guest post to introduce a really important new feature for AWS Elastic Beanstalk.
You can now launch Worker Tier environments in Elastic Beanstalk.
These environments are optimized to process application background tasks at any scale. Worker tiers complement the existing web tiers and are ideal for time-consuming tasks such as report generation, database cleanup, and email notification.
For example, to send confirmation emails from your application, you can now simply queue a task to later send the email while your application code immediately proceeds to render the rest of your webpage. A worker tier in your environment will later pick up the task and send the email in the background.
A worker is simply another HTTP request handler that Beanstalk invokes with messages buffered using the Amazon Simple Queue Service (SQS). Elastic Beanstalk takes care of creating and managing the queue if one isn’t provided. Messages put in the queue are forwarded via HTTP POST to a configurable URL on the local host. You can develop your worker code using any language supported by Elastic Beanstalk in a Linux environment: PHP, Python, Ruby, Java, or Node.js.
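Concretely, the worker is just code behind an HTTP path. The framework-agnostic sketch below shows the contract (the message field names are an assumption, not the sample application's actual format): the daemon POSTs each SQS message body to your configured path, a 200 response tells it the message can be deleted, and any other status leaves the message on the queue for retry:

```python
import json

def send_confirmation(email):
    """Placeholder for the real mail call (e.g. via Amazon SES)."""
    print("Sending confirmation to", email)

def handle_customer_registered(body: bytes) -> int:
    """Process one queued registration message; return an HTTP status.

    200 -> Elastic Beanstalk deletes the SQS message;
    anything else -> the message stays on the queue and is retried.
    """
    try:
        msg = json.loads(body)
        email = msg["email"]
    except (ValueError, KeyError):
        return 400  # malformed message: surface an error, don't claim success
    send_confirmation(email)
    return 200

print(handle_customer_registered(b'{"email": "jane@example.com"}'))  # 200
```

Wiring this function up to the configured URL is the only framework-specific part; in the Python sample application that is an ordinary WSGI request handler.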
You can create a single instance or a load balanced and auto-scaled worker tier that will scale based on the work load. With worker tiers, you can focus on writing the actual code that does the work. You don't have to learn any new APIs and you don't have to manage any servers. For more information, read our new documentation on the Environment Tiers.
Use Case - Sending Confirmation Emails
Imagine you’re a startup with a game-changing idea or product and you’d like to gauge customer interest.
You create a simple web application that will allow potential customers to register their email address to be notified of updates. As with most businesses, you decide that once the customer has provided their email address you will send them a confirmation email informing them that their registration was successful. By using an Elastic Beanstalk worker tier to validate the email address and to generate and send the confirmation email, you can make your front-end application non-blocking and provide customers with a more responsive user experience.
The remainder of the post will walk you through creating a worker tier and deploying a sample Python worker application that can be used to send emails asynchronously. If you do not have a frontend application, you can download a Python-based front-end application from the AWS Application Management Blog.
We'll use the Amazon Simple Email Service (SES). Begin by adding a verified sender email address as follows:
- Log in to the SES Management Console and select Email Addresses from the left navigation bar.
- Click on Verify a New Email Address.
- Type in the email address you want to use to send emails and click Verify This Email Address. You will receive an email at the email address provided with a link to verify the email address. Once you have verified the email address, you can use the email address as the “SOURCE_EMAIL_ADDRESS”.
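Once the address is verified, sending the mail itself is a single SES call. The sketch below just assembles the arguments (the subject and body text are made up for illustration); the commented call at the bottom shows how they would be sent, assuming current boto3 rather than the boto library of the era:

```python
SOURCE_EMAIL_ADDRESS = "you@example.com"  # replace with your verified sender

def confirmation_email(to_address):
    """Assemble keyword arguments for an SES send_email call."""
    return {
        "Source": SOURCE_EMAIL_ADDRESS,
        "Destination": {"ToAddresses": [to_address]},
        "Message": {
            "Subject": {"Data": "Registration confirmed"},
            "Body": {"Text": {"Data": "Thanks for registering!"}},
        },
    }

# To actually send it:
#   import boto3
#   boto3.client("ses").send_email(**confirmation_email("jane@example.com"))
print(confirmation_email("jane@example.com")["Destination"])
```

Note that while your SES account is in the sandbox, recipient addresses must be verified too.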
Next, download and customize the sample worker tier application:
- Download the worker tier sample application source bundle and extract the files into a folder on your desktop.
- Browse to the folder and edit the line that reads “SOURCE_EMAIL_ADDRESS = 'email@example.com’” in the default_config.py file so that it refers to the verified sender email address, then save the file.
- Select all of the files in the folder and add them to a zip archive. For more details on creating a source bundle to upload to Elastic Beanstalk, please read Creating an Application Source Bundle.
Now you need to create an IAM Role for the worker tier. Here's what you need to do:
- Log in to the IAM Management Console and select Roles on the left navigation bar.
- Click the Create New Role button to create a new role.
- Type in WorkerTierRole for the “Role Name.”
- Select AWS Service Roles and select Amazon EC2.
- Select Custom Policy and click Select.
- Type in WorkerTierRole for the Policy Name, paste the following snippet as the Policy Document, and click Continue:
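The policy snippet referenced above did not survive into this copy of the post. As a stand-in, a minimal policy along these lines (an assumption, not the original; tighten the `Resource` entries for production use) gives the worker daemon the SQS access it needs and lets the application send mail through SES:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:SendMessage",
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:ChangeMessageVisibility",
        "sqs:GetQueueAttributes",
        "sqs:GetQueueUrl"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["ses:SendEmail", "ses:SendRawEmail"],
      "Resource": "*"
    }
  ]
}
```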
- Click Create Role to create the role.
You are now ready to create the Elastic Beanstalk application which will host the worker tier. Follow these steps:
- Log in to the AWS Elastic Beanstalk Web Console and click on Create New Application.
- Enter the application name and description and click Create.
- Select Worker for the “Environment tier” drop down, Python for the “Predefined configuration” and Single instance for “Environment type”. Click Continue.
- Select Upload your own and Browse to the source bundle you created previously.
- Enter the environment name and description and click Continue.
- On the “Additional Resources” page, leave all options unselected and click Continue.
- On the “Configuration Details” page, select the WorkerTierRole that you created earlier from the “Instance profile” drop down and click Continue.
- On the “Worker Details” page, modify the “HTTP path” to “/customer-registered” and click Continue.
- Review the configuration and click Create.
Once the environment is created and its health is reported as “Green”, click on View Queue to bring up the SQS Management Console:
Then click Queue Actions and select Send a Message.
Type in messages in the following format; click Send Message to send a confirmation email:
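The message format itself did not survive into this copy of the post. If you are following the email-sending scenario described above, a message is just a small JSON document whose fields match what the worker application reads; the field names below are an assumption:

```json
{
  "email": "jane@example.com",
  "name": "Jane Doe"
}
```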
This new feature is available now and you can start using it today.
-- Abhishek Singh, Senior Product Manager
Chrome OS could be going password-free thanks to a new API. Riot’s League of Legends data is now available through an API. Plus: Rainforest uses API for QA testers as an on demand service, and 12 new APIs.
Chrome to Try Password-Free Security System
Google is creating a password-free security system that allows trusted devices to unlock screens, and lets trusted apps wake machines as well. It is currently a proposed API dubbed chrome.screenlockPrivate.
Emil Protalinski at The Next Web cautions it will be some time before this nifty API sees the light of day:
“As its name implies, the API aims to let Chrome apps lock or unlock the screen on Chrome OS. It would also let apps monitor when the screen is locked or unlocked by other means as well as show messages to the user if an app decides not to unlock the screen for some reason.”
By opening up security to trusted devices, the API proposal explains, a Chrome device could be activated by a phone, ring, watch, or badge, among a list of almost limitless possibilities. Comments are collecting at a blog post by François Beaufort.
League of Legends Data Available Through Riot’s API
The online game League of Legends, created by Riot, is now making its data available through an API. And it’s looking to the community for feedback to further refine it.
Jeffrey Grubb reports at Venturebeat that it:
“tracks info about recent games, player statistics, and more. Developers can use that data to make new programs, websites, and apps based on what is happening in League of Legends.”
The name of the game is to boost user experience by unleashing the developer community to create apps for the game. The API, now in beta, was created after requests from both players and developers.
API News You Shouldn’t Miss
- Google is building password-free locking and unlocking into Chrome OS; use a phone, ring, watch, or badge instead
- Riot turns League of Legends data over to the community with new API
- Rainforest Launches On Demand Service That Uses API To Spin Up QA Testers For Web Sites And Apps
12 New APIs
Today we had 12 new APIs added to our API directory, including a UK advertising service, a bitcoin-to-gold exchange service, an asset-backed digital currency payment service, a Latin American e-commerce platform, and an automated deployment service for .NET. Below are more details on each of these new APIs.
Adzuna API: Adzuna is a UK based job, property and car advertising service. The Adzuna API allows users to incorporate Adzuna’s up-to-the-minute employment data to power user websites, and perform reporting and data visualizations. The API allows users to query to get ads, get employment information, get categories, and check which version is currently being used. The API uses REST calls, and returns JSON, JSONP, or XML. An account and API Key are required with service.
Coinabul API: Coinabul is the first service that allows users to purchase gold using bitcoins. Gold and bitcoin prices are updated every minute, ensuring that the prices buyers see on the website are the prices they’ll be paying. Coinabul uses special shipping infrastructure that includes insured shipping and nondescript packaging to ensure that orders arrive safely. The Coinabul API allows users to retrieve a data feed of bitcoin and gold prices, both in terms of each other and in U.S. dollars. Users can also place orders via API.
Evergreen API: Evergreen is an asset-backed digital currency that allows users to exchange euros for evergreens and use them as payment. The evergreens are exchanged at a 6:1 ratio with euros, and are backed by a basket of currencies and environmental investments. The Evergreen API uses REST calls and returns JSON. Users can make calls to interface between mobile apps and websites to manipulate wallets containing evergreens. An account is required with service.
KuroBase API: KuroBase is a cloud-based Database as a Service (DbaaS) that can be set up quickly and scales easily from shared to dedicated instances. KuroBase monitors database capacity and software status continuously, allocating, patching, and updating instances without causing any downtime or compromising end user experience. Users can schedule automatic database backups at any frequency they wish. KuroBase’s web standard interface allows users to develop and deploy their databases across multiple platforms without having to recode or use a client SDK. Developers can access their database records using the RESTful KuroBase API.
MercadoLibre API: MercadoLibre is an e-commerce platform for Latin American countries. The platform covers all aspects of online retail, including building an online store, offering a range of products and services for sale, advertising those products and services, and sending and receiving payments online. MercadoLibre offers a RESTful API for integrating the various aspects of its platform into custom applications.
OctopusDeploy API: OctopusDeploy is an automated deployment system for .NET. The scale of deployments can range in size: some deploy to a handful of servers, while the largest installations deploy hundreds of projects to hundreds of servers. The OctopusDeploy API uses REST calls, returns JSON, and is largely read-only aside from a couple of calls. An account is required with service.
Offline Geolocation API: Offline Geolocation allows cellphones to derive a location without being connected to the internet or data networks. The service is designed as a failover system when a user has no Internet connection, due to roaming or coverage issues, and GPS has not been turned on, or is not available indoors. The Offline Geolocation developer kit allows users to access a global database of mobile phone cell tower locations, along with data such as MCC, MNC, LAC and cellid information. The service is free to use.
Pearson Food & Drink API: As the publishing company behind Penguin Books, Financial Times, and multiple education businesses, Pearson is one of the world’s largest learning companies. Pearson provides developer access to information from its catalog through a series of APIs.
The Pearson Food & Drink API provides developer access to content from Pearson’s inventory of educational cookbooks. The API accesses more than 4,000 recipes, provides several search options, and supports JSONP callbacks.
PsychSignal API: PsychSignal allows users to create a customized list of stock symbols for PsychSignal to monitor using their proprietary sentiment engine. They track the emotions and attitudes of people across the internet towards a company and provide an array of charts, graphs, and statistics designed to show the conditions of both the general market and individual securities at any given time. Users can follow their chosen symbols on their PsychSignal dashboard or get alerts sent to them automatically by email or SMS when conditions change.
Screenshot Shark API: Screenshot Shark is a screenshot service that allows users to build screenshot features into applications as well as make calls to capture websites. The base level subscription comes with 4,000 unique renders and 1,000,000 requests. The API requires an account, and an API key is used for authentication. The site offers several vendor kits and will create custom libraries for clients.
Twinword Language Scoring API: Twinword creates tools to collect data describing the ways in which people associate concepts. The tools include word association tests, the word graph dictionary, e-commerce recommendations, document detection, and more. Twinword provides a series of APIs that expose some of these tools.
The Twinword Language Scoring API is able to evaluate a word or string of text for difficulty. Developers pass a word or string of text with a simple GET call, and the API responds with a difficulty score and value.
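A sketch of that round trip follows; the host, path, and response field names here are assumptions for illustration, so consult Twinword's documentation for the real ones:

```python
import json
from urllib.parse import urlencode

def scoring_url(text):
    """Build the GET URL for a (hypothetical) scoring endpoint."""
    return "https://api.twinword.com/language-scoring/?" + urlencode({"text": text})

def difficulty_score(response_json):
    """Extract the difficulty score from a hypothetical JSON response."""
    return float(json.loads(response_json)["score"])

print(scoring_url("perspicacious"))
print(difficulty_score('{"score": "8.2", "value": "difficult"}'))  # 8.2
```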
Twinword Topic Tagging API: Twinword creates tools to collect data describing the ways in which people associate concepts. The tools include word association tests, the word graph dictionary, e-commerce recommendations, document detection, and more. Twinword provides a series of APIs that expose some of these tools.
The Twinword Topic Tagging API uses contextual language understanding to deliver programmatically generated tags for any given string of text. Developers simply pass a GET request containing up to 200 words or 3000 characters of text, and the API responds with a list of keywords and topics weighted for relevancy.
Our API directory now includes 58 VoIP APIs. The newest is the Rebtel Voice Platform API. The most popular, in terms of mashups, is the Broadsoft Xtended API. Below you’ll find some more stats from the directory, including the entire list of VoIP APIs.
For reference, here is a list of all 58 VoIP APIs.
3NGNetworks API: VoIP services provider
4PSA VoipNow API: Integrated communications system
8×8 Click to Dial API: Contact Center Outbound Call Service
8×8 CRM API: Contact Center Customer Management Service
8×8 External IVR API: Contact Center IVR Server Interaction Service
8×8 Recordings API: Contact Center Recording Interaction Service
8×8 Reporting API: Contact Center Statistics Reporting Service
8×8 Web Callback API: Contact Center Web Callback Service
AIM Phoneline API: Voice Over IP Internet telephony services
Alianza API: Cloud-based voice communication service
AOL Instant Messenger API: Instant messaging chat service
Azuralis API: Norwegian hosted telecommunication service
Broadsoft Xtended API: VoIP telephony services
Cloudvox API: Voice application platform
Cloudvox Digits API: Telephone number look up service
DIDWW API: Phone and call forwarding service
DIDx API: Global DID exchange
Digium Switchvox API: Unified Communications Service
EuroIAX API: VoIP services provider
FCC Form 499 Filer Database API: Common carrier reporting information service
foneAPI API: VoIP and internet telephony service
Global IP Sound API: Voice Processor for VoIP Services
grnVoIP API: VoIP Service
innovaphone PBX API: Hosted VoIP telephony service
Junction Networks Web Services API: Business VoIP Services
Lypp API: VoIP Callback & Teleconferencing API
Massphoning API: Mass Calling Service
MorrisComGroup API: VOIP Communication Service
MumbleBoxes API: Voice over IP for gaming
MyDivert API: International VoIP and call forwarding service
NovelASPect API: Application hosting services
OnSIP VoIP API: Telephony service
OnSIP XMPP API: Real-time notification service
Open Voice API: VoIP telephony services
Orange Click-to-Call API: Inactive – Click to call Internet voice services
PennyTel Open API: VoIP provider in Australia
Phaxio API: Fax delivery service
Rebtel API: App communications on the cloud
Rebtel Voice Platform API: International calling platform
Ribbit API: Flash based VoIP service
Setu Infocom API: VoIP callback and click-to-call service
SimpleVox API: Cloud-based VoIP solution
Simwood eSMS API: SMS Service and Management Tools
Sipgate API: Internet telephone service provider
SIPmly API: Voice over IP service
Sippy SoftSwitch API: VoIP network management platform
Speap API: International phone service
SureVoIP API: VoIP service provider
Sylantro API: Hosted VoIP services
Synergy-IPV API: Voice over IP service
Tall Umbrella API: VoIP notifications service
Telsolutions API: Multi-channel communications services
TringMe API: VoIP telephony services
Voice-Jump API: VOIP Communication Service
Voicebuy VoIP API: VoIP service
VoiceMeUp API: VoIP service provider
Voxbone API: VoIP provisioning service
woopla API: Automatic alert calls
As we left our hotel in Addis Ababa the final morning of an amazing trip to Ethiopia with charity: water and Will Smith, we headed to the local market. Goods of every kind were offered: artwork, scarves, jewelry, clothing, housewares, and so on.
At the market, a boy, probably 8 or 9 years old, began following us. The right side of his face was badly disfigured, as if it had been burned by fire. He wasn’t shy and immediately started asking for things in very broken English. Money was the first request, and we hesitated to hand any out, as we were informed that doing so had the potential to create chaos in a busy marketplace.
I had on a small backpack, and soon this young boy began pointing to it. In broken but comprehensible English, he simply said, “Food, please.”
The first few times he said it, I couldn’t figure out why he was thinking I had food. And then I realized that just before we left the car to tour the market, I had placed a large, clear ziplock bag with several food items in it—nuts, granola bars, beef jerky—in the outer mesh pocket of the bag. It was easily visible to anyone.
“Food, please” he continued. “Food, please.”
Again I was hesitant to offer any, as it had the potential to create a swarm of children around us if not done carefully. At this point, we were done shopping and needed to leave for the airport. He continued to follow me. Suzanne and I returned to the car. I was stuck in a quandary. I knew if I left that little boy without giving him something, my conscience would haunt me unceasingly in the coming days and weeks.
Literally as our driver began to pull away, I quickly removed the bag from my backpack, rolled down the window, and handed the food to that young boy. By now he had a companion with him, about the same age, and probably just as famished. As soon as he perceived I was handing the bag to him, he snatched it as quickly as he could.
As we drove off, I watched the two of them run to an alley in the marketplace. They disappeared behind one of the stores, undoubtedly to devour their gain. I had a difficult time holding back the tears as I contemplated what had just taken place. How grateful I am, as is my conscience, that I didn’t stay my hand that day.
Our trip to Ethiopia could not have been filled with more insight into the lives of people in Tigray, more experiences to be touched and affected by the people of the area, and more opportunities to see just how blessed many of us are.
But there is something even more essential than food; even more vital to the famished. It is water.
We’re so close to surpassing $100,000 raised for clean water. I come to you with hat in hand, requesting your help one last time. This Monday I’ll be attending the 2013 charity: ball. How incredible it would be to personally thank Scott Harrison on your behalf for allowing us to participate in the global fight for clean, safe drinking water.
I’ve just contributed another $500 of my personal funds to our campaign. If you can do the same, please join me. If not, any amount you can afford will do amazing things for clean water.
Merry Christmas to all, and may clean water be one of the greatest gifts we give this year.
In a rare API conference event appearance, Twitter graced the stage at last week’s APIDays Paris. The social giant shared some insight into current API usage among third-party developers and gave some read-between-the-lines signs of how it intends to work with API partners in future.
Paris may have been a strategic location for Twitter to present on its API, given its absence from other recent API-specific conferences including API Strategy and Practice, API World and even the Twilio conference, which had been able to snare some notable keynote presenters for their November API leadership event. Romain Huet, from Platform Relations at Twitter, spoke last week about how Twitter is “Connecting to the pulse of the planet” in a decidedly non-U.S.-focused presentation.
Acknowledging that the majority (77%) of Twitter users are located outside of the United States, Huet walked the audience through some of the social media giant’s recent international engagement achievements. In particular, he referenced how Twitter was one of the only on-the-ground news sources during the Turkey riots, and was a key media player helping connect relief efforts to areas of need in the recent Philippines disaster. Keeping the global theme, Huet went on to provide current examples of how European and UK businesses are making use of Twitter APIs in their business model.
Reading between the lines of the presentation, the take-home message seems to be that API developers wanting to make use of Twitter’s data “firehose” are best placed to create high value-add products for specific target audiences. The move away from encouraging developers to create Software-as-a-Service or other truly scalable solutions based primarily on the Twitter API is now complete. Developers, it seems, are encouraged instead to identify opportunities that access Twitter data streams in realtime to provide valuable insights for specific industry verticals.
While some examples showcased by Huet did have a scalable component to them, for the most part the Twitter rep seemed to be saying that using the Twitter APIs to create an automated service is not what the company is looking for in new API partnerships. Following the model of Pinterest’s recently released APIs, Twitter’s newest API product, custom timelines, is only available by submitting an application detailing how prospective partners would like to use the API to add value to their existing industry relationships, rather than being provided as an open API that encourages the creation of a slew of new third-party apps.
How companies are using the Twitter API
Huet pointed to several examples of how the Twitter API is being used by global startups.
Vigiglobe: Vigiglobe uses social media analysis, linguistics algorithms and data mining to create real-time dashboards and analytics products for brands, sports, political observers, TV broadcasters and other specific verticals. The business model seems to be based on a consultancy fee structure.
Electionista: Electionista aims to help users understand political trends by analyzing political tweets in real time against other data mining to provide context. While a scalable, automated service is available via its website (with a pro version also creating a business revenue stream), the bulk of Electionista’s business model seems to be based on providing project-based data-driven products, consultancy and training services for specific customers, including party election teams and media.
Comenta: Comenta targets TV and live event broadcasters. It offers a paid service to customers that allows them to enhance engagement during live events, sports, performances and broadcasts of specific TV programs. Like Vigiglobe and Electionista, the service uses the Twitter Streaming API to provide higher-level, sophisticated analysis of social media conversations alongside other data mining to create analytics tools and visualizations for clients.
None of these services has a business model that primarily focuses on scaling an automated service from the Twitter API (although Electionista’s Pro service has that option in part, it seems it is just as much about creating a gateway service to build relationships with potential consultancy clients). Instead, each is focused on targeting a particular market segment for detailed analytics, using the Twitter Streaming API to provide real-time analysis, often when combined with other data sets that provide a greater context to what is happening.
Continuing a global case study theme, Huet also showed how UK startup Style on Screen is using the Twitter API. Here, monetization appears to be from affiliate sales links for product details shared via Twitter (we have requested confirmation of this business model from Style on Screen, but at time of publication haven’t heard back from the startup). When a TV viewer wants more information about the fashions worn on their favorite TV program, they can tweet Style on Screen for more product details and will be sent a reply link via Twitter indicating where the item can be purchased.
In a life-imitating-art moment, Huet gave an example of how an outfit worn by a character on The Mindy Project drove new retail clothing sales based on the Twitter API-enabled Style on Screen app. Some of the writers of The Mindy Project previously worked on 30 Rock, where the lead character once quipped that she wanted an app to buy things she saw on TV: “Like if you’re watching Sex and The City and you just have to have Mr. Big’s spaghetti.”
How developers can position themselves to maintain Twitter API access
Huet described the Twitter APIs as having two distinct purposes: “Streaming APIs help you ingest what is happening right now, while our REST APIs allow you to perform actions and review what has happened previously on the platform.”
Huet pointed to specific parameters that developers can use with the Streaming API. For example, the follow parameter filters the stream to tweets from specific accounts by Twitter user ID, while track filters by keyword or hashtag. Developers can also use the locations parameter to return only tweets sent within a defined set of geographic bounding boxes. The Twitter Streaming API also includes the entities attribute, which returns structured data on mentions, retweets, and un-shortened URL links (for more details on how to use Twitter APIs, check out our recent Twitter API tutorials).
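To make the parameters concrete, here is a minimal sketch of assembling a request for Twitter's documented v1.1 statuses/filter streaming endpoint. The endpoint and the follow/track/locations parameter names come from Twitter's Streaming API docs; the helper function and example IDs are purely illustrative (authentication is omitted entirely).

```python
# Sketch of building a filtered-stream request for Twitter's v1.1
# Streaming API. Only parameter assembly is shown; OAuth signing and
# the actual long-lived HTTP connection are left out.

FILTER_URL = "https://stream.twitter.com/1.1/statuses/filter.json"

def build_filter_params(user_ids=None, keywords=None, bounding_box=None):
    """Assemble POST parameters for a filtered stream.

    user_ids     -> 'follow'    (comma-separated Twitter user IDs)
    keywords     -> 'track'     (comma-separated keywords/hashtags)
    bounding_box -> 'locations' (lon,lat pairs: SW corner then NE corner)
    """
    params = {}
    if user_ids:
        params["follow"] = ",".join(str(u) for u in user_ids)
    if keywords:
        params["track"] = ",".join(keywords)
    if bounding_box:
        params["locations"] = ",".join(f"{c:.2f}" for c in bounding_box)
    return params

# Example: track a hashtag from two accounts within greater Paris.
params = build_filter_params(
    user_ids=[783214, 6253282],
    keywords=["#apidays"],
    bounding_box=[2.22, 48.81, 2.47, 48.91],
)
print(params)
```

Multiple values for the same parameter are comma-separated in a single field, which is how the Streaming API expects compound filters to arrive.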
For developers who want to analyze all Twitter conversations instead of starting with a set of parameter filters, it is possible to conduct your own big data analysis by streaming what Huet calls “a sample of the firehose” (internally, the Twitter API team has been calling this the ‘garden hose’). This is a “digestible amount of big data” (1% of all tweets per day are provided via this API feature). Finally, Huet did indicate that some partners – like Topsy Analytics and DataSift – are able to access “the full firehose” of all tweets, but these are specifically arranged business deals unlikely to be opened up to anyone who wants to have a play with big data and the social media platform.
Huet recommended API developers familiarize themselves with github.com/twitter for open source contributions of how people are streaming Twitter data.
Huet also pointed to the new custom timelines API product, now in private beta release. He encouraged developers attending APIDays who had a clear use case in mind to submit an application for access to this new data service. Similar to how Pinterest is opening its API to select partners, the approach seems to be that applicants need to first demonstrate where they are positioned in their specific industry vertical and explain how they plan to make use of the API service.
Entrepreneurs will need to assess how to share this information while possibly still maintaining first-mover advantage, although the focus seems to be on working with established brands that are looking for additional engagement strategies. Given the theme of Huet’s talk, demonstrating global reach, or pairing use of the custom timelines feature with markets outside the United States, may be worth emphasizing in an application. Again, developers applying for access may want to focus on how they would use the functionality to provide greater value to their existing customer base, or to deepen analytics insights in specific verticals, rather than creating a Storify-type automated service that can scale.
The session ended with what we expect to be a 2014 conference trend of finishing your presentation with a crowd-pleasing demo of how to use an API to connect your data to a drone.
With other discussions at APIdays focusing on exploring what exactly is the true nature of open APIs, the partnership model being proposed by Twitter (and Pinterest) may be the way some bigger API providers will move to provide “open” access to their data stream in the future. In such cases, understanding how the API provider is framing the supply of data via API will be critical in submitting an application for access that can demonstrate alignment with the API provider’s future business goals.
The vast majority of analytics software is going to be consumed within another application, rather than exist as a stand-alone application. Recognizing this fact, analytics application vendors are racing to deliver robust sets of APIs around their applications that make it easier for developers to embed those capabilities within their applications.
A leading advocate of this approach to analytics has been KXEN, a provider of predictive analytics software that was acquired earlier this year by SAP. Rather than simply burying KXEN within the much larger SAP application portfolio, SAP today reiterated the KXEN commitment to providing an open API.
KXEN is already available as an application that can be easily embedded within, for example, a Salesforce.com application or a Teradata warehouse. Shekhar Iyer, SAP global vice president of business intelligence and predictive analytics, says SAP is committed to embedding KXEN within its own applications and allowing developers to continue to embed KXEN within their own applications.
SAP today announced SAP InfiniteInsight, which packages KXEN with existing SAP analytics and visualization tools. But the overarching goal, says Iyer, is to provide a seamless experience by providing access to analytics at the point of application consumption, rather than having to always invoke a separate analytics application that sports a completely different user interface. Ultimately, those analytics capabilities will both be embedded within the application and delivered as a cloud service.
The implication of those kinds of capabilities is significant for organizations of all sizes. Business executives will have access to relevant information on demand, instead of having to rely on an analyst to query a dedicated data warehouse to find an answer, which might take hours or days. As predictive analytics becomes more accessible to the average user, business analysts will be required to increase their skills to take on more of the role of a data scientist, a role for which there is currently a chronic shortage in most organizations. The end result should be data scientists discovering deeper, more meaningful business insights, while on a daily basis the average end user is invoking predictive analytics to make more reliable business decisions.
In fact, a recent survey of 309 business end users conducted by research firm LoudHouse on behalf of SAP found that 90 percent believe that predictive analytics provides a competitive advantage for their business. Sixty percent of those surveyed said predictive analytics is a major IT priority. A full 80 percent said they believe predictive analytics will be a crucial investment within the next five years.
This new generation of data-driven applications will be taking advantage of predictive analytics to leverage the massive investments being made in Big Data. This approach promises to cost-effectively give organizations direct access to all the raw data they need using platforms such as Hadoop or Cassandra, versus the subsets of data that most organizations can practically store in data warehouses built on a SQL relational database.
Tasktop, provider of a framework that leverages APIs to integrate application lifecycle management (ALM) tools, is promoting the concept of a software integration lifecycle bus. Providing capabilities similar to what an enterprise service bus (ESB) does for applications, Tasktop is making a case for the integration of ALM tools both within and beyond an organization.
Img Credit: Tasktop.com
With the release today of Tasktop Sync 3.0, the company is extending that integration to IT operations management in the form of integration with IT service management (ITSM) and helpdesk tools—initially in the form of offerings from ServiceNow and Atlassian, respectively.
In addition, Tasktop is extending its ALM support in this release to include integration with ALM platforms from Serena Software and Rally Software. Tasktop is also prepping a beta program for a Tasktop Sync SDK, and has released Tasktop Configuration Templates that define configurations of synchronizations that can be reused across multiple projects. As part of that simplification effort, Tasktop has previously launched an open source project through which run-time code can be directly embedded within an ALM framework.
According to Tasktop CEO Mik Kersten, the issue that many organizations have today is that an incredible number of known issues with applications and systems are getting rediscovered every day. This situation exists because there’s no mechanism in place for passing that information between disparate ALM and IT operations management tools, says Kersten.
Rather than trying to integrate all that information at the database level, Kersten says, Tasktop Sync provides an integration platform that leverages APIs and Web services to seamlessly integrate all the information required to create a truly comprehensive approach to managing DevOps.
The primary benefit of Tasktop Sync is that it doesn’t force IT organizations to adopt one set of tools over another. Each unit within the enterprise or outside it can continue to use whatever set of tools they like best, while being able to share all the relevant information needed with other release management or IT operations teams.
Organizations are obviously in different states of maturity when it comes to DevOps. But it’s clear they are all heading in the same general direction. The challenge they face is deciding between a forklift approach to DevOps that makes a giant leap forward, or taking a more gradual approach that doesn’t introduce as much of a shock to the software delivery system within their organization.
I’m quite fond of my dear comrade NKM. Yes, yes, seriously. Granted, I don’t always agree with her, but in her there is both a political animal and a completely non-political animal, and that mix of genres is sometimes very endearing. But not always. Today, for example, she has churned out a press release, “Emergency measures for air quality in Paris,” that feels very much like Sarkozy-drafting-a-law-after-every-daily-news-event...
NKM thus proposes two emergency measures to fight this week’s deterioration of air quality in Paris. Fine, Paris is polluted, but come on, we’re hardly Beijing or Shanghai... So let’s comment on these two measures:
- Immediately stop all acquisition of new diesel vehicles, and limit the city government’s use of diesel vehicles as much as possible
Sure, sure. Fine. How many vehicles pass through Paris each day, and how many belong to the city and its municipal services? Of those municipal vehicles, how many are new ones meeting the latest environmental standards, and how many are old and dirty? How would the increase in fleet value be financed if it gradually switches to electric? What are the problems tied to the range of electric vehicles? For gas, gasoline, or hybrid vehicles, what is the extra cost of changing fuel? How would all of this be paid for? And above all, how does this “emergency measure” expect, for even a second, to seriously impact the quality of Paris’s air “in an emergency”?
- Implement a ZAPA (priority action zone for air), which the city executive refuses to do, in order to ban the most polluting trucks and tour buses from entering Paris
Why not... But what is the projected impact on commerce, deliveries, and industry? What is the impact on tourism, and where are the tour buses banned from central Paris supposed to park or wait? Which disadvantaged suburb is going to accept the fine particles that the Parisian bobos no longer want? How will the affected tourists be transported between the city gates and their destinations? And their luggage? What will the impact, the extra load, be on public transit? Are the relevant stations equipped to handle hundreds of tourists armed with big suitcases? What is the impact on our city’s image?
Well, all of this looks to me like a symptom of a well-known disease: press-release cystitis. Fleshed out, studied, detailed, and planned, these mini-measures might make a bit of sense. But as they stand, this is mostly showboating of little interest, sorry.
Nota bene: I hope the guy who ends up managing this “ZAPA” will be named Frank
In the early days of Twitter, occasionally when you tried to log in you got a “fail whale” – a picture of a whale held up by a flock of birds — as a way of telling you the system was overcapacity, and you should simply try again later.
We tolerated it then because, after all, it was only social media. But there are certain types of services, for example, financial exchanges, where you never want a fail whale. So if you have thoughts of building a Bitcoin exchange, where people can buy and trade the popular virtual currency in real-time, you want an infrastructure that can handle high volume trading spikes without going down.
That’s the point Todd Greene, CEO of PubNub, drives home when he talks about his company’s latest prepackaged solution: a toolkit for building Bitcoin exchanges. For those unfamiliar, PubNub offers data push and other building blocks for real-time apps through its PubNub API. (Another similar service is Pusher.)
Greene says that PubNub got the idea for its latest kit when it noticed Bitcoin exchanges launching in local markets everywhere. Financial exchanges have unique challenges. They need to be able to handle real-time prices and trades as they happen. With its 13 data centers relaying messages from different apps around the world, PubNub says its service is uniquely designed to handle those types of challenges.
PubNub says its Bitcoin kit offers everything developers need to build a reliable, scalable exchange. “The kit is simply a way that we have taken core building blocks of PubNub and added the visual components needed for a trading exchange. Our solution allows developers to easily bring to market a solution that is going to be rock solid,” Greene said.
The visual components in the kit show Bitcoin current, high, and low prices, along with a graph of price fluctuations over time. One widget shows the most recent trade and social widgets allow for interactions with other exchange users. All of the visuals in the template are powered by PubNub building blocks.
Among those building blocks are data push for delivering real-time numbers and the PubNub Access Manager for streaming private trades to individual traders. Another building block, storage and playback, lets users store streams of data for reviewing trade histories and see price change histories of individual currencies. (For example, the price history of a Bitcoin against the ruble, dollar, yuan and so on.) And finally, the last PubNub building block, presence, lets exchange users see how many people are currently trading, or if you have social features on your trading site, such as chat or customer support, it shows you who is signed on so you can interact with them.
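To illustrate the kind of data that flows through those building blocks, here is a small sketch of a trade tick an exchange might publish over PubNub's data push, plus a helper that maintains the last/high/low figures the kit's price widgets display. The channel name and payload fields are assumptions for illustration, not the actual schema of PubNub's Bitcoin kit.

```python
# Illustrative only: a trade-tick payload and the running price stats
# a Bitcoin exchange's widgets would render. In production, on_message
# would be wired up as the PubNub subscribe callback.
import json
import time

CHANNEL = "btc_usd_trades"  # hypothetical channel name

def make_trade_tick(price, volume, ts=None):
    """Build the JSON payload for one executed trade."""
    return json.dumps({
        "pair": "BTC/USD",
        "price": price,
        "volume": volume,
        "ts": ts if ts is not None else int(time.time()),
    })

class PriceBoard:
    """Tracks last/high/low, mirroring what the kit's widgets show."""
    def __init__(self):
        self.last = self.high = self.low = None

    def on_message(self, raw):
        tick = json.loads(raw)
        price = tick["price"]
        self.last = price
        self.high = price if self.high is None else max(self.high, price)
        self.low = price if self.low is None else min(self.low, price)

board = PriceBoard()
for raw in (make_trade_tick(812.5, 0.4, ts=1),
            make_trade_tick(809.0, 1.2, ts=2),
            make_trade_tick(815.3, 0.1, ts=3)):
    board.on_message(raw)
print(board.last, board.high, board.low)
```

Storage and playback would replay exactly these payloads to rebuild a price history, which is why keeping ticks as small, self-describing JSON messages is a sensible design choice.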
PubNub’s Bitcoin solution is one of many the San Francisco startup is releasing in an effort to target various markets. Last month, for example, PubNub released a WebRTC solution for creating Skype-like voice and video in the browser.
“We solve a huge number of business problems,” said Greene. “By putting together kits, we make it easier for developers to connect the dots themselves.”
One Hour Translation launches API for its Translation Memory Cloud (TMC). The G Adventures API integrates small group tours with third party apps. Plus: SmartBear Software Acquires Lucierna, and 11 new APIs.
One Hour Translation’s API Brings Developers Together
One Hour Translation, the human translation service for business, has integrated with Google Play and offers an API for its Translation Memory Cloud (TMC). The goal is efficiency: with the TM API, developers can translate string by string, and these strings are collected more efficiently and localized in context.
Oren Yagev, cofounder and vice president, pointed out that the average localization cost is less than $100 per language:
The important thing about our new TM API is future application text updates. By using the API, developers can easily translate only the strings that are not already translated, thus saving time and money when the application is updated.
The API documentation covers everything needed to get going, from API keys to client libraries and more.
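The "translate only what's new" idea above amounts to diffing an app's current strings against the translation memory before anything is sent for translation. A minimal sketch of that client-side bookkeeping (the TM API call itself is not shown; the string keys and values are made up for illustration):

```python
# Given a translation memory (strings already translated) and the app's
# current strings, select only the untranslated ones to send off.

def strings_to_translate(current_strings, translation_memory):
    """Return only the strings not already in the translation memory."""
    return {key: text
            for key, text in current_strings.items()
            if key not in translation_memory}

memory = {"greeting": "Bonjour", "save": "Enregistrer"}
app_strings = {"greeting": "Hello", "save": "Save", "share": "Share"}

pending = strings_to_translate(app_strings, memory)
print(pending)  # only the new string needs to be paid for
```

On each app update, only the `pending` set would be submitted, which is exactly the time-and-money saving Yagev describes.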
G Adventures API Launches Small Group Tours to an App Near You
G Adventures specializes in getting off the beaten path to put travelers up close with local culture, and in doing it in a sustainable way. The goal of G Adventures transcends tourism; their mission is to change people’s lives. The G Adventures API gives developers access to trip information.
The documentation for the REST API covers the usual items a developer comes to expect, like auth keys, tour and booking resources, use cases, and libraries. But it also discusses something new, the Sherpa Agency Code. That may sound vaguely like cloak-and-dagger speak, but it is a G Adventures-specific agent identifier, issued to every agency authorized to book G Adventures tours.
API News You Shouldn’t Miss
- Personal Finance Interview with Wendell Santos on the Internet of Things
- One Hour Translation Integrates with Google Play Offering a New Translation Memory API to Application Developers
- SmartBear Software Acquires Lucierna | Business Wire
- Home appliance makers connect with open source ‘Internet of things’ project
11 New APIs
Today we had 11 new APIs added to our API directory, including an online appointment booking service, a Facebook SDK enhancements service, an online database of ant species images and information, and a QR code creation and scanning service. Below are more details on each of these new APIs.
Acuity Appointment Scheduling API: Acuity Appointment Scheduling is an online appointment booking service that can be accessed via mobile devices by both users and clients. Clients can book their own appointments or classes online and even pay in advance. Acuity users can accept appointments and payments through their own websites or through the free scheduling pages Acuity provides. They may also create discount codes and gift certificates for appointments through Acuity. Acuity Appointment Scheduling automatically accounts for timezone differences, records appointments in Google, notifies users of new appointments, and sends reminder emails to clients.
Acuity users can easily export data, such as client information and appointment schedules, from the service. Client information that has been collected can be used to create custom forms that will take less time to fill out. Acuity can also generate reports about appointments, revenue, staff, or locations over any time range. Developers can interact with their appointment schedule programmatically via REST API.
Amplifier API: Amplifier is an extension of the Facebook PHP SDK, distributed through a user account on Packagist. It was designed to cut down development time on common tasks, such as checking whether a person has liked a page or granted a particular extended permission. The API information is available through the Packagist account, and the code is hosted on GitHub.
AntWeb API: AntWeb is a community driven, online database of ant images, specimen records, and natural history information. The site’s goal is to publish high quality images of all of the world’s ant species. AntWeb exposes its data to the public through its API. The API allows developers to programmatically query for specimens by taxonomy, specimen code, decimal coordinates, or by days since the specimen was entered into the database.
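As a rough sketch of what a programmatic specimen query might look like, the snippet below builds a query URL from the filters the entry mentions (taxonomy, days since entry). The base URL and parameter names here are assumptions for illustration and should be checked against AntWeb's actual API reference before use.

```python
# Hedged sketch of an AntWeb-style specimen query. Only URL construction
# is shown; no network call is made.
from urllib.parse import urlencode

ANTWEB_API = "https://www.antweb.org/api"  # assumed base URL

def specimen_query(genus=None, species=None, days_since_added=None):
    """Build a specimen query URL from the supported filters."""
    params = {}
    if genus:
        params["genus"] = genus
    if species:
        params["species"] = species
    if days_since_added is not None:
        params["daysSinceAdded"] = days_since_added  # hypothetical name
    return f"{ANTWEB_API}/specimens?{urlencode(params)}"

url = specimen_query(genus="Camponotus", species="herculeanus")
print(url)
```

A client would then issue a plain GET against this URL and parse the returned specimen records.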
BeeTagg QR API: BeeTagg is a mobile tagging service that allows users to tag physical objects and link them with internet resources. Users can create, organize, and track QR codes in a variety of formats. BeeTagg works on many platforms and has excellent detection capabilities. Developers can access BeeTagg’s functions for creating, editing, managing, and rendering codes programmatically using the BeeTagg QR API.
G Adventures API: G Adventures provides small group tours around the globe with a focus on delivering authentic adventures in a responsible and sustainable manner. The G Adventures API provides real-time developer access to trip information. This information can be used to create trip research and booking solutions.
Privacy CA API: The Trusted Computing Group defines specifications that allow computers to exchange information about their hardware and software configurations via the internet. This allows computers to trust each other more, but it also raises some concerns over privacy. Privacy CA allows users to protect their privacy while still establishing remote trust by letting them obtain certificates verifying their trustworthiness. There are three levels of certification that Privacy CA offers and validates. Certificates can be requested and validated programmatically via REST API.
Silobreaker API: Silobreaker provides workflow support and content management services for all kinds of users, including corporate, government, and military organizations. It provides both front-end and back-end services for handling structured and unstructured content. Silobreaker can aggregate content of many kinds from both internal and external sources and then analyze this content using semantic and statistical text mining services. Results can be served in a variety of ways. Users may also search through content using advanced filters to reduce the number of results and improve their relevance.
Twinword Category Recommendation API: Twinword creates tools to collect data describing the ways in which people associate concepts. The tools include word association tests, the word graph dictionary, e-commerce recommendations, document detection, and more. Twinword provides a series of APIs that expose some of these tools.
The Twinword Category Recommendation API is a web service for e-commerce sites. Developers use a simple GET call to pass a product category, and the API responds with a weighted list of related categories. The API can also be mapped to partner product categories.
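To make the request/response shape concrete: pass a product category in a GET call and receive a weighted list of related categories back. The endpoint URL, parameter name, and response fields below are assumptions sketched from the description above, with a mocked response standing in for the live API.

```python
# Illustrative sketch of the category-recommendation flow: build the
# GET request, then rank the weighted categories in the response.
from urllib.parse import urlencode

ENDPOINT = "https://api.twinword.com/category/recommend/"  # assumed URL

def build_request(category):
    return f"{ENDPOINT}?{urlencode({'category': category})}"

def top_related(response, n=2):
    """Pick the n highest-weighted related categories from a response."""
    ranked = sorted(response["related"], key=lambda c: c["weight"],
                    reverse=True)
    return [c["name"] for c in ranked[:n]]

# A mocked response in the shape the entry implies.
mock = {"related": [{"name": "camping", "weight": 0.61},
                    {"name": "hiking boots", "weight": 0.83},
                    {"name": "rain gear", "weight": 0.47}]}
print(build_request("tents"))
print(top_related(mock))
```

An e-commerce site would map the returned names onto its own product taxonomy, which is the partner-category mapping the entry mentions.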
Twinword Visual Context Graph API: Twinword creates tools to collect data describing the ways in which people associate concepts. The tools include word association tests, the word graph dictionary, e-commerce recommendations, document detection, and more. Twinword provides a series of APIs that expose some of these tools.
The Twinword Visual Context Graph API is able to deliver diagram information for visualizing concepts or creating mind maps. Developers pass a word with a simple GET call, and the API responds with a structured list of contexts and word relationships.
Twinword Word Association Quiz API: Twinword creates tools to collect data describing the ways in which people associate concepts. The tools include word association tests, the word graph dictionary, e-commerce recommendations, document detection, and more. Twinword provides a series of APIs that expose some of these tools.
The Twinword Word Association Quiz API delivers customized word association quizzes intended for games and e-learning software. The API allows calls to specify a level of difficulty for the quiz, as well as a grade or exam (SAT, GRE, GMAT, etc) upon which to base the quiz.
Validas API: Validas is a service for mobile operators and resellers that helps select the best plans for customers who are upgrading devices or switching operators. They provide a Customer Data Acquisition API, which imports and normalizes customers’ billing data. From their wireless bill, Validas extracts the current plan’s features and contracts, details on the user’s device, current usage data, and the user’s demographic information.
drchrono, a leading cloud-based Electronic Health Record (EHR) platform provider, has announced the launch of the drchrono API, which will allow developers to create applications that enhance the drchrono platform as well as third-party healthcare industry applications.
The drchrono platform was originally created to remind patients about their appointments. However, over time, users requested many additional features, and the platform was expanded to include Electronic Health Record access, scheduling, patient reminders, and a billing system.
drchrono offers free and paid accounts and also develops free EHR and healthcare apps for iPad and iPhone, such as drchrono Mobile EHR and OnPatient Medical Record PHR. drchrono Mobile EHR is an app that connects doctors with patients, allowing healthcare providers to manage their practices anywhere and at any time. OnPatient is an app that allows patients to complete doctor check-in forms (created with the drchrono platform) using their iPad, iPhone, or the web. The OnPatient app also allows patients to share their medical information with doctors, schedule appointments, view medical bills, and perform other healthcare tasks.
When it comes to technological innovation, the healthcare industry lags far behind most other industries due to government regulations and the “walled gardens” of healthcare data providers. Most healthcare data providers charge developers exorbitant fees for access to healthcare data APIs or deny developers access because their applications do not yet have a large number of users.
Technological innovators have largely avoided developing healthcare applications and other healthcare systems due to strict regulations and the difficulties in accessing healthcare data. Daniel Kivatinos, COO and co-founder of drchrono, spoke with ProgrammableWeb and explained:
“There is something very wrong when healthcare companies charge developers to get access to an API, this stifles innovation. Developers also get the question, how many users do you have? If the developer doesn’t have users, most healthcare companies won’t talk to them or allow an engineer access to an API to build on. This question also stifles innovation. We at drchrono are removing both of those barriers with our API. We want developers to be able to build on a healthcare platform without an access setup fee. Also developers should be able to access an API with or without users, we are allowing developers to build on top of drchrono without asking for user numbers.”
These technological obstacles have left the healthcare industry in a state of “innovative dysfunction,” where doctors are often left using antiquated healthcare software incapable of connecting to mobile applications and external healthcare systems. The debacle of the Healthcare.gov website launch is a high profile example of the innovative dysfunction of the healthcare industry as well as the need for government IT procurement reform.
The launch of the drchrono API gives developers, DICOM vendors, physicians, medical billing companies, and other healthcare industry players the opportunity to create applications that enhance the drchrono platform, as well as applications that benefit doctors and patients. The API also helps speed up the development of healthcare applications and provides quick access to patient healthcare information. The current procedure for accessing the APIs of healthcare institutions involves working with the business development team of each institution, which can cause the development of an application to take months, and sometimes years, to complete.
The drchrono API and developer program were designed to spark a new wave in healthcare application development and the launch of new healthcare industry startups. drchrono is seeking developers to partner with the company to create quality applications that provide an improved user experience for both doctors and patients. The process works similarly to the Apple App Store: developers request access to the drchrono API and develop a healthcare application. The application is then submitted to drchrono, where it is reviewed and, if approved, made available in the drchrono healthcare app store.
The API returns responses in JSON format, and API calls require OAuth. Available API endpoints include (but are not limited to):
- Doctors – First and last name, specialty of the doctor, job title, the doctor’s website, phone number, etc.
- Patients – First and last name, name of their primary care doctor, patient’s personal information, etc.
- Appointments – Doctor, patient, office, exam room, scheduled time, reason for appointment, etc.
- Offices – Doctor, name, city, address, etc.
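A call against one of the endpoints above might look like the following sketch. The exact URL paths, field names, and response shape are assumptions for illustration; only the resource names and the JSON/OAuth requirements come from the announcement:

```python
import json

API_BASE = "https://drchrono.com/api"  # base path is an assumption

def authorized_request(endpoint, access_token):
    """Assemble the URL and OAuth bearer header for an API call.
    `endpoint` mirrors the resources listed above (doctors, patients,
    appointments, offices)."""
    return {
        "url": f"{API_BASE}/{endpoint}",
        "headers": {"Authorization": f"Bearer {access_token}"},
    }

# Sample JSON of the kind an appointments endpoint might return,
# with the fields the article lists for the Appointments resource.
sample = json.loads("""
{"results": [
  {"doctor": 101, "patient": 2001, "office": 7,
   "exam_room": 2, "scheduled_time": "2014-01-15T09:30:00",
   "reason": "Annual physical"}
]}
""")

def appointments_for_doctor(payload, doctor_id):
    """Filter a JSON appointments payload down to one doctor's schedule."""
    return [a for a in payload["results"] if a["doctor"] == doctor_id]
```

In a real integration, the bearer token would come from drchrono's OAuth flow rather than being passed in directly.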
At the time of this writing, the drchrono platform has attracted 53,000 physicians and over 2.6 million patients. Developers interested in using the drchrono API and becoming a selected drchrono partner can request access at drchrono.com/api.
By Janet Wagner. Janet is a Data Journalist and Full Stack Developer based in Toledo, Ohio. Her focus revolves around APIs, open data, data visualization, and data-driven journalism. Follow her on Twitter, Google+, and LinkedIn.
Mobile messaging, a technology that is often associated with smartphones, is now being more broadly applied within the enterprise. Application of this technology is increasing in areas typically dominated by legacy instant messaging applications. One provider of such a service is HeyWire Business, which today launched a channel partner program designed to reward partners that leverage the company’s API.
Image Credit: Heywire.com
Meredith Flynn-Ripley, HeyWire Business CEO, says that the company’s cloud platform is a software-as-a-service (SaaS) application that service providers or developers can invoke to add messaging functionality. That functionality creates an additional revenue stream for customers without requiring them to invest in building out the capability themselves. The HeyWire Business service provides an alternative to mobile messaging software from telecommunications carriers, at a time when many customers want to rely less on traditional instant messaging solutions that have not been optimized for mobile computing platforms.
The service is built on a RESTful API created by HeyWire Business. Rather than asking employees to give out their personal mobile phone numbers, Flynn-Ripley says, the service sends messages to mobile computing devices via the office phone number owned by the organization paying for the service. That not only gives the organization visibility into those messages; it also means employees don’t have to give out their personal mobile numbers to, for example, corporate customers.
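The routing idea can be sketched in a few lines: a message addressed to the organization's office number fans out to the employee's registered mobile devices, and the organization logs it for visibility. The data shapes, numbers, and device names below are illustrative, not HeyWire's actual API:

```python
def route_message(directory, office_number, text, log):
    """Fan a message addressed to an office phone number out to the
    mobile devices registered against it, recording the message so the
    organization retains visibility."""
    devices = directory.get(office_number, [])
    log.append({"office": office_number, "text": text,
                "devices": list(devices)})
    return [(device, text) for device in devices]

# Hypothetical registration: one office number, two registered devices.
directory = {"+1-617-555-0100": ["mobile-a", "mobile-b"]}
log = []
deliveries = route_message(directory, "+1-617-555-0100",
                           "Order shipped", log)
```

The point of the indirection is that the office number, not the employee's personal number, is what the outside world ever sees.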
Flynn-Ripley says it takes about 24 hours for an office phone to become registered before the service can begin forwarding mobile messages to any designated mobile computing device. HeyWire Business, says Flynn-Ripley, is part of a new wave of easily programmable telecommunications services that are being delivered via the cloud. That approach not only makes those services more accessible; it prevents customers from becoming locked into any particular service.
HeyWire Business already has OEM partnerships, and Flynn-Ripley says that HeyWire expects to add vendor partnerships in 2014, which should help expand the base of end users accessing the HeyWire Business service. The current total base of end users of the HeyWire Business service is over 20 million.
Mobile messaging should be a standard element of just about every mobile computing application. The challenge is that building in that capability is beyond the resources most developers have at hand. The opportunity is delivering that capability in a way that actually generates additional revenue for the developer.
The AllSeen Alliance has launched as the 11th Linux Foundation Collaborative Project. The AllSeen Alliance is an open source consortium dedicated to democratizing and expanding the Internet of Everything. At the heart of the technology lies a series of APIs that allows hardware devices to communicate with each other regardless of operating system, manufacturer, or other technological barriers.
Linux Foundation Executive Director, Jim Zemlin, commented:
“Once the APIs that comprise the interoperability layer are opened up, there will be all kinds of opportunities to add services on top….Engineers already are implementing this code in products being sold today. We look forward to more product announcements at CES.”
Premier members of the consortium include Haier, Sharp, LG, Panasonic, Qualcomm, Silicon Image, and TP-Link. As with many conversations regarding the Internet of Everything, the ideas spinning around the AllSeen Alliance are mind-boggling and game-changing. However, the AllSeen Alliance has an advantage in such conversations because it is backed by some of the world’s largest device manufacturers. Accordingly, when a conversation takes place about a house realizing it is empty and adjusting the heat to conserve energy, all of the appropriate players are at the table to make it happen. Through the AllSeen Alliance, Zemlin envisions the house scenario taken to another level:
“Even better, a system with this level of control could use a variety of states, enabling the house to enter its deepest sleep state when the occupants are out and gradually return to normal before they typically return.”
The underlying connectivity technology utilized by the AllSeen Alliance is AllJoyn. Multiple SDKs and a code base are readily available for discovery, connection management, message routing, security, and more. The software is designed to ensure that even the most basic systems and devices can communicate and interoperate with third-party devices. For more information, visit the AllSeen Alliance and join or participate.
Our API directory now includes 43 metadata APIs. The newest is the OpenStreetMap Taginfo API. The most popular, in terms of mashups, is the Calais API. We list 17 Calais mashups. Below you’ll find some more stats from the directory, including the entire list of metadata APIs.
For reference, here is a list of all 43 metadata APIs.
art.sy API: Artwork information discovery service
Associated Press Metadata API: News content metadata service
Auphonic API: Audio post production service
BibServer API: Bibliographic metadata consolidation and sharing service
Calais API: Semantic data search and extraction service
Cambridge Journals Online API: Cambridge University Press collections
Clear Read API: Article text and metadata extraction tool
Dalet API: Media management service
DCMI Metadata Registry API: Metadata management service
Decibel API: In-depth music metadata
DiscoverEDINA Tagger API: Image metadata service
Fluidinfo API: Online storage and search platform for metadata
Indian Pincode API: Indian PIN code information service
LibraryCloud API: Aggregated library metadata delivery service
mEDRA API: European DOI registry
MELT API: German educational resource search service
mSense API: Metadata generating service
Music Story Pro API: Music metadata database
MusixMatch API: Song lyrics search engine
MyMovies API: Movie and DVD collection management service
NCSU Scholarly Publications Repository API: Academic publication information service
NOAA Historical Observing Metadata Repository API: Historical weather station data service
Nutribu API: Nutrition data for social applications
OCLC Crosswalk API: Metadata translation services
OCLC Virtual International Authority File API: Library authority file service
Ontos API: Semantic data for content
OpenStreetMap Taginfo API: Database of OpenStreetMap location tags
PBS COVE API: PBS video metadata service
PHIN VADS API: Public health terminology service
PLOS Article-Level Metrics API: Usage metrics for PLOS articles
Qobuz API: Streaming/download music service
refbase OpenSearch API: Bibliographic search service
Rutgers University Community Repository API: Digital repository of scholarly articles
Semantics3 API: Product Database
SKOS Thesaurus API: Thesaurus of related concepts
TheGamesDB.net API: Online game artwork and metadata service
Twitlbl API: Twitter data analysis service
Unified Medical Language System API: Medical terminology & taxonomy service
Valobox API: eReader web application for PCs, tablets, and smartphones
Veenome API: Visual data reader for videos
WebKnox Keywords API: Keyword information tool
WebKnox Recipe API: Recipe search engine
WebKnox Web API: Web page processing tool
Amazon EC2's Auto Scaling feature gives you the power to build systems that adapt to a workload that varies over time. You can scale out to meet peak demand, and then scale in later to minimize costs.
Today we are adding Auto Scaling support to the AWS Management Console. You can now create launch configurations and Auto Scaling groups with point-and-click ease, and you can bid for Spot Instances when scaling out. You can also initiate scaling operations from the console and you can manage the associated notifications.
Let's take a tour of the console's new support for Auto Scaling. The welcome page outlines the benefits and the major steps:
The launch configuration specifies the Amazon Machine Image (AMI), EC2 instance type, EBS storage, security group, and other details needed to launch new instances as part of the scale-up process. The console leads you through the necessary steps, beginning with the selection of the desired AMI:
With the AMI chosen, your next task is to choose the EC2 instance type that will be launched when scaling out:
Then you provide a name for your launch configuration, choose an IAM role, enable CloudWatch detailed monitoring, and request EBS-optimized instances if desired. You can even choose a purchasing option (On-Demand or Spot).
If you decide to use Spot Instances, the console will show you the current price for the selected instance type in each Availability Zone. You can use this information to help you make an informed choice when you enter the maximum price that you want to pay to launch a Spot instance:
You can also request the creation of new EBS storage volumes as part of the launch. These volumes can be deleted on termination, or they can be retained. The first option is perfect if you use the EBS volumes for temporary storage; the second would be appropriate if you generate log files on the instance and need to move them to long-term storage after the instance has been terminated.
You can choose to attach an existing Security Group to all newly launched instances, or you can create and customize a new one.
With all of the details specified, now is the time to review them and to create the launch configuration:
As you probably know, the launch configuration provides Auto Scaling with all of the information needed to launch and terminate EC2 instances as part of scaling operations, but it doesn't actually launch any instances. To do that you need to create an Auto Scaling group. Click the following button to do this:
The console will lead you through the steps needed to create your Auto Scaling group. You can set the initial size (number of EC2 instances) of the group, along with the desired minimum and maximum size. You can also choose to launch the instances into a particular Virtual Private Cloud (VPC), and you can select the desired Availability Zones.
If you are using the instances to handle incoming HTTP traffic, you can also choose to associate the Auto Scaling group with an Elastic Load Balancer:
The next step is optional. If you are simply using the Auto Scaling group to ensure that a particular number of instances are up and running, you can skip it. If you want the group to vary in size in response to a changing load or to other factors, then you need to set up scaling policies.
Groups that vary in size must have a Scale Out policy and a Scale In policy. These policies are triggered by Amazon CloudWatch alarms. For example, you can activate the policies when the average CPU load (across the Auto Scaling group) rises above or drops below certain thresholds. Or, you can activate them in response to changes in the amount of network traffic to or from the instances in the group. You can even create custom CloudWatch metrics such as "Requests Per Second" and use them to initiate scaling operations.
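The Scale Out / Scale In policy pair can be sketched as a simple decision function: an alarm on average CPU pushes the desired group size up or down within the group's bounds. The thresholds, step size, and bounds below are illustrative values, not Auto Scaling defaults:

```python
def scaling_decision(avg_cpu, current, minimum, maximum,
                     high=70.0, low=30.0, step=2):
    """Mimic a Scale Out / Scale In policy pair driven by CloudWatch
    CPU alarms. Returns the new desired group size (instance count),
    clamped to the group's [minimum, maximum] bounds."""
    if avg_cpu > high:
        # "high CPU" alarm fires -> scale out by `step` instances
        return min(current + step, maximum)
    if avg_cpu < low:
        # "low CPU" alarm fires -> scale in by `step` instances
        return max(current - step, minimum)
    # Within the band: no scaling activity.
    return current
```

The same shape works for any metric, including custom ones such as "Requests Per Second"; only the alarm source changes.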
As you can see, you can choose the actions to be taken, along with the associated quantities (number of EC2 instances) for the scale out and scale in activities:
Each Auto Scaling activity generates an Amazon SNS notification; you can route these to an existing topic or you can create a new topic and subscribe it to one or more email addresses from the console:
After you create the Auto Scaling group, you can watch the scaling history using the console:
You can also initiate scale-out and scale-in operations:
This new feature is available in all of the public AWS Regions and you can start using it today. Give it a try, and let me know what you think.
Load testing service Load Impact has released a Google Chrome extension to let API developers record API engagement direct from the browser. API developers can then run these recorded use cases through a variety of load testing scenarios to ensure reliable API access ahead of any plans to scale API delivery. Load Impact CTO Robin Gustafsson spoke with ProgrammableWeb about the new service feature.
“This new extension for the Chrome browser makes it very simple to record anything that’s happening in an open tab,” Gustafsson said.
“For example, you can use the Postman REST client – a Chrome extension for interacting with REST APIs – to record what happens when talking to your API. You can record a few types of typical user test cases direct from the browser and then combine them in a scenario for load testing. All you need is a copy of our API key, download our Chrome extension, and then the things you record via our Chrome extension will be automatically transferred and available when you next configure your load performance tests in Load Impact.”
Gustafsson says the new extension makes it easier to record example test cases of how potential developer-consumers will use a provider’s API. “Load Impact already has the feature of allowing a recording of user behavior, but it is via a proxy, which requires you to change all of your proxy settings in your browser or system preferences.”
Use cases are the basis of useful load performance testing, Gustafsson said. Whether an API provider is looking to expand from a private beta release to general availability, promoting an API in a new market, or making an API available at a popular hackathon, the provider can record actual use cases direct from the web browser. Once these have been recorded, they are available in Load Impact to form the basis of user scenarios. These scenarios allow API providers to simulate system response under particular conditions, such as heightened levels of usage. “This gives you a more realistic picture of how your system would cope with realistic – but amplified – traffic patterns,” Gustafsson said.
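The record-then-amplify workflow can be sketched roughly as follows. The data shapes are illustrative, not Load Impact's actual scenario format:

```python
import itertools

def build_scenario(recorded_cases, amplification):
    """Turn recorded use cases (each a sequence of requests captured
    from the browser) into an amplified traffic scenario: flatten the
    recorded mix into one pass, then repeat it `amplification` times
    to simulate heightened usage of the same realistic pattern."""
    one_pass = [request for case in recorded_cases for request in case]
    return list(itertools.chain.from_iterable(
        itertools.repeat(one_pass, amplification)))
```

The key property is that amplification preserves the recorded traffic mix, so the load test stresses the API with realistic, not synthetic, request patterns.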
Gustafsson also said the service could be used with private beta users or other early adopters.
“You could create a project account in Load Impact and then share the same API key with each early adopter. Then when they interact with your API via their browser, their real use cases will be recorded in your Load Impact account. You can even edit the preview of what has been recorded so if someone made a bad request you can change that before you send it to our servers. As soon as you record the use cases, the only thing you need to consider is the traffic scenarios.”
API provider-developers can download the extension from the Chrome web store.
ProgrammableWeb has covered electronic health records leader Practice Fusion as its API story has developed. That story will continue to grow: Practice Fusion just secured $15 million in additional funding to expand its tools and API portfolio. The expanded API offering aims to broaden the opportunities available to health device developers.
Ryan Howard, Practice Fusion CEO, commented:
“Practice Fusion doctors have facilitated six percent of all patient visits in the US this year. This huge reach makes us the front-running health technology company, and we’re going to double that reach next year”
The new round of funding was led by Qualcomm Ventures with other investors including Longitude Capital, Artis Ventures, Industry Ventures, and Band of Angels. The additional funding brings this round to a total of $85 million, and Practice Fusion sees itself as having moved out of the “scrappy startup” stage and evolved into an industry leader. Qualcomm Life Fund Director, Jack Young, commented:
“Practice Fusion is bringing a new level of innovation to the healthcare industry….Considering the reach and potential of the platform, we’re excited to become a part of their long-term vision, especially as healthcare grows more personalized and mobile than ever before.”
Practice Fusion now employs more than 300 people and has facilitated over 55 million patient visits across the US. It manages a portfolio of multiple APIs that continues to spark innovation among the developer community. With continued dedication to its API base, Practice Fusion should continue to lead and innovate the EHR space.