Welcome to YUI Weekly, the weekly roundup of news and announcements from the YUI team and community.
The Pure team released the Grunt and Rework tools that they have been working on over the last quarter. These tools make writing CSS more enjoyable, and are not just restricted to Pure. If you write CSS, you may find these tools handy.
You can check them out on Pure’s new Tools page. The new tools are:
- grunt-pure-grids: Generate custom mobile-first responsive grids
- grunt-stripmq: IE fallback for mobile-first CSS
- grunt-css-selectors: Mutate CSS selectors
Stay tuned for more from Pure next week!
YUI Open Roundtable
- You’ve surely heard of Atom by now, right? I’m personally still a big Sublime Text fan but folks really seem to be liking Atom’s plugin infrastructure.
- Luke Hoban has a great overview of all ES6 features over at this GitHub repo.
- The Yeoman folks wrote a nice blog post detailing how you can use Grunt and Gulp tasks to optimize performance.
Enjoy the weekend, folks!
While the number of APIs grows by leaps and bounds every year, only a fraction of websites offer official APIs. Not willing to wait, those who want and need data are increasingly taking matters into their own hands through the creation of unofficial APIs. A new tool, Gargl, gives individuals an open-source option for doing just that.
Using Gargl, it is possible to build a scraper and unofficial API that can be run from a machine of the user’s choice without writing a single line of code. Gargl projects consist of three components:
- Templates, which define the API for a website using JSON.
- A Recorder, which allows users to record their interactions with a website to produce templates.
- A Generator, which creates modules for a specified programming language that can consume a template’s API.
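To give a concrete sense of what a template captures, here is a hypothetical sketch. The structure and field names below are invented for illustration and are not Gargl’s actual schema:

```json
{
  "moduleName": "ExampleSearch",
  "functions": [
    {
      "name": "search",
      "method": "GET",
      "url": "http://www.example.com/search?q=@query",
      "headers": { "User-Agent": "Mozilla/5.0" },
      "responseFields": [
        { "name": "title", "cssSelector": "h3.result a" },
        { "name": "link", "cssSelector": "h3.result a", "attribute": "href" }
      ]
    }
  ]
}
```

The Recorder would produce something like this from a browsing session, and the Generator would turn it into, say, a JavaScript or Java module exposing a search(query) function.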
All of the source code for Gargl’s components is available on GitHub. A tutorial and video walk-through showing how Gargl can be used to build an unofficial API for Yahoo in 3 minutes are also available.
Filling a need, raising legal questions
Necessity is the mother of invention, and Gargl was born of necessity. Its creator, Joe Levy, spends his free time building Windows 8 apps. To increase the utility of these apps, Levy prefers to build apps for existing services.
Many of those existing services, such as Google Voice, OkCupid, and PlentyOfFish, didn’t offer official APIs of their own and Levy found himself reverse engineering their code to get at the data he needed. This was time-consuming, “painstaking” work and it inspired Levy to come up with a more efficient solution. Gargl was born.
Gargl, like many scraping solutions, raises interesting and sometimes still unresolved legal questions. Levy highlighted these prominently when releasing Gargl.
Levy’s decision to provide such a warning was based on his first-hand experience. While a number of his Windows 8 apps haven’t attracted the ire of the companies whose data he’s leveraging, Levy was forced to pull one of the apps he built using an unofficial API. Although he acknowledges that his activity could invite lawsuits, and he has formed a company in an effort to help protect himself against them, Levy has found that “most sites don’t want to go through the time, trouble, or fees of a lawsuit, so will issue you a Cease and Desist letter first, and if you comply [they] will not press any further action.”
This also played a role in motivating Levy to build Gargl. “It sucks to go through all the effort of figuring out a website’s unofficial API, building code to use that API, only to be shut down by the site owner. Gargl eases this pain by allowing you to spend much less time on the figuring out and integrating into unofficial APIs part, so if you do get a takedown request, you haven’t wasted nearly as much time and effort as you would have doing the process manually,” Levy explains.
The golden era of scrapers
Scraping isn’t a new phenomenon, of course. But building a scraper and retrieving the data it scrapes has never been easier.
Commercial tools like Import.io are growing in popularity, and companies like Priceonomics are tapping into the demand companies have for data by building custom scrapers and unofficial APIs for a fee. But free, open-source tools like Gargl have the potential to be the biggest game changers of all.
Levy admits that some of the commercial tools are currently more attractive in certain areas. Kimono, a commercial tool that made headlines recently by building an unofficial API for the Olympic Games in Sochi, “is more user friendly than Gargl,” but Levy suggests that Gargl has a number of advantages. There’s no shared collection of IP addresses for websites to block, for instance, and no one company that could be taken out by a lawsuit. Because the modules that the Gargl generator produces are incorporated into the user’s own code, they can be run as frequently as desired, allowing for the creation of unofficial APIs that are truly real-time.
If developers embrace Gargl and a strong ecosystem grows around it, it’s not inconceivable that Gargl’s polish and ease of use could some day match that of its commercial peers. Gargl has already attracted attention on Hacker News—a YouTube video showing how Gargl is used has more than 10,000 views—and other developers have started contributing to the Gargl project on GitHub. “[Gargl's] goals are too big for me to handle alone, and only through the community at large can it become truly great,” Levy says.
Ironically, if Levy has his way, more companies will recognize the wisdom of offering official APIs and the need for Gargl will decrease. “Because of the fact that it is nearly impossible to truly stop a savvy developer from reverse-engineering and using your unofficial API for their own purposes, I think all sites should release public APIs and embrace the fact that others want to integrate into their services to create additional value,” Levy told me. “Maybe once these sites see the amazing concepts developers have come up with in integrating into their services, they will recognize the value in creating official APIs for developers to use.”
Google has announced that the AdWords API now allows users to manage Shopping campaigns. Bitrix24 releases an API for its online collaboration platform. Plus: Surety Solutions announces a single-point-of-entry API, and 8 new APIs join the directory.
Google Announces Shopping Campaign Support for AdWords API
Google introduced Shopping campaigns to help connect sellers with consumers. Now, sellers and marketers utilizing Google’s Shopping campaigns can create and manage campaigns through the AdWords API. Google is actively working with search management platforms and outside agencies to add Shopping campaign support. More information regarding AdWords API support for Shopping campaigns will follow soon.
Bitrix24 Launches API for its Online Collaboration Platform
Bitrix24, a social collaboration, project management, and video conferencing solution provider, has announced API access to its popular collaboration platform. Although over 100,000 customers already use Bitrix24’s standalone tool, Bitrix24 released the REST API to integrate with customers’ existing tools. Bitrix24’s President, Dmitry Valyanov, explained:
“Over 120,000 companies have signed up with Bitrix24 in less than two years. Even though Bitrix24 comes with over 35 different tools, we understand that business logic and workflows vary greatly from company to company. With this new API, custom solutions and integrations for Bitrix24 can be created to address the needs of clients or industry verticals.”
Joining the Bitrix24 partner community is free. Systems integrators and web developers are encouraged to visit the developer site to learn more.
APIs You Shouldn’t Miss
- Surety Solutions Announces Single Point of Entry API
- Axway API
- Signpost Addresses the API Economy
- Apple’s New CarPlay API
8 New APIs
Today we had 8 new APIs added to our API directory, including a data analysis dashboard service, a two-factor authentication service, a Pakistani SMS service, a Nigerian bulk SMS service and a celebrity stock game. Below are more details on each of these new APIs.
Cyfe Push API: Cyfe is a business dashboard application that helps users monitor and analyze different types of data from one place.
The Cyfe Push API allows developers to access and integrate the functionality of Cyfe with other applications. The main API method pushes data from data channels into dashboards and other applications.
Doluna API: Doluna is a user verification service that uses a mobile phone for performing two-factor authentication. Once the user submits the recipient’s phone number, Doluna generates and sends a one-time PIN code via SMS. At the same time, Doluna gives the user a transaction key which they can check against the recipient’s PIN. Doluna can be used to validate end users, verify phone numbers, and protect against fraudulent activity. Integration is accomplished via API and requires simple REST calls.
FreeSMSBag API: FreeSMSBag is a two-way SMS service for Pakistanis. It is designed especially to help people living abroad communicate with their friends and family back in Pakistan. The FreeSMSBag API allows users to send and receive SMS from their own websites and applications. SDKs are available in .NET, PHP, and Java.
GiftedSMS API: GiftedSMS is a Nigerian bulk text messaging service that caters to a variety of businesses, organizations, and individuals. Users can integrate the GiftedSMS messaging gateway with their own applications via REST API. This enables users to send SMS and check their balances from within those applications.
HollyStock Celebrity API: HollyStock is a celebrity stock exchange where users acquire a portfolio of celebrities that gains or loses value based on the number of times the celebrities are mentioned in the news that day. The HollyStock Celebrity API uses REST calls and allows users to retrieve celebrities and their pricing from the online celebrity stock market game HollyStock. The API will return data in XML or JSON format. An account is required to use the service.
HSL SMS API: HSL (Hay Systems Ltd.) SMS provides a messaging gateway that can easily be integrated with other applications to allow them to send and receive SMS. HSL SMS is a versatile service that can be used for emergency alerts, M2M (machine-to-machine) calls, staff communications, customer promotions, two-factor authentication, and more. Integration can be accomplished using a variety of protocols, including REST, SOAP, SMPP, and SMTP.
Kairos ID API: Kairos is a facial recognition service that aims to allow users to integrate advanced security features into applications to enhance identification and verification. The Kairos API uses REST calls and requires an API key for access. The API allows users to build applications that integrate facial recognition into their programs. Plans range from 500 to 50,000 calls and run from free up to $1,999 per month.
Zync API: Zync is a global messaging platform that provides communication methods over SMS, Voice, Email, and Fax. Its SMS platform focuses on long code and selects the most reliable routes using a global prefix lookup. The voice platform comes with direct connectors to every major geographic region. The fax engine is capable of delivering and receiving millions of pages per day. All of Zync’s messaging functions are designed to send and receive messages to and from anywhere in the world.
Bitrix24, an enterprise collaboration, project management and social network platform provider, has announced the launch of its Bitrix24 REST API, which provides developers programmatic access to the cloud version of the Bitrix24 platform.
Bitrix24 is a cloud-based platform that provides enterprises and SMBs an online workspace for social collaboration, project management, video conferencing and other office tasks. There is also a self-hosted version of Bitrix24 available that includes additional features not available in the cloud version.
The new Bitrix24 REST API provides programmatic access to platform features and data items including CRM, social network groups (workgroups, projects), data storage (information blocks), notifications, tasks, users, departments, activity streams and calendars. Dmitry Davydov, Bitrix24 Chief Marketing Officer, tells ProgrammableWeb:
We’ve had an API for the self-hosted version of Bitrix24 for several years. But as more and more intranet, CRM and collaboration users move to the cloud, it was obvious that we need an API for the cloud version of Bitrix24 as well.
Developers can use the new API to create apps for the Bitrix24 marketplace and to build custom integrations and solutions that address specific business needs. It should be noted that if a developer would like to use the API to create an app for the Bitrix24 Marketplace, they have to become a partner first. The Bitrix24 partner program is free to join and does not have any annual fees.
Apps for Basecamp, Yammer, Zoho, MailChimp and Doc Designer have already been added to the Bitrix24 marketplace. Bitrix President and Co-Founder Dmitry Valyanov states in a press release:

“We expect that by the end of the year we will have integration and migration apps for most, if not all, popular online business tools available to Bitrix24 users. But even more importantly, the new API lets Bitrix24 partners create custom and industry specific solutions so they can open new markets and offer their clients a much higher value service.”
According to the press release, more than 120,000 companies have registered to use the Bitrix24 platform in less than two years. Each company has unique business needs and different requirements when it comes to project management, employee collaboration and workflows.
The availability of the new Bitrix24 REST API allows developers to build custom integrations and solutions that address the needs of nearly every type of business.
For more information about the Bitrix24 platform and REST API, visit Bitrix24.com.
Wearables are the next big thing and Plantronics, maker of enterprise-grade Bluetooth headsets and other gear, wants to play a key role. The company has a dedicated developer program with PLT Labs, and is participating in hackathons around the country to raise awareness about its potential.
The center of PLT Labs’ strategy is the Wearable Concept 1. This wearable is no fitness band. Instead, it is a headset that includes a nine-axis sensor that can track head orientation in three dimensions, along with tap detection, free-fall detection, and other input. Plantronics has a host of APIs to accompany the Wearable Concept 1, which let developers create applications based on the data created by the sensor. The company sees plenty of open road ahead for itself and those willing to take advantage of its developer program and APIs.
“Wearable tech gives today’s software developers an unprecedented opportunity to create applications leveraging information that until now has been out of reach,” says Cary Bran, head of PLT Labs and senior director of Innovation and New Ventures at Plantronics. Bran will speak at Wearables DevCon, an inaugural event devoted to developing for wearables, covering the central issues of wireless connectivity for wearables and offering guidance on how developers can target the right devices and platforms.
“Understanding this opportunity and the associated technology challenges will be critical for developers and businesses alike, as together they strive to create applications that will enrich not only individual experiences but also the much larger and growing wearable tech ecosystem as a whole,” said Bran.
Hackers have already done some interesting things with PLT Labs’ Wearable Concept 1 at various hackathons this year. For example, a company called Leverage created a virtual personal trainer using the accelerometers to detect what workout a user is performing. Another company, Hackcouture.io, built a gesture-detecting glove and accompanying air guitar game called Metalhead, using the nine-axis sensors to detect “headbanging” movement and incorporate it into the game experience. Yes, these guys really figured out how to mix headbanging and developing.
Many of the most prominent wearables introduced by hardware makers in recent months take the form of smartwatches or fitness bands. Plantronics has long targeted business customers with its high-grade Bluetooth products. The Wearable Concept 1 is a perfect example of why developers need to pay attention to products other than those that wrap around the wrist.
We finish out our YUIConf 2013 series with our Lightning Talks session. Anyone who had interesting content to discuss could queue up and give a brief talk. Check out the links below if you would like to jump to a specific speaker.
- Bruno Farache of Liferay on y3d
- Daniel Stockman of Zillow on “Stub Your Way to Unit Test Bliss”
- Andrew Dejtonski of Georgia Tech on connecting to systems that aren’t obvious
- Carlos Vallejo of Wells Fargo on a spec authoring tool
- Michael Matusak of Yahoo on Yahoo Screen
- Clarence Leung of Yahoo on Code Academy tutorials
- Phil Dokas of Yahoo on Flickr
- Iliyan Peychev of Liferay on YUI Editor and autodiscovery of required modules
- Brian Johnson of Yahoo on debugging markup with CSS
- Luke Arduini of Yahoo on client side apps with npm
- Johnathan Tsai of Talentral on their latest features
Watch the full Lightning Talks session on YouTube: http://www.youtube.com/embed/MPaKarJ52Ic
PayPal, looking to provide an additional structure around the Node.js application framework, developed the Kraken implementation, a more secure and scalable framework for building commercial-grade applications. This week, PayPal is making Kraken available to the broader open-source community.
Bill Scott, senior director of user interface engineering at PayPal, says Kraken is being shared with the rest of the Node.js community because its adoption outside of PayPal will spur its adoption within PayPal while at the same time making PayPal a more attractive place to work. Every time a technology gets shared with the open source community, Scott says, developers’ enthusiasm for acquiring a skill they could potentially use elsewhere winds up increasing use of that technology by orders of magnitude.
Kraken is designed to provide some structure to Node.js in much the same way that Ruby on Rails provides developers with some basic configuration decisions that, left to their own devices, individual developers would wind up making slightly different, according to Scott.
From a security perspective, Kraken sets up a number of defaults, including cross-site request forgery (CSRF) protection, X-Frame-Options headers that prevent clickjacking, and a content security policy that allows developers to restrict what types of resources are allowed and enabled for a Web application.
The Kraken implementation of Node.js is being used within PayPal to make it easier to create applications where the user experience is likely to evolve iteratively. PayPal is still making extensive use of Java and C# applications on the back end. But in terms of the applications that customers are likely to engage with directly, PayPal is moving to standardize on Kraken. At the moment, PayPal has more than 20 applications “in flight,” says Scott, and the various components that make up Kraken are shared via the company’s internal implementation of a GitHub repository.
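For readers who haven’t seen it, Kraken is layered on top of Express; a minimal kraken-js app looks roughly like this (a sketch based on kraken-js’s published usage, not PayPal’s internal code):

```javascript
// Minimal kraken-js app sketch: kraken layers configuration-driven
// middleware, security defaults, and project structure on top of Express.
var express = require('express');
var kraken = require('kraken-js');

var app = express();
app.use(kraken());  // wires up config-driven middleware, including the security defaults
app.listen(8000, function () {
    console.log('Listening on http://localhost:8000');
});
```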
When I was talking about Async, Ajax, and animation, I mentioned the little trick I’ve used of generating a `progress` element to indicate to the user that an Ajax request is underway.

I sometimes use the same technique even if Ajax isn’t involved. When a form is being submitted, I find it’s often good to provide explicit, immediate feedback that the submission is underway. Sure, the browser will do its own thing, but a browser doesn’t differentiate between showing that a regular link has been clicked and showing that all those important details you just entered into a form are on their way.

The `progress` element is inserted at the end of the form, which is usually right by the submit button that the user will have just pressed.

While I’m at it, I also set a variable to indicate that a POST submission is underway. So even if the user clicks on that submit button multiple times, only one request is sent.

You’ll notice that I’m attaching an event to each `form` element, rather than using event delegation to listen for a `click` event on the parent document and then figuring out whether that `click` event was triggered by a submit button. Usually I’m a big fan of event delegation, but in this case it’s important that the event I’m listening to is the `submit` event. A form won’t fire that event unless the data is truly winging its way to the server. That means you can do all the client-side validation you want, making good use of the `required` attribute where appropriate, safe in the knowledge that the `progress` element won’t be generated until the form has passed its validation checks.
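The post ships with the actual code; below is a minimal sketch of the pattern as described above, not the author’s exact implementation:

```javascript
// Give every form explicit submission feedback, and guard against duplicate
// submissions. The submit event only fires after the browser's built-in
// validation (e.g. required attributes) has passed.
Array.prototype.forEach.call(document.querySelectorAll('form'), function (form) {
    var submitting = false;
    form.addEventListener('submit', function (event) {
        if (submitting) {
            // A request is already underway; ignore repeat clicks.
            event.preventDefault();
            return;
        }
        submitting = true;
        // Insert a progress element at the end of the form, right by
        // the submit button the user just pressed.
        form.appendChild(document.createElement('progress'));
    });
});
```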
If you like this particular pattern, feel free to use the code. Better yet, improve upon it.
As part of a concerted effort to make the SAP HANA in-memory computing platform more appealing to developers, SAP this week announced a more modular approach to exposing SAP HANA services in the cloud. SAP is also moving to open an application store for SAP HANA applications offered by both SAP and third-party developers.
With the services available in three configurations, organizations can now opt to use a complete set of SAP HANA AppServices, a subset called SAP HANA DBServices, or the more basic SAP HANA Infrastructure Services. Dr. Vishal Sikka, a member of the Executive Board of SAP AG responsible for products and innovation, says that now that SAP HANA has emerged as a cloud platform, different classes of organizations — depending on the requirements of their company — will want to consume SAP HANA services in various ways. These services can then be combined as organizations see fit, to create a platform-as-a-service (PaaS) environment for building and deploying cloud applications.
The SAP HANA APIs will be published shortly, including APIs for SAP Business Suite and SuccessFactors running on the SAP HANA platform. By the time of the upcoming SAP Sapphire conference in June, Sikka says that both the Ariba and SuccessFactors software-as-a-service (SaaS) application environments will be running on the SAP HANA platform.
According to Sikka, 1,237 start-up companies located in 57 countries are already building applications on SAP HANA, and 60 of those applications are currently live. Obviously, not all of those applications are going to be on the SAP cloud. However, because SAP HANA is emerging as a platform, SAP is again changing the way it describes the SAP HANA cloud platform, now referred to as the SAP Cloud powered by SAP HANA.
Part of the appeal of partnering with SAP is the company’s new-found willingness to promote third-party applications in what Sikka says is a world where software is incredibly interconnected.
SAP is not the only major enterprise vendor with similar ambitions, which is creating something of a race between the larger vendors in enterprise IT to develop an ecosystem around their APIs. Over time, those APIs might ultimately lead to a world in which organizations simply compose business services by combining various application services by invoking published APIs. For that vision of enterprise IT to become an everyday reality might take a while longer. However, it’s already becoming clear in what general direction enterprise IT is heading as the API economy continues to mature.
Trippin’in, Zeebox Among Winners in 3scale StartupBus API Contest
3scale, an API platform company, has announced the ten winners of its StartupBus API contest. They are: Bitcasa (a 1 TB drive in the cloud); Evercam.io (connecting cameras); FullContact (beefing up contact info); Kairos (facial recognition APIs); Kii (mobile app backend solution); Nutritionix (food data); Trippin’in (finding cool hangout spots); UN Data API (social entrepreneurship, international projects); and Zeebox (social TV platform).
These tools are being used right now by StartupBus hackers: people competing to build apps on, get this, a three-day bus ride now converging in Austin at SXSW. The final travails of the bus riders can be followed at #startupbus, including recent news of a bus breakdown (putting the hard back in hardware, perhaps).
Zapier Completes Marathon by Building 28 Apps in 28 Days, Changes World
We last covered Zapier when it revamped its developer platform in January. Now, in what might be the king of “we eat our own dog food” stories, Zapier has turned around and used that platform to build 28 of its own apps in 28 days. Holy February! The most recent one they crafted makes it possible to send Disqus comments to a spreadsheet, create a message on receipt, and get an IM whenever someone comments.
Here’s the impressive list of 28 companies they built apps with:
- Zoho Invoice
- Quote Roller
- Zoho Creator
A discussion of what each app does is on Zapier’s blog.
API News You Shouldn’t Miss
- Microsoft to unveil DirectX 12 on March 20th at the Games Developers Conference
- Extend Azure Active Directory Schema using Graph API (preview)
- Retention Marketer Optimove Interfaces With Facebook Via API, Updates Custom Audiences Daily
- Modernizing Legacy APIs | Paul M. Jones
- The Echo Nest Partners With AirPair For API Developer Support
- API Deployment Models that Accelerate Digital Banking
- It’s Alive! Facebook’s Atlas Ad Server Adds Rich Media API Program
- MetroTwit’s Windows apps are no longer available to download, Twitter API limitations to blame
- Payfirma Unveils One API for All Kinds of Payments
- How Snapchat imitator Puffchat managed to do everything wrong
- Broadcom Specification Urges More OpenFlow Switches
- Winners of the 3scale StartupBus API Contest Announced
7 New APIs
Today we had 7 new APIs added to our API directory, including an advertising management platform, a bulk SMS service, an Indian bulk SMS service, a Malaysian bulk SMS service and a service that finds U.S. neighborhoods by geographic coordinates. Below are more details on each of these new APIs.
AdStage Platform API: AdStage is an online advertising platform that allows users to create and manage advertising campaigns. The service offers cross-network ad tools and a flexible platform so users can focus on the campaign. With the AdStage Platform API, users can integrate new or existing complementary apps directly into the AdStage Platform. An account is required to use the service.
evamegsms API: Evamegsms provides users with bulk SMS services that can be used to deliver SMS around the world. Possible uses include marketing to customers, issuing general alerts, and sending messages to friends en masse. Evamegsms’ SMS units never expire and can be used at any time. Users can even send mass SMS from their mobile phones using the SMS2All feature.
GlobalItWebs API: GlobalItWebs is a software, e-commerce, and website design company based in India. They also offer bulk SMS services for use within India, including long code SMS. Their SMS services are meant for a wide variety of use cases. Users can integrate with GlobalItWebs’ SMS services using a RESTful API.
MireZone SMS API: MireZone SMS (MZsms) is a Malaysian SMS marketing service that provides worldwide SMS coverage. MZsms comes with a wide variety of features such as two-way SMS, group messaging, Unicode support, SMS delivery reports, message scheduling, email-to-SMS, sender ID customization, an open source plugin, birthday reminders, and more.
Users can integrate MZsms’s SMS gateway with their own website or application via REST API. This allows users to send SMS, receive confirmation or error reports for a sent message, schedule future SMS, and check their account balance.
Neighborhood API: The Neighborhood API is a service that allows users to find the U.S. neighborhood that corresponds with a given set of geographic coordinates (i.e. latitude and longitude). The service is free, unless users wish to donate or want to pay for their own dedicated instance. Users can access the API via REST calls issued in JSON format.
OpenGraph Hybrid API: OpenGraph.io is a service for getting Open Graph information from websites. Many sites still do not provide OG tags, so OpenGraph.io uses spiders to search sites for available graph data. The OpenGraph API allows users to run REST queries containing a target URL and get Open Graph data in return. Registered users will have an API key to use for the search.
Webpage Analyse API: Webpage Analyse is a free website analysis tool that collects, analyzes, and processes domain-related information from a variety of sources. They also offer independent tools for discovering whois information, website load times, IP information, and Google PageRank.
Webpage Analyse comes with a suite of REST APIs that allow users to find similar sites, determine whether a site contains adult content, determine the category of a site, get a site’s Google PageRank, and get a screenshot of a site in a specified size.
AWS Elastic Load Balancing helps you to build systems that are highly scalable and highly reliable. You can automatically distribute traffic across a dynamically-sized collection of Amazon EC2 instances, and you have the ability to use health checks to keep traffic away from any unhealthy instances.
Today we are giving you additional insight into the operation of your Elastic Load Balancers with the addition of an access log feature. After you enable and configure this feature for an Elastic Load Balancer, log files will be delivered to the Amazon S3 bucket of your choice. The log files contain information about each HTTP and TCP request processed by the load balancer.
You can analyze the log files to learn more about the requests and how they were processed. Here are some suggestions to get you started:
Statistical Analysis - The information in the log files is aggregated across all of the Availability Zones served by the load balancer. You can analyze source IP addresses, server responses, and traffic to the back-end EC2 instances and use the results to understand and optimize your AWS architecture.
Diagnostics - You can use the log files to identify and troubleshoot issues that might be affecting your end users. For example, you can locate back-end EC2 instances that are responding slowly or incorrectly.
Data Retention - Your organization might have a regulatory or legal need to retain logging data for an extended period of time to support audits and other forms of compliance checks. You can easily retain the log files for an extended period of time.
Access Logs are disabled by default for existing and newly created load balancers. You can enable the feature from the AWS Management Console, the AWS Command Line Interface (CLI), or through the Elastic Load Balancing API. You will need to supply an Amazon S3 bucket name, a prefix that will be used when naming the log files, and a time interval (5 minutes or 60 minutes).
To enable Access Logs for an existing Elastic Load Balancer, simply select it, scroll to the bottom of the Description tab, and click on Edit. Then select the desired configuration and click Save.

Important: You will need to enable the EC2 Preview Console in order to configure your access logs.

You will also have to make sure that the load balancer has permission to write to the bucket (the policy will be created and applied automatically if you checked Create the location for me when you enabled access logs).
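If you prefer to script the setup, the same attribute can be set through the API. Here is a sketch using the AWS SDK for JavaScript; the load balancer name, bucket, and prefix are placeholders:

```javascript
// Enable access logs on a classic Elastic Load Balancer (sketch).
var AWS = require('aws-sdk');
var elb = new AWS.ELB({ region: 'us-east-1' });

elb.modifyLoadBalancerAttributes({
    LoadBalancerName: 'my-load-balancer',
    LoadBalancerAttributes: {
        AccessLog: {
            Enabled: true,
            S3BucketName: 'my-loadbalancer-logs',  // bucket must allow ELB to write
            S3BucketPrefix: 'my-app',              // used in the generated key names
            EmitInterval: 60                       // minutes; 5 or 60
        }
    }
}, function (err, data) {
    if (err) { console.error(err); }
    else { console.log('Access logs enabled'); }
});
```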
Log files will be collected and then sent to the designated bucket at the specified time interval or when they grow too large, whichever comes first. On high traffic sites, you may receive multiple log files for the same period.
You can disable access logs at any time, should your requirements change.
Plenty of Detail
In addition to the bucket name and the prefix that you specified when you configured and enabled access logs, the log file name will also include the IP address of the load balancer, your AWS account number, the load balancer's name and region, the date (year, month, and day), the timestamp of the end of the logging interval, and a random number (to handle multiple log files for the same time interval).
Log files are generated in a plain-text format, one line per request. Each line contains a total of twelve fields (see the Access Logs documentation for a complete list). You can use the Request Processing Time, Backend Processing Time, and Response Processing Time fields to understand where the time is going:
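For illustration, a made-up log line in that format might look like this, with the three processing-time fields appearing right after the client and back-end addresses:

```
2014-03-05T23:00:12.890566Z my-load-balancer 192.0.2.10:2817 10.0.0.1:80 0.000073 0.001048 0.000057 200 200 0 29 "GET http://example.com:80/ HTTP/1.1"
```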
Log Processing With Elastic MapReduce and Hive
A busy web site can easily generate tens or even hundreds of gigabytes of log files each and every day. At this scale, traditional line-at-a-time processing is simply infeasible. Instead, an approach based on large-scale parallel processing is necessary.
Amazon Elastic MapReduce makes it easy to quickly and cost-effectively process vast amounts of data. It uses Hadoop to distribute your data and processing across a resizable cluster of EC2 instances. Hive, an open source data warehouse and analytics package that runs on Hadoop, can be used to pull your logs from S3 and analyze them.
Suppose you want to use your ELB logs to verify that each of the EC2 instances is handling requests properly. You can use EMR and Hive to count and summarize the number of times that each instance returns an HTTP status code other than 200 (OK).
We've created a tutorial to show you how to do exactly this. I'll summarize it here so that you can see just how easy it is to do large-scale log file analysis when you have the proper tools at hand.
You need only configure the S3 bucket to grant access to an IAM role, and then launch a cluster with Hive installed. Then you SSH in to the master node of the cluster and define an external table over all of the site's log files with a Hive CREATE EXTERNAL TABLE statement. With the table in place, you can run a query that counts the non-200 responses grouped by backend, URL, and response code.
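The tutorial contains the exact statements; the sketch below shows their general shape. The table schema and regular expression here are illustrative stand-ins, not the tutorial's exact code:

```sql
-- Map an external Hive table onto the ELB log files stored in S3 (illustrative schema).
CREATE EXTERNAL TABLE elb_logs (
    request_time              STRING,
    elb_name                  STRING,
    client_address            STRING,
    backend_address           STRING,
    request_processing_time   DOUBLE,
    backend_processing_time   DOUBLE,
    response_processing_time  DOUBLE,
    elb_response_code         STRING,
    backend_response_code     STRING,
    received_bytes            BIGINT,
    sent_bytes                BIGINT,
    request                   STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
    'input.regex' = '([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) "([^"]*)"'
)
LOCATION 's3://my-loadbalancer-logs/my-app/';

-- Count responses other than HTTP 200, grouped by backend, request URL, and code.
SELECT backend_address, request, backend_response_code, COUNT(*) AS errors
FROM elb_logs
WHERE backend_response_code <> '200'
GROUP BY backend_address, request, backend_response_code
ORDER BY errors DESC;
```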
You could go even further, writing a script to perform multiple Hive queries or using the AWS Data Pipeline to process log files at hourly or daily intervals.
Read the ELB Log Processing Tutorial to learn more.
AWS partners Splunk and Sumo Logic have been working to support this new feature in their tools.
Splunk's Hunk app can map requests to geographic locations and plot the source of client requests, and it can also measure and display latency over time.

Read the Splunk blog post to learn more about this new feature.

The Sumo Logic Application for Elastic Load Balancing displays key metrics and geographic locations on one page. The product can also measure and analyze latency.
You can read their blog post to learn more about this new feature.
Start Logging Now
This feature is available now and you can start using it today!
Two of the most frequent feature requests for Amazon DynamoDB involve backup/restore and cross-Region data transfer.
Today we are addressing both of these requests with the introduction of a pair of scalable tools (export and import) that you can use to move data between a DynamoDB table and an Amazon S3 bucket. The export and import tools use the AWS Data Pipeline to schedule and supervise the data transfer process. The actual data transfer is run on an Elastic MapReduce cluster that is launched, supervised, and terminated as part of the import or export operation.
In other words, you simply set up the export (either one-shot or every day, at a time that you choose) or import (one-shot) operation, and the combination of AWS Data Pipeline and Elastic MapReduce will take care of the rest. You can even supply an email address that will be used to notify you of the status of each operation.
Because the source bucket (for imports) and the destination bucket (for exports) can be in any AWS Region, you can use this feature for data migration and for disaster recovery.
Export and Import Tour
Let's take a quick tour of the export and import features, both of which can be accessed from the DynamoDB tab of the AWS Management Console. Start by clicking on the Export/Import button.
At this point you have two options: You can select multiple tables and click Export from DynamoDB, or you can select one table and click Import into DynamoDB.
If you click Export from DynamoDB, you can specify the desired S3 buckets for the data and for the log files.
As you can see, you can decide how much of the table's provisioned throughput to allocate to the export process (10% to 100% in 5% increments). You can run an immediate, one-time export or you can choose to start it every day at the time of your choice. You can also choose the IAM role to be used for the pipeline and for the compute resources that it provisions on your behalf.
I selected one of my tables for immediate export, and watched as the MapReduce job was started up. The export operation was finished within a few minutes and my data was in S3.
Because the file's key includes the date and the time as well as a unique identifier, exports that are run on a daily basis will accumulate in S3. You can use S3's lifecycle management features to control what happens after that.
I downloaded the file and verified that my DynamoDB records were inside.
In the exported file, the attribute names are surrounded by the STX and ETX ASCII characters. Refer to the documentation section titled Verify Data File Export for more information on the file format.
The import process is just as simple. You can create as many one-shot import jobs as you need, one table at a time.
Again, S3 plays an important role here, and you can control how much throughput you'd like to devote to the import process. You will need to point to a specific "folder" for the input data when you set up the import. Although the most common use case for this feature is to import data that was previously exported, you can also export data from an existing relational or NoSQL database, transform it into the structure described here, and import the resulting file into DynamoDB.
Dice, the leading career website for tech professionals, will soon host its first mobile hackathon: Mobile Hack. Mobile Hack will take place March 15th and 16th at the Des Moines Area Community College’s Future Farmers of America Enrichment Center. Dice is based out of New York, but has a significant Iowa presence, and felt compelled to host the event in Des Moines. Jonathan Blank, Dice Director of Public Relations, commented:
“Our company has deep roots in Iowa, we wanted to hold the Hackathon here, in a central location with businesses and schools nearby.”
Mobile Hack hopes to attract mobile developers regardless of platform. Straight from the hackathon’s site, the only requirements include being 18 years old and awesome. Cash prizes are available for best overall, runner up, best use of Dice API, best design, and best student submission. Blank continued with the purpose and goal of the event:
“We have a long tradition of inspiring tech pros to achieve big things in a short amount of time.”
A panel of five judges will review the submissions. Dice chose the judges based on involvement with entrepreneurs and dedication to innovation. Accordingly, the judges come from Dice, Zetetic, BitMethod, Dwolla, and Lean Techniques. Judge and Dice Vice President of Engineering, Manish Dixit, commented:
“We’re entirely focused on this first Hackathon….We believe in this structure for training, networking and brainstorming new ideas, and we want tech professionals to understand how to combine any number of different skill sets together and then apply to mobile development.”
2014 is already turning out to be a great year for BigML. In the first few months of 2014, BigML announced the availability of the Winter Release, reached the 1-million-predictive-models milestone, became a Tableau Software technology partner and launched the open sunburst feature. And more exciting features are coming soon to the BigML platform.
An interactive SunBurst visualization can now be embedded in blog posts and web pages using a snippet of HTML code.
In January, ProgrammableWeb reported that BigML had announced the availability of the 2014 Winter Release, which includes new features and improvements that boost predictive modeling. The release also includes Flatline, a new Lisp-like language created and developed by BigML that comes with the BigML API. Flatline can be used to transform the REST resources programmatically and allows for a new paradigm that BigML calls Programmatic Machine Learning.
The million-model mark
At the end of February, BigML reached a major milestone: More than 1 million predictive models have been created with the BigML platform—400,000 of those predictive models had been created in the two months prior to the announcement.
About one year ago, BigML had reached the milestone of 10,000 predictive models created with the platform. Francisco J. Martin, BigML co-founder and CEO, said to ProgrammableWeb about the 1 million milestone: “A million models isn’t cool. You know what’s cool? A billion models. That’s our next milestone.”
Tableau Software partnership
Earlier this week, BigML announced that the company had become a Tableau Software technology partner, which will allow the analytics and visualization features of both platforms to be combined in a number of ways.
A new export feature has already been launched which allows BigML models to be exported directly to the Tableau platform as a calculated field. Tableau users can now interact with BigML models just as they would any other Tableau field.
Open SunBurst feature launched
SunBurst tree visualizations were introduced in mid-2013 as an alternative method of visualizing decision trees. A decision tree in machine learning is a model that predicts a target variable value based on multiple input variables. It is also used to create a simple representation for classifying examples.
In January, BigML announced the launch of the Open SunBurst feature, which makes it possible to embed a smaller version of an interactive SunBurst visualization into a blog post or Web page using a snippet of HTML code.
BigML API webinar
On March 11 from 10 a.m. to 11 a.m. PDT, BigML will present a free webinar, “Building Predictive Apps with BigML’s API.” The webinar will cover topics that help developers build machine learning predictive applications using the BigML API. Webinar topics will include (but will not be limited to):
- BigML API introduction
- Summary of bindings in other languages such as Java, Node.js, Clojure
- Dataset transformations
- Machine learning strategies: Covariate shift detection, boosting, smart feature selection.
Developers, data scientists and others interested in viewing the webinar can register to attend on the BigML website.
More exciting features coming soon
BigML is working on a lot of new features and improvements that will be coming out soon. Martin provided ProgrammableWeb with a few insights regarding the BigML road map. The company is currently working on three main areas: Extending the BigML API, a new clustering algorithm and organizations.
The company is working on extending the BigML API with a language to implement high-level machine learning strategies. As Martin explains it to ProgrammableWeb:
This language will allow our users to implement sophisticated machine learning templates in the cloud either programmatically or just in one click. This language together with our current transformational language for datasets will elevate programmatic machine learning to another dimension.
Another feature now in development is a new clustering algorithm that is the company’s first take on unsupervised learning. The clustering algorithm, along with the current supervised algorithm, “will help solve many other problems that our customers are facing with their data,” says Martin.
Organizations is a new feature in the works that will allow users to have multiple dashboards and collaborate with co-workers. The BigML organizations feature will work somewhat like GitHub organizations.
2014 is shaping up to be a banner year for BigML, with more features coming soon. To learn more about the BigML platform, visit BigML.com.
Late last month, ProgrammableWeb reported on the second annual Hackomotive competition presented by Edmunds.com. The annual three-day event brings together automakers, developers, and others in the automotive industry to develop new products that deliver a better car shopping experience. The winners of the 2014 Hackomotive contest have been announced, and Carcode.me, a Seattle company, took the top prize.
Teams participating in the contest used Edmunds’ APIs and other tools available on the Edmunds Developer Site to create applications that make car shopping easier for consumers. Each team was judged on five factors:
- Improvement to the existing car shopping process
- Perceived trustworthiness of the product
- Likelihood of real-world adoption
- Quality of product
- Quality of presentation
Carcode.me took home the $20,000 grand prize for creating a website plugin that provides a phone number for car dealerships that mobile shoppers can text. The plugin also provides an app that makes it possible for dealerships to respond to and manage messages from mobile customers. Carcode.me Co-founder Nick Gorton told ProgrammableWeb:
Carcodesms.com was thrilled to be part of this year’s Edmunds.com Hackomotive event. Obviously the outcome was great, but the experience the whole way through the event was excellent regardless of the outcome. We are looking forward to reinvesting the prize money into further developing the Carcode.me app and exploring a partnership with Edmunds.com.
The second-place $10,000 cash prize was awarded to team Au.to for its automotive-specific search engine, which is currently in development and will be coming soon. Adam Jansen of team Au.to told ProgrammableWeb:
Winning the customer challenge on Wednesday was the highlight of the event for us and really gives us that customer validation. Even though it was disappointing to not win (second is not too bad), it was a great learning experience and [we] are grateful that Hackomotive 2014 was AU.TO’s unveiling to the world! We are looking forward to the continued guidance from some key industry players we met, and feel AU.TO’s future is even brighter after this awesome event.
The $5,000 third-place cash prize went to team Showroom for a car shopping website that provides consumers a “visceral car research experience.” Nick Sergeant of Showroom said:
Hackomotive was a phenomenal experience for our early-stage startup. We got to work with some of the best folks in the industry and they really helped us refine our product’s message and delivery. Hackomotive will help us propel http://showroom.is to be a leader in the car-research category.
The Hackomotive competition was created to help make shopping for and purchasing a car or truck—a major consumer purchase—as easy and pleasant as possible. Bringing together automobile industry innovators to come up with ideas and build better car-shopping applications is one of the methods that Edmunds has implemented to achieve that goal. Edmunds.com CEO Avi Steinlauf explained:
Each team zeroed in on improving an element of car shopping, which is consistent with our mission here at Edmunds.com, and none of the solutions could have existed for dealers 20 years ago. Carcode.me supports dealer/customer texting, Au.to is building an impressive automotive search engine, and Showroom maximizes the experience of researching cars online.
Check back with the official Hackomotive Web site to see the published results of the 2014 Hackomotive competition.
Amazon Web Services — New Features for Amazon CloudFront: Server Name Indication (SNI) and HTTP Redirection
Amazon CloudFront is a web service for content delivery. You can use CloudFront to deliver content to your end users with low latency, high data transfer speed, and no commitments.
I am happy to announce that CloudFront now supports Server Name Indication (SNI) for custom SSL certificates, along with the ability to take incoming HTTP requests and redirect them to secure HTTPS requests. Both of these features are available at no additional cost to all users of CloudFront.
Let's take a look at both of these useful new features...
Server Name Indication for Custom SSL Certificates
We launched support for custom SSL certificates last year, giving you the ability to upload your own certificate to your AWS account, and to select it for use with your CloudFront distribution. This model works with any browser because we dedicate IP addresses to your SSL certificate at each CloudFront edge location. The dedicated IP addresses are necessary so that CloudFront can associate the incoming requests with the proper SSL certificate.
Today's launch of SNI support for CloudFront gives you a second way to use your own SSL certificates with CloudFront. With the SNI extension of TLS, dedicated IP addresses are no longer necessary. This is because modern web browsers and HTTP client libraries transmit the destination host name at the beginning of the SSL handshaking process.
While most modern browsers include the SNI extension, some older ones may not. These older browsers, including Internet Explorer on Windows XP, the default browser on devices that are running Android 2.2, and Java web browsers earlier than version 1.7 running on any operating system, will not be able to load your HTTPS content. If you want to take advantage of SNI and need to support legacy browsers, you can detect them in your client code and route the HTTPS requests directly to the origin server.
You can set up SNI when you create a new distribution. You can also modify an existing distribution so that it will use SNI instead of the default CloudFront certificate or the dedicated IP addresses. The creation and modification operations can be taken care of with a couple of clicks in the AWS Management Console.
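In the CloudFront API, this choice is expressed in the distribution's ViewerCertificate settings. A sketch of the relevant fragment (the certificate ID is a placeholder):

```javascript
// ViewerCertificate fragment of a CloudFront DistributionConfig.
// 'sni-only' serves the custom certificate via SNI; 'vip' requests
// dedicated IP addresses at each edge location instead.
var viewerCertificate = {
    IAMCertificateId: 'ASCAEXAMPLE123',  // placeholder ID of the uploaded IAM server certificate
    SSLSupportMethod: 'sni-only'
};
```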
There is no extra cost for using the SNI Custom SSL feature beyond the usual data transfer and request prices for Amazon CloudFront.
If you modify a distribution that you've previously set up using dedicated IP addresses (the All Clients option in the console) so that it uses SNI (the Only Clients... option), the monthly charge for the dedicated IP option will be pro-rated and stop as of the date of the modification.
To learn more about this new feature and how to use it, visit the documentation on Using Alternate Domain Names and HTTPS.
HTTP Redirection at the Edge
In many cases it is best to serve an entire web site through HTTPS. If you are moving an existing HTTP-based site to a full or partial HTTPS-based model, you can now use a CloudFront behavior to configure CloudFront to redirect HTTP requests to HTTPS.

When your users make an HTTP request for an object that's in a distribution configured for redirection, the CloudFront edge location will return an HTTP 301 (moved permanently) status code, along with the HTTPS URL for the object. The viewer will then make a second request for the object, this time via HTTPS.
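In API terms this is a per-cache-behavior setting; a sketch of the fragment:

```javascript
// Cache behavior fragment: answer plain-HTTP requests with a 301 to the HTTPS URL.
var cacheBehavior = {
    ViewerProtocolPolicy: 'redirect-to-https'  // other values: 'allow-all', 'https-only'
};
```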
Both of these features are available now and you can start using them today.
PS - I would like to thank everyone who has asked us to support SNI. Your feedback means a lot to us. We read it, analyze it, and use the results to prioritize our work. Please continue to send feedback on every AWS service our way!
Twitch today announced the Twitch Mobile Software Development Kit (SDK), which will eventually let mobile device gamers capture, archive, and live-broadcast their games to Twitch, a social network for gamers that already has a large presence on consoles and PCs. Android and iOS device owners have access to the Twitch community through a dedicated mobile app, but it allows only for viewing and interacting with content that’s already been posted to the site. For mobile gaming fanatics, things are about to get a whole lot more interesting.
The Twitch Mobile SDK takes everything that’s great about Twitch and makes it possible from mobile devices. The SDK offers the ability to capture and broadcast gameplay video and audio; capture video from the front-facing cameras; capture audio using an internal or external microphone; and archive videos for viewing and sharing on Twitch. Users will be able to adjust between low-, medium-, and high-quality broadcast settings and easily discover related broadcasts from other gamers. The SDK also includes a solid chat client complete with emoticons, badges, and color schemes.
“Our vision is to provide the Twitch community with the ability not only to view but also to broadcast live video game content wherever they are, whether they’re on the go or in the living room,” said Matthew DiPietro, VP of Marketing at Twitch, in a statement. “We’ve achieved that with our PC and console integrations, so the trifecta will soon be complete with our deep and concerted foray into mobile broadcasting.”
Twitch said that its Android and iOS apps have been downloaded 10 million times, split evenly between the two platforms. That milestone and its 45 million active monthly users speak to the popularity of the service for sharing games and interacting with other gamers.
Michael Pachter, video game analyst with Wedbush Securities, believes that Twitch’s move to mobile could blow things wide open. “Facilitating the ability to ‘broadcast anywhere’ by bringing live streaming functionality to mobile has the potential to convert millions of Twitch’s passive viewers into active broadcasters,” he said.
Transitioning its passive viewers into active ones is exactly the reasoning behind Twitch’s move to mobile. Gaming on mobile devices has exploded in the last several years, with hundreds of thousands of games available on smartphones and tablets. One need only look at the wild popularity of Flappy Bird several weeks ago to see that the demand for good mobile games is off the charts. Enabling those games with support for Twitch could be the key to increasing Twitch’s user base exponentially.
Locaid launches first multi-state poker pact API. Google Wallet Instant Buy API makes your wallet…lighter. Plus: Parasoft enhances mobile API testing, and Google and Intel release x86 emulator with Google APIs.
Locaid Takes the Gambling out of Compliance with the First Multi-State Poker Pact API
What do the real estate business and internet gambling have in common? The famous slogan: “Location, location, location.” But the meaning for the online version is reversed: where once the slogan meant make sure your business is in the right place, the online gaming version means make sure your customer is in the right location to legally engage with your business. Locaid now provides the only multi-state geo-location compliance API that regulators will accept to verify that a customer really is in the state they say they are. We first covered the API release back in June. Now, for the first time, gamblers can gamble across state lines thanks to that API, provided the states in question have signed onto the Multi-State Internet Gaming Agreement. Delaware and Nevada just became the first two states to do so.
But what’s the big deal? Location APIs are a dime a dozen and have been around for years. Pinpointing people’s location is easy, sometimes too easy (such as when a dating app got hacked last year to allow people to locate users too precisely). The problem is that most location APIs use GPS–and GPS is relatively easy to spoof, making it possible for you to appear to be in one location while you are actually somewhere else. The chances of gamblers engaging in such spoofing may or may not be high, but state regulators don’t like the odds.
To get ironclad verification–exact enough to confirm that a patron is not in a parking garage or parking lot, since it is illegal to gamble from those venues in New Jersey–Locaid relies on location data from the user’s carrier. Cell phones automatically register with the nearest tower, so carrier data is far harder to spoof than GPS.
As the CEO commented in the press release,
“Locaid delivers mobile compliance technology that ultimately enabled the first interstate online poker deal in U.S. history,” said Rip Gerber, President and CEO, Locaid. “We have been plumbing the mobile location systems with carriers, operators and regulators for years to reach this landmark accord, and we congratulate Nevada and Delaware in delivering a secure, customer-friendly geolocation compliance approach that can cross state lines. We look forward to working with additional state regulatory teams and gaming operators as they onboard this year. And if New Jersey joins the pact, our location service currently deployed in the Garden State will seamlessly integrate.”
Locaid is the largest provider of LaaS, Location as a Service, and its compliance API is the only location compliance solution of its kind.
Google Wallet Instant Buy API Raises Conversion Rates in the Ecommerce Tap Dance
Amazon’s one-click checkout proved such a hit with customers that Amazon licensed the technology to Apple back in 2000–early proof that frictionless commerce is worth a lot of money. Now, in a mobile world, Google Wallet is trying to provide one tap, or as few taps as possible. One customer, Eat24, uses the Google Wallet Instant Buy API and finds that purchases are 11% higher and that users are a whopping 72% more likely to make repeat purchases.
As Laurie Sullivan reports in SearchBlog, the results are astounding:
Priceline discovered early in the transition that shoppers signing in through Google on their Android app are 71% more likely to make a reservation or a purchase, compared with those who book as guests.
Mobile has become a major tool for consumers looking online for travel information, especially with the option to pay with Google Wallet through their Android app in just two taps. During the first few months, the share of transactions on Priceline’s Android app grew at an average rate of more than 100%.
It’s a new mobile world, but age-old principles like frictionless commerce are proving to be as powerful as ever.
API News You Shouldn’t Miss
- Tap Two For Priceline.com, Rue La La, Newegg In Google Wallet
- Will Google Maps Ever Come to Apple CarPlay?
- Inmarsat Chooses Apigee to Power App Development Across World’s First Globally Available High-Speed Mobile Broadband Service
- Helioviewer API tools for multiple platforms now available
- Locaid Launches First Multi-State Geolocation Compliance API™ For Historic Online Interstate Poker Pact
- Truth, Lies & APIs
- Building Successful Web APIs
- APIs Are Bridging the Mobile App Gap
- Parasoft API testing enhances automation for mobile API Testing
- Developer PSA: Google And Intel Release x86 Emulator Image With Google APIs For The First Time
The VM Import/Export feature gives you the power to import existing virtual machine images to Amazon EC2 instances and to export them back to your on-premises environment. You can move images to hasten and simplify your migration from on-premises to the AWS cloud or as part of a disaster recovery model.
Import & Export Windows Server 2012 Images
I am happy to announce that you can now import Windows Server 2012 images to EC2 and export them back. This is generally done with the EC2 API tools; if you use VMware, you can instead use the Amazon EC2 VM Import Connector for VMware vCenter.
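For example, an import with the EC2 API tools looks roughly like this (the image name, instance type, bucket, and credentials below are placeholders–consult the VM Import documentation for the options that apply to your image):

    ec2-import-instance WinSvr2012-disk1.vmdk \
      -f VMDK -t m1.xlarge -a x86_64 \
      -b my-import-bucket \
      -o $AWS_ACCESS_KEY -w $AWS_SECRET_KEY

The command uploads the disk image to the named S3 bucket and creates a conversion task; you can then track progress with ec2-describe-conversion-tasks.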
AWS will provide the appropriate Microsoft Windows Server license key for the imported image. Your on-premises key will not be used in the cloud and you are free to use it for other Windows Server images that are still running in your on-premises environment.
Windows Import Enhancements
In addition to adding support for Windows Server 2012, VM Import has also made a few improvements to the import process for customers importing Windows Server 2003 and Windows 2008 images. Amazon EC2 instances created from Windows VMs will now benefit from having EC2Config installed by default and from having the latest-generation Citrix PV drivers.
Support for Windows Server 2012 is available now and you can start using it today.
You can also import Windows Server 2003, Windows Server 2003 R2, Windows Server 2008, Windows Server 2008 R2, Red Hat Enterprise Linux, CentOS, Ubuntu, and Debian images; see the VM Import Prerequisites and Before You Get Started section of the documentation for additional information.
The line between what constitutes a software-as-a-service (SaaS) application and a platform-as-a-service (PaaS) environment has always been relatively thin. SaaS applications that expose an API to third-party developers can quickly transform into a development platform.
The latest SaaS application provider to make that transition is GoodData, which today announced the GoodData Open Analytics Platform. According to Jeff Morris, vice president of marketing for GoodData, the GoodData Platform–designed as a PaaS environment optimized for data discovery and governance–provides a framework for consolidating structured and unstructured data that can be exposed via GoodData APIs.
Developers have the option of using those APIs to invoke that data or to build analytic applications, written in the R programming language, that will reside directly on the GoodData Open Analytics Platform, says Morris. The data residing on the GoodData platform can be stored as raw Hadoop files or in a data warehouse based on a columnar database.
Morris explains that in addition to pulling data from on-premises systems, the GoodData Open Analytics Platform comes with connectors and metadata maps for 50 sources of data in a GoodData private cloud that spans hundreds of servers, providing access to terabytes of RAM to support analytic applications running in memory.
The Platform, adds Morris, is designed to be extensible in that organizations can make use of ETL tools or APIs to load data into the platform, define and execute their own workflows, and share data with other data discovery tools.
GoodData is not the only vendor making analytics available as a cloud service. But what will distinguish GoodData most is how open the analytics environment is, says Morris. In a world in which organizations can never be sure where their next source of Big Data might be coming from, the GoodData Open Analytics Platform provides a level of interoperability that will allow organizations to more easily correlate multiple data sources.
At the end of the day, Big Data is as much about collecting massive amounts of information as it is about finding the most cost-effective and practical approach to actually analyzing that information. For these reasons, organizations looking to make data-driven decisions based on multiple sources of information are going to look to the cloud not only to store that data but also to run analytics applications as close to that data as possible. Naturally, once that occurs, the next step will be to expose the results of those analytics via APIs to inform other applications–in effect, bringing the benefits of advanced analytics for the first time to an emerging API economy that today is all too often starved for truly meaningful data.
ProgrammableWeb initially covered Cloudinary in 2012 as it was a burgeoning startup in the quickly growing cloud community. Today, Cloudinary announced a host of new image related add-ons that should help Cloudinary expand its offering and user base. Prior to the announcement, we caught up with Cloudinary CEO, Itai Lahan, to learn more.
The new add-ons arrive through a number of new integration partnerships. Whenever we see such robust integration capabilities, we can’t help but think of the API story underneath. Lahan confirmed the API focus of the announcement:
“Cloudinary’s core is based on our rich APIs. The service offers a complete cloud-based image management solution for websites and mobile applications, covering everything from image uploads to cloud-based storage, online administration, on-the-fly manipulation and optimized delivery. We’ve built the service to be used either end-to-end or as a mix-and-match. To get this going, we’ve built a modular infrastructure that begins at a powerful, inclusive RESTful API layer. Above it we’ve layered client integration libraries for all major web development languages. On top of these is a powerful interactive management console, side-by-side with integrations to common PaaS, BaaS and CMSs. Our new add-on partnerships are a direct extension of our core capabilities. We’ve engineered the add-ons so they speak the same language as the rest of our APIs and the integration is completely seamless.”
The add-ons allow Cloudinary to add to its already robust feature set. The goal is to expand its one-stop-shop mentality for image needs. Lahan continued:
“At Cloudinary, we are committed to offering a go-to-place for any image-related requirement you may encounter when building, supporting, extending and scaling your website or mobile application. The new partnerships give our customers instant access to powerful image-processing capabilities. A few examples – with Cloudinary, customers can easily moderate their user uploaded images, one-by-one, making sure that adult-oriented content doesn’t leak in. With this launch, our customers will be able to moderate all their images automatically with just a virtual flip of a button, using WebPurify’s human moderation experts service. With Cloudinary, you can crop your images perfectly using face-detection based cropping. This launch will also provide our customers with perfect crops even if faces are not present, using Imagga’s powerful smart cropping. The list goes on and on.”
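To give a feel for the URL-based manipulation API that these add-ons extend, here is the general shape of a face-detection crop (the demo cloud name and image come from Cloudinary’s public documentation; add-on transformations follow the same comma-separated parameter convention):

    http://res.cloudinary.com/demo/image/upload/w_200,h_200,c_thumb,g_face/face_top.jpg

Changing the transformation parameters–width, height, crop mode, gravity–produces a new derived image without re-uploading the original.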
Cloudinary has always targeted the developer community. Both web and mobile developers have image-related needs and Cloudinary simplifies such product requirements. In addition to the developer community, Cloudinary allows organizations as a whole to edit and manage online images regardless of background. Lahan explained that its user base has always been Cloudinary’s best marketing tool, and Cloudinary fully intends the base to take the new add-ons to market:
“Our customers have been our strongest growth agent so far. We will keep building the best service that we can and do our best to answer all our customers’ needs. If we’ll succeed, we trust our community to continue spreading the word around about Cloudinary.”
Cloudinary hand-picked a small group of its 20,000 users as beta testers for the new add-ons. So far, Lahan explained, the response has been very positive. He then described some potential use-case scenarios for the new offerings:
“At launch, Cloudinary’s new add-ons will already cover anything from automatic image moderation, automatic image categorization, smarter image cropping, improved image compression, advanced face attributes detection, website screenshot generation and more via a single click integration. Some quick examples include:
- Taking an Office Docx document and creating a thumbnail image with Sepia effect and adding watermark, using Aspose’s document conversion add-on
- Using Imagga’s smart cropping add-on to perfectly crop an image and also adding some cool manipulations on top – before, after
- Creating a website screenshot using URL2PNG’s add-on while adding an overlay, rounded corners, border and shadows
- Automatic “Glasses-ification” using Rekognition’s face attributes detection add-on – before, after”
Cloudinary is heavily engaged with its user base, and it will use that user base to measure the success of the new add-ons. Lahan left us with his expectations for evaluating the add-ons:
“We will be closely monitoring the usage patterns for each one of our add-ons, but the real success will be measured by the response of our community. It’s safe to assume that once this is off the ground, our customers will drive our add-on partnerships forward with feedback and feature requests, like they have for Cloudinary’s core features so far.”
As a platform for indexing and analyzing machine data, Splunk has emerged as a provider of a Big Data platform that developers can easily invoke. Now Splunk is starting to build an ecosystem around that platform.
Today, Splunk and Tableau Software announced that Tableau now allows users of its data visualization software to manipulate Splunk data. As part of the agreement, the latest version of Tableau software includes Splunk Enterprise as a native data source using the recently launched ODBC driver created by Splunk.
Ted Wasserman, a member of the product management team at Tableau, says that end users of the company’s application can now pull Splunk data into the Tableau in-memory engine or launch a query directly against data residing in Splunk. This hybrid query approach, says Wasserman, gives organizations the flexibility they need to work with unstructured machine data that can be quite large.
The alliance with Tableau, says Tapan Bhatt, vice president of business analytics for Splunk, will make it easier for organizations to correlate other sources of data with the machine data that Splunk collects to more easily identify a trend line. As valuable as machine data is, it’s usually only when that information is compared with other sources of data that the real significance of all that machine data becomes apparent.
Splunk now boasts 7,000 customers worldwide, with total revenues for its most recent quarter reaching $100 million. As part of a concerted effort to increase the base of people with Splunk skills, the company has inked commercial agreements with entities such as Internet2, the nonprofit organization that manages a private Internet on behalf of dozens of universities and government agencies.
The alliance with Splunk, says Wasserman–part of a general movement to create data-driven analytics applications–is intended to make it easier for businesses to make sense of Big Data so that they can make more informed decisions. Increasingly, that means creating applications that take advantage of APIs and connectors to access multiple sources of Big Data in a way that makes it simpler to apply analytics against multiple sources of data.
ProgrammableWeb’s Editor-in-chief David Berlind moderated a lively panel session at the recent DeveloperWeek conference in San Francisco. The panel included industry thought leaders Jason Harmon from PayPal, Jeremiah Lee Cohick (Fitbit), Alex Salazar (Stormpath), Uri Sarid from MuleSoft (the parent company of ProgrammableWeb) and John Musser, founder of both ProgrammableWeb and API Science. In an hour-long panel on “emergent APIs,” panelists covered API design, SDKs versus APIs and the challenge of API versioning. Bonus points: Each shared their number one piece of advice for developers in businesses charged with creating their first API.
How important is developer experience?
Berlind stirred the panel with a devil’s-advocate approach to designing for the developer experience (DX) of an API. “Is DX really necessary?” Berlind asked. After all, developers are problem-solvers who will do what it takes to get things done. Although the panel agreed with Berlind that developers are a resilient, resourceful bunch, they cautioned against taking too many shortcuts in API design.
Sarid urged API designers to take an all-encompassing business approach: “You should think about your customers, and design your [API] product around your customers…. And developers are your customers.”
Harmon agreed with the idea that developers would do what it takes if they needed to access a business’ data via API, but he cautioned that a poor DX would destroy developer loyalty and make it harder for businesses to build an active, evangelized community around their API products: “If developers have to jump through hoops to get to the capabilities unlocked in your API, they are going to send a nasty tweet. It’s customer feedback.”
Musser suggested that new API designers could look at engagement levels on ProgrammableWeb to determine what makes a successful API. (Writer’s note: Keeping up with Adam DuVander’s posts, for example, is a great way to review current industry best practices, because he often writes summary posts about what leaders are doing or trends he has identified from ProgrammableWeb data.)
“If you look at the top APIs on PW, the data was so good that developers were going to use it,” Musser confirmed. “The second group of API popularity was with businesses like Twilio and Stripe, which have created a business by doing a great job at user experience.”
Sarid shared the example of a previous project he had worked on at a company before MuleSoft: “Scraping data from a TV Guide API needed to be done for a business project, but it was such a difficult process that it brought in the whole question of whether the business model would be viable” if the process needed to be repeated regularly, Sarid said.
In this case, he agreed that because the business needed the data (at least once), it would make the API work. However, this was done at the expense of long-term customer loyalty. “The minute someone comes along with a better API that can give access to that sort of data, everyone jumps to that data source. So you are building on a house of cards to choose [not to prioritize the developer experience],” said Sarid.
API business models
Even though the panel consisted of developer evangelists who were comfortable getting their hands dirty with coding and API design issues, panelists were also used to liaising with the business side of their companies. It was a position the panelists encouraged all API developers to take in their business.
“API-first sounds great, but it is a means to an end,” said Salazar. “The business value we are trying to provide happens to be a very developer-centric problem—that is, solving identity authentication. So to deliver that product to our developers, it makes sense in an API.”
Stormpath’s case is interesting and may be unique among businesses looking to create APIs for their end customers. For many established businesses, APIs create new access pathways into the business’s data. But for Stormpath and other API-as-product business models, it is a service component or piece of functionality that is being made available by an API.
“Exposing data is a traditional business approach to creating an API product. Exposing business logic is an API-first product business model,” said Harmon.
APIs vs. SDKs
Harmon described the API strategy discussion that he is leading at PayPal. “We are coming on very strong now with a look at the total vision of our business capabilities and putting together a capability model first: What are the capabilities we really offer? Then we are talking about user interface; that’s the basis of all of our REST API design,” he said. “We need to create a quality experience for developers. Our SDKs often tend to be the training wheels to get people used to our APIs. Measuring things like ‘time to first hello world’ (for both external and internal developers) is important. What it boils down to is speed and capability.
“SDKs are a way to get quickly onto a platform. If I buy a car and I know I want to race eventually, I want to buy a car that I can get down to the wires with. The SDK is like the plushy dashboard; the REST API is the race car,” Harmon said.
Meanwhile, in the long run, Salazar sees SDKs as more integral than APIs for developers who want to get up and running quickly. “They tinker with the API to get a feel for the features we offer, but, as soon as they actually want to start to integrate, they are looking for an SDK in the programming language they want. What we are now seeing is that customers want framework integration: It’s not enough to have a Python SDK — you need a Django framework as well,” he said.
“If you are building an API capability into your app or product, SDKs do a lot of the heavy lifting: SDKs can provide caching, for example.”
When looking at the API versus SDK question, both Salazar and Harmon recommended reviewing API analytics to see how developers are using APIs. “Check your API analytics to see what client developers are using—it will open your eyes to what is happening,” said Salazar.
The panel unanimously agreed that API versioning is one of the greatest challenges in API design and development.
“The challenge with APIs is that it is very difficult to change anything,” Salazar said. “You need to do some future-scoping to see what API capabilities you may need in one or two years from now. The best way forward is to follow proper REST models: Represent everything as a noun. If you get any type of scale, it will hurt you if you do not.”
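To illustrate the point (our example, not the panel’s): noun-based, resource-oriented routes leave room to grow, while verb-style endpoints tend to multiply as features are added:

    GET  /v1/customers/42/orders      list a customer's orders
    POST /v1/customers/42/orders      create an order for that customer
    GET  /v1/orders/9001              fetch a single order
    GET  /getOrdersForCustomer?id=42  verb-style RPC; every new feature mints a new endpoint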
He added: “Modular design is always faster and cheaper than insular design. As a developer, be wary of insular design; it will slow you down and, as soon as you have a competitive product, you will have a modular competitor coming up behind you who is activating an ecosystem of partners.”
Harmon believes developer engagement from the start can help with some of the challenges with versioning APIs down the track. Managing a good directory of who is using your API enables a provider to communicate change. “If you don’t start early with engaging your developer community—you need to know who they are from day one—you will have great difficulties broadcasting change [if you are retiring an early version of your API].”
Final advice for business developers getting started with an API
Audience members asked the panel to share their top recommendation for developers embarking on API design in their business. Here’s what the panelists shared.
Sarid: “Start with API design, and make sure the market wants that thing.”
Cohick: “User research will validate your API design.”
Musser: “The business model comes first: How well is your API mapped to your business model?”
Harmon: “Instrumentation: Make sure you can measure how people are using your API and what error codes are being logged. You can’t live without it, and you have no operational wherewithal without it.”
By Mark Boyd. Mark is a freelance writer focusing on how we use technology to connect and interact. He writes regularly about API business models, open data, smart cities, Quantified Self and e-commerce. He can be contacted via e-mail, on Twitter or on Google+.
Go ahead, file, change. I've got my eyes on you!
If we did not regularly re-invent the wheel, Ferraris would have wooden cart-wheels.
This is a little hack that I spent twenty minutes writing and have spent twenty days regretting that I didn't write twenty years ago. The inspiration was a question from Florent several years ago. I no longer remember the question, or the answer, but he's still the one that gets the credit.
I do lots of tasks that fit into a general pattern: edit some file, run some command, view the results. This weblog posting is one such example. I am currently editing the source of this posting. If I run the command post -f ~/tmp/2014/03/04/watcher.xml http://localhost:8401/pub/ then I can view the posting in the local version of my weblog.

But the same pattern fits presentations I write, programs I compile, and a host of other things. The only differences, really, are the files I edit and the command that produces the results.
Tangential to the question that Florent asked was the fact that he was using some Java API to “watch” the contents of a directory. That was obviously a technique that would short-circuit the edit-process-view cycle, but it still took me years to get around to writing watcher.
Watcher reads a simple XML format (naturally):
    <watcher>
      <!-- Ignore any filename that contains a # -->
      <action match="\#"/>
      <!-- If a .xml file changes, run make -->
      <action match=".*\.xml">make</action>
    </watcher>
Configuration files named ./.watcher.xml are read when watcher starts.
Anytime a matching filename changes, the specified command is run. The entire subtree under the current directory is monitored, so paths can be included in the match patterns.
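The behavior is easy to imitate. Here is a rough Python sketch of the same idea–polling file modification times rather than whatever mechanism watcher actually uses, and assuming the XML configuration shown above:

    import os, re, subprocess, time
    import xml.etree.ElementTree as ET

    def load_actions(path=".watcher.xml"):
        root = ET.parse(path).getroot()
        # (pattern, command) pairs; an empty command means "ignore this file"
        return [(re.compile(a.get("match")), (a.text or "").strip())
                for a in root.findall("action")]

    def snapshot(top="."):
        mtimes = {}
        for dirpath, _, files in os.walk(top):
            for name in files:
                full = os.path.join(dirpath, name)
                try:
                    mtimes[full] = os.path.getmtime(full)
                except OSError:
                    pass  # file vanished between listing and stat
        return mtimes

    actions = load_actions()
    seen = snapshot()
    while True:
        time.sleep(1)
        current = snapshot()
        changed = [f for f, m in current.items() if seen.get(f) != m]
        seen = current
        for path in changed:
            for pattern, command in actions:
                if pattern.search(path):
                    if command:
                        subprocess.call(command, shell=True)
                    break  # first matching action wins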
It's not terribly sophisticated, but it sure is convenient!
Who art thou? Thou art the sum of all thy posts.
This mortal life is a little thing, lived in a little corner of the earth; and little, too, is the longest fame to come—dependent as it is on a succession of fast-perishing little men who have no knowledge even of their own selves, much less of one dead and gone.
It has been a source of minor embarrassment to me for many years that the “what's new” section of http://nwalsh.com/ has been so utterly moribund. The sad truth is that, although I consider nwalsh.com to be my “home page”, much of what I used to use it for has been superseded by other sites: my random musings go on my weblog or on Twitter, photographs go on my photo site, and projects are most likely to go on Github.
As it happens, I actually archive all of those sites. I keep my data close to my chest. I think the cloud is the greatest thing ever, and I expect the things I put in it to have a durability roughly equivalent to the durability of the fluffy white things in the sky. I'm weird, ok, I care about the data I produce. It's mine and I don't trust you to preserve it for me.
I recognize that this is something like the digital equivalent of hoarding but I don't feel any great compunction about it.
It occurred to me not long ago that I could synthesize a much more dynamic “what's new” section for nwalsh.com simply by syndicating the various things that I archive. This led quickly to a little ≅300 line MarkLogic server that collates the atom feeds from sites of interest and republishes them in HTML.
Share and enjoy.
Street Repairs API makes service of spotting problems easy, popular. AdStage launches platform API. Plus: API design tooling, SeriousBit launches NetBalancer, and 5 new APIs.
Britain’s Street Repairs API Steers Crews to Potholes
Street Repairs, a British site that lets residents report potholes and other street problems to their local council, now offers an API that can provide a view of the reports made, local trends, issues resolved and more.
But this is more than just crowdsourcing the report of potholes on streets. As the company explains,
Street Repairs are committed to making it as easy as possible for local people to report local problems to their council. We then work with the council to get these issues resolved, while keeping the original informant up-to-date with the progress being made. …Since its launch just a few months ago, Street Repair’s popularity has exploded among members of the public via social media. It now has thousands of Facebook fans and is receiving hundreds of detailed reports from concerned members of the public.
By installing the free API plug-in on their websites, newspapers, community organisations, cycling groups, and other interested website owners can encourage their audience to engage with local authorities to improve their neighbourhood and community.
The API features simple integration, Google maps, geolocation, photo uploading, storage, and more.
AdStage Launches Platform API and New App Partnership Program for Advertising
AdStage has announced a new API that brings the ability to manage, monetize and market apps to thousands of advertisers on AdStage. The company has also announced an all-in-one marketing partnership program. The cost is $99 per month after a free month’s trial period.
As Richard Harris reports in App Developer Magazine,
The AdStage advertising platform is a self-serve cross-network online advertising platform with management and analytics across search, display, social and mobile ad networks like Google, Bing, Facebook & LinkedIn. It’s an all-in-one marketing platform, with an integrated app system, for advertisers of all sizes.
A/B testing is easy through one of three optimization apps. Other apps offered by the platform include creation apps such as Ad Variants, and partner integration apps like Facebook Retargeting and Banner Ad Builder.
API News You Shouldn’t Miss
- CA Technologies Launches Portals for API Development
- AirPair Expands Its Live Programming Assistance By Partnering With Stripe, Twilio, And Others
- API layer fuels official US Navy mobile app
- Parasoft API testing enhances automation for mobile API Testing
- AdStage Launches New App Partnership Program and Platform API to Enhance Advertising Platform
- SeriousBit Launches NetBalancer Network Monitoring Service With REST API Support
- API Design Tooling From RAML
5 New APIs
Today we had 5 new APIs added to our API directory, including an Indian bulk SMS service, a Nigerian bulk SMS service, a bulk SMS service and an Ethiopian mobile bulk SMS service. Below are more details on each of these new APIs.
24X7SMS API: 24X7SMS is an Indian SMS portal that provides bulk SMS and voice chat solutions to customers around the world. 24X7SMS can connect with more than 800 mobile operators worldwide. Recipients don’t have to pay for SMS received through 24X7SMS. Support is available at all times over chat, email, or phone calls. 24X7SMS provides two SMS APIs: one for sending texts within India, and another for sending texts internationally.
donbulkSMS API: donbulkSMS is a Nigerian bulk SMS portal. It provides flexible billing packages and supports both numeric and alphanumeric sender IDs. Users can integrate donbulkSMS with their own website or application programmatically via API, enabling users to send SMS or check their account balances over REST or SMPP calls.
GenesisBulkSMS API: GenesisBulkSMS is a Nigerian bulk SMS provider that caters to businesses, organizations, and private individuals. Users can integrate with the GenesisBulkSMS gateway using the service’s REST API, allowing users to send SMS and check their account balances from within third-party websites, systems, or applications.
GMT SMS API: GMT SMS provides bulk SMS services. Users can customize their sender ID, schedule SMS, receive inbound SMS, make Network Query (NQ) requests, and more. An online customer portal allows users to access their delivery reports at any time. Users can integrate with GMT SMS programmatically via REST or SMPP API. Sample code is available in PHP, ASP, C#, and other languages.
WebSprix API: WebSprix is an Ethiopian provider of MVAS (Mobile Value Added Services). Their services include bulk SMS, voice broadcasting, call conferencing, SMS marketing, IVR (Interactive Voice Response), SMS-based polling, etc. Developers can integrate WebSprix’s bulk SMS services with their own applications or systems via API. Ready-made scripts are available in PHP and Java.
Daniel Jacobson is the vice president of Netflix Edge Engineering, which handles the Netflix API and playback functionality. Prior to his time at Netflix, Jacobson was director of application development at NPR. There, he led the development of NPR’s digital properties, including the NPR API. Jacobson also recently co-authored the O’Reilly book “APIs: A Strategy Guide.” You can follow Daniel on Twitter or connect with him on LinkedIn.
In some of my more recent posts, I have written about upcoming transformations within the API space, ranging from orchestration layers to how Netflix pursued an optimized API design for the 1,000-plus device types that it supports. In this post, I will provide more context for these decisions and how they may (or, perhaps more likely, will) apply to your API designs–especially when it comes to the benefits of the separation-of-concerns model.
What do APIs do?
APIs do a lot of things, but, principally, they are responsible for three things: data gathering, data formatting and data delivery.
Of course, APIs are responsible for a number of other things as well–for example, security, discoverability and resiliency–but it’s all in support of the big three. For the purposes of API design, let’s break down these three functions.
1. Data gathering is the process by which an API calls out to its data source(s) to retrieve whatever data is necessary to satisfy a request. In some cases, the data source could be a database accessed directly by the API server. In other cases, the API server could be accessing a range of backend distributed systems to gather bits of what it needs from each. Regardless of where the data source is, the data-gathering step is essential to satisfying a request–without it, there is no payload.
2. Once data has been gathered, the next step is data formatting in preparation for the response. There are many things that can happen in the formatting phase, including pruning of elements, transforming values, additional lookups, and retries for missing elements. Once the final bundle of elements is assembled and processed, it needs to be structured in concert with what the requesting agent expects. Whether that is through RSS, hierarchical JSON, flat XML, protocol buffers or some other payload structure, this is when the data gets manipulated according to certain specifications.
3. Data delivery is the act of transferring the formatted payload from the server to the requesting agent. Most often, this is accomplished through HTTP, and the full document will be delivered once processed completely by the server. It could, however, be handled in a variety of other ways, including streaming bits across the wire once gathered rather than waiting for the complete payload.
Separation of concerns
When considering these three responsibilities, it becomes clear that the API provider (that is, the team that produces and maintains the API) has a very different set of concerns than API consumers (that is, internal and/or external developers, UI teams and partners who make requests to the API).
API providers care deeply about how data is gathered. API consumers have no vested interest in how gathering happens, as long as it happens. On the other hand, API consumers have their own concerns about what they are receiving and how it is to be received. That is, API consumers care about formatting and delivery methods, although there is likely a lot of diversity in how each consumer wants those functions carried out. API providers also have an interest in formatting and delivery, but that interest can be expressed in a very different way. The API provider cares about the format and delivery method only insofar as it affects the ability to handle the diverse needs expressed by API consumers.
Because both parties (API providers and API consumers) have an expressed interest in formatting and delivery, API providers most often provide a common, one-size-fits-all (OSFA) request/response model to which everyone must adhere. This makes it easier for the API provider to support virtually all API consumers, with the primary trade-off being that each API consumer has a serviceable, although not optimal, way to interact with the API.
As I stated in a previous post, this OSFA model works great for LSUDs (large sets of unknown developers) but not for SSKDs (small sets of known developers). As private APIs in support of mobile and device strategies continue to grow in prominence, we will continue to see the holes in this OSFA API design model.
Benefits for all involved
The better approach for handling API design for SSKDs is to create true separation of concerns, rather than have all server-side decisions made by the API provider. The API provider should be principally focused on designing systems that are excellent at data gathering. The systems should also be able to support, in some form, the ability for API consumers to define their own formatting and delivery options.
Handing off the formatting and delivery responsibilities to the consumers has tremendous benefits for all involved. It gets the API provider out of the business of trying to handle all of the nuanced formatting demands of a growing and diversifying set of consumers. Meanwhile, it liberates the API consumers from the rigid protocols that have been forced upon them, letting them define payloads that are optimal for the way they develop, deploy and test their code (while likely improving the processing of their UI code).
Let’s look at a couple of examples to see why this separation of concerns is important.
Some devices have hardware constraints, such as limited memory or a weaker CPU. Because of these limitations, a UI developer may prefer to have smaller payload sizes or a flatter object model that is easier for the CPU to parse. Meanwhile, some devices may have strict requirements. For example, a requirement for proprietary XML payloads may present a conflict with other devices that prefer to handle serialized JSON. Because of these differences, exacerbated by the tremendous growth in connected devices, separating out these concerns can play a significant role in development velocity and system reliability.
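To make this concrete, here is a minimal sketch of consumer-controlled formatting–our illustration, not Netflix’s implementation; the fields and flatten parameters are hypothetical. The stubbed-out catalog stands in for the data-gathering step, while the query parameters let each consumer decide the shape of the formatted payload:

    # Minimal sketch of consumer-controlled formatting (illustrative only).
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Stand-in for data gathering (normally calls out to backend systems).
    CATALOG = {
        "id": 42,
        "title": "Example Show",
        "ratings": {"average": 4.6, "count": 12345},
        "artwork": {"small": "/img/42-s.jpg", "large": "/img/42-l.jpg"},
    }

    def flatten(obj, prefix=""):
        """Collapse nested dicts into dotted keys for CPU-constrained devices."""
        flat = {}
        for key, value in obj.items():
            name = prefix + key
            if isinstance(value, dict):
                flat.update(flatten(value, name + "."))
            else:
                flat[name] = value
        return flat

    @app.route("/title")
    def title():
        payload = dict(CATALOG)
        fields = request.args.get("fields")
        if fields:  # the consumer picks which elements to keep
            wanted = set(fields.split(","))
            payload = {k: v for k, v in payload.items() if k in wanted}
        if request.args.get("flatten") == "true":  # the consumer picks the shape
            payload = flatten(payload)
        return jsonify(payload)

    if __name__ == "__main__":
        app.run()

A memory-constrained device might request /title?fields=id,title, while one with a weak CPU might add &flatten=true to avoid walking a nested object model.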
Of course, there are many ways for API providers to enable API consumers to control formatting and delivery, which will be a design decision that needs to be discussed by all parties involved.
My team at Netflix and I have written a range of posts that clarify how we designed our system to support this separation of concerns. We believe that our model is quite strong, especially given the scale of the operation in handling more than 1,000 different device types, and we think it could be helpful for others, as well.
Independent of the Netflix solution, as we continue to see the Internet of Things expand, more companies will aspire to get more content onto more platforms. Having a strong design that supports the separation of concerns will be an effective–and perhaps essential–tactic to help you get to where you want to go.
Testing your code is one of the best things you can do for the quality of your app. For a project as large as YUI this can present a number of challenges. Reid Burke in his YUIConf presentation “Testing YUI Everywhere” talks about the issues he faces daily in keeping our CI system up and healthy. You’ll definitely find some great insights into how we test our code and get a glimpse of the work he’s been doing. You can find the previous YUIConf talk here, and watch all of them via YouTube.
Video: http://www.youtube.com/embed/AABmn_h4HAM
You can find Reid’s slides on his own site.
The AWS GovCloud (US) Region is an isolated AWS Region designed to allow US government agencies and customers to move sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements.
Today we are making Red Hat Enterprise Linux (which everyone calls RHEL), available in the AWS GovCloud (US) Region.
Red Hat Enterprise Linux was designed for secure, enterprise computing. The Security-Enhanced Linux (SELinux) capabilities found in RHEL have fostered adoption across many agencies of the United States government.
With a total of 15 Common Criteria certifications across four hardware platforms, RHEL is one of the industry's most certified operating systems. Today's launch of RHEL in AWS GovCloud (US) means that government users can now standardize on a single operating system for on-premises and cloud-based deployments.
Mobile developers have come to rely on a number of backend-as-a-service (BaaS) offerings that make developing and deploying mobile applications a whole lot simpler. Now Sencha wants to take that concept to its next logical conclusion in the form of an extension to its application development tools that turns the Microsoft Azure cloud into a BaaS environment for mobile applications.
Sencha Touch Extensions for Microsoft Azure is an HTML5-compatible framework for developing mobile applications that allows developers to easily integrate Windows Azure Mobile Services, including mobile data, authentication and push notification services.
According to Sencha CEO Michael Mullany, the Microsoft Azure cloud platform lends itself easily to functioning as a BaaS platform because the APIs provided by Microsoft are clean and simple. In addition, Microsoft has made it a priority to focus on the needs of specific classes of developers—one of which is the base of mobile application developers that Sencha serves.
As a free downloadable plug-in, Sencha Touch Extension for Microsoft Azure allows developers to connect with the Azure Mobile Services API to perform create, read, update and delete (CRUD) operations on data service tables. The authentication service, meanwhile, eliminates the need to write, configure and test custom authentication systems. Sending push notifications only requires uploading developer credentials for a given platform, and developers can also connect to hubs that broadcast push notifications to millions of devices. Microsoft Azure also allows developers to store data in the cloud using blob and table formats, which makes it easy to add a global leaderboard to cross-platform games, maintain a friends list, and store images, videos and transactional information.
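Under the hood, those CRUD operations map onto the Azure Mobile Services REST endpoints that the Sencha extension wraps. Here is a minimal sketch of a raw table insert, where the app name and application key are placeholders:

    # Hedged sketch: inserting a row into an Azure Mobile Services table.
    import json
    import urllib.request

    APP = "myapp"            # placeholder Mobile Services app name
    KEY = "application-key"  # placeholder application key

    req = urllib.request.Request(
        "https://" + APP + ".azure-mobile.net/tables/todoitem",
        data=json.dumps({"text": "Ship the app", "complete": False}).encode(),
        headers={"Content-Type": "application/json",
                 "X-ZUMO-APPLICATION": KEY},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))  # the inserted record, with its new id

The extension’s value is precisely that developers don’t have to hand-roll requests like this one; the plug-in handles the authentication headers, serialization and endpoints.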
Separately, Sencha released the results of a survey of 2,128 business application developers, which shows mobile application developers are now supporting, on average, five classes of platforms. Traditional Windows platforms are still dominant, followed by Google Android, and Apple iOS, iPad and MacOS. At the same time, 30 percent of those developers now say they don’t support traditional Windows platforms at all, Mullany notes.
The degree to which Microsoft leverages its Windows Azure cloud services to reverse that momentum remains to be seen. Although it’s obvious that most developers still see Windows as a dominant platform they need to support, they are also starting to vote with their feet and time for other platforms. If these defections increase as a percentage of the total developer community in 2014, the Microsoft house that the development community helped build might not be as stable as many once took for granted.
The week in review, 140 characters at a time. This week, 4 messages in 9 conversations. (With 6 favorites.)
AWS Activate Update - Solutions Architect Office Hours, More Training, and Exclusive Offers
AWS Activate is a package of resources for startups. It was designed to make it even easier for startups to get started and to quickly scale on AWS. It is an international program, with members from all over the world.
The initial benefits package for members of AWS Activate included AWS credits, AWS training, AWS support, a set of exclusive offers from third parties, and the opportunity to share knowledge with other members of the program.
Today we are expanding the AWS Activate benefits package to make it even more useful to startups. We are adding virtual office hours with an AWS Solutions Architect, additional training, and eight additional exclusive offers from third parties.
Virtual Office Hours
The AWS Solutions Architects are the mad scientists of the AWS ecosystem, albeit with better social skills and haircuts (case in point: my colleague Miles Ward).
As a member of AWS Activate, you have the opportunity to book time with an AWS Solutions Architect. The Solution Architects are ready to discuss your security, architectural, and performance questions one-on-one. They can help you to design for high availability and are also ready and willing to talk about cost optimization. If you are a member of AWS Activate, think about booking an office hour now.
Additional Training for Self-Starters
The Self-Starter package now includes additional training. Each recipient of this package now has access to the AWS Essentials eLearning course (a $600 value) and eight tokens for self-paced labs (normally $30 per token). If you are a member of AWS Activate, simply log in and then access the self-paced labs.
New Exclusive Offers

- Amazon Login and Pay - Make it easy for millions of Amazon.com buyers to shop on your website or mobile site using the information already stored in their Amazon.com account.
- Bitnami - Easily deploy your development, QA, and production environments.
- Cloudability Pro - Get total visibility into your usage and spending across all of your AWS accounts.
- CopperEgg - Increase your operational confidence with this lightweight and affordable tool for application performance management.
- Nitrous.IO - Launch, manage, and collaborate on cloud-based development environments with the "Shift" development environment.
- Podio - Communicate, organize, and get work done with Podio by Citrix.
- Stackdriver - Monitor and manage your full AWS stack (infrastructure, systems, and apps) with the help of Stackdriver Elite.
- Trend Micro - Improve security, visibility, control, reporting, and protection of customer data with Trend Micro's suite of host-based security tools and vulnerability scanning for web applications.
To learn more about these offers, click here. If you are a member of AWS Activate and would like to redeem one of these offers, send an email to email@example.com and let us know which offers you’re interested in.
Sign Up Now
To sign up for the program or to learn more about it, visit the AWS Activate page.
How to Build an API: Zapier Takes it From the Top
We last covered Zapier when it launched its developer platform earlier this year. Zapier’s own integrations connect email with Evernote and PayPal with MailChimp, among other examples; now the company offers a course on building APIs. Created by Brian Cooksey, it spans eight weeks. However, you are free to run through the sessions at your own speed.
According to Katy Schamberger writing in the Silicon Prairie News, the course is aimed at a wide audience, including people who don’t work with APIs directly:
With the help of Zapier product designer Bryan Landers, as well as other members of the Zapier team, Cooksey is in the midst of publishing an eight-week “Introduction to APIs” course on the company’s site.
The course covers protocols, data formats, authentication, basic design, patterns for communication, and implementation. It is offered for free. For many readers, this may be too basic, at least in the beginning. But even for them it could be a useful tool to pass on to friends and relatives asking the question, what’s an API and how do I use one?
Nordic APIs Adds Stockholm Stop to Tour, March 31
Nordic APIs, whose aim is to make the Nordics programmable, has added a new stop to its planned tour: Stockholm, March 31. Sponsored by our parent company, MuleSoft, among others, the event will feature several speakers, including Holger Reinhardt from Layer 7, Tom Burnwell from Axway, MuleSoft’s Sumit Sharma, and Pernilla Nasfors from the World Bank. Organized by Dopeter and Twobo Technologies, Stockholm is just one stop in a series of conferences on APIs. Other locations include Copenhagen (April 1), Helsinki (April 2) and Oslo (April 3).
It might be cold outside, but the event is conducted in English. The conference focuses on “from private to public APIs,” while other topics include the next big thing in APIs, security, BaaS, and innovations and use cases, among other issues. Early bird registration has closed; the regular price is 450 NOK, about US $75, for a one-day event. Registration for any of the days can be started here.
API News You Shouldn’t Miss
- Adding Stockholm Stop to Tour
- Chrome 34 Beta Brings New Unprefixed Web Audio API And Hands-Free Google Voice Search
- Website “Street Repairs” Expands Public Reach to Help Fix Britain
- OAuth 2.0: Enabling identity for the cloud, mobile
- Google Launches Genomics Effort, Joins Global Alliance
- Amazon Chooses HAL Media Type for AppStream API
- Broadcom Announces Open Switch Pipeline Specification Targeting Growing SDN Application Ecosystem
- Rezzcard And InComm Partnership Allows Rent Pay At Retail
- Brivo Labs debuts new Google Glass application
- SnapLogic Updates iPaaS for APIs, Mobile Integration; Expands SaaS Library
- Google Joins Global Alliance for Genomics and Health, Proposes API
- Zapier engineer publishes free API course online for beginners
- Is Microsoft looking to take the wind out of Mantle’s sails?
- AMD Supports Possible Lower Level DirectX
- Application programming interface for MIL-STD-1553 terminal devices introduced by Holt Integrated Circuits
- Promise and peril in an ultra-connected world
- Alipay shuts down WeChat’s API payment gateway
4 New APIs
Today we had 4 new APIs added to our API directory, including a Norwegian financial product data feed, an account breach database, a direct marketing print service, and a relationship management service. Below are more details on each of these new APIs.
Finansportalen API: Finansportalen is a site provided by the Norwegian Consumer Protection Agency to give consumers the ability to make good choices in the market for financial services. The portal is a tool that helps consumers compare financial industry products. The Finansportalen API exposes data feeds on financial products for the Norwegian market. An account is required to view the URLs for data feeds.
Have I been pwned API: Have I been Pwned is a database of usernames and email addresses that have appeared on breached website disclosures. The site contains breach data from 16 websites, and contains over 161,000,000 accounts that have been “pwned.”
The Have I been Pwned API uses REST calls, returns JSON, and uses SSL for security. The API allows users to make calls to access the data housed on Have I Been Pwned, including getting all breaches for an account, getting all breaches in the system, and other calls.
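A quick sketch of such a call, using the breachedaccount endpoint (the address below is illustrative, and the response shape may differ between API versions):

    # Hedged sketch: ask Have I been Pwned which breaches include an account.
    import json
    import urllib.error
    import urllib.request

    account = "foo@example.com"  # illustrative address
    url = "https://haveibeenpwned.com/api/breachedaccount/" + account
    try:
        with urllib.request.urlopen(url) as response:
            breaches = json.loads(response.read().decode("utf-8"))
        # Early API versions return plain breach names; later ones, objects.
        names = [b["Name"] if isinstance(b, dict) else b for b in breaches]
        print("Pwned in:", ", ".join(names))
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print("No breach found for", account)  # 404 means "not pwned"
        else:
            raise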
PFLlink Printing API: Print For Less is a printing company whose services range from sending professional-looking mailers to 500 customers, printed full-color on both sides, to quickly producing 2,000 product or sell sheets for a trade show, putting together 7,500 16-page product catalogs or booklets, upgrading newsletters to full color, and creating posters for an event or cause. The PFLlink Printing API allows users to send marketing materials, whether a single customized piece to a single client or tens of thousands of pieces to prospective clients. The API uses REST, and an account with the service is required.
RelateIQ API: RelateIQ is a relationship management service that focuses on being data driven to present optimal solutions. The service utilizes your email history, business activity, and contact frequency to facilitate scheduling, messaging, and other relationship management activities, freeing up time for more business-focused activities. The RelateIQ API uses REST calls and returns JSON. It allows users to get lists, search relationships, add relationships, set field values, and create comment events. An account with the service is required.