ProgrammableWeb: Supreme Court Reviews Oracle v. Google Copyright Case

Recently, the Supreme Court met to decide whether to review the Federal Circuit's decision in the Oracle v. Google case.

Anne van Kesteren (Opera): Regional digital stores

I moved countries twice in a relatively short amount of time. I switched credit cards along the way. I buy digital goods and services that are distributed through regional digital stores. Such a fricking mess.

  • Changing regions for an account is often a non-trivial operation. I could not just update my credit card information for my Xbox on the Xbox, but had to go on the web and mess around for about an hour.
  • The language of the store is often tied to that of the country. My Xbox is now partially in German.
  • You end up losing content. My iTunes music purchases from the Netherlands are no longer available for download from iCloud.
  • You end up losing updates. I can no longer update the HSBC banking application as it is locked to the United Kingdom store.
  • You end up being unable to download useful applications. Due to an unclear distribution decision from the city of Amsterdam I cannot download a pay-for-parking application from a Swiss store.

Do not enough people move countries for this to be a problem? Why does the European Union tolerate this abuse? (Switzerland is not part of the EU, but the United Kingdom and the Netherlands are. Apparently there has been a consultation.)

To be clear, some of these problems exist outside of digital stores as well. It just makes even less sense that you cannot download or update an application across regional boundaries. Or maintain a music collection.

ProgrammableWeb: Google Maps API Adds New Transit Directions and Fares

Android developers can now create robust direction-enabled applications with new Google Maps API enhancements. Writing for, Joe Rossignol highlights major takeaways from Google's new release.

ProgrammableWeb: Geist Leverages APIs in Next-Generation Data Center PDUs

API Report this week announced the release of Geist’s next-generation platform, available in their R-series Power Distribution Units (PDUs). With the importance of a complete Data Center Infrastructure Management (DCIM) system covering power, cooling, monitoring, and management, Geist’s latest platform is a welcome tool for customers.

Matt Webb (Schulze & Webb): Filtered for a squelchy something or other


Words in the 25 most common passwords of 2014:

  • password
  • qwerty
  • baseball
  • dragon
  • football
  • monkey
  • letmein
  • mustang
  • access
  • shadow
  • master
  • michael
  • superman
  • batman
  • trustno1


I can't remember when I first saw this: a segment from a BBC natural history documentary of a man hunting an antelope by endurance running.

It takes hours.

The hunter uses his hand to get into the mind of the antelope -- there's a moment where he has to think as it does, choose the same direction, pure animal empathy.

Yeah humans! I can't help it, every time I see this. Go us!

It's a bit of a weird reaction I know, because mostly my sense of "us" is mammals. When I think "we're all in it together," when I'm trying to figure out what's ok and not ok about sacrificing dogs in the pursuit of leaving this planet to live in space, my loyalties are mammals. And my sense of "people" goes wider still. Distinguishable matter, probably. Asteroid people. A very different mode of thinking and being, sure, but a type of personhood and rights unto themselves.


I'm completely obsessed with this extended mix of Bojack's theme: great sounds. Deep dubstep bass, loooong sax, and some weird squelchy something or other. Can't stop listening.

Bojack Horseman on Netflix.


A digital clock where all the components are visible, every resistor, capacitor and all the wiring unpacked from its silicon chips, and laid out.

Henri Sivonen: If You Want Software Freedom on Phones, You Should Work on Firefox OS, Custom Hardware and Web App Self-Hostability


To achieve full-stack Software Freedom on mobile phones, I think it makes sense to

  • Focus on Firefox OS, which is already Free Software above the driver layer, instead of focusing on removing proprietary stuff from Android whose functionality is increasingly moving into proprietary components including Google Play Services.
  • Commission custom hardware whose components have been chosen such that the foremost goal is achieving Software Freedom on the driver layer.
  • Develop self-hostable Free Software Web apps for the on-phone software to connect to, along with a system that makes installing them on a home server as easy as installing desktop or mobile apps and makes connecting the home server to the Internet as easy as connecting a desktop computer.


Back in August, I listened to an episode of the Free as in Freedom oggcast that included a FOSDEM 2013 talk by Aaron Williamson titled “Why the free software phone doesn’t exist”. The talk actually didn’t include much discussion of the driver situation and instead devoted a lot of time to talking about services that phones connect to and the interaction of the DMCA with locked bootloaders.

Also, I stumbled upon the Indie Phone project. More on that later.

Software Above the Driver Layer: Firefox OS—Not Replicant

Looking at existing systems, it seems that software close to the hardware on mobile phones tends to be more proprietary than the rest of the operating system. Things like baseband software, GPU drivers, touch sensor drivers and drivers for hardware-accelerated video decoding (and video DRM) tend to be proprietary even when the Linux kernel is used and substantial parts of other system software are Free Software. Moreover, most of the mobile operating systems built on the Linux kernel are actually these days built on the Android flavor of the Linux kernel in order to be able to use drivers developed for Android. Therefore, the driver situation is the same for many of the different mobile operating systems. For these reasons, I think it makes sense to separate the discussion of Software Freedom on the driver layer (code closest to hardware) and the rest of the operating system.

Why Not Replicant?

For software above the driver layer, there seems to be something of a default assumption in the Free Software circles that Replicant is the answer for achieving Software Freedom on phones. This perception of mine probably comes from Replicant being the contender closest to the Free Software Foundation with the FSF having done fundraising and PR for Replicant.

I think betting on Replicant is not a good strategy for the Free Software community if the goal is to deliver Software Freedom on phones to many people (and, therefore, have more of a positive impact on society) instead of just making sure that a Free phone OS exists in a niche somewhere. (I acknowledge that hardline FSF types keep saying negative things about projects that e.g. choose permissive licenses in order to prioritize popularity over copyleft, but the “Free Software, Free Society” thing only works if many people actually run Free Software on the end-user devices, so in that sense, I think it makes sense to think of what has a chance to be run by many people instead of just the existence of a Free phone OS.)

Android is often called an Open Source system, but when someone buys a typical Android phone, they get a system with substantial proprietary parts. Initially, the main proprietary parts above the driver layer were the Google applications (Gmail, Maps, etc.) but the non-app, non-driver parts of the system were developed as Open Source / Free Software in the Android Open Source Project (AOSP). Over time, as Google has realized that OEMs don’t care to deliver updates for the base system, Google has moved more and more stuff to the proprietary Google application package. Some apps that were originally developed as part of AOSP no longer are. Also, Google has introduced Google Play Services, which is a set of proprietary APIs that keeps updating even when the base system doesn’t.

Replicant takes Android and omits the proprietary parts. This means that many of the applications that users expect to see on an Android phone aren’t actually part of Replicant. But more importantly, Replicant doesn’t provide the same APIs as a normal Android system does, because Google Play Services are missing. As more and more applications start relying on Google Play Services, Replicant and Android-as-usually-shipped diverge as development platforms. If Replicant was supposed to benefit from the network effects of being compatible with Android, these benefits will be realized less and less over time.

Also, Android isn’t developed in the open. The developers of Replicant don’t really get to contribute to the next version of AOSP. Instead, Google develops something and then periodically throws a bundle of code over the wall. Therefore, Replicant has the choice of either having no say over how the platform evolves or has the option to diverge even more from Android.

Instead of the evolution of the platform being controlled behind closed doors and the Free Software community having to work with a subset of the mass-market version of the platform, I think it would be healthier to focus efforts on a platform that doesn’t require removing or replacing (non-driver) system components as the first step and whose development happens in public repositories where the Free Software community can contribute to the evolution of the platform.

What Else Is There?

Let’s look at the options. What at least somewhat-Free mobile operating systems are there?

First, there’s software from the OpenMoko era. However, the systems have no appeal to people who don’t care that much about the Free Software aspect. I think it would be strategically wise for the Free Software community to work on a system that has appeal beyond the Free Software community in order to be able to benefit from contributions and network effects beyond the core Free Software community.

Open webOS is not on an upward trajectory on phones (despite there having been a watch announcement at CES). (Addition 2015-01-24: There exists a project called LuneOS to port Open webOS to Nexus phones, though.) Tizen (on phones) has been delayed again and again and became available just a few days ago, so it's not (at least quite yet) a system with demonstrated appeal (on phones) beyond the Free Software community, and it seems that Tizen has substantial non-Free parts. Jolla's Sailfish OS is actually shipping on a real phone, but Jolla keeps some components proprietary, so the platform fails the criterion of not having to remove or replace (non-driver) system components as the first step (see Nemo). I don't actually know whether Ubuntu Touch has proprietary non-driver system components. However, it does appear to have central components to which you cannot contribute on an "inbound=outbound" licensing basis, because you have to sign a CLA that gives Canonical rights to your code beyond the Free Software license of the project as a condition of getting your patch accepted. In any case, Ubuntu Touch is not shipping yet on real phones, so it is not yet demonstrably a system that has appeal beyond the Free Software community.

Firefox OS, in contrast, is already shipping on multiple real phones (albeit maybe not in your country) demonstrating appeal beyond the Free Software community. Also, Mozilla’s leverage is the control of the trademark—not keeping some key Mozilla-developed code proprietary. The (non-trademark) licensing of the project works on the “inbound=outbound” basis. And, importantly, the development repositories are visible and open to contribution in real time as opposed to code getting thrown over the wall from time to time. Sure, there is code landing such that the motivation of the changes is confidential or obscured with codenames, but if you want to contribute based on your motivations, you can work on the same repositories that the developers who see the confidential requirements work on.

As far as I can tell, Firefox OS has the best combination of not being vaporware, having appeal beyond the Free Software community and being run closest to the manner a Free Software project is supposed to be run. So if you want to advance Software Freedom on mobile phones, I think it makes the most sense to put your effort into Firefox OS.

Software Freedom on the Driver Layer: Custom Hardware Needed

Replicant, Firefox OS, Ubuntu Touch, Sailfish OS and Open webOS all use an Android-flavored Linux kernel in order to be able to benefit from the driver availability for Android. Therefore, the considerations for achieving Software Freedom on the driver layer apply equally to all these systems. The foremost problems are controlling the various radios—the GSM/UMTS radio in particular—and the GPU.

If you consider the Firefox OS reference device for 2014 and 2015, Flame, you’ll notice that Mozilla doesn’t have the freedom to deliver updates to all software on the device. Firefox OS is split into three layers: Gonk, Gecko and Gaia. Gonk contains the kernel, drivers and low-level helper processes. Gecko is the browser engine and runs on top of Gonk. Gaia is the system UI and set of base apps running on top of Gecko. You can get Gecko and Gaia builds from Mozilla, but you have to get Gonk builds from the device vendor.

If Software Freedom extended to the whole stack—including drivers—Mozilla (or anyone else) could give you Gonk builds, too. That is, to get full-stack Software Freedom with Firefox OS, the challenge is to come up with hardware whose driver situation allows for a Free-as-in-Freedom Gonk.

As noted, Flame is not that kind of hardware. When this is lamented, it is typically pointed out that “not even the mighty Google” can get the vendors of all the hardware components going into the Nexus devices to provide Free Software drivers and, therefore, a Free Gonk is unrealistic at this point in time.

That observation is correct, but I think it lacks some subtlety. Both Flame and the Nexus devices are reference devices on which the software platform is developed with the assumption that the software platform will then be shipped on other devices that are sufficiently similar that the reference devices can indeed serve as reference. This means that the hardware on the reference devices needs to be reasonably close to the kind of hardware that is going to be available with mass-market price/performance/battery life/weight/size characteristics. Similarity to mass-market hardware trumps Free Software driver availability for these reference devices. (Disclaimer: I don’t participate in the specification of these reference devices, so this paragraph is my educated guess about what’s going on—not any sort of inside knowledge.)

I theorize that building a phone that puts the availability of Free Software drivers first is not impossible but would involve sacrificing on the current mass-market price/performance/battery life/weight/size characteristics and be different enough from the dominant mass-market designs not to make sense as a reference device. Let’s consider how one might go about designing such a phone.

In the radio case, there is proprietary software running on a baseband processor to control the GSM/UMTS radio, and some regulatory authorities, such as the FCC, require this software to be certified for regulatory purposes. As a result, the chances of gaining Software Freedom relative to this radio control software in the near term seem slim. From the privacy perspective, it is problematic that this mystery software can have DMA access to the memory of the application processor, i.e. the processor that runs the Linux kernel and the apps. Addition 2015-01-24: There seems to exist a project, OsmocomBB, that is trying to produce GSM-level baseband software as Free Software. (Unlike the project page, the Git repository shows recent signs of activity.) For smartphones, you really need 3G, though.

Technically, data transfer between the application processor and various radios does not need to be fast enough to require DMA access or other low-level coupling. Indeed, for desktop computers, you can get UMTS, Wi-Fi, Bluetooth and GPS radios as external USB devices. It should be possible to document the serial protocol these devices use over USB such that Free drivers can be written on the Linux side while the proprietary radio control software is embedded on the USB device.

This would solve the problem of kernel coupling with non-free drivers in a way that hinders the exercise of Software Freedom relative to the kernel. But wouldn't the radio control software embedded on the USB device still be non-free? Well, yes, it would, but in the current regulatory environment it's unrealistic to fix that. Moreover, if the software on the USB devices is truly embedded to the point where no one can update it, the Free Software Foundation considers the bundle of hardware and the un-updatable software running on it as "hardware" as a whole for Software Freedom purposes. So even if you can't get the freedom to modify the radio control software, if you make sure that no one can modify it and put it behind a well-defined serial interface, you can both solve the problem of non-free drivers holding back Software Freedom relative to the kernel and get the ideological blessing.

So I think the way to solve the radio side of the problem is to license circuit designs for UMTS, Wi-Fi, Bluetooth and GPS USB dongles and build those devices as hard-wired USB devices onto the main board of the phone, inside the phone's enclosure. (Building hard-wired USB devices into the device enclosure is a common practice in the case of laptops.) This would likely result in something more expensive, more battery-draining, heavier and larger than the usual more integrated designs. How much more expensive, heavier, etc.? I don't know. I hope within bounds that would be acceptable to people willing to pay some extra and accept some extra weight and somewhat worse battery life and performance in order to get Software Freedom.

As for the GPU, there are a couple of Free drivers: There’s Freedreno for Adreno GPUs. There is the Lima driver for Mali-200 and Mali-400, but a Replicant developer says it’s not good enough yet. Intel has Free drivers for their desktop GPUs and Intel is trying to compete in the mobile space so, who knows, maybe in the reasonably near future Intel manages to integrate GPU design of their own (with a Free driver) with one of their mobile CPUs. Correction 2015-01-24: It appears that after I initially wrote that sentence in August 2014 but before I got around to publishing in January 2015, Intel announced such a CPU/GPU combination.

The current Replicant way to address the GPU driver situation is not to have hardware-accelerated OpenGL ES. I think that's just not going to be good enough. For Firefox OS (or Ubuntu Touch or Sailfish OS or a more recent version of Android) to work reasonably, you have to have hardware-accelerated OpenGL ES. So I think the hardware design of a Free Software phone needs to grow around a mobile GPU that has a Free driver. Maybe that means using a non-phone (to put radios behind USB) Qualcomm SoC with Adreno. Maybe that means pushing Lima to a good enough state and then licensing Mali-200 or Mali-400. Maybe that means using x86 and waiting for Intel to come up with a mobile GPU. But it seems clear that the GPU is the big constraint and the CPU choice will have to follow from the GPU solution.

For the encumbered codecs that everyone unfortunately needs to have in practice, it would be best to have true hardware implementations that are so complete that the drivers wouldn't contain parts of the codec but would just push bits to the hardware. This way, the encumbrance would be limited to the hardware. (Aside: Similarly, it would be possible to design a hardware CDM for EME. In that case, you could have video DRM without it being a Software Freedom problem.)

So I think that in order to achieve Software Freedom on the driver layer, it is necessary to commission hardware that fits Free Software instead of just trying to write software that fits the hardware that's out there. This is significantly different from how Software Freedom has been achieved on the desktop. Also, the notion of making a big upfront capital investment in order to achieve Software Freedom is rather different from the notion that you only need capital for a PC and then skill and time.

I think it could be possible to raise the necessary capital through crowdfunding. (Purism is trying it with the Librem laptop, but, unfortunately, the rate of donations looks bad as of the start of January 2015. Addition 2015-01-24: They have actually reached and exceeded their funding target! Awesome!) I’m not going to try to organize anything like that myself—I’m just theorizing. However, it seems that developing a phone by crowdfunding in order to get characteristics that the market isn’t delivering is something that is being attempted. The Indie Phone project expresses intent to crowdfund the development of a phone designed to allow users to own their own data. Which brings us to the topic of the services that the phone connects to.

Freedom on the Service Side: Easy Self-Hostability Needed

Unfortunately, Indie Phone is not about building hardware to run Firefox OS. The project’s Web site talks about an Indie OS but intentionally tries to make the OS seem uninteresting and doesn’t explain what existing software the small team is intending to build upon. (It seems implausible that such a small team could develop an operating system from scratch.) Also, the hardware intentions are vague. The site doesn’t explain if the project is serious about isolating the baseband processor from the application processor out of privacy concerns, for example. But enough about the vagueness of what the project is going to do. Let’s look at the reasons the FAQ gave against Firefox OS (linking to version control, since the FAQ appears to have been removed from the site between the time I started writing this post and the time I got around to publishing):

“As an operating system that runs web applications but without any applications of its own, Firefox OS actually incentivises the use of closed silos like Google. If your platform can only run web apps and the best web apps in town are made by closed silos like Google, your users are going to end up using those apps and their data will end up in these closed silos.”

The FAQ then goes on to express angst about Mozilla's relationship with Google (the Indie Phone FAQ was published before Mozilla's search deal with Yahoo! was announced) and Telefónica and to talk about how Mozilla doesn't control the hardware but Indie will.

I think there is truth to Web technology naturally having the effect of users gravitating towards whatever centralized service provides the best user experience. However, I think the answer is not to shun Firefox OS but to make de-centralized services easy to self-host and use with Firefox OS.

In particular, it doesn’t seem realistic that anyone would ship a smart phone without a Web browser. In that sense, any smartphone is susceptible to the lure of centralized Web-based services. On the other hand, Google Play and the iOS App Store contain plenty of applications whose user interface is not based on HTML, CSS and JavaScript but still those applications put the users’ data into centralized services. On the flip side, it’s not actually true that Firefox OS only runs Web apps hosted on a central server somewhere. Firefox OS allows you to use HTML, CSS and JavaScript to build apps that are distributed as a zip file and run entirely on the phone without a server component.

But the thing is that, these days, people don't want even notes or calendar entries that are intended for their own eyes only to stay on the phone only. Instead, even for data meant for the user's own eyes only, there is a need to have the data show up on multiple devices. I very much doubt that any underdog effort has the muscle to develop a non-Web decentralized network application platform that allows users to interact with their data from all the devices that they want to use to interact with their data. (That is, I wouldn't bet on e.g. Indienet, which is going to launch "with a limited release on OS X Yosemite".)

I think the answer isn’t fighting the Web Platform but using the only platform that already has clients for all the devices that users want to use—in addition to their phone—to interact with their data: the Web Platform. To use the Web Platform as the application platform such that multiple devices can access the apps but also such that users have Software Freedom, the users need to host the Web apps themselves. Currently, this is way too difficult. Hosting Web apps at home needs to become at least as easy as maintaining a desktop computer at home-preferably easier.

For this to happen, we need:

  • Small home server hardware that is powerful enough to host Web apps for a family, that consumes negligible energy (maybe in part by taking the place of the always-on home router that is already consuming electricity in people's homes today), that is silent and that can boot a vanilla kernel that gets security updates.
  • A Free operating system that runs in such hardware, makes it easy to install Web apps and makes it easy for the apps to become securely reachable over the network.
  • High-quality apps for such a platform.

(Having Software Freedom on the server doesn’t strictly require the server to be placed in your home, but if that’s not a realistic option, there’s clearly a practical freedom deficit even if not under the definition of Free Software. Also, many times the interest in Software Freedom in this area is motivated by data privacy reasons and in the case of Web apps, the server of the apps can see the private data. For these reasons, it makes sense to consider home-hostability.)


In this case, the hardware and driver side seems like the smallest problem. If you ignore the massive and creepy non-Free firmware and the price of the hardware, and don't try to minimize energy consumption particularly aggressively, suitable x86/x86_64 hardware already exists, e.g. from CompuLab. To get the price and energy consumption minimized, it seems that ARM-based solutions would be better, but the situation with 32-bit ARM boards requiring per-board kernel builds and, most often, proprietary blobs that don't get updated makes the 32-bit ARM situation so bad that it doesn't make sense to use 32-bit ARM hardware for this. (At FOSDEM 2013, it sounded like a lot of the time of the FreedomBox project had been sucked into dealing with the badness of the Linux on 32-bit ARM situation.) It remains to be seen whether x86/x86_64 SoCs that boot with generic kernels reach ARM-style price and energy consumption levels first, or whether the ARM side gets their generic kernel bootability and Free driver act together (including shipping) with 64-bit ARM first. Either way, the hardware side is getting better.


As for the apps, PHP-based apps that are supposed to be easy-ish to deploy as long as you have an Apache plus PHP server from a service provider are plentiful, but e.g. Roundcube is no match for Gmail in terms of user experience and even though it’s theoretically possible to write quality software in PHP, the execution paradigm of PHP and the culture of PHP don’t really guide things to that direction.

Instead of relying on the PHP-based apps that are out there and that are woefully uncompetitive with the centralized proprietary offerings, there is a need for better apps written on better foundations (e.g. Python and Node.js). As an example, Mailpile (Python on the server) looks very promising in terms of Gmail-competitive usability aspirations. Unfortunately, as of December 2014, it’s not ready for use yet. (I tried and, yes, filed bugs.) Ethercalc and Etherpad (Node.js on the server) are other important apps.

With apps, the question doesn’t seem to be whether people know how to write them. The question seems to be how to fund the development of the apps so that the people who know how to write them can devote a lot of time to these projects. I, for one, hope that e.g. Mailpile’s user-funded development is sustainable, but it remains to be seen. (Yes, I donated.)

Putting the Apps Together

A crucial missing piece is having a system that can be trivially installed on suitable hardware (or, perhaps in the future, can come pre-installed on suitable hardware), that lets users get started without exercising their freedom to modify the software but provides the freedom to install modified apps if the user so chooses and, perhaps most importantly, makes the networking part very easy.

There are a number of projects that try to aggregate self-hostable apps into a (supposedly at least) easy-to-install-and-manage system. However, it seems to me that they tend to be of the PHP flavor, which I think fundamentally disadvantages them in terms of becoming competitive with proprietary centralized Web apps. I think the most promising project in the space that deals with making the better (Python- and Node.js-based, among others) apps installable with ease is Sandstorm, which, unfortunately, like Mailpile, doesn't seem quite ready yet. (Also, in common with Mailpile: a key developer is an ex-Googler. Looks like people who've worked there know what it takes to compete with GApps…)

Looking at Sandstorm is instructive in terms of seeing what's hard about putting it all together. On the server, Sandstorm runs each Web app in a Linux container that's walled off from the other apps. All the requests go through a reverse proxy that also provides additional browser-side UI for switching between the apps. Instead of exposing the usual URL structure of each app, Sandstorm exposes "grain" URLs, which are unintelligible random-looking character sequences. This design isn't without problems.

The first problem is that the apps you want to run, like Mailpile, Etherpad and Ethercalc, have been developed to be deployed on a vanilla Linux server using application-specific manual steps that put hosting these apps on a server out of the reach of normal users. (Mailpile is designed to be run on localhost by normal users, but that doesn't make it reachable from multiple devices, which is what you want from a Web app.) This means that each app needs to be ported to Sandstorm. This in turn means that compared to going to upstream, you get stale software, because except for Ethercalc, the maintainer of the Sandstorm port isn't the upstream developer of the app. In fairness, though, the software doesn't seem to be as stale as it would be if you installed a package from Debian Stable… Also, as the platform and the apps mature, it's possible that various app developers will start to publish for Sandstorm directly on one hand, and with more mature apps it's less necessary to have the latest version (except for security fixes) on the other.

Unlike in the case of getting a Web app as a Debian package, the URL structure and, it appears, in some cases the storage structure is different in a Sandstorm port of an app and in a vanilla upstream version of the app. Therefore, even though avoiding lock-in is one of the things the user is supposed to be able to accomplish by using Sandstorm, it's non-trivial to migrate between the Sandstorm version and a vanilla version of a given app. It particularly bothers me that Sandstorm completely hides the original URL structure of the app.


And that leads to the last issue of self-hosting with the ease of just plugging a box into home Ethernet: Web security and Web addressing are rather unfriendly to easy self-hosting.

First of all, there is the problem of getting basic incoming IPv4 connectivity to work. After all, you must be able to reach port 443 (https) of your self-hosting box from all your devices, including reaching the box that's on your wired home Internet connection from the mobile connection of your phone. Maybe your own router imposes a NAT between your server and the Internet and you'd need to set up port forwarding, which makes things significantly harder than just instructing people to plug stuff in. This might be partially alleviated by making the self-hosting box contain NAT functionality itself so that it could take the place of the NATting home router, but even then you might have to configure something like a cable modem into bridging mode or, worse, you might be dealing with an ISP who doesn't actually sell you neutral end-to-end Internet routing and blocks incoming traffic to port 443 (or detects incoming traffic to port 443 and complains to you about running a server even if it's actually for your personal use, so you aren't violating any clause that prohibits you from using a home connection to offer a service to the public).
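As a small illustration of the NAT problem, a Free self-hosting system could at least detect the situation: if the address the box got from the router is a private RFC 1918 address, incoming connections from the Internet necessarily pass through a NAT. A hedged sketch using only Python's standard library (the helper name is made up):

```python
import ipaddress


def behind_nat(local_addr: str) -> bool:
    """Heuristic: if the address assigned to the box is private
    (RFC 1918 and similar non-routable ranges), incoming connections
    from the public Internet must traverse a NAT, so the setup UI
    should walk the user through port forwarding."""
    return ipaddress.ip_address(local_addr).is_private


print(behind_nat("192.168.1.10"))  # typical home-router-assigned address
print(behind_nat("8.8.8.8"))       # publicly routable address
```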

One way to solve this would be standardizing a simple service where a service provider takes your credit card number and an ssh public key and gives you an IP address. The self-hosting system you run at home would then have a configuration interface that gives you an ssh public key and takes an IP address. The self-hosting box would then establish an ssh reverse tunnel to the IP address with 443 as the local target port, and the service provider would forward port 443 of the IP address to this tunnel. You’d still own your data and your server, and you’d terminate TLS on your server even though you’d rent an IP address from a data center.
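The tunnel setup described above boils down to a single ssh invocation on the home box. In this sketch every concrete value (the rented address, the account name) is a made-up placeholder, and the command is echoed rather than executed so the example stays inert:

```shell
# All values are illustrative placeholders, not a real provider's interface.
RENTED_IP="203.0.113.10"   # IP address rented from the tunnel provider
LOCAL_HTTPS_PORT=443       # port where the home box terminates TLS

# -R asks the remote sshd to listen on port 443 of the rented address and
# relay each incoming connection back to the home box; -N runs no remote
# command, since only the forwarding is needed.
CMD="ssh -N -R 443:localhost:${LOCAL_HTTPS_PORT} tunnel@${RENTED_IP}"
echo "$CMD"
```

In practice the provider’s sshd would also need to allow binding a privileged port on the public address (e.g. via the GatewayPorts option); nailing down details like that is exactly what a standardized service would have to do.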

(There are efforts to solve this by giving the user-hosted devices host names under the domain of a service that handles the naming, such as OPI giving each user a hostname under the service’s domain, but then the naming service—not the user—is presumptively the one eligible to get the necessary certificates signed, and delegating away the control of the crypto defeats an important aspect of self-hosting. As a side note, one of the reasons I migrated away from a hostname under IKI’s domain to my own domain was that even though I was able to get the board of IKI to submit its domain to the Public Suffix List, CAs still seem to think that IKI, not me, is the party eligible for getting certificates signed for names under it.)

But even if you solved IPv4-level reachability of the home server from the public Internet as a turn-key service, there are still more hurdles on the way to making this easy. Next, instead of the user having to use an IP address, the user should be able to use a memorable name. So you need to tell the user to go register a domain name, get DNS hosting and point an A record to the IP address. And then you need a certificate for the name you chose for the A record, which at the moment (before Let’s Encrypt is operational) is another thing that makes things too hard.

And that brings us back to obscuring the URLs. Correction 2015-01-24: Rather paradoxically, even though the platform is really serious about isolating apps from each other on the server, it gives up the browser-side isolation of the apps that you’d get with a typical deployment of the upstream apps. The only true way to have browser-enforced privilege separation of the client-side JavaScript parts of the apps is for different apps to have different Origins. An Origin is a triple of URL scheme, host name and port. For the apps not to be ridiculously insecure, the scheme has to be https. This means that you either have to give each app a distinct port number or a distinct host name. On the surface, it seems that it would be easy to mint port numbers, but users are not used to typing URLs with non-default port numbers, and if you depend on port forwarding in a NATting home router or port forwarding through an ssh reverse tunnel, minting port numbers on demand isn’t that convenient anymore.

So you really want a distinct host name for each app to have a distinct Origin for browser-enforced privilege separation of JavaScript on the client. But the idea was that you could install new apps easily. This means that you have to be able to generate a new working host name at the time of app installation. So unless you have a programmatic way to configure DNS on the fly and have certificates minted on the fly, neither of which you can currently realistically have for a home server, you need a wildcard in the DNS zone and you need a wildcard TLS certificate. Correction 2015-01-24: The platform instead uses one hostname and obscure URLs, which is understandable. Despite being understandable, it is sad, since it loses both the human-facing semantics of the URLs and browser-enforced privilege separation between the apps. To provide Origin-based privilege separation on the browser side, the platform generates hostnames that do not look meaningful to the user and hides them in an iframe, but the URLs shown for the top-level origin in the URL bar are equally obscure. I find it unfortunate that the platform does not mint human-friendly URL bar origins with the app names when it is capable of minting origins. (Instead of hostnames based on the app names, you get random-looking ones.) Fortunately, Let’s Encrypt seems to be on track to solving the certificate side of this problem by making it easy to get a cert for a newly-minted hostname signed automatically. Even so, the DNS part needs to be made easy enough that it doesn’t remain a blocker for self-hosting a box that allows on-demand Web app installation with browser-side app privilege separation.


There are lots of subproblems to work on, but, fortunately, things don’t seem fundamentally impossible. Interestingly, the problem with software that resides on the phone may be the relatively easy part to solve. That is not to say that it is easy to solve, but once solved, it can scale to a lot of users without the users having to do special things to get started in the role of a user who does not exercise the freedom to modify the system. However, since users these days are not satisfied by merely device-resident software but want things to work across multiple devices, the server-side part is relevant and harder to scale. Somewhat paradoxically, the hardest thing to scale in a usable way seems like a triviality on the surface: the addressing of the server-side part in a way that gives sovereignty to users.

ProgrammableWebInmarsat Hosts Developer Conference to Showcase Satellite APIs

Satellite telecommunications company Inmarsat hosted its first ever developer conference in London this week and introduced developers to APIs that give them the ability to interact with its satellite network.

ProgrammableWebAmid The API Copyright Controversy, An API Patent Claim Surfaces

Editor's Note: As far as ProgrammableWeb knows, APIs themselves cannot be patented. Copyrighted? That's before the Supreme Court to decide in the case of Oracle v. Google. But patented?

ProgrammableWebKassabok API Provides Simple Budgeting Functionality

Everybody wants to save more money, and the first step toward getting there is to know what you're spending and where. Kassabok is an online tool that helps users do exactly that. It's a simple tool to help users to keep track of their daily income and expenses. The Kassabok API allows developers to integrate this service with their own applications, offering their customers this service directly.

The Kassabok service is free to use and offers the following features:

ProgrammableWebNew Javascript Obfuscator Claims To Be The Hardest To Deobfuscate

The vendor has released what it claims to be the best of the major JavaScript obfuscators. Through a form of masking that makes source code unintelligible to the naked eye, such obfuscators raise the barrier to undesirable inspection of JavaScript that is otherwise discoverable through the viewing of HTML and .js files.

ProgrammableWebGoogle to Devs: Replace Maps Coordinate With Maps For Work

Google recently sent a letter to its Google Maps Coordinate users to inform them that the service will be deprecated on Jan. 21, 2016. Google Maps Coordinate was launched in 2012 as a business-centric mapping service that businesses could use to monitor, coordinate and communicate with mobile workforces.

Uche & Chimezie OgbujiProject: Libhub

Have you ever noticed something missing when you do an online search for a book, music or a film you want to check out? Something big? If you're anything like me you've been lucky enough to spend valuable, serendipitous, formative hours in a library. I remember walking to Cleveland Public Library at least weekly in the few years when I lived there as a child, supplementing the supply of books my father secured from thrift shops. I'm pretty sure that's where I got the introduction to atomic theory (Democritus through Rutherford to Bohr and beyond) which cemented a lifelong fascination with science. There was Luton Public Library where I went almost every day of the summer of 1986 when my Mom bought me my first computer (a ZX Spectrum Plus) and I taught myself programming reading all the books and magazines they had on the topic. There was the library of the University of Nigeria, Nsukka where I gorged on African and worldwide literature, history and esoteric religions and philosophies right around the time when I discovered my love for poetry. There have been many, many other libraries dear to me in all the places where I've lived.

And now when I search for a book, multimedia or other such resource, I'm struck by the fact that libraries have become sadly obscure on the Web, which is where my children and their generation discover and learn so much of what I did in the brick institutions. Search for a book and you'll find Amazon, B&N and other bookseller listings, Wikipedia pages, film derivations and fan fiction, but you'll go pages and pages and pages into results before you see any indication that you can stroll into your local public library and borrow it for free.

I've been thrilled to be part of a project, led by my company Zepheira, to work towards rectifying this situation. We've launched Libhub, taking sensible steps towards increasing the prominence of libraries on the Web. There are several things that make this a bit more complicated than it should be (and why this visibility problem is so persistent). Libraries have very rich electronic catalogs, but they are in extraordinarily antiquated and arcane formats and conventions. We've invested a great deal of our specialized data processing expertise to develop an engine which can ingest such library data and convert it into useful web representations, including technologies such as RDFa, Open Graph, and the library-focused BIBFRAME which we developed for the US Library of Congress. We're planning to launch this Libhub network this summer.

We've been fortunate to have some great libraries working with us through this project, led by Denver Public Library, and we've been inspired by the many-chaptered story of Denver's own Molly Brown, best known for surviving the Titanic disaster. We turned a handful of DPL records into a tiny experiment ("the Linkable Molly Brown" as named by my colleague Gloria Gonzalez) as we continue to work on our Libhub engine. If you're curious for a sneak peek, and OK looking at something still packed with librarian/technical minutiae, a good place to start a click-round is with the record of her papers at DPL. Remember, this is just a quick experiment and we have a long road ahead, but one we're more delighted to travel than Dorothy was hers in Oz, starting out arm-in-arm with the redoubtable Molly Brown, and with fond thoughts of libraries swimming in our heads. We'll see you along the way.

ProgrammableWebKeen IO Enables Easy Custom Analytics

In today's business environment, no piece of information is too small. No piece of information--at least, when combined with other pieces of information--is insignificant. It's all about big data, but they don't call it "big" for nothing. Some companies don't have the resources to develop their own analytics infrastructure. Service offerings can fill the gap, but only if the services can meet companies' many--and ever-changing--needs. Enter Keen IO, which brings to the table scalable customization and calibration, along with simplicity in implementation.

Jeremy Keith (Adactio)Angular momentum

I was chatting with some people recently about “enterprise software”, trying to figure out exactly what that phrase means (assuming it isn’t referring to the LCARS operating system favoured by the United Federation of Planets). I always thought of enterprise software as “big, bloated and buggy,” but those are properties of the software rather than a definition.

The more we discussed it, the clearer it became that the defining attribute of enterprise software is that it’s software you never chose to use: someone else in your organisation chose it for you. So the people choosing the software and the people using the software could be entirely different groups.

That old adage “No one ever got fired for buying IBM” is the epitome of the world of enterprise software: it’s about risk-aversion, and it doesn’t necessarily prioritise the interests of the end user (although it doesn’t have to be that way).

In his critique of AngularJS, PPK points to an article discussing the framework’s suitability for enterprise software and says:

Angular is aimed at large enterprise IT back-enders and managers who are confused by JavaScript’s insane proliferation of tools.

My own anecdotal experience suggests that Angular is not only suitable for enterprise software, but—assuming the definition provided above—Angular is enterprise software. In other words, the people deciding that something should be built in Angular are not necessarily the same people who will be doing the actual building.

Like I said, this is just anecdotal, but it’s happened more than once that a potential client has approached Clearleft about a project, and made it clear that they’re going to be building it in Angular. Now, to me, that seems weird: making a technical decision about what front-end technologies you’ll be using before even figuring out what your website needs to do.

Ah, but there’s the rub! It’s only weird if you think of Angular as a front-end technology. The idea of choosing a back-end technology (PHP, Ruby, Python, whatever) before knowing what your website needs to do doesn’t seem nearly as weird to me—it shouldn’t matter in the least what programming language is running on the server. But Angular is a front-end technology, right? I mean, it’s written in JavaScript and it’s executed inside web browsers. (By the way, when I say “Angular”, I’m using it as shorthand for “Angular and its ilk”—this applies to pretty much all the monolithic JavaScript MVC frameworks out there.)

Well, yes, technically Angular is a front-end framework, but conceptually and philosophically it’s much more like a back-end framework (actually, I think it’s conceptually closest to a native SDK; something more akin to writing iOS or Android apps, while others compare it to ASP.NET). That’s what PPK is getting at in his follow-up post, Front end and back end. In fact, one of the rebuttals to PPK’s original post basically makes exactly the same point as PPK was making: Angular is for making (possibly enterprise) applications that happen to be on the web, but are not of the web.

On the web, but not of the web. I’m well aware of how vague and hand-wavey that sounds so I’d better explain what I mean by that.

The way I see it, the web is more than just a set of protocols and agreements—HTTP, URLs, HTML. It’s also built with a set of principles that—much like the principles underlying the internet itself—are founded on ideas of universality and accessibility. “Universal access” is a pretty good rallying cry for the web. Now, the great thing about the technologies we use to build websites—HTML, CSS, and JavaScript—is that universal access doesn’t have to mean that everyone gets the same experience.

Yes, like a broken record, I am once again talking about progressive enhancement. But honestly, that’s because it maps so closely to the strengths of the web: you start off by providing a service, using the simplest of technologies, that’s available to anyone capable of accessing the internet. Then you layer on all the latest and greatest browser technologies to make the best possible experience for the most number of people. But crucially, if any of those enhancements aren’t available to someone, that’s okay; they can still accomplish the core tasks.

So that’s one view of the web. It’s a view of the web that I share with other front-end developers with a background in web standards.

There’s another way of viewing the web. You can treat the web as a delivery mechanism. It is a very, very powerful delivery mechanism, especially if you compare it to alternatives like CD-ROMs, USB sticks, and app stores. As long as someone has the URL of your product, and they have a browser that matches the minimum requirements, they can have instant access to the latest version of your software.

That’s pretty amazing, but the snag for me is that bit about having a browser that matches the minimum requirements. For me, that clashes with the universality that lies at the heart of the World Wide Web. Sites built in this way are on the web, but are not of the web.

This isn’t anything new. If you think about it, sites that used the Flash plug-in to deliver their experience were on the web, but not of the web. They were using the web as a delivery mechanism, but they weren’t making use of the capabilities of the web for universal access. As long as you have the Flash plug-in, you get 100% of the intended experience. If you don’t have the plug-in, you get 0% of the intended experience. The modern equivalent is using a monolithic JavaScript library like Angular. As longer as your browser (and network) fulfils the minimum requirements, you should get 100% of the experience. But if your browser falls short, you get nothing. In other words, Angular and its ilk treat the web as a platform, not a continuum.

If you’re coming from a programming environment where you have a very good idea of what the runtime environment will be (e.g. a native app, a server-side script) then this idea of having minimum requirements for the runtime environment makes total sense. But, for me, it doesn’t match up well with the web, because the web is accessed by web browsers. Plural.

It’s telling that we’ve fallen into the trap of talking about what “the browser” is capable of, as though it were indeed a single runtime environment. There is no single “browser”, there are multiple, varied, hostile browsers, with differing degrees of support for front-end technologies …and that’s okay. The web was ever thus, and despite the wishes of some people that we only code for a single rendering engine, the web will—I hope—always have this level of diversity and competition when it comes to web browsers (call it fragmentation if you like). I not only accept that the web is this messy, chaotic place that will be accessed by a multitude of devices, I positively welcome it!

The alternative is to play a game of “let’s pretend”: Let’s pretend that web browsers can be treated like a single runtime environment; Let’s pretend that everyone is using a capable browser on a powerful device.

The problem with playing this game of “let’s pretend” is that we’ve played it before and it never works out well: Let’s pretend that everyone has a broadband connection; Let’s pretend that everyone has a screen that’s at least 960 pixels wide.

I refused to play that game in the past and I still refuse to play it today. I’d much rather live with the uncomfortable truth of a fragmented, diverse landscape of web browsers than live with a comfortable delusion.

The alternative—to treat “the browser” as though it were a known quantity—reminds me of the punchline to all those physics jokes that go “Assume a perfectly spherical cow…”

Monolithic JavaScript frameworks like Angular assume a perfectly spherical browser.

If you’re willing to accept that assumption—and say to hell with the 250,000,000 people using Opera Mini (to pick just one example)—then Angular is a very powerful tool for helping you build something that is on the web, but not of the web.

Now I’m not saying that this way of building is wrong, just that it is at odds with my own principles. That’s why Angular isn’t necessarily a bad tool, but it’s a bad tool for me.

We often talk about opinionated software, but the truth is that all software is opinionated, because all software is built by humans, and humans can’t help but imbue their beliefs and biases into what they build (Tim Berners-Lee’s World Wide Web being a good example of that).


Software, like all technologies, is inherently political. … Code inevitably reflects the choices, biases and desires of its creators.

—Jamais Cascio

When it comes to choosing software that’s supposed to help you work faster—a JavaScript framework, for example—there are many questions you can ask: Is the code well-written? How big is the file size? What’s the browser support? Is there an active community maintaining it? But all of those questions are secondary to the most important question of all, which is “Do the beliefs and assumptions of this software match my own beliefs and assumptions?”

If the answer to that question is “yes”, then the software will help you. But if the answer is “no”, then you will be constantly butting heads with the software. At that point it’s no longer a useful tool for you. That doesn’t mean it’s a bad tool, just that it’s not a good fit for your needs.

That’s the reason why you can have one group of developers loudly proclaiming that a particular framework “rocks!” and another group proclaiming equally loudly that it “sucks!”. Neither group is right …and neither group is wrong. It comes down to how well the assumptions of that framework match your own worldview.

Now when it comes to a big MVC JavaScript framework like Angular, this issue is hugely magnified because the software is based on such a huge assumption: a perfectly spherical browser. This is exemplified by the architectural decision to do client-side rendering with client-side templates (as opposed to doing server-side rendering with server-side templates, also known as serving websites). You could try to debate the finer points of which is faster or more efficient, but it’s kind of like trying to have a debate between an atheist and a creationist about the finer points of biology—the fundamental assumptions of both parties are so far apart that it makes a rational discussion nigh-on impossible.

(Incidentally, Brett Slatkin ran the numbers to compare the speed of client-side vs. server-side rendering. His methodology is very telling: he tested in Chrome and …another Chrome. “The browser” indeed.)

So …depending on the way you view the web—“universal access” or “delivery mechanism”—Angular is either of no use to you, or is an immensely powerful tool. It’s entirely subjective.

But the problem is that if Angular is indeed enterprise software—i.e. somebody else is making the decision about whether or not you will be using it—then you could end up in a situation where you are forced to use a tool that not only doesn’t align with your principles, but is completely opposed to them. That’s a nightmare scenario.

Anne van Kesteren (Opera)DOM: custom elements

Now that JavaScript classes and subclassing are finally maturing, there is a revived interest in custom elements. The idea behind custom elements is to give developers lifecycle hooks for elements and enable their custom element classes to be instantiated through markup. There is also some overarching goal of being able to explain the platform, though as html-as-custom-elements demonstrates this is extremely hard.

The first iteration of custom elements was based on mutating the prototype of a custom element object, followed by a callback that gives developers the ability to further mutate the object as needed. Google has shipped this in Chrome, but other browsers have been reluctant to follow. I created a CustomElements wiki page that summarizes where we are at with the second iteration, which will likely be incompatible with what is out there today. There are a couple of outstanding disputes, but the main one is how exactly a custom element object is to be instantiated from markup (referred to as “Upgrading”).

If you are interested in participating, most of the discussion is happening on the mailing list. There is also some on IRC.

Anne van Kesteren (Opera)DOM: element constructors

Elements are rather curious web platform objects in that they are instantiated based on their name (and namespace). <a> creates an HTMLAnchorElement object. So does document.createElement("a"). Elements are also identified by their name, more so than by their class. Well-written code branches on localName (not instanceof HTMLAnchorElement, which has cross-Realm issues to boot). Selectors match based on element names. Ergo, there is a lot of code assuming that an element whose name is input and whose namespace is http://www.w3.org/1999/xhtml is also an instance of the HTMLInputElement class.

That is further compounded by the fact that only an instance of the HTMLInputElement class has the correct internal slots (as explained in Web platform and JavaScript). An element named input in the http://www.w3.org/1999/xhtml namespace would cause havoc in browsers (and specifications) if its object were not an instance of the HTMLInputElement class.

These invariants start becoming problematic when introducing constructors and allowing for subclassing. Domenic started a GitHub repository element-constructors to tease them out. If you are interested in how we bring the DOM and JavaScript together I recommend participating.
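A minimal sketch of the naming invariant discussed above: code identifies elements by name and namespace rather than by class. The plain objects below stand in for DOM elements so the snippet runs outside a browser; the HTML namespace URI is the real value, everything else is illustrative.

```javascript
// Branching on localName/namespaceURI avoids the cross-Realm pitfalls
// that instanceof checks have.
const HTML_NS = "http://www.w3.org/1999/xhtml";

function isHtmlInput(el) {
  return el.namespaceURI === HTML_NS && el.localName === "input";
}

// Stand-ins for elements, since there is no DOM here.
const fakeInput = { localName: "input", namespaceURI: HTML_NS };
const fakeSvgCircle = { localName: "circle", namespaceURI: "http://www.w3.org/2000/svg" };

console.log(isHtmlInput(fakeInput));     // true
console.log(isHtmlInput(fakeSvgCircle)); // false
```

The invariant the post describes is that whenever this name-based check passes for a real element, the object is also expected to be an instance of HTMLInputElement with the matching internal slots.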

Amazon Web ServicesNew Action Links for AWS Trusted Advisor

AWS Trusted Advisor inspects your AWS environment and looks for opportunities to save money, increase performance & reliability, and to help close security gaps. Today we are enhancing Trusted Advisor with the addition of Action Links. You can now click on an item in a Trusted Advisor alert to navigate to the appropriate part of the AWS Management Console. For example, I ran the Trusted Advisor on my own AWS account and it displayed the following alert:

I decided to fix the problem and activated an Action Link to head on over to the RDS section of the Console. From there I right-clicked to add a Read Replica:

These new links are available now and you can click on them today!

For Tool Vendors
If you build applications that link (or could link) to the Console, you can use the same URLs. Here are a few to get you started (all of the links are relative to the base URL of the console):

  • EC2 Reserved Instance Purchase –  ec2/home?region={region}#ReservedInstances
  • EC2 Instances – ec2/home?region={region}#Instances:search={search_string}
  • Elastic Load Balancer – ec2/home?region={region}#LoadBalancers:search={search_string}
  • EBS Volumes – ec2/home?region={region}#Volumes:search={search_string}
  • Elastic IP Addresses – vpc/home?region={region}#eips:filter={filter_string}
  • RDS Database Instances – rds/home?#dbinstance:id=dbInstanceId
  • Auto Scaling Configuration – ec2/autoscaling/home?#LaunchConfigurations:id=LaunchConfigurationName

There is a chance that these links will change in the future as the console continues to evolve. If you decide to make use of them, please plan for that eventuality in your application.
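Filling in one of the relative patterns above is plain string substitution. In this sketch the base URL, region, and search string are all assumptions chosen for illustration:

```shell
# Illustrative values only; substitute your own region and search string,
# and note the base URL is an assumption, not taken from the post above.
BASE="https://console.aws.amazon.com"
REGION="us-east-1"
SEARCH="web-server"

# EC2 Instances pattern: ec2/home?region={region}#Instances:search={search_string}
EC2_INSTANCES_URL="${BASE}/ec2/home?region=${REGION}#Instances:search=${SEARCH}"
echo "$EC2_INSTANCES_URL"
```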


ProgrammableWeb: APIsF9Analytics Lease Optimizer

F9Analytics, owned and operated by Codeworks, offers financial analytics tools for commercial real estate. The F9Analytics Lease Optimizer employs an algorithm that considers metrics such as lease term, property value, start rate, escalations, and more to determine an optimized lease that reaches long-term financial objectives for tenant and landlord. The F9Analytics tool can be accessed via the F9Analytics iOS app, or can be downloaded and run on a company's corporate server as a cross-platform enterprise API.
Date Updated: 2015-01-23

ProgrammableWeb: APIsAppbase

Appbase is a realtime graph data store for rapid app development. The Appbase API uses REST principles. Using HTTP calls, developers can list and create vertices and edges, filter vertices by specific properties, retrieve vertex data properties, delete a vertex or an edge, and scan through vertices. A non-RESTful API component is the real-time event streaming, which operates over the WebSocket protocol. Appbase also hosts a JavaScript library and bindings to increase API accessibility.
Date Updated: 2015-01-23

ProgrammableWebPubMatic Launches New API for Direct Ad Buys

Programmatic advertising platform provider PubMatic this week announced the launch of a new API that gives advertisers access to a source of unified ad supply.

Matt Webb (Schulze & Webb)Next coffee morning and how to run one

Let's do coffee morning again! Next week.

Thursday 29th January, 9.30am for a couple of hours, at the Book Club (100 Leonard St).

It would be lovely to see you, come along! There's a vague "making things" skew, but honestly I've spent a lot of time chatting about dogs and music...

We had way too many dudes last time. So if you're Not A Dude or you bring a friend who is Not A Dude, I will be extra extra EXTRA pleased to see you. Please help me fix this.

Last week's coffee morning was bonkers... 15 people, 3 unreleased prototypes from hardware startups, an emergent theme about how to sell products. Other coffee mornings have been more low-key: Six of us talking nonsense and drinking too much caffeine. I don't really mind what happens, it's all good, maybe it'll just be me and my laptop next time :)

(What works for me)

But seeing as coffee morning is spreading to San Francisco I thought it might be worth writing down what works for me...

  • Space beats structure. Hardware-ish coffee morning is once every two weeks, same time, same place. I'll be there, people come and go. There's no sign-up list, no name badges, no speakers. There are a bunch of great events out there, I don't need another place to be in an audience. Open space.
  • Informality wins. It's good to not have regular attendees... It's like a street corner, familiar faces and surprise visitors. I try to help this by making sure there are lots of little conversations, not one big one, and by making connections if two people seem to be talking about the same thing. Mingling is where magic happens.
  • Convening not chairing. I announce a week ahead of time, and send reminders. I circulate my own perspective afterwards. If I'm having relevant meetings, I ask people to come to the coffee morning instead; that helps set a tone. I also collect names: Everyone gets added to a mailing list where they get all the updates. But at the thing itself, I just chat.

If I'm ever in any doubt, I go back and read what Russell did with his coffee mornings in 2007. He's who it all comes from.

ProgrammableWebMicrosoft Stays Mum on Windows 10 Developer Details

Microsoft on Wednesday announced Windows 10, a sweeping change to its operating system for PCs, tablets and phones. The platform is now fully integrated across all three device categories and promises to bring a more seamless computing experience to consumers and business users alike. If you were expecting new APIs and SDKs, however, the news isn't so good.

ProgrammableWebAppbase Launches Real-Time, Events-Based Database API

Appbase, a database-as-a-service (DBaaS) startup, has just announced the public launch of the Appbase platform and real-time events based database API. The Appbase API allows developers to build scalable applications that include real-time events functionality and collaborative and full-text search features.

ProgrammableWebTryMyUI API Provides Usability Insight

Creating cool apps and websites is one thing, but knowing how your users feel about the way they work is another. User experience is high on the list of priorities when it comes to building effective Web products. If the user journey on your site is complicated or awkward in any way, you'll probably find that people just won't be keen to visit it anymore. That's where a tool like TryMyUI could be quite helpful. It's an interesting initiative that provides a way for website and app owners to see how users are reacting to their websites.

Bob DuCharme (Innodata Isogen)R (and SPARQL), part 2

Retrieve data from a SPARQL endpoint, graph it and more, then automate it.

In the future whenever I use SPARQL to retrieve numeric data I'll have some much more interesting ideas about what I can do with that data.

In part 1 of this series, I discussed the history of R, the programming language and environment for statistical computing and graph generation, and why it's become so popular lately. The many libraries that people have contributed to it are a key reason for its popularity, and the SPARQL one inspired me to learn some R to try it out. Part 1 showed how to load this library, retrieve a SPARQL result set, and perform some basic statistical analysis of the numbers in the result set. After I published it, it was nice to see how its comments section filled up with a list of projects out there that combine R and SPARQL.

If you executed the sample commands from Part 1 and saved your session when quitting out of R (or in the case of what I was doing last week, RGui), all of the variables set in that session will be available for the commands described here. Today we'll look at a few more commands for analyzing the data, how to plot points and regression lines, and how to automate it all so that you can quickly perform the same analysis on different SPARQL result sets. Again, corrections welcome.

My original goal was to find out how closely the number of employees in the companies making up the Dow Jones Industrial Average correlated with the net income, which we can find out with R's cor() function:

> cor(queryResult$netIncome,queryResult$numEmployees)
[1] 0.1722887

A correlation figure close to 1 or -1 indicates a strong correlation (a negative correlation indicates that one variable's values tend to go in the opposite direction of the other's—for example, if incidence of a certain disease goes down as the use of a particular vaccine goes up) and 0 indicates no correlation. Our correlation of 0.1722887 is much closer to 0 than to 1 or -1, so we see very little correlation here. (Once we automate this series of steps, we'll find stronger correlations when we focus on specific industries.)
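For readers who want to see what cor() actually computes, here is a minimal pure-Python sketch of the Pearson correlation coefficient. This is an illustration of the formula, not R's implementation:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient, the quantity R's cor() reports."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Covariance-like numerator: co-movement of the two variables around their means.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # Denominator normalizes by each variable's spread, so the result lands in [-1, 1].
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A perfectly linear relationship gives 1.0; reversing one variable gives -1.0.
print(pearson([1, 2, 3, 4], [10, 20, 30, 40]))   # 1.0
print(pearson([1, 2, 3, 4], [40, 30, 20, 10]))   # -1.0
```

Real data like the DBpedia employee and income figures lands somewhere between those extremes, as the 0.1722887 above did.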

More graphing

We're going to graph the relationship between the employee and net income figures, and then we'll tell R to draw a straight line that fits as closely as possible to the pattern created by the plotted values. This is called a linear regression model, and before we do that we tell R to calculate some data necessary for this task with the lm() ("linear model") function:

> myLinearModelData <- lm(queryResult$numEmployees~queryResult$netIncome) 
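For the curious, the arithmetic behind that one-liner is ordinary least squares. Here is a minimal Python sketch of the simple y ~ x case (an illustration of the math, not R's implementation):

```python
def least_squares(xs, ys):
    """Slope and intercept minimizing squared error for y ~ x,
    the simple-regression case of what R's lm() computes."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    # Intercept is chosen so the fitted line passes through the point of means.
    return slope, my - slope * mx

# Points lying exactly on y = 3x + 2 recover slope 3 and intercept 2.
print(least_squares([0, 1, 2, 3], [2, 5, 8, 11]))   # (3.0, 2.0)
```

These two numbers are exactly what the regression line drawn by abline() below needs.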

Next, we draw the graph:

> plot(queryResult$netIncome,queryResult$numEmployees,xlab="net income",
   ylab="# of employees", main="Dow Jones Industrial Average companies")

As with the histogram that we saw in Part 1, R offers many ways to control the graph's appearance, and add-in libraries let you do even more. (Try a Google image search on "fancy R plots" to get a feel for the possibilities.) In the call to plot() I included three parameters to set a main title and labels for the X and Y axes, and we see these in the result:

DJIA plot

We can see more intuitively what the cor() function already told us: that there is minimal correlation between employee counts and net income in the companies comprising the Dow Jones Industrial Average.

Let's put the data that we stored in myLinearModelData to use. The abline() function can use it to add a regression line to our plot:

> abline(myLinearModelData)  
DJIA plot with regression line

When you type in function calls such as sd(queryResult$numEmployees) and cor(queryResult$netIncome,queryResult$numEmployees), R prints the return values as output, but you can use the return values in other operations. In the following, I've replotted the graph with the cor() function call's result used in a subtitle for the graph, concatenated onto the string "correlation: " with R's paste() function:

> plot(queryResult$netIncome,queryResult$numEmployees,xlab="net income",
   ylab="# of employees", main="Dow Jones Industrial Average companies",
   sub=paste("correlation: ",cor(queryResult$numEmployees,
   queryResult$netIncome),sep=""))

(The paste() function's sep argument here shows that we don't want any separator between our concatenated pieces. I'm guessing that paste() is more typically used to create delimited data files.) R puts the subtitle at the image's bottom:

DJIA plot with subtitle

Instead of plotting the graph on the screen, we can tell R to send it to a JPEG, BMP, PNG, or TIFF file. Calling a graphics device function such as jpeg() before doing the plot tells R to send the results to a file, and calling dev.off() afterward closes the "device" that writes to the image file.

Automating it

Now we know nearly enough commands to create a useful script. The remainder are just string manipulation functions that I found easy enough to look up when I needed them, although having a string concatenation command called paste() is another example of the odd R terminology that I warned about last week. Here is my script:


library(SPARQL)

category <- "Companies_in_the_Dow_Jones_Industrial_Average"
#category <- "Electronics_companies_of_the_United_States"
#category <- "Financial_services_companies_of_the_United_States"

query <- "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbpprop: <http://dbpedia.org/property/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?label ?numEmployees ?netIncome
WHERE {
  ?s dcterms:subject <http://dbpedia.org/resource/Category:DUMMY-CATEGORY-NAME> ;
     rdfs:label ?label ;
     dbo:netIncome ?netIncomeDollars ;
     dbpprop:numEmployees ?numEmployees .
  BIND(replace(?numEmployees,',','') AS ?employees)  # lose commas
  FILTER ( lang(?label) = 'en' )
  # Following because DBpedia types them as dbpedia:datatype/usDollar
  BIND(xsd:float(?netIncomeDollars) AS ?netIncome)
  # Original query on following line had two 
  # slashes, but R needed both escaped.
}
ORDER BY ?numEmployees"

query <- sub(pattern="DUMMY-CATEGORY-NAME",replacement=category,x=query)

endpoint <- "http://dbpedia.org/sparql"
resultList <- SPARQL(endpoint,query)
queryResult <- resultList$results
correlationLegend <- paste("correlation: ",cor(queryResult$numEmployees,
                           queryResult$netIncome),sep="")
myLinearModelData <- lm(queryResult$numEmployees~queryResult$netIncome)
plotTitle <- chartr(old="_",new=" ",x=category)
outputFilename <- paste("c:/temp/",category,".jpg",sep="")
jpeg(file=outputFilename)
plot(queryResult$netIncome,queryResult$numEmployees,xlab="net income",
     ylab="number of employees", main=plotTitle,cex.main=.9,
     sub=correlationLegend)
abline(myLinearModelData)
dev.off()

Instead of hardcoding the URI of the industry category whose data I wanted, my script has DUMMY-CATEGORY-NAME, a string that it substitutes with the category value assigned at the script's beginning. The category value here is "Companies_in_the_Dow_Jones_Industrial_Average", with the setting of two other potential category values commented out so that we can easily try them later. (R, like SPARQL, uses the # character for commenting.) I also used the category value to create the output filename.

An additional embellishment to the sequence of commands that we entered manually is that the script stores the plot title in a plotTitle variable, replacing the underscores in the category name with spaces. Because this sometimes resulted in titles that were too wide for the plot image, I added cex.main=.9 as a plot() argument to reduce the title's size.
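In case R's string functions are unfamiliar, the three manipulations the script performs with sub(), chartr(), and paste() map onto ordinary string operations. Here is a rough Python sketch of the same steps; the query_template value below is a stand-in for the real query, used only for illustration:

```python
category = "Companies_in_the_Dow_Jones_Industrial_Average"

# Like R's sub(): substitute the placeholder in the query template.
query_template = "SELECT ... <DUMMY-CATEGORY-NAME> ..."  # stand-in, not the real query
query = query_template.replace("DUMMY-CATEGORY-NAME", category)

# Like R's chartr(old="_", new=" ", ...): underscores become spaces for the title.
plot_title = category.replace("_", " ")

# Like R's paste(..., sep=""): concatenation with no separator builds the filename.
output_filename = "c:/temp/" + category + ".jpg"

print(plot_title)       # Companies in the Dow Jones Industrial Average
print(output_filename)  # c:/temp/Companies_in_the_Dow_Jones_Industrial_Average.jpg
```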

With the script stored in /temp/myscript.R, entering the following at the R prompt runs it:

> source("/temp/myscript.R")

If I don't have an R interpreter up and running, I can run the script from the operating system command line by calling Rscript, which is included with R:

Rscript /temp/myscript.R

After it runs, my /temp directory has this Companies_in_the_Dow_Jones_Industrial_Average.jpg file in it:

DJIA plot from script

When I uncomment the script's second category assignment line instead of the first and run the script again, it creates the file Electronics_companies_of_the_United_States.jpg:

data on U.S. electronics companies

There's better correlation this time: almost .5. Fitting two particular outliers onto the plot means that R put enough points in the lower left to make a bit of a blotch; I did find with experimentation that the plot() command offers parameters (xlim and ylim) to display only the points within a particular range of values on the horizontal or vertical axis, making it easier to show a zoomed view.

Here's what we get when querying about Financial_services_companies_of_the_United_States:

data on U.S. financial services companies

We see the strongest correlation yet: over .84. I suppose that at financial services companies, hiring more people is more likely to increase revenue than in other typical sectors, because you can provide (and charge for) a higher volume of services. This is only a theory, but that's why people use statistical analysis packages: to look for patterns that can suggest theories. It's great to know that such a powerful open-source package can do this with data retrieved from SPARQL endpoints.

If I were going to run this script from the operating system command line regularly, then instead of setting the category value at the beginning of the script, I would pass it to Rscript as an argument along with the script name.
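As a sketch of that change (in Python rather than R, and with a hypothetical helper name), the idea is simply to prefer a command-line argument when one is supplied and otherwise fall back to a built-in default:

```python
DEFAULT_CATEGORY = "Companies_in_the_Dow_Jones_Industrial_Average"

def resolve_category(argv):
    """Return the category from the first command-line argument, or the default.
    In a real script this would be called as resolve_category(sys.argv)."""
    return argv[1] if len(argv) > 1 else DEFAULT_CATEGORY

print(resolve_category(["myscript.py"]))                          # the default category
print(resolve_category(["myscript.py", "Some_Other_Category"]))   # Some_Other_Category
```

In R itself, commandArgs() plays the role of Python's sys.argv.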

Learning more about R

Because of R's age and academic roots, there is a lot of stray documentation around, often in LaTeXish-looking PDFs from several years ago. Many introductions to R are aimed at people in a specific field, and I suppose my blog entries here fall in this category.

The best short, modern tour of R that I've found recently is Sharon Machlis's six-part series beginning at Beginner's Guide to R: Introduction. Part six points to many other places to learn about R, ranging from blog entries to complete books to videos, and reviewing the list now I see more entries I hadn't noticed before that look worth investigating.

Her list is where I learned about Jeffrey M. Stanton's Introduction to Data Science, an excellent introduction both to data science and to the use of R to execute common data science analysis tasks. The link here goes to an iTunes version of the book, but there's also a PDF version, which I read beginning to end.

The R Programming Wikibook makes a good quick reference work, especially when you need a particular function for something; see the table of contents down its right side. I found myself going back to the Text Processing page there several times. The four-page "R Reference Card" (pdf) by Tom Short is also worth printing out.

Last week I mentioned John D. Cook's R language for programmers, a blog entry that will help anyone familiar with typical modern programming languages get over a few initial small humps more quickly when learning R.

I described Machlis's six-part series as "short" because there are so many full-length books on R out there, such as free ones like Stanton's and several offerings from O'Reilly and Manning. I've read the first few chapters of Manning's R in Action by Robert Kabacoff and find it very helpful so far. Apparently a new edition is coming out in March, so if you're thinking of buying it you may want to wait or else get the early access edition. Manning's Practical Data Science with R also looks good, but it assumes a bit of R background (in fact, it recommends R in Action as a starting point), and a real beginner to this area would be better off starting with Stanton's free book mentioned above.

O'Reilly has several books on R, including an R Cookbook whose very task-oriented table of contents is worth skimming, as well as an accompanying R Graphics Cookbook.

I know that I'll be going back to several of these books and web pages, because in the future whenever I use SPARQL to retrieve numeric data I'll have some much more interesting ideas about what I can do with that data.


Please add any comments to this Google+ post.

Norman Walsh (Sun)The short-form week of 12–18 Jan 2015


The week in review, 140 characters at a time. This week, 4 messages in 8 conversations. (With 5 favorites.)

This document was created automatically from my archive of my Twitter stream. Due to limitations in the Twitter API and occasional glitches in my archiving system, it may not be 100% complete.

Monday at 09:21am

Republicans are almost as upset about Obama not being in Paris as they would be if Obama had dared to go to Paris.—@LOLGOP

Monday at 06:01pm

Tuesday at 01:05am

Interviewer "What is your greatest weakness" Interviewee "I'm actually a swarm of flies wearing a human skin as a suit...also perfectionism"—@ingdamnit

Tuesday at 03:25am

"A developed country is not a place where the poor have cars. It's where the rich use public transportation." @petrogustavo Mayor of Bogotá—@bestham

In a conversation that started on Thursday at 06:47pm

@shelleypowers @verge Oh dear $DIETY.—@ndw
@ndw @verge And Marco Rubio is in charge of NOAA....—@shelleypowers

Friday at 02:09am

Saturday at 02:02pm

Waiting for The Festival of the Spoken Nerd to start. #maths #comedy —@ndw

In a conversation that started on Sunday at 04:43am

Phone has run off to Bristol Temple Meads without me. Return uncertain. In short: don't bother trying to call me, ain't no one home.—@ndw
@ndw Still (or again?) looking for a new phone? Starting tomorrow, the OnePlus One will be freely available, i.e., without invite.—@gimsieke
@gimsieke Perhaps even more so as I left my phone on the train last Friday!—@ndw
@ndw That’s why I asked whether you need a new phone “again.” If it was your new phone that you lost on the train. Glad to hear it wasn’t…—@gimsieke

Norman Walsh (Sun)The short-form week of 5–11 Jan 2015


The week in review, 140 characters at a time. This week, 18 messages in 34 conversations. (With 13 favorites.)

This document was created automatically from my archive of my Twitter stream. Due to limitations in the Twitter API and occasional glitches in my archiving system, it may not be 100% complete.

Monday at 05:25am

Just a quick warning. This bit between New Year's Day and Christmas Eve can really drag.—@johnnycandon

Monday at 11:32am

RT @edibleaustin: It's here! #BaconandBeerATX tickets are on sale!! —@ndw

Monday at 03:07pm

Nothing shows weakness, whether in #Russia, #China, #NorthKorea or here at home, like the suppression of dissent —@bhweingarten

Monday at 05:30pm

Let it be known that I support @NekoCase's right to park wherever the hell she wants.—@cramerdw

Monday at 05:48pm

Betting on the web continues to be a low-risk strategy.—@stilkov

In a conversation that started on Monday at 06:44pm

@kendall Envelopes? Checks? Where have you obtained these historical artifacts of which you speak?—@ndw
@ndw you'd be shocked how much actual paperwork is required to run a business...—@kendall

Monday at 06:46pm

@pandora_radio Wait a sec while I get my Cards Against Humanity deck. ;-)—@ndw

Monday at 06:52pm

RT @kfury: "Prof. Mandelbrot? It's Dr. Schrödinger. I seem to have a problem." —@ndw

Tuesday at 07:13am

XML Stars, the journal is out! Stories via @xmlguild @ndw @georgebina —@dominixml

Tuesday at 03:53pm

“Waiter, how can you serve 12oz and 16oz prime rib? Clearly they should be 13oz and 17oz to be classified as primes.” *my date leaves*—@ChrisHallbeck

Wednesday at 02:14am

RT @kiphampton: If the NYPD not leaning on regular people causes a budget crunch, the city might have to raise taxes on the rich. I'm looki…—@ndw

Wednesday at 07:14pm

Pour one out for Hitch tonight. If it's daylight where you are, don't let that stop you. Never stopped him. —@iTod

Wednesday at 07:54pm

RT @ajplus: #Banksy has also contributed a powerful message of perseverance & support after #CharlieHebdo attack. #JeSuisCharlie http://t.c… —@ndw

Wednesday at 07:57pm

RT @unclebobmartin: "Journalism is printing what someone else does not want printed. Everything else is public relations" - George Orwell h…—@ndw

Thursday at 03:14am

RT @B_RussellQuotes: It is preoccupation with possession, more than anything else, that prevents men from living freely and nobly.—@ndw

Thursday at 03:18am

RT @bluxte: To authorities responsible for the measurement and distribution of time: we'll have a minute with 61 seconds in July http://t.c… —@ndw

In a conversation that started on Thursday at 03:20am

@mdubinko Which features?—@ndw
@ndw Anything having to do with calls/texts on my computer, plus handoff, plus Podcast and Weather apps.—@mdubinko

Thursday at 03:26am

@liza Congratulations!—@ndw

Thursday at 03:30am

RT @dauwhe: Liberté, égalité, fraternité, solidarité. #JeSuisCharlie —@ndw

Thursday at 03:32am

RT @BadAstronomer: Seriously. Here’s a tiny section of that Andromeda pic. That’s not noise; *those are stars*. In another galaxy. http://t… —@ndw

Thursday at 08:01am

@BadAstronomer @ndw Taking all those trips to fixup #Hubble's lens were worth it. Gives real perspective on "Space is big. Really big. ..."—@shanecurcuru

Thursday at 08:01am

@bluxte @ndw Hey, how do I make a request for more #time from these authorities?—@shanecurcuru

Thursday at 01:03pm

The extremists are scared of ideas. This is not Islam, this is just murder #JeSuisCharlie —@eddieizzard

Friday at 09:46am

Friday at 06:59pm

Saturday at 12:50am

Why Religion Needs To Be Destroyed - Godless Mom #atheism —@denyreligion

Saturday at 01:16am

RT @rob_pike: Few things are more infuriating that not being able to play content one bought in country A because one is standing in countr…—@ndw

Saturday at 10:37am

@jorabin Fantastic. Congrats to your daughter!—@ndw

Saturday at 08:55pm

It’s been five minutes since Adobe asked me to install an update. I hope they didn’t go out of business or something.—@BiIIMurray

Sunday at 07:56am

Few things are needed to make a wise man happy; nothing can make a fool content; that is why most men are miserable.—@LaRochefoucau1d

Sunday at 11:05am

The Saudi Arabian ambassador is marching for freedom of speech. This world is like a cartoon strip come alive.—@Lexialex

Sunday at 05:29pm

RT @mnot: "Liberté, égalité, fraternité" is so much better a motto than "in God we trust."—@ndw

Sunday at 06:01pm

RT @PlioceneBloke: Fox News not invent yet, or fact. So just shout random pretend lie for get big attention for now.—@ndw

Sunday at 06:01pm

RT @PlioceneBloke: Think probable get more actual real truth fact of actual real fox. Or weasel.—@ndw

Norman Walsh (Sun)The short-form week of 29 Dec 2014–4 Jan 2015


The week in review, 140 characters at a time. This week, 9 messages in 18 conversations. (With 9 favorites.)

This document was created automatically from my archive of my Twitter stream. Due to limitations in the Twitter API and occasional glitches in my archiving system, it may not be 100% complete.

In a conversation that started on Monday at 06:37am

When did Chrome on Android start truncating URIs in the address bar, only showing the domain name after a few seconds? Yuck.—@ndw
@ndw Firefox for Android works very well.—@gimsieke

Monday at 10:15pm

Baggage handling at AUS is a consistent disappointment. (Yes, I had to check a bag)—@ndw

Tuesday at 03:20pm

600 million years of human evolution (X-post r/gifs) #atheism —@denyreligion

Wednesday at 04:48am

Don't get sucked in! January is not Guiltuary or Gymuary. It's dark and cold, read books, drink port, write poems, have sex, watch snow.—@salenagodden

Wednesday at 09:58am

Someone is replacing Helvetica subway signs with Comic Sans. This is typographic terrorism. —@_alastair

Wednesday at 10:45am

@gruber I hate it when that happens.—@ndw

Thursday at 02:09am

Wanna talk about Sodium? Na Nitric Oxide? NO Oxygen Magnesium Phosphorus Iodine Sulfur or Flourine? OMg PISS OFF ...Potassium? K—@SciencePorn

Thursday at 05:15am

Our Reply To A Totally Bogus Monkey Selfie Cease & Desist | Techdirt —@webmink

Thursday at 07:31am

RT @mdubinko: Three signs you may be easily distractible: 1)—@ndw

Thursday at 10:54am

RT @sentsentence: This is a sentence: via @furtherfield —@ndw

Thursday at 11:34am

Yo Mama jokes by scientific discipline —@trieloff

Thursday at 01:11pm

RT @trieloff: Google got it wrong. The open-office trend is destroying the workplace. —@ndw

Thursday at 01:56pm

I won't be doing a detox this year because I have a liver and kidneys.—@Sci_Phile

Thursday at 03:45pm

I was a senior before I was a freshman, but it didn’t matter. I went to a commutative college.—@BadAstronomer

In a conversation that started on Thursday at 07:03pm

First the orientation sensor went, now I'm not sure my phone is charging reliably. Guess it's time for a new one. N6?—@ndw
@ndw A colleague now owns a OnePlus One. He’s very satisfied. Excellent value for money, long battery life, …—@gimsieke
@gimsieke And apparently available by invitation only, but it does sound nice.—@ndw
@ndw Unfortunately, my colleague’s invites expired on Dec. 31. But some of your followers might eventually get some…—@gimsieke

Friday at 07:13am

XML Stars, the journal is out! Stories via @ndw @james_clark —@dominixml

Sunday at 02:57pm

RT @mdubinko: New Year Resolutions are a worst practice. Additionally, they are connected with a narrative of inevitable failure, which doe…—@ndw

Sunday at 05:48pm

All technology is "wearable" if you own duct tape—@Aerocles

Norman Walsh (Sun)The short-form week of 22–28 Dec 2014


The week in review, 140 characters at a time. This week, 20 messages in 24 conversations. (With 6 favorites.)

This document was created automatically from my archive of my Twitter stream. Due to limitations in the Twitter API and occasional glitches in my archiving system, it may not be 100% complete.

In a conversation that started on Saturday at 08:37pm

@xproc @ndw Any summary of the diffs? Or an actual diff with 1.0?—@ebruchez
@ebruchez @xproc Too much churn for a useful diff from the 1.0 spec, I'm afraid.—@ndw
@ebruchez @xproc There's a Change Log appendix, linked from the status section.—@ndw
@ndw @xproc Thanks, I would have seen it if I hadn't been too lazy to read the beginning :)—@ebruchez

In a conversation that started on Sunday at 06:05pm

@doctortovey peace:war != war:peace?—@ndw
@ndw :) There's something terribly Orwellian about that!—@doctortovey

In a conversation that started on Monday at 08:30pm

There's something slightly terrifying about a big pot of boiling sugar syrup.—@ndw
@ndw There should be.—@shanecurcuru
@ndw but doesn't it makes you feel badass and controlling forces of nature not meant to be tampered with? #bwahaha —@mathling

Monday at 08:35pm

In other news: I have treacle taffy.—@ndw

Monday at 08:46pm

I'm trying not to believe that blaming N. Korea for the Sony hacking is the work of hawks saber rattling for another war front. Trying.—@ndw

Monday at 10:23pm

Anyone know a good place to get a renter's insurance policy? And also do you have a time machine I could use?—@mc_frontalot

Monday at 11:31pm

@denyreligion In fairness, that's kind of funny.—@ndw

Tuesday at 05:20am

/x/post from /r/childfree: "A Reminder for the Holidays" #atheism —@denyreligion

In a conversation that started on Tuesday at 09:08am

@rdeltour Calabash uses @ndw’s that doesn’t feature the specific bug I’ve encountered. Yet I find it harder to debug—@gimsieke
@gimsieke @rdeltour Hmm. If you file a reproducible bug report, I'll see if I can fix it or at least give a warning message or something.—@ndw
@ndw and the UNC path thingy: /cc @gimsieke —@rdeltour
@ndw @rdeltour—@gimsieke

Tuesday at 10:56pm

RT @denyreligion: Nothing to live for? #atheism —@ndw

Wednesday at 10:31am

At the risk of stirring up a bloody hornet's nest, may I wish you all a lovely Thursday?—@hughlaurie

Wednesday at 10:33am

BREAKING: FBI releases evidence that North Korea has stolen the source code for Linux.—@SwiftOnSecurity

In a conversation that started on Wednesday at 11:53am

Evernote is in the process of jumping the shark. I fear Concur will be doing the same thing to Tripit next. At least I can reinvent Evernote—@ndw
@ndw what did I miss?—@laurendw
@laurendw Stupid annoyance from Evernote (chat FFS) and increasing corp presence from Concur.—@ndw
@ndw @laurendw I was wondering about the Evernote chat. Really? We need more ways to chat/text/message? I need fewer, myself.—@JeanKaplansky
@JeanKaplansky @ndw I use different ones for different groups of people—@laurendw
@laurendw @ndw me too... but there are just so many people I know!—@JeanKaplansky
@ndw right, that. I guess some people live in Evernote?—@laurendw

Wednesday at 01:07pm

RT @spacebrianspace: "The sinking of the Titanic must have been a miracle to the lobsters in the kitchen."—@ndw

Wednesday at 01:08pm

@pgor OMG! #Want —@ndw

Wednesday at 01:25pm

RT @denyreligion: "Sure, Charlie Brown, I can tell you what Christmas is all about." #atheism —@ndw

Wednesday at 01:29pm

@MattioV Give them money anyway.—@ndw

Thursday at 06:30pm

RT @bsletten: There is no one I would *RATHER* see play James Bond than Idris Elba. Do it.—@ndw

Thursday at 06:34pm

RT @StephenFrug: "our politics are just a bidding war between selfish transhuman corporations that view humans as their gut flora" -@doctor—@ndw

Thursday at 06:51pm

@kstirman Spring?—@ndw

Thursday at 06:53pm

RT @platobooktour: On this day a babe was born who would bring light onto the world. Happy birthday Isaac Newton, b. 1643, published Princi…—@ndw

Friday at 03:08am

In the end, the real cyberterrorists still won —@shilkytouch

Sunday at 06:00pm

Yes the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders. —@Benioff


ProgrammableWeb: APIsTowerData Personalization

TowerData provides services that access business data associated with email addresses. These services retrieve demographic information for individuals or households, along with location and ISP data. TowerData's services for email marketing include email intelligence, email validation, and email append. The TowerData Personalization API allows specific and targeted data to be retrieved from emails, hashed emails, or postal addresses through a unique API key. The API can be queried with HTTP GET, and successful responses are returned in JSON format.
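Given that description (HTTP GET, a unique API key, JSON responses), a request URL for such a service might be assembled as below. The base URL and parameter names here are hypothetical, for illustration only; TowerData's own documentation defines the real endpoint and fields:

```python
from urllib.parse import urlencode

# Hypothetical endpoint, for illustration only.
BASE_URL = "https://api.example.com/v1/person"

def build_request_url(api_key, email):
    """Build a GET request URL keyed by API key and email address.
    urlencode() percent-escapes characters like '@' for us."""
    return BASE_URL + "?" + urlencode({"api_key": api_key, "email": email})

print(build_request_url("MY_KEY", "someone@example.com"))
```

Fetching that URL with any HTTP client would then return a JSON document to parse.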
Date Updated: 2015-01-22

ProgrammableWeb: APIsPrintmotor Server

Printmotor is an online platform for creating and printing documents, such as flyers. The Printmotor Server API allows developers to access and integrate the functionality of Printmotor's production and printing services with other applications. An example API method is sending and retrieving orders.
Date Updated: 2015-01-22

ProgrammableWeb: APIsPrintmotor Client

Printmotor is an online platform for creating and printing documents, such as flyers. The Printmotor Client API allows developers to access and integrate the functionality of the Printmotor backend with other applications. The main API method is integrating the Printmotor service.
Date Updated: 2015-01-22

ProgrammableWeb: APIsNanorep Knowledge Base

Nanorep is a customer service and support platform. Businesses can use Nanorep for customer engagement and customer service and support functions. The Nanorep Knowledge Base API allows developers to access and integrate the knowledge base functionality of Nanorep with other applications. Some example API methods include retrieving knowledge base content, adding knowledge base content, and exporting the entire knowledge base.
Date Updated: 2015-01-22

ProgrammableWeb: APIsNanorep Event Widget

Nanorep is a customer service and support platform. Businesses can use Nanorep for customer engagement and customer service and support functions. The Nanorep Event Widget API allows developers to access and integrate the event log functionality of Nanorep with other applications. Some example API methods include retrieving event information, answering queries, and managing users.
Date Updated: 2015-01-22

ProgrammableWeb: APIsGoogle Cloud Monitoring

Google Cloud Monitoring provides access to metrics and data from the Google Cloud Platform. Developers can access and integrate the functionality of Google Cloud Monitoring with other applications. Some example API methods include retrieving usage data and costs, managing users, and retrieving information on issues.
Date Updated: 2015-01-22

Amazon Web ServicesDeploy a Hybrid Storage Solution Using Avere’s Edge Filer and Amazon S3

Enterprise-scale AWS customers often ask me for advice on how to connect their existing on-premises compute and storage infrastructure to the AWS cloud.  They are not interested in all-or-nothing solutions that render their existing IT model obsolete. Instead, they want to enhance the model by taking advantage of the security, scale, performance, and cost-effectiveness of the cloud.

One of the most interesting connection points is storage. With enterprise storage requirements growing at a rapacious pace, the seemingly limitless capacity of the cloud, coupled with the pay-as-you-go cost model, becomes a very attractive option.

In order to meet this need, we have worked with AWS storage competency partner Avere Systems to create a solution bundle. This bundle will enable enterprises to quickly deploy and evaluate an end-to-end hybrid storage “on-ramp” with a minimal investment.

Special Offer from Avere and AWS
As part of a limited-time offer, new Avere customers in the US and the UK who meet the qualifications can purchase a three-pack (for high availability) of Avere’s FXT 3200 Edge filer appliances (15 TB total capacity) for $60,000.  The package includes unlimited capacity NAS core software,  the FlashMove data migration software, one year of FlashCloud capacity-based software, one year of hardware and software support, and Avere installation services.

To make this offer even sweeter, qualified customers are also eligible for up to $10,000 in Amazon Simple Storage Service (S3) storage credits.

If you are interested in learning more, click here.

Avere & ITMI at re:Invent
At last year’s AWS re:Invent conference, Avere, AWS, and the Inova Translational Medical Institute (the largest health care system in Northern Virginia) discussed their use of this system to bring about their vision of precise, personalized medicine using a hybrid cloud. Here’s the video:

(embedded video)

As part of their treatment model for at-risk newborns, they routinely sequence and analyze the genes of the infant and the parents (which they call trio-based data). This allows them to enhance and customize their treatment, while generating terabytes of data. Some of this data is archived and used to build predictive models and to inform longitudinal studies that can look back up to 18 years.


ProgrammableWebMuleSoft Extends Range of Integration Options

MuleSoft extended the integration platform options it makes available to developers with the release today of both an upgrade of its core platform and the launch of a mobile edition that is simpler to deploy.

ProgrammableWebRBS&#039; API Reasoning Is A Textbook Justification For Doing APIs

Every now and then, we either spot or hear one of those money-quotes that explains at least one reason why all organizations should consider providing APIs if they aren't doing so already. Today, we came across one of those at ProgrammableWeb that had to do with the Royal Bank of Scotland (RBS).

ProgrammableWebUlster Bank To Run Hackathon In Collaboration With Open Bank Project

According to reports, Ulster Bank (a subsidiary of the Royal Bank of Scotland) will be holding a multi-city hackathon series in Belfast and Dublin. The report says that Ulster is producing the hackathon (dubbed Hack/Make The Bank) in collaboration with the Open Bank Project.

ProgrammableWebEBay Sets Deadline for Deprecated Shopping API Calls

Developers and partners who utilize eBay's Shopping API recently received a reminder that certain calls within the API will cease to function on February 28th. While the end of February was tagged as the hard stop, eBay mentioned that responses to such calls will not be guaranteed after January 31st. The deprecated calls include FindItems, FindItemsAdvanced, and FindProducts.

ProgrammableWebF9Analytics Launches API for Commercial Property Leasing

Real estate technology startup F9Analytics has launched a platform that gives companies access to an algorithm for evaluating commercial property leases.

Amazon Web ServicesSystem Center Virtual Machine Manager Add-In Update – Import & Launch Instances

We launched the AWS Systems Manager for Microsoft System Center Virtual Machine Manager (SCVMM) last fall. This add-in allows you to monitor and manage your on-premises VMs (Virtual Machines), as well as your Amazon Elastic Compute Cloud (EC2) instances (running either Windows or Linux) from within Microsoft System Center Virtual Machine Manager. As a refresher, here’s the main screen:

Today we are updating this add-in with new features that allow you to import existing virtual machines and to launch new EC2 instances without having to use the AWS Management Console.

Import Virtual Machines
Select an existing on-premises VM and choose Import to Amazon EC2 from the right-click menu. The VM must be running atop the Hyper-V hypervisor and it must be using a VHD (dynamically sized) disk no larger than 1 TB. These conditions, along with a couple of others, are verified as part of the import process. You will need to specify the architecture (32-bit or 64-bit) in order to proceed:

Launch EC2 Instances
Click on the Create Instance button to launch a new EC2 instance. Select the region and an AMI (Amazon Machine Image), an instance type, and a key pair:

You can click on Advanced Settings to reveal additional options:

Click on the Create button to launch the instance.

Available Now
This add-in is available now (download it at no charge) and you can start using it today!


ProgrammableWebWorld Leaders Address Cyber Security Via Hackathon

Last week, US President Obama met with British Prime Minister Cameron to discuss a myriad of pertinent world issues. Among the topics discussed was the announcement of an upcoming "Cambridge v. Cambridge" hackathon that will pit elite cybersecurity experts from both sides of the Atlantic against each other.

ProgrammableWebSilk's New API Features Highlight Startup Trend

Data visualization service Silk has released a number of new features, all of which are also available as new functionalities in its API. Mapping, additional graph options, pinning, and a Google Sheets integration that updates programmatically are all now possible.

Daniel Glazman (Disruptive Innovations)Jean-Claude Bellamy

Jean-Claude Bellamy

For six years I was a research engineer at EDF's Direction des Études et Recherches. A very peculiar environment, where dirty tricks rained down like the monsoon in India in June, and where the ambient inertia was fairly staggering. Shortly after I arrived, I was "invited" into a small network of people who were reliable and loyal to the company. Because that was how everything worked: there were the reliable people, the ones you could count on in any circumstance, and then there was everyone else. Jean-Claude Bellamy, who has just left us, was one of the ultra-reliable ones.

But Jean-Claude was also much more than that: he was a humorist, a bon vivant, a mentor, a remarkable engineer, a faithful, loyal man. A good man, quite simply.

He was also a living god of Windows. He had been a Microsoft MVP (Most Valuable Professional) for years, and was even one of the very few non-Microsoft employees in France to have access to the Windows source code, by decision of Microsoft Corp. in Redmond... Microsoft France had recently stripped him of his MVP status (after more than twelve uninterrupted years!!!) and of his source access, despite renewed confidence from Redmond. As Jean-Claude told me himself:

But I think my plain speaking, criticizing here and there (with unimpeachable arguments) the disaster of "Windows Phone" and the repeated blunders and horror of "Windows 8", must not have gone down well! ;-)

Those fools did not realize that by doing so, they gave me back total freedom! ;-)

We could only get along well, couldn't we?

His Windows documentation was also a fabulous mine of information, and I do believe it was Jean-Claude who wrote one of the very first documents on dual-booting, an eternity ago. Every French speaker who wanted to tinker with or code on Windows stumbled sooner or later upon his website or his countless contributions on Usenet and Microsoft's developer forums.

He also made us laugh uproariously. I remember an EDF meeting we were both attending, sitting side by side, constantly railing against the dreadful corporate doublespeak of every speaker who took the stage. A few days later, Jean-Claude announced his logomachy page, and we howled with laughter when we realized how sadly close it was to the speeches we were being served.

My six years at EDF would not have been the same without Jean-Claude. Both personally and technically, his presence was invaluable to me. I was infinitely honored by his faithful friendship, even after I left EDF.

So long Jean-Claude, and many thanks for the fish.

Amazon Web ServicesRemembering AWS Evangelist Mike Culver

Earlier today, after a hard-fought battle with pancreatic cancer that lasted nearly two years, my friend and colleague Mike Culver passed away, leaving behind his wife and adult children.

Mike was well known within the AWS community. He joined my team in the spring of 2006 and went to work right away. Using his business training and experience as a starting point, he decided to make sure that his audiences understood that the cloud was as much about business value as it was about mere bits and bytes. We shared an office during the early days. One day he drew a rough (yet functionally complete) draft of the following chart on our white board:

As you can see, Mike captured the business value of cloud computing in a single diagram. I have used this diagram in hundreds of presentations since that time, as have many of my colleagues.

Mike thoroughly enjoyed his role as an evangelist. To quote from his LinkedIn profile:

There is nothing more exciting than telling the world about the amazing things that they can do with Amazon Web Services. So it was easy to travel the world, telling anyone who would listen, about this new thing known as “the cloud.”

After almost four intense years as an AWS evangelist, Mike turned his attention to some new challenges that arose as the organization grew. He managed Strategic Alliances and Partner Training, and retired in the Spring of 2014 after serving as a Professional Services Consultant for over two years. It was difficult for Mike to retire but he had no choice due to his failing health. To quote from his farewell email:

It’s the only job I’ve ever had where I set the alarm for 5:30 AM and still wake up early to get to the office. So it is going to be super tough to go to “you can’t work” cold turkey.

As I noted earlier, Mike and I shared an office for several years. We found that we had a lot in common – a low tolerance for nonsense, a passion for evangelism, and a strong understanding of the value of good family ties. I learned a lot from listening to him and by watching him work. Even though I nominally managed him, I really did nothing more than sign his expense reports and take care of his annual reviews. He knew what had to be done, and he did it without bragging. End of story.

In addition to his work at Amazon, Mike had the time and the energy to serve on the Advisory Board for the Cloud Computing program offered by University of Washington’s department of Professional and Continuing Education. Even as his health flagged and travel became difficult, Mike showed up for every meeting and forcefully (yet with unfailing politeness) argued for his position.

Two weeks ago I was on a conference call with Mike and one of my AWS colleagues. Even though he was officially retired, heavily medicated, debilitated from his cancer, and near the end of his journey, he still refused to give up and continued to advocate for a stronger AWS presence in some important markets.

Mike had a lifelong passion for aviation and owned a shiny silver 1947 Luscombe 8E for many years. Although I never had the opportunity to fly with him, it was clear from our conversations that he would be calm, cool, and collected as a pilot, regardless of the situation. Sadly, Mike’s health began to fail before he was able to finish assembling the kit plane (a Van’s RV-9) that he had started working on a couple of years earlier. Although the words “kit” and “plane” don’t always instill confidence when used together, Mike’s well-documented craftsmanship was clearly second to none and I would have been honored to sit beside him.

To Mike’s wife and children, I can tell you that he loved you all very much, perhaps more than he ever told you. Your names often came up in our conversations and his affection for each and every one of you was obvious. He worried about you, he thought about you, he was proud of your accomplishments, and he wanted nothing but the best for you.

I’m not sure what else I can tell you about Mike. He was an awesome guy and it was a privilege to be able to work side-by-side with him. Rest in peace, my good friend. You will be missed by everyone who knew you.



ProgrammableWeb: APIsFlashphoner

Flashphoner is a web call server that allows users to access VoIP, web calling, messaging, streaming, and other WebRTC communication functions. The Flashphoner API allows developers to access and integrate the functionality of Flashphoner with other applications. Some example API methods include managing users, retrieving locations, and making calls.
Date Updated: 2015-01-21

ProgrammableWeb: APIsStrutta Promotions

Strutta is an online marketing company that provides businesses a way to stay socially connected with their customers. The Strutta Promotions API lets developers integrate its services with their applications, enabling their customers to use Strutta Promotions services directly. With this API, applications can be integrated with many different social channels.
Date Updated: 2015-01-21

ProgrammableWeb: APIsIdyl Cloud

Idyl Cloud by Mountain Fog is a web service that provides entity extraction from tweets and natural-language text, as well as language detection. Idyl Cloud lets users integrate entity extraction into applications and systems. The RESTful API has HTTP and HTTPS endpoints, and responses are in JSON.
Date Updated: 2015-01-21

ProgrammableWeb: APIsWeather

Weather is a service of the Norwegian Meteorological Institute that provides the public with meteorological services for both civil and military purposes. The Weather API lets developers integrate the institute's weather information and services with their applications, enabling direct access for users of those applications. The API is REST based and uses HTTP Basic Authentication.
Date Updated: 2015-01-21

ProgrammableWeb: APIsKassabok

Kassabok is an online website that provides users with a way to keep track of their expenses and income. The Kassabok API lets developers integrate its services with their applications, enabling their customers to use Kassabok services directly. The API uses HTTP Basic Authentication.
Date Updated: 2015-01-21

Amazon Web ServicesAmazon Cognito Update – Sync Store Access, Improved Console, More

We’ve made some important updates to Amazon Cognito! As you may already know, this service makes it easy for you to save user data such as app preferences or game state in the AWS cloud without writing any backend code or managing any infrastructure.

Here’s what’s new:

  1. Developer-oriented access to the sync store.
  2. Updated AWS console interface for developers.
  3. Identity pools role association.
  4. Simplified SDK initialization.

Let’s dive in!

Developer-Oriented Access to the Sync Store
The Cognito sync store lets you save end-user data in key-value pairs. The data is associated with a Cognito identity so that it can be accessed across logins and devices.  The Cognito Sync client (available in the AWS Mobile SDK) uses temporary AWS credentials vended by the Security Token Service. The credentials give the client the ability to access and modify the data associated with a single Cognito identity.

This level of access is perfect for client apps, since they are operating on behalf of a single user. It is, however, insufficiently permissive for certain interesting use cases. For example, game developers have told us that they would like to run backend processes to award certain users special prizes by modifying the data in the user’s Cognito profile.

To enable this use case, we are introducing developer-oriented access to the Cognito sync store. Developers can now use their AWS credentials (including IAM user credentials) to gain read and write access to all identities in the sync store.

The detailed post on the AWS Mobile Development Blog contains sample code that shows you how to make use of this new feature.
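As a rough illustration of the "award a prize from the backend" use case, the snippet below shapes a plain dict into the replace-patch list that a Cognito Sync `UpdateRecords` call expects. The patch shape follows the Cognito Sync API; the identity pool and identity IDs, dataset name, and session token in the commented-out call are placeholders, and you should verify the exact call signature against your SDK's documentation.

```python
def to_record_patches(data, sync_count):
    """Turn {key: value} pairs into replace-patches for one sync dataset.

    Cognito Sync stores values as strings, so everything is stringified;
    sync_count must match the dataset's current value or the update is
    rejected (optimistic concurrency).
    """
    return [
        {"Op": "replace", "Key": k, "Value": str(v), "SyncCount": sync_count}
        for k, v in sorted(data.items())
    ]

patches = to_record_patches({"highScore": 9001, "level": 7}, sync_count=3)

# With developer (IAM user) credentials, a backend process can write to any
# identity in the pool -- placeholders throughout, sketch only:
# import boto3
# boto3.client("cognito-sync").update_records(
#     IdentityPoolId="IDENTITY_POOL_ID", IdentityId="IDENTITY_ID",
#     DatasetName="gameState", RecordPatches=patches,
#     SyncSessionToken="SESSION_TOKEN")
```

The key point is that the backend writes through the same key-value dataset the mobile client syncs, so the prize appears on the user's next sync with no client changes.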

Updated AWS Console Interface
On a related note, the AWS Management Console now allows you to view and search (by Identity ID) all of the identities associated with any of your Cognito identity pools:

You can also view and edit their profile data from within the Console:

Identity Pool Role Association
The updated console also simplifies the creation of IAM roles that are configured to access a particular identity pool. Simply choose Create a new IAM Role when you create a new identity pool (you can click on View Policy Document if you would like to see how the role will be configured):

Cognito saves the selected roles and associates them with the pool. This gives Cognito the information that it needs to have in order to be able to show you the “Getting Started” code at any time:

Even better, it also simplifies SDK initialization!

Simplified SDK Initialization
Because Cognito now saves the roles associated with a pool, you can now initialize the SDK without passing the role ARNs. Cognito will automatically use the roles associated with the pool. This simplifies the initialization process and also allows Cognito to call STS on your behalf, avoiding an additional network call from the device in the process.

Available Now
These new features are available now and you can start using them today! Read the Cognito documentation to learn more and to see how to get started.

— Jeff;

Amazon Web ServicesNew Training to Help You Prepare for the AWS Certified Solutions Architect – Associate Exam

My colleague Janna Pellegrino sent me a guest post to introduce some new materials to help you to prepare for the AWS Certified Solutions Architect exam!


Are you planning to take the AWS Certified Solutions Architect – Associate exam?  We now have a new half-day workshop to help you prepare.  Designed to complement the technical training in the Architecting on AWS course, this supplemental workshop helps you to get ready to take the exam.  We recommend that you take Architecting on AWS before taking the workshop.

In the AWS Certification Exam Readiness Workshop: AWS Certified Solutions Architect – Associate, we review what to expect at the testing center and while taking the exam. We walk you through how the exam is structured, as well as teach you how to interpret the concepts being tested so that you can better eliminate incorrect responses.  You will also have the chance to test concepts we cover through a series of practice exam questions.  At the end of the class, you will receive a voucher to take an online practice exam at no cost.

Check out the schedule of upcoming training classes for more information!

Janna Pellegrino, AWS Training and Certification


Amazon Web ServicesAWS Quick Start Reference Deployment – Exchange Server 2013

Would you like to run Exchange Server 2013 on AWS? If so, I’ve got some good news for you! Our newest Quick Start Reference Deployment will show you how to do just that. Formally titled Microsoft Exchange Server 2013 on the AWS Cloud, this 34-page document addresses all of the architectural considerations needed to bring this deployment to life.

The reference deployment is in alignment with the AWS best practices for high availability with minimal infrastructure, and supports up to 250 mailboxes. Guidance for larger scenarios that use the Microsoft Preferred Architecture, with support for 250, 2,500, or 10,000 mailboxes, is also provided. The architecture is fault-tolerant, and eliminates the need for traditional backups. You can use it as-is, or you can modify it as needed.

Launched by means of an AWS CloudFormation template, the architecture includes a Virtual Private Cloud with two subnets in each Availability Zone and supports remote administration. Each subnet includes a public (DMZ) address space and a private address space. The DMZ includes Remote Desktop (RD) gateways and NAT gateways for outbound Internet access. The private address space in each subnet hosts a Domain Controller and Exchange 2013 in Multi Role mode. Here’s a block diagram:

The guide will walk you through the process of processor sizing, memory sizing, and storage sizing.  It will also help you to choose the appropriate type of EBS volume.

The CloudFormation template is fully parameterized. When you use it to launch the reference deployment, you will be prompted for the necessary parameters (each of which is fully described and documented in the reference deployment document):
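If you prefer to launch the parameterized template programmatically rather than answering the console prompts, the parameters go in as a list of key/value pairs. In the sketch below, the parameter names, stack name, and template URL are illustrative placeholders (the real names are documented in the reference deployment guide), and the commented-out boto3 call is an assumption to check against your SDK docs.

```python
def to_cfn_parameters(values):
    """Convert a plain dict into CloudFormation's Parameters list shape."""
    return [
        {"ParameterKey": k, "ParameterValue": str(v)}
        for k, v in sorted(values.items())
    ]

# Hypothetical parameter names for illustration only.
params = to_cfn_parameters({"DomainDNSName": "example.com",
                            "MailboxCount": 250})

# Sketch of the launch call -- placeholders throughout:
# import boto3
# boto3.client("cloudformation").create_stack(
#     StackName="exchange-2013",
#     TemplateURL="https://s3.amazonaws.com/.../template.json",
#     Parameters=params,
#     Capabilities=["CAPABILITY_IAM"])
```

Every parameter you would have typed into the console prompt maps to one entry in that list.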

To get started, download the Quick Start Reference Deployment from the AWS Quick Start Reference Deployments catalog.


ProgrammableWebDropbox Core API Adds Shared Folder Metadata

Dropbox has announced the full production release of shared folder metadata, which allows developers to see the users who are part of a shared folder, the permissions each user has, and who last modified a file in a folder. Dropbox released the functionality in beta last summer.

