ProgrammableWeb: How Uber Broke a Monolithic API into Microservices

In the last few months, Uber has invested thousands of engineering hours in expanding its new microservices ecosystem after abandoning its monolithic codebase.

Shelley Powers (Burningbird): The Killing of the Profanity Peak Wolf Pack

By the time you read this, the Profanity Peak wolf pack in Washington State will be no more.

At last count, six wolves of the 11-member pack have been destroyed. All that remains is one adult and four cubs. And if the last remaining adult is killed, the cubs will most likely starve to death.

What’s left of the pack will either survive long enough to join another pack, or they won’t. Regardless, the Profanity Peak wolf pack is gone.

Washington State Proud

Washington State prides itself on not being the same as its neighbors to the east and north. It doesn’t immediately issue a shoot-to-kill order for endangered wolves when one head of cattle is killed or injured. No, Washington State has a Wolf Advisory Board. On this Board are wolf conservation groups, like Defenders of Wildlife, Humane Society of the United States, Conservation Northwest, and Wolf Haven International.

This Board has helped Washington State develop a protocol for when wolves may be killed: a set of non-lethal actions that must be taken before a kill order is given.

The reality is, though, the wolves in the Profanity Peak pack are being killed. Just like the wolves are being killed in Idaho, Wyoming, Alaska, and other states. But we’re not supposed to feel outrage at such an action because Washington State has a Wolf Advisory Board, and it has guidelines.

The Animal Welfare Group Statement

Four of the animal welfare organizations on the Advisory Board issued a statement about the Profanity Peak wolves:

The authorized removal of wolves in the Profanity Peak wolf pack in northeast Washington is deeply regrettable. The Washington Department of Fish and Wildlife (WDFW) is however following the protocol developed by Washington State’s Wolf Advisory Group (WAG) – a diverse group of stakeholders. The WAG and WDFW have committed to evaluate how the protocol worked on the ground this season in order to improve it for next year. In addition, we intend to conduct a thorough and open-minded assessment of the issues raised for all stakeholders involved.

We remain steadfast that our important goals remain the long-term recovery and public acceptance of wolves in our state alongside thriving rural communities. In the meantime, we ask our community and the citizens of Washington State and beyond to engage in respectful and civil dialogue as we work through these challenging events. We believe that ultimately we can create conditions where everyone’s values are respected and the needs of wildlife, wildlife advocates, and rural communities are met.

The organizations don’t want us to be outraged. They want us to accept that, “Eh, these things happen.” They want us to treat the destruction of an entire pack of endangered wolves as if it’s just another Sunday, and here’s a cookie. We’re to engage in a respectful and civil dialogue.

An entire pack of endangered wolves is being killed, and they want us to be respectful and civil?

OK, then. Let’s engage in a respectful and civil dialogue.

Grazing Permits and National Forests

The cattle are on public land in the Colville National Forest. They are on this land because the rancher has a grazing permit. His cattle join approximately 32,000 other privately held cattle allowed to graze on public land in Washington State. Graze at a taxpayer-subsidized rate—grazing permit holders don’t pay the true cost of grazing on public land.

The rancher is Len McIrvin, of the Diamond M Ranch. He’s a multi-generation rancher who uses grazing permits to raise his cattle. You might say, since grazing permits are subsidized, he’s a fourth-generation rancher benefiting from taxpayer support. A common term used for this type of rancher is “welfare rancher”.

Oh. I’m sorry. Was that not respectful? I’ll try to do better.

The Diamond M Ranch Connection

This isn’t the first time Len McIrvin has been involved in the destruction of a wolf pack. In 2012, it was his cattle that led to the decision to kill off the Wedge wolf pack, in the same area as the Profanity Peak pack. In a 2012 interview, his son, Bill McIrvin, claimed that wolves are the worst predator:

Bill tells me that the first confirmed wolf kill on the Diamond M was in 2007, and probably from the same pack accused of livestock depredation now, the Wedge pack. When I ask about other predators, Bill says lots of predators go after their cattle, including black bear and cougar, although he is unable to tell me how many cattle succumb to these animals yearly. But wolves, he says, are the worse. Why? I ask. Because they are killing but not eating–for fun, not merely for food, he responds.

Wolves kill for fun. It’s an odd thing, but of all the reasons given why wolves kill, not one wolf expert has stated that wolves kill for fun.

One could say that Bill McIrvin is a lying sack of cow poop, if one wasn’t attempting to remain civil.

The Anti-Wolf Message

What’s interesting about Bill and Len McIrvin is how dedicated they’ve been to spreading the message that wolves are killers, that wolves and cattle don’t mix, and that all wolves need to be killed. I contrast this with the assurances we’ve been given that any and all non-lethal measures were taken first, by these same individuals, before the decision was made to kill both the Wedge pack in 2012 and the Profanity Peak pack this month.

I hope I won’t seem disrespectful if I happen to believe that two people passionate about removing wolves won’t do everything in their power to ensure wolves can remain.

Evidently, the McIrvins do support some wolves. I’m not sure what the definition of some is. I mean, it isn’t as if people are tripping over wolves on a daily basis in Washington State: there are fewer than 90 wolves now, and it’s a big state. Decrease the number of wolves much more, and you don’t have any wolves.

Contrary to the Washington Department of Fish & Wildlife diagram, wolf packs in Washington aren’t growing. In fact, the number of packs has shrunk by two. And I suspect the Ranchers McIrvin believe this is still too many.

[Graph of wolf packs in Washington]

I also suspect that the “some wolves” the McIrvins want is more about geography than numbers. Some wolves are OK. Those wolves over there (and not here) are some wolves. Those wolves are OK.

Let’s Not Overly Impact the Ranchers

In 2012, Mitch Friedman from Conservation Northwest discussed the McIrvins’ motives and processes.

Mitch Friedman, Conservation Northwest executive director, said he remains unconvinced about McIrvin’s efforts to manage his herd to reduce conflicts with wolves. He does not agree that there are no options for better herd management.

“We want to see more clarity, certainty, that wolves are responsible for these past incidences,” he said. “We’re aware there are experts raising questions and the field biologists are themselves not convinced that all, or perhaps even any, of these incidents are conclusively wolves.”

Friedman believes the state is under pressure and needs to take more time. He accused McIrvin of alerting the media first, then the local sheriff’s office, then the wildlife department while reaching out to county and state legislators to turn up the heat.

“Generally, when wolves are in the neighborhood, everything gets blamed on them,” he said. “But when the evidence is in, it’s a small portion of incidents that actually ends up involving wolves.”

If it’s not a wolf, Friedman isn’t certain what would be the cause. While he acknowledged there was hemorrhaging on the rear flanks and groin in one of the recent calf attacks, there were no puncture wounds in the hide.

“We want to work collaboratively, we want to make this work so ranchers are not overly impacted by the presence of wolves,” he said.

How nice. Let’s ensure that the ranchers aren’t inconvenienced. That should be top priority for an animal welfare group.

By the way, this is the same Mitch Friedman who now exhorts us all to be respectful and civil about the killing of the Profanity Peak pack.

About Those Non-Lethal Measures

A couple of days after the decision to kill the Profanity Peak wolf pack was made, Robert Wielgus, of the Large Carnivore Lab at Washington State University, provided some surprising revelations.

“This livestock operator elected to put his livestock directly on top of their den site; we have pictures of cows swamping it, I just want people to know,” Wielgus said in an interview Thursday.

Evidently, the McIrvins deliberately introduced cattle directly into the den area for the Profanity Peak wolf pack.

The thing with cattle is they drive out most other animals in the area where they graze—they are inherently destructive of their surroundings. They decimate the plant life, damage the trees, churn up and damage the soil, and they muddy creeks and streams, as well as damage stream banks. Animals native to the land have no other choice but to leave.

Cause and effect: If all other prey animals are driven out, a wolf pack has little recourse but to hunt what animals remain. Though of course, they only do so for fun…not because they’re desperately trying to survive, and feed their young.

The Judas Wolves

The decision was made to kill the entire Profanity Peak wolf pack. All six adults and five cubs.

You know, wolves are hard to hunt. They’re intelligent and cunning. They know how to avoid hunters, even hunters using high-powered rifles from helicopters.

[Photo: helicopter and shooter]

But the Profanity Peak pack was operating under a handicap: members of the pack were equipped with radio collars, allowing them to be tracked.

Such wolves are called “Judas wolves”, because their presence is a threat to the entire pack. I don’t know what’s more disturbing: that we allow hunting of a species that’s so rare, we actually equip them with tracking collars that cost thousands of dollars; or that wolves with such collars have been hunted so much, we actually have a term for them.

Thanks to the radio collars, the 11-member Profanity Peak pack is down to five remaining members. And the hunt still continues.

No, Washington State is Not “Better”

The wolf welfare organizations mentioned earlier have been receiving a great deal of heat in their Facebook posts related to the Profanity Peak pack.

If HSUS had a post with the Profanity Peak statement, it has since been removed. But posts still remain from Conservation Northwest, Defenders of Wildlife, and Wolf Haven International. In one comment on its post, Defenders of Wildlife stated:

Washington state has made it a requirement that ranchers use multiple nonlethal methods to deter wolves before the state will even consider a lethal option. Once a depredation has occurred, the state also steps in to help ramp up the nonlethal measures, with the goal of exhausting every possible nonlethal option. It is certainly not a perfect plan, but far better than the “shoot first” approach some other states have. As a member of the Wolf Advisory Group, we hope to continue to help revise the state’s protocols to better protect wolves. (emph. added)

The consensus among these groups is that, while it’s sad that the Profanity Peak pack is being killed, Washington State is still better than other states that have no advisory board. Animal welfare and conservation groups have a seat at the table. They have a hand in the decisions. This is better.

It’s an intellectual response to an emotional event…and it’s dead wrong.

We should be reacting emotionally to this event. We should be outraged. All those who support wolves should be speaking with one voice.

This isn’t a few animals killed among many: this is the deliberate extermination of 11 members of a group of 90, in the entire state. The number of wolves in Washington is so low, claiming they’ve recovered borders on the ludicrous. The State pontificates about “recovery” of the wolves, and how they’re no longer endangered, but we’re only talking about 90 wolves.

No. Now we’re only talking about 80. Well, unless those four cubs survive, which is doubtful.

Washington State allows 32,000 head of cattle to graze on public land, and it won’t cut even a small break for the 90 wolves currently within its borders. It isn’t “better” than Idaho or Wyoming. Its process isn’t superior, or more humane. The only difference between the states is optics.

Never Lose the Outrage

I had a strong scorched earth initial reaction to all of the animal welfare groups that issued such a passive, capitulating statement about the Profanity Peak wolves. I think there were some feelings of sowing salt into the ground at their feet, too.

I am calm enough today to know that ripping these organizations to shreds, while momentarily satisfying, doesn’t really address the problem. The problem is that our government doesn’t value us.

They value ranchers. They value farmers. They value hunters. They value people with guns. But they don’t value people who care about the animals just because the animals exist. In the great scheme of things, we’re expendable. And so are the wolves.

Six cattle were supposedly killed and that’s enough to wipe out an entire wolf pack. By all that’s sane, this isn’t equitable, balanced, decent, humane, or right. Washington State, for all of its high mindedness, is no better than Idaho or Alaska or any other state that advocates killing off wolves so ranchers, hunters, and farmers aren’t inconvenienced. Let’s lose this feel-good facade.

What also wasn’t right was the statement the Humane Society of the US, Conservation Northwest, Wolf Haven International, and Defenders of Wildlife made. They were profoundly wrong to urge restraint. They have allowed their participation in the Advisory Board to file down their teeth, blunt their claws, and remove the only weapons they have to fight for real change.

Membership on the Board or not, they should have howled, as loud as the wolves howled before death. They should have said to all of us, “Don’t accept this! Fight this!”

They should have embraced outrage, instead of trying to damp it down. If they can’t be outraged and serve on the Board, then they have no place on this board. Or they have no place in the animal welfare movement.

I’m not ready to abandon the groups, but I’m not ready to embrace them, either. They screwed up.

Don’t accept this. Get in people’s faces. Be mad. Be vocal. Be loud. And if being loud means to hell with respectful and civil discourse, so be it.

Photos: public domain, US Fish & Wildlife.

Bob DuCharme (Innodata Isogen): Converting between MIDI and RDF: readable MIDI and more fun with RDF

Listen to my fun!

[Image: MIDI and RDF logos]

When I first heard about Albert Meroño-Peñuela and Rinke Hoekstra's midi2rdf project, which converts back and forth between the venerable Musical Instrument Digital Interface binary format and RDF, I thought it seemed like an interesting academic exercise. Thinking about it more, I realized that it makes a great contribution both to the MIDI world and to musical RDF geeks.

MIDI has been the standard protocol for connecting synthesizers and related musical equipment since the 1980s. I've only recently thrown out a book with the MIDI specs that I've owned for nearly that long because, as with so many other technical specifications, they're now available online.

Meroño-Peñuela and Hoekstra's midi2rdf lets you convert between MIDI files and Turtle RDF. I love the title of their ESWC 2016 paper on it, "The Song Remains the Same" (pdf)--I was pretty young when Led Zeppelin's Houses of the Holy album came out, but I remember it vividly. The song remains the same because the project's midi2rdf and rdf2midi scripts provide lossless round-trip conversion between the two formats, which makes it a very valuable tool: it gives us a text file serialization of MIDI based on a published standard, which makes MIDI downright readable. Looking at these RDF files and spending no serious time with the MIDI spec, I worked out which resources and properties were doing what and used this to create my own MIDI files.

As a somewhat musical RDF geek, this was a lot of fun. I wrote Python scripts to generate different Turtle files of different kinds of random music, then converted them to MIDI so that I could listen to them. (You can find it all on GitHub.) The use of random functions means that running the same script several times creates different variations on the music. Below you will find links to MP3 versions of what I called fakeBebop and two versions of some whole-tone piano music that I generated, along with the MIDI and RDF files that go with them.

Each MIDI file (and its RDF equivalent) starts with some setup data to identify information such as the sounds that it will play and the tempo. Instead of learning all those setup details for my program to generate, I used the excellent Linux/Mac/Windows open source MuseScore music scoring program to generate a MIDI file with just a few notes of whatever instruments I wanted and then converted that to RDF. (This ability to convert in both directions is an important part of the value of the midi2rdf package.) Then, keeping the setup part of that RDF, I deleted the actual notes and had my script copy the setup part and then generate new notes that it appended to the setup part.

In RDF terms, the note generation meant two things: adding a pair of mid:NoteOnEvent resources (one to start playing a note and one to stop) and then adding references to those events onto a musical track listing the events to execute. So, for example, the first mid:NoteOnEvent in the following pair defines the start of a note at pitch 69, which is A above middle C on a piano. The mid:channel of 0 had been defined in the setup part, and the mid:tick value specifies how long the note will play until the next mid:NoteOnEvent. (I was too lazy to look up how the mid:tick values relate to elapsed time and picked some through trial and error.) The mid:velocity values essentially turn the note on and off.

p2:event0104 a mid:NoteOnEvent ;
    mid:channel 0 ;
    mid:pitch 69 ;
    mid:tick 400 ;
    mid:velocity 80 .

p2:event0105 a mid:NoteOnEvent ;
    mid:channel 0 ;
    mid:pitch 69 ;
    mid:tick 500 ;
    mid:velocity 0 .

As my script outputs noteOn events after the setup part, it appends references to them onto a string in memory that begins like this:

mid:pianoHeadertrack01 a mid:Track ;
    mid:hasEvent p2:event0000,
        p2:event0001,
        p2:event0002,
        p2:event0003,
        # etc. until you finish with a period

After outputting all the mid:NoteOnEvent events, the script outputs this string. (While the triples in this resource are technically unordered, rdf2midi seemed to assume that the event names are "event" followed by a zero-padded number. When an early version of my first script didn't do this, the notes got played in an odd order. Maybe it's just playing them in alphabetic sort order.)
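To make that concrete, here's a minimal sketch of the kind of generator script described above, in Python like the originals. The prefix URIs, pitch range, and tick spacing are my assumptions rather than anything lifted from the actual repo; the real scripts reuse the setup block that midi2rdf emits.

import random

# Assumed prefixes -- placeholders for the setup block that midi2rdf emits;
# splice in the setup part from a real conversion instead.
HEADER = """@prefix mid: <http://purl.org/midi-ld/midi#> .
@prefix p2: <http://example.org/midi/> .

"""

def note_pair(index, pitch, on_tick, off_tick, velocity=80):
    """Emit the paired mid:NoteOnEvent resources that start and stop one
    note, with zero-padded names so rdf2midi keeps them in order."""
    template = ("p2:event{i:04d} a mid:NoteOnEvent ;\n"
                "    mid:channel 0 ;\n"
                "    mid:pitch {pitch} ;\n"
                "    mid:tick {tick} ;\n"
                "    mid:velocity {vel} .\n")
    on = template.format(i=index, pitch=pitch, tick=on_tick, vel=velocity)
    off = template.format(i=index + 1, pitch=pitch, tick=off_tick, vel=0)
    return on + "\n" + off

events = []
pitch = 69   # start on A above middle C
tick = 0
for i in range(0, 32, 2):   # 16 notes -> 32 on/off events
    events.append(note_pair(i, pitch, tick, tick + 100))
    tick += 100   # tick spacing picked by trial and error, as described above
    pitch = max(40, min(90, pitch + random.randint(-5, 5)))   # random walk

# The track resource lists every event; the zero-padded names appear to
# drive playback order.
refs = ",\n        ".join("p2:event{0:04d}".format(i) for i in range(32))
track = "mid:pianoHeadertrack01 a mid:Track ;\n    mid:hasEvent " + refs + " .\n"

print(HEADER + "\n".join(events) + "\n" + track)

Redirect the output to a .ttl file, splice it onto a setup section produced the way described above, and rdf2midi should turn it into a playable MIDI file.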

That's all for just one track. My fakeBebop script does this for three tracks: a bass track playing fairly random quarter notes in the range of an upright bass, a muted trumpet track playing fairly random triplet-feel eighth notes (sometimes with a rest substituted), and a percussion track repeating a standard bebop ride cymbal pattern. You can see some generated Turtle RDF at fakeBebop.ttl, the MIDI file generated from the Turtle file by rdf2midi at fakeBebop.mid, and listen to what it sounds like at fakeBebop.mp3.

By "fairly random" I mean a random note within 5 half steps (a perfect fourth) of the previous note. Without any melodies beyond this random selection of notes, I think it still sounds a bit beboppy because, as the early bebop pioneers added more complex scales to the simple major and minor scales played by earlier jazz musicians, it all got more chromatic.

I have joked with my brother about how if you quietly play random notes on a piano with both hands using the same whole tone scale, it can sound a bit like Debussy, who was one of the early users of this scale. My wholeTonePianoQuarterNotes.py script follows logic similar to the fakeBebop script but outputs two piano tracks that correspond to a piano player's left and right hands and use the same whole tone scale. You can see some generated Turtle RDF at wholeTonePianoQuarterNotes.ttl, the MIDI file generated from that by rdf2midi at wholeTonePianoQuarterNotes.mid, and hear what it sounds like at wholeTonePianoQuarterNotes.mp3.

Before doing the whole tone piano quarter notes script I did one with random note durations, so it sounds like something from a bit later in the twentieth century. Generated Turtle RDF: wholeTonePiano.ttl; MIDI file generated by rdf2midi: wholeTonePiano.mid; MP3: wholeTonePiano.mp3.

I can think of all kinds of ideas for additional experiments, such as redoing the two piano experiments with the four voices of a string quartet or having the fakeBebop one generate common jazz chord progressions and typical licks over them. (Speaking of string quartets and Debussy, I love that Apple iPad Pro ad that NBC showed so often during the recent Olympics.) It would also be interesting to try some experiments with Black MIDI (or perhaps "Black RDF"!). If I had pursued these ideas, I wouldn't be writing this blog entry right now, because I had to cut myself off at some point.

I recently learned about SuperCollider, an open source Windows/Mac/Linux IDE with its own programming language that several serious electronic music composers use for generating music, and I could easily picture spending all of my free time playing with that. At least midi2rdf's RDF basis gave me the excuse of having a work-related angle as I wrote scripts to generate odd music. Although I was just slapping together some demo code for fun, I do think that midi2rdf's ability to provide lossless round-trip conversion between a popular old binary music format and a readable standardized format has a lot of potential to help people doing music with computers.


Please add any comments to this Google+ post.

Doug Schepers (Vectoreal): Topic of Cancer

I’m now officially a cancer survivor! Achievement unlocked!

A couple weeks ago, on July 27th, during a routine colonoscopy, they found a mass in my ascending colon which turned out to have some cancer cells.

I immediately went to UNC Hospital, a world-class local teaching hospital, and they did a CT scan on me. There are no signs that the cancer has spread. I was asymptomatic, so they caught it very early. The only reason I did the colonoscopy is that there’s a history of colon cancer in my family.

Yesterday, I had surgery to remove my ascending colon (an operation they call a “right colectomy”). They used a robot (named da Vinci!) operated by their chief GI oncology surgeon, and made 5 small incisions: 4 on the left side of my belly to cut out that part of the right colon; and a slightly larger one below my belly to remove the tissue (ruining my bikini line).

Everything went fine (I made sure in advance that this was a good robot and not a killer robot that might pull a gun on me), and I’m recovering well. I walked three times today so far, and even drank some clear liquids. I’ll probably be back on my feet and at home sometime this weekend. Visitors are welcome!

There are very few long-term negative effects from this surgery, if any.

They still don’t know for certain what stage the cancer was at, or if it’s spread to my lymph nodes; they’ll be doing a biopsy on my removed colon and lymph nodes to determine if I have to do chemotherapy. As of right now, they are optimistic that it has not spread, and even if it has, the chemo for this kind of cancer is typically pretty mild. If it hasn’t spread (or “metastasized”), then I’m already cured by having the tumor removed. In either case, I’m going to recover quickly.

My Dad had colon cancer, and came through fine. My eldest sister also had colon cancer over a decade ago, and it had even metastasized, and her chemo went fine… and cancer treatments have greatly improved in the past few years.

So, nobody should worry. I didn’t mention it widely, because I didn’t want to cause needless grief to anyone until after the operation was done. Cancer is such a scary word, and I don’t think this is going to be as serious as it might otherwise sound.

I’ll be seeing a geneticist in the coming weeks to determine exactly what signature of cancer I have, so I know what I’m dealing with. And I want to give more information to my family, because this runs in our genes, and if I’d gotten a colonoscopy a few years ago, they could have removed the polyp in the early stages and I’d have never developed cancer. (And because I’m otherwise healthy, I probably wouldn’t have gotten the colonoscopy if I hadn’t had insurance, which I probably wouldn’t have had if Obamacare didn’t mandate it. Thanks, Obama!)

Yay, science!

Future Plans

So, the cliché here is for me to say that this has opened my eyes to the ephemerality and immediacy of life, and that I’m planning to make major decisions in my life that prioritize what I truly value, based on my experience with cancer.

But the fact is, I’ve already been doing that recently, and while the cancer underscores this, I’ve already been making big plans for the future. I’ll post soon about some exciting new projects I’m trying to get underway, things that are far outside my comfort zone for which I’ll need to transform myself (you know, in a not-cancerous sort of way). I’ve already reduced my hours at W3C to 50%, and I’m looking at changing my role and remaining time there; I love the mission of W3C, which I see as a valuable kind of public service, so no matter what, I’ll probably stay involved there in some capacity for the foreseeable future. But I feel myself pulled toward building software and social systems, not just specifications. Stay tuned for more soon!

I’m optimistic and excited, not just about leaving behind this roadbump of cancer, but about new possibilities and new missions to change the world for the better in my own small ways.

Update:

Today (Friday, 26 August), I got the results of my biopsy from my oncologist, and I’m pleased to announce that I have no more colon cancer! The results were that the cancer was “well-differentiated, no activity in lymph nodes”, meaning that there was no metastasis, and I’m cured. This whole “adventure” emerged, played out, and concluded in just a month: I heard there was a tumor, was diagnosed with cancer, consulted an oncologist, had surgery, recovered, and got my cancer-free results all in 30 days. It felt much longer!

ProgrammableWeb: Netflix Offers Comprehensive Insight into Its Microservices Approach

Over on the Netflix blog, which we strongly encourage you to bookmark or add to your feed reader, Netflix's Katharina Probst and Justin Becker have penned a fascinating post that offers critical insight into how they think about their microservices architecture in terms of structure and orchestration.

ProgrammableWeb: How the Automated Attacks on the Pokémon GO API Happened

If you play Pokémon GO, you may already be aware that the game’s API has been under attack, preventing many casual users from accessing their accounts to find the elusive creatures. In this detailed post on the Shape Security Engineering Blog, Yao Zhao gives readers a closer look at what happened.

ProgrammableWeb: APIs – Kloudless Universal File Storage On-Premises

The Kloudless Universal File Storage On-Premises REST API is the enterprise version of the Kloudless Universal File Storage REST API. Kloudless clients that choose enterprise service from Kloudless are able to access and integrate this API. The API allows developers to access and integrate the file storage functionalities of Kloudless with other applications and create new applications. Some example API methods include creating, retrieving, and managing files, creating and managing permissions, and managing file folders and lists. Kloudless provides a customizable toolkit to integrate cloud storage, CRM, file sharing, and other features into applications on both web and mobile.
Date Updated: 2016-08-26

ProgrammableWeb: APIs – Kloudless Universal File Storage Cloud

The Kloudless Universal File Storage REST API allows developers to access and integrate the file storage functionalities of Kloudless with other applications and create new applications. Some example API methods include creating, retrieving, and managing files, creating and managing permissions, and managing file folders and lists. Kloudless provides a customizable toolkit to integrate cloud storage, CRM, file sharing, and other features into applications on both web and mobile.
Date Updated: 2016-08-26
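To make the request pattern concrete, here's a hypothetical sketch of listing files with the cloud API. The endpoint path, auth header, and response keys follow Kloudless's general v1 conventions but are assumptions on my part, not details taken from this listing.

import requests

BEARER_TOKEN = "YOUR_KLOUDLESS_TOKEN"  # placeholder credential
ACCOUNT_ID = "12345"                   # placeholder connected-account id

# Hypothetical call: list the contents of a connected account's root folder.
resp = requests.get(
    "https://api.kloudless.com/v1/accounts/" + ACCOUNT_ID
    + "/storage/folders/root/contents",
    headers={"Authorization": "Bearer " + BEARER_TOKEN},
)
resp.raise_for_status()
for obj in resp.json().get("objects", []):
    print(obj.get("type"), obj.get("name"))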

ProgrammableWeb: APIs – Kloudless Universal CRM On-Premises

The Kloudless Universal CRM On-Premises REST API is the enterprise version of the Kloudless Universal CRM REST API. Kloudless clients that choose enterprise service from Kloudless are able to access and integrate this API. The API allows developers to access and integrate the CRM functionalities of Kloudless with other applications and create new applications. Some example API methods include creating, retrieving, and managing CRMs, creating and managing contacts, and managing leads and opportunities. Kloudless provides a customizable toolkit to integrate cloud storage, CRM, file sharing, and other features into applications on both web and mobile.
Date Updated: 2016-08-26

ProgrammableWeb: APIs – Kloudless Universal CRM Cloud

The Kloudless Universal CRM REST API allows developers to access and integrate the CRM functionalities of Kloudless with other applications and create new applications. Some example API methods include creating, retrieving, and managing CRMs, creating and managing contacts, and managing leads and opportunities. Kloudless provides a customizable toolkit to integrate cloud storage, CRM, file sharing, and other features into applications on both web and mobile.
Date Updated: 2016-08-26

ProgrammableWeb: Daily API RoundUp: Facebook Instant Articles, NASA, Twilio Lookups, Mozilla

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWeb: Grip Launches AI-Powered Event Matchmaking API

Grip, an artificial intelligence-powered event networking solution, has launched an API that allows event applications to integrate with Grip's matchmaking functionality.

ProgrammableWeb: APIs – Campus Labs

The Campus Labs API integrates educational data into applications. OAuth2 authentication is available with HTTP requests and JSON responses. Developers can explore courses, evaluations, notations, demographics, accounts, and outcomes interfaces. The Campus Labs platform helps unify campus data to help institutions make data-informed decisions.
Date Updated: 2016-08-25

ProgrammableWeb: APIs – Unofficial GoPro

The Unofficial GoPro API allows developers to obtain camera parameters such as photo resolution and battery status. Additionally, it can be used to control, livestream from, or obtain data from a GoPro Wi-Fi enabled camera. This REST-based API supports JSON for data exchange.
Date Updated: 2016-08-25
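For illustration, here's a minimal sketch using the community-documented endpoints. The fixed 10.5.5.9 address and the gpControl paths are what the unofficial docs describe for HERO4-era Wi-Fi cameras; treat them as assumptions for other models.

import requests

# The camera runs an HTTP server at a fixed address on its own Wi-Fi network
# (assumption: HERO4-era gpControl firmware, per the community docs).
CAMERA = "http://10.5.5.9"

# Read the camera's status blob -- battery, mode, and settings live here.
resp = requests.get(CAMERA + "/gp/gpControl/status", timeout=5)
resp.raise_for_status()
print(resp.json().get("status"))

# Trigger the shutter (p=1 starts capture; p=0 stops video capture).
requests.get(CAMERA + "/gp/gpControl/command/shutter", params={"p": "1"}, timeout=5)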

ProgrammableWeb: APIs – Outlook Task REST

This REST API allows you to create, read, synchronize, update, and delete a user's tasks secured by Azure Active Directory in Office 365. The user's account can be on Office 365 or a Microsoft account, including Hotmail.com, Live.com, MSN.com, Outlook.com, and Passport.com.
Date Updated: 2016-08-25
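A sketch of what listing tasks might look like. The v2.0 path segment and the property names here are assumptions based on the Outlook REST API conventions of the time, so check the current documentation before relying on them.

import requests

ACCESS_TOKEN = "eyJ..."  # placeholder OAuth 2.0 bearer token from Azure AD

# Assumed endpoint shape: list the signed-in user's tasks.
resp = requests.get(
    "https://outlook.office.com/api/v2.0/me/tasks",
    headers={
        "Authorization": "Bearer " + ACCESS_TOKEN,
        "Accept": "application/json",
    },
)
resp.raise_for_status()
for task in resp.json()["value"]:
    print(task.get("Subject"), task.get("Status"))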

ProgrammableWeb: Daily API RoundUp: Slack Events Webhooks, PlanGrid, Restpack, VictorOps, Aquaplot

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWeb: Kloudless Introduces Universal CRM Integration with New API

Kloudless, a universal API company that enables thousands of apps and services to connect to each other, announced its Universal CRM API today. The new API allows developers to create apps that embed integrations with many popular customer relationship management (CRM) products, including Salesforce, Microsoft Dynamics, and Oracle Sales Cloud.

Jeremy Keith (Adactio): Marking up help text in forms

Zoe asked a question on Twitter recently about marking up help text in forms.

‘Sfunny—I had been pondering this exact question. In fact, I threw a CodePen together a couple of weeks ago.

See the Pen Form field accessibility question by Jeremy Keith (@adactio) on CodePen.

Visually, both examples look the same; there’s a label, then a form field, then some extra text (in this case, a validation message).

The first example puts the validation message in an em element inside the label text itself, so I know it won’t be missed by a screen reader—I think I first learned this technique from Derek many years ago.

<div class="first error example">
 <label for="firstemail">Email
  <em class="message">must include the @ symbol</em>
 </label>
 <input type="email" id="firstemail" placeholder="e.g. you@example.com">
</div>

The second example puts the validation message after the form field, but uses aria-describedby to explicitly associate that message with the form field—this means the message should be read after the form field.

<div class="second error example">
 <label for="secondemail">Email</label>
 <input type="email" id="secondemail" placeholder="e.g. you@example.com" aria-describedby="seconderror">
 <em class="message" id="seconderror">must include the @ symbol</em>
</div>

In both cases, the validation message won’t be missed by screen readers, although there’s a slight difference in the order in which things get read out. In the first example we get:

  1. Label text,
  2. Validation message,
  3. Form field.

And in the second example we get:

  1. Label text,
  2. Form field,
  3. Validation message.

In this particular example, the ordering in the second example more closely matches the visual representation, although I’m not sure how much of a factor that should be in choosing between the options.

Anyway, I was wondering whether one of these two options is “better” or “worse” than the other. I suspect that there isn’t a hard and fast answer.

Andy Budd (Clearleft): Developers “Own” The Code, So Shouldn’t Designers “Own” The Experience?

We’ve all been there. You spent months gathering business requirements, working out complex user journeys, crafting precision interface elements and testing them on a representative sample of users, only to see a final product that bears little resemblance to the desired experience.

Maybe you should have been more forceful and insisted on an agile approach, despite your belief that the organization wasn’t ready? Perhaps you should have done a better job with your pattern portfolios, ensuring that the developers used your modular code library rather than creating five different variations of a carousel. Or, maybe you even should’ve sat next to the development team every day, making sure what you designed actually came to pass.

Instead you’re left with a jumble of UI elements, with all the subtlety stripped out. Couldn’t they see that you worked for days getting the transitions just right, only for them to drop in a default animation library? And where on earth did that extra check-out step come from? I bet marketing threw that in at the last minute. You knew integration was going to be hard and compromises would need to be made, but we’re supposed to be making the users’ lives easier here, not the tech team’s.

When many people are involved in a project, it is very important to make sure that they have a common understanding of the problem and its solution.

Of course, there are loads of good reasons why the site is this way. Different teams with varying levels of skill working on different parts of the project, a bunch of last-minute changes shortening the development cycle, and a whole host of technical challenges. Still, why couldn’t the development team come and ask for your advice on their UI changes? You don’t mess with their code, so why do they have to change your designs around? Especially when the business impact could be huge! You’re only round the corner and would have been happy to help if they had just asked.

While the above story may be fictional, it’s a sentiment I hear from all corners of the design world, whether in-house or agency side. A carefully crafted experience ruined by a heavy-handed development team.

This experience reminds me of a news story I saw on a US local news channel several years ago. A county fair was running an endurance competition where the last person remaining with their hand on a pickup truck won the prize. I often think that design is like a massive game of “touch the truck”, with the development team always walking away with the keys at the end of the contest. Like the last word in an argument, the final person to come in contact with the site holds all the power and can dictate how it works or what it looks like. Especially if they claim that the particular target experience isn’t “technically possible”, which is often shorthand for “really difficult”, “I can’t be bothered doing it that way” or “I think there’s a better way of doing it so am going to pull the dev card”.

Now I know I’m being unfairly harsh about developers here and I don’t mean to be. There are some amazingly talented technologists out there who really care about usability and want to do the best for the user. However, it often feels as though there’s an asymmetric level of respect between disciplines due to a belief that design is easy and therefore something everybody can have an opinion on, while development is hard and only for the specially initiated. So while designers are encouraged (sometimes expected) to involve everybody in the design process, they often aren’t afforded the same luxury.

To be honest, I don’t blame them. After all, I know just enough development to be dangerous, so you’d be an idiot if you wanted my opinion on database structure and code performance (other than I largely think performance is a good thing). Then again I do know enough to tell when the developers are fudging things and it’s always fun to come back to them with a working prototype of something they said was impossible or would take months to implement — but I digress.

The problem is, I think a lot of developers are in the same position about design — they just don’t realize it. So when they make a change to an interface element based on something they had heard at a conference a few years back, they may be lacking important context. Maybe this was something you’ve already tested and discounted because it performed poorly. Perhaps you chose this element over another for a specific reason, like accessibility? Or perhaps the developers’ opinions were just wrong, based on how they experience the web as superusers rather than an average Jo.

Now let’s get something straight here. I’m not saying that developers shouldn’t show an interest in design or input into the design process. I’m a firm believer in cross-functional pairing and think that some of the best usability solutions emanate from the tech team. There are also a lot of talented people out there who span a multitude of disciplines. However, at some point the experience needs to be owned, and I don’t think it should be owned by the last person to open the HTML file and “touch the truck”.

So, if good designers respect the skill and experience great developers bring to the table, how about a little parity? If designers are happy for developers to “own the code”, why not show a similar amount of respect and let designers “own the experience”?

Everybody has an opinion. However, it’s not a good enough reason to just dive in and start making changes.

Doing this is fairly simple. If you ever find yourself in a situation where you’re not sure why something was designed in a particular way, and think it could be done better, don’t just dive in and start making changes. Similarly, if you hit a technical roadblock and think it would make your lives easier to design something a different way, go talk to your designer. They may be absolutely fine with your suggested changes, or they may want to go away and think about some other ways of solving the same problem.
After all, collaboration goes both ways. So if you don’t want designers to start “optimizing” your code on the live server, outside your version control processes, please stop doing the same to their design.

Originally published at www.smashingmagazine.com on August 9, 2016.

ProgrammableWeb: APIs – Twilio Lookups

The Lookups API offers information about a phone number, such as region-specific formatting, carrier details, and caller name information. Each request can obtain one or more types of data. This API returns information in JSON format and requires API keys for authentication. Twilio is a San Francisco-based telephony infrastructure provider.
Date Updated: 2016-08-24
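For a sense of the request shape, here's a minimal sketch against the v1 Lookup endpoint; the credentials and phone number are placeholders.

import requests

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder
AUTH_TOKEN = "your_auth_token"                       # placeholder

# Ask for carrier details; Type=caller-name returns CNAM data instead.
resp = requests.get(
    "https://lookups.twilio.com/v1/PhoneNumbers/+15005550006",
    params={"Type": "carrier"},
    auth=(ACCOUNT_SID, AUTH_TOKEN),
)
resp.raise_for_status()
data = resp.json()
print(data["national_format"], data["carrier"]["name"])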

ProgrammableWeb: Using Box Content APIs to Deliver Enterprise-Grade Security in Your Custom Apps

This is the first article of a three-part series on building custom applications with Box Content APIs. 

Amazon Web Services: AWS Week in Review – Coming Back With Your Help!

Back in 2012 I realized that something interesting happened in AWS-land just about every day. In contrast to the periodic bursts of activity that were the norm back in the days of shrink-wrapped software, the cloud became a place where steady, continuous development took place.

In order to share all of this activity with my readers and to better illustrate the pace of innovation, I published the first AWS Week in Review in the spring of 2012. The original post took all of about 5 minutes to assemble, post and format. I got some great feedback on it and I continued to produce a steady stream of new posts every week for over 4 years. Over the years I added more and more content generated within AWS and from the ever-growing community of fans, developers, and partners.

Unfortunately, finding, saving, and filtering links, and then generating these posts grew to take a substantial amount of time. I reluctantly stopped writing new posts early this year after spending about 4 hours on the post for the week of April 25th.

After receiving dozens of emails and tweets asking about the posts, I gave some thought to a new model that would be open and more scalable.

Going Open
The AWS Week in Review is now a GitHub project (https://github.com/aws/aws-week-in-review). I am inviting contributors (AWS fans, users, bloggers, and partners) to contribute.

Every Monday morning I will review and accept pull requests for the previous week, aiming to publish the Week in Review by 10 AM PT. In order to keep the posts focused and highly valuable, I will approve pull requests only if they meet our guidelines for style and content.

At that time I will also create a file for the week to come, so that you can populate it as you discover new and relevant content.

Content & Style Guidelines
Here are the guidelines for making contributions:

  • Relevance – All contributions must be directly related to AWS.
  • Ownership – All contributions remain the property of the contributor.
  • Validity – All links must be to publicly available content (links to free, gated content are fine).
  • Timeliness – All contributions must refer to content that was created on the associated date.
  • Neutrality – This is not the place for editorializing. Just the facts / links.

I generally stay away from generic news about the cloud business, and I post benchmarks only with the approval of my colleagues.

And now a word or two about style:

  • Content from this blog is generally prefixed with “I wrote about POST_TITLE” or “We announced that TOPIC.”
  • Content from other AWS blogs is styled as “The BLOG_NAME wrote about POST_TITLE.”
  • Content from individuals is styled as “PERSON wrote about POST_TITLE.”
  • Content from partners and ISVs is styled as “The BLOG_NAME wrote about POST_TITLE.”

There’s room for some innovation and variation to keep things interesting, but keep it clean and concise. Please feel free to review some of my older posts to get a sense for what works.

Over time we might want to create a more compelling visual design for the posts. Your ideas (and contributions) are welcome.

Sections
Over the years I created the following sections:

  • Daily Summaries – content from this blog, other AWS blogs, and everywhere else.
  • New & Notable Open Source.
  • New SlideShare Presentations.
  • New YouTube Videos including APN Success Stories.
  • New AWS Marketplace products.
  • New Customer Success Stories.
  • Upcoming Events.
  • Help Wanted.

Some of this content comes to my attention via RSS feeds. I will post the OPML file that I use in the GitHub repo and you can use it as a starting point. The New & Notable Open Source section is derived from a GitHub search for aws. I scroll through the results and pick the 10 or 15 items that catch my eye. I also watch /r/aws and Hacker News for interesting and relevant links and discussions.

Over time, it is possible that groups or individuals may become the primary contributor for a section. That’s fine, and I would be thrilled to see this happen. I am also open to the addition to new sections, as long as they are highly relevant to AWS.

Adding Content / Creating a Pull Request
It is very easy to participate in this process. You don’t need to use any shell commands or text editors. Start by creating a GitHub account and logging in. I set up two-factor authentication for my account and you might want to do the same.

Now, find a piece of relevant content. As an example, I’ll use the presentation Amazon Aurora for Enterprise Database Applications. I visit the current aws-week-in-review file and click on the Edit button (the pencil icon).

Then I insert the new content (line 81).

I could have inserted several pieces of new content if desired.

Next, I enter a simple commit message, indicate that the commit should go to a branch (this is important), and click on Propose file change.

And that’s it! In my role as owner of the file, I’ll see the pull request, review it, and then merge it in to the master branch.

Automation
Earlier this year I tried to automate the process, but I did not like the results. You are welcome to give this a shot on your own. I do want to make sure that we continue to exercise human judgement in order to keep the posts as valuable as possible.

Let’s Do It
I am super excited about this project and I cannot wait to see those pull requests coming in. Please let me know (via a blog comment) if you have any suggestions or concerns.

I should note up front that I am very new to Git-based collaboration and that this is going to be a learning exercise for me. Do not hesitate to let me know if there’s a better way to do things!

Jeff;

 

ProgrammableWeb: Translational Software Releases Genomics API to Speed Up Precision Medicine

Translational Software, a clinical decision support tools developer, recently announced an API that labs and tech providers can utilize to hasten the development of medical apps.

Andy Budd (Clearleft): Are we moving towards a post-Agile age?

Agile has been the dominant development methodology in our industry for some time now. While some teams are just getting to grips with Agile, others have extended it to the point that it’s no longer recognisable as Agile. In fact, many of the most progressive design and development teams are Agile only in name. What they are actually practicing is something new, different, and innately more interesting. Something I’ve been calling Post-Agile thinking. But what exactly is Post-Agile, and how did it come about?

The age of Waterfall

Agile emerged from the world of corporate IT. In this world it was common for teams of business analysts to spend months gathering requirements. These requirements would be thrown into the PRINCE2 project management system, from which a detailed specification—and Gantt chart—would eventually emerge. The development team would come up with a budget to deliver the required spec, and once they had been negotiated down by the client, work would start.

Systems analysts and technical architects would spend months modelling the data structure of the system. The more enlightened companies would hire Information Architects—and later UX Designers—to understand user needs and create hundreds of wireframes describing the user interface.

Humans are inherently bad at estimating future states and have the tendency to assume the best outcome—this is called estimation bias. As projects grow in size, they also grow in surface area and visibility, gathering more and more input from the organisation. As time marches on, the market changes, team members come and go, and new requirements get uncovered. Scope creep inevitably sets in.

To manage scope creep, digital teams required every change in scope to come in the form of a formal change request. Each change would be separately estimated, and budgets would dramatically increase. This is the reason you still hear of government IT projects going over budget by hundreds of millions of dollars. The Waterfall process, as it became known, makes this almost inevitable.

Ultimately, the traditional IT approach put too much responsibility in the hands of planners and middle managers, who were often removed from the day-to-day needs of the project.

The age of Agile

In response to the failures of traditional IT projects, a radical new development philosophy called Agile began to emerge. This new approach favoured just-in-time planning, conversations over documentation, and running code; effectively trying to counter all the things that went wrong with the typical IT project. The core tenets of this new philosophy were captured in the Agile Manifesto, a document which has largely stood the test of time.

As happens with most philosophies, people started to develop processes, practices and rituals to help explain how the tenets should be implemented in different situations. Different groups interpreted the manifesto differently, and specific schools started to emerge.

The most common Agile methodology we see on the web today is Scrum, although Kanban is another popular approach.

Rather than spending effort on huge scope documents which invariably change, Agile proponents will typically create a prioritised backlog of tasks. The project is then broken down into smaller chunks of activity which pull tasks from the backlog. These smaller chunks are easier to estimate and allow for much more flexibility. This opens up the possibility for regular re-prioritisation in the face of a changing market.

Agile—possibly unknowingly—adopted the military concepts of situational awareness and command intent to move day-to-day decision making from the planners to the front-line teams. This effectively put control back in the hands of the developers.

This approach has demonstrated many benefits over the traditional IT project. But over time, Agile has become decidedly less agile as dogmas crept in. Today many Agile projects feel as formal and conservative as the approaches they overthrew.

The post-Agile age

Perhaps we’re moving towards a post-Agile world? A world that is informed by the spirit of Agile, but has much more flexibility and nuance built in.

This post-Agile world draws upon the best elements of Agile, while ditching the dogma. It also draws upon the best elements of Design Thinking and even—God forbid—the dreaded Waterfall process.

People working in a post-Agile way don’t care which canon an idea comes from, as long as it works. The post-Agile practitioner cherry-picks from the best tools available, rather than sticking with a rigid framework. Post-Agile is less of a philosophy and more of a toolkit that has been built up over years of practice.

I believe Lean Startup and Lean UX are early manifestations of post-Agile thinking. Both of these approaches sound like new brands of project management, and each has its own dogma. If you dig below the surface, both of these practices are surprisingly lacking in process. Instead they represent a small number of tools—like the business model canvas—and a loose set of beliefs such as testing hypotheses in the most economical way possible.

My initial reaction to Lean was to perceive it as the emperor’s new clothes for this very reason. It came across as a repackaging of what many designers and developers had been doing already. With a general distrust for trademarks and brand names, I naturally pushed back.

What I initially took as a weakness, I now believe is its strength. With very little actual process, designers and developers around the world have imbued Lean with their own values, added their own processes, and made it their own. Lean has become all things to all people, the very definition of a post-Agile approach.

I won’t go into detail how this relates to other movements like post-punk, post-modernism, or the rise of post-factual politics; although I do believe they have similar cultural roots.

Ultimately, post-Agile thinking is what happens when people have lived with Agile for a long time and start to adapt the process. It’s the combination of the practices they have adopted, the ones they have dropped, the new tools they have rolled in, as well as the ones they have rolled back.

Post-Agile is what comes next. Unless you truly believe that Scrum or Kanban is the pinnacle of design and development practice, there is always something new and more interesting around the corner. Let’s drop the dogma and enter this post-Agile world.

Jeremy Keith (Adactio): Why do pull quotes exist on the web?

There you are reading an article when suddenly it’s interrupted by a big piece of text that’s repeating something you just read in the previous paragraph. Or it’s interrupted by a big piece of text that’s spoiling a sentence that you are about to read in subsequent paragraphs.

There you are reading an article when suddenly it’s interrupted by a big piece of text that’s repeating something you just read in the previous paragraph.

To be honest, I find pull quotes pretty annoying in printed magazines too, but I can at least see the justification for them there: if you’re flipping through a magazine, they act as eye-catching inducements to stop and read (in much the same way that good photography or illustration does). But once you’re actually reading an article, they’re incredibly frustrating.

You either end up learning to blot them out completely, or you end up reading the same sentence twice.

You either end up learning to blot them out completely, or you end up reading the same sentence twice. Blotting them out is easier said than done on a small-screen device. At least on a large screen, pull quotes can be shunted off to the side, but on handheld devices, pull quotes really make no sense at all.

Are pull quotes online an example of a skeuomorph? “An object or feature which imitates the design of a similar artefact made from another material.”

I think they might simply be an example of unexamined assumptions. The default assumption is that pull quotes on the web are fine, because everyone else is doing pull quotes on the web. But has anybody ever stopped to ask why? It was this same spiral of unexamined assumptions that led to the web drowning in a sea of splash pages in the early 2000s.

I think they might simply be an example of unexamined assumptions.

I’m genuinely curious to hear the design justification for pull quotes on the web (particularly on mobile), because as a reader, I can give plenty of reasons for their removal.

ProgrammableWeb: APIsDimelo

Dimelo is a French customer engagement firm that provides social customer service technologies. Dimelo unifies social media messages, forums and communities, chat, and mobile messages in one system. The Dimelo API uses webhooks to deliver a digital platform that can be integrated with Facebook pages, processes, applications, and websites. Developers need to contact Dimelo directly in order to obtain access to the documentation.
Date Updated: 2016-08-23
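Since Dimelo delivers events over webhooks, an integration boils down to exposing an HTTPS endpoint that accepts JSON POSTs. Here is a minimal TypeScript sketch of such a receiver; because the documentation is private, the route, payload shape, and field names below are purely hypothetical.

    import { createServer } from "http";

    // Hypothetical payload shape -- Dimelo's real schema is only in its
    // private documentation.
    interface DimeloEvent {
      type: string;    // assumed: kind of engagement event
      body?: unknown;  // assumed: event-specific data
    }

    createServer((req, res) => {
      if (req.method !== "POST" || req.url !== "/webhooks/dimelo") {
        res.writeHead(404);
        res.end();
        return;
      }
      let raw = "";
      req.on("data", (chunk) => (raw += chunk));
      req.on("end", () => {
        try {
          const event = JSON.parse(raw) as DimeloEvent;
          console.log("received event:", event.type);
          res.writeHead(200);
          res.end(); // acknowledge quickly; do real processing asynchronously
        } catch {
          res.writeHead(400);
          res.end();
        }
      });
    }).listen(3000);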

ProgrammableWeb: APIsMicrosoft Azure Active Directory Graph REST

This API provides programmatic access to Azure Active Directory and allows apps to perform create, read, update, and delete (CRUD) operations on directory data and directory objects, including users, groups, and organizational contacts. Azure Active Directory (Azure AD) is Microsoft's multi-tenant, cloud-based directory and identity management service.
Date Updated: 2016-08-23
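For a sense of the API's shape, here is a hedged TypeScript sketch that lists directory users, assuming an OAuth 2.0 access token has already been acquired and Node 18+ for the global fetch; the tenant domain is a placeholder, and api-version=1.6 reflects the version current when this entry was published.

    // List users via Azure AD Graph (a sketch; token acquisition not shown).
    const tenant = "contoso.onmicrosoft.com"; // placeholder tenant domain
    const token = process.env.AAD_ACCESS_TOKEN!;

    const res = await fetch(
      `https://graph.windows.net/${tenant}/users?api-version=1.6`,
      { headers: { Authorization: `Bearer ${token}` } }
    );
    if (!res.ok) throw new Error(`Graph request failed: ${res.status}`);
    const { value: users } = await res.json();
    for (const user of users) console.log(user.displayName);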

ProgrammableWeb: APIsOutlook Notifications REST

This REST API allows apps to learn about changes to the user's mail, calendar, or contact data secured by Azure Active Directory in Office 365, as well as data in Microsoft accounts on Hotmail.com, Live.com, MSN.com, Outlook.com, and Passport.com.
Date Updated: 2016-08-23
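A hedged TypeScript sketch of creating a push subscription for new mail follows; the endpoint and field names are my recollection of the v2.0 Outlook REST docs, so verify them against the current reference, and the webhook URL is a placeholder.

    // Subscribe to "message created" notifications (sketch; assumes an
    // OAuth 2.0 token with a mail-read scope and Node 18+ global fetch).
    const token = process.env.OUTLOOK_ACCESS_TOKEN!;

    const res = await fetch(
      "https://outlook.office.com/api/v2.0/me/subscriptions",
      {
        method: "POST",
        headers: {
          Authorization: `Bearer ${token}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          "@odata.type": "#Microsoft.OutlookServices.PushSubscription",
          Resource: "https://outlook.office.com/api/v2.0/me/messages",
          NotificationURL: "https://example.com/webhook", // placeholder endpoint
          ChangeType: "Created",
        }),
      }
    );
    console.log(res.status, await res.json());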

ProgrammableWeb: APIsOffice 365 Data Extensions REST

This REST API allows apps to store custom data in a message, event, or contact in the user's account. The account can be on Office 365 or a Microsoft account domain, including Hotmail.com, Live.com, MSN.com, Outlook.com, and Passport.com.
Date Updated: 2016-08-23
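A hedged TypeScript sketch of attaching custom data to a message as an open extension; the @odata.type value and endpoint follow the v2.0 Outlook REST docs as I recall them, the message ID is a placeholder, and the custom fields are invented for illustration.

    // Store custom data on a message via an open extension (sketch).
    const token = process.env.OUTLOOK_ACCESS_TOKEN!;
    const messageId = "AAMkAD..."; // placeholder message ID

    const res = await fetch(
      `https://outlook.office.com/api/v2.0/me/messages/${messageId}/extensions`,
      {
        method: "POST",
        headers: {
          Authorization: `Bearer ${token}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          "@odata.type": "#Microsoft.OutlookServices.OpenTypeExtension",
          ExtensionName: "Com.Example.DealTracker", // invented namespace
          dealId: 4217,                             // invented custom field
          expirationDate: "2016-09-01",             // invented custom field
        }),
      }
    );
    console.log(res.status);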

ProgrammableWeb: APIsOutlook People REST

This REST API gains access to data secured by Azure Active Directory in Office 365. It allows you to get information about people across mail, contacts, and social networks, including access to Microsoft accounts on office.com, hotmail.com, live.com, and office365.com.
Date Updated: 2016-08-23
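A hedged TypeScript sketch of a relevance-ranked people query; the endpoint path and version segment are my recollection of the Outlook People REST docs, so treat them as assumptions.

    // List people ranked by relevance to the signed-in user (sketch).
    const token = process.env.OUTLOOK_ACCESS_TOKEN!;

    const res = await fetch(
      "https://outlook.office.com/api/v2.0/me/people?$top=10", // path assumed
      { headers: { Authorization: `Bearer ${token}` } }
    );
    const { value: people } = await res.json();
    for (const p of people) console.log(p.DisplayName);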

ProgrammableWeb: APIsNASA EONET Categories

This EONET REST API can be used to filter the output of the Events API and the Layers API. Categories are the types of events by which individual events are cataloged. EONET is the Earth Observatory Natural Event Tracker, a prototype web service with two goals: providing a curated source of continuously updated natural event metadata, and providing a service that links those natural events to thematically related, web service-enabled image sources. NASA open data supports NASA’s scientists and engineers with information technology such as infusion, procurement, and future IT workforce development.
Date Updated: 2016-08-23
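Because EONET is open and keyless, trying the Categories endpoint takes a few lines of TypeScript (Node 18+ for global fetch); the v2.1 path matches NASA's EONET documentation at the time of writing.

    // Fetch the EONET category list -- no API key required.
    const res = await fetch("https://eonet.sci.gsfc.nasa.gov/api/v2.1/categories");
    const { categories } = await res.json();
    for (const c of categories) {
      console.log(`${c.id}: ${c.title}`); // e.g. "8: Wildfires"
    }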

Cameron MollMoving sale! 55% off all letterpress prints


Starting today through Friday, August 26, all products are priced 55% off. My new workspace will have less storage space for posters, so it’s time to clear out some inventory. 📦💨

At the end of August I’ll close the doors to my incredible office space in downtown Sarasota, Florida. It’s been an absolutely wonderful place to work for the past four years, serving a dual purpose as inventory storage for my letterpress posters and a workspace for Authentic Jobs.

But it’s time to move on.


When I designed the first poster in 2007, it was purely a passion project. After posting the work online, dozens of readers requested that I make copies available for purchase, and the rest, as they say, is history. It’s been a genuine privilege shipping thousands of posters to more than 30 countries around the world. Honestly, I’ve been blown away by the response over the years.


It would mean the world to me to have my artwork grace your walls as you help clear out some inventory. It’s a win-win for everyone—I save a little space, you save a lotta moola.

Sale ends Friday, August 26 or while supplies last!

Shop now→

ProgrammableWebSnapLogic Announces Additions to its Library of Connectors

SnapLogic, an integration platform as a service (iPaaS) provider, has announced the release of its Summer 2016 Elastic Integration Platform which includes new additions and updates to its library of connectors (called Snaps) and enhancements to the core platform. The release includes a new Hive Snap, new Teradata Snap, and enhanced encryption for the Hadooplex for increased data security.

ProgrammableWeb: APIsAquaplot

Aquaplot offers sea routing functionality and distance computation for ships. Developers can build integrations related to route planning, route optimization, fleet maneuver planning, and traffic monitoring. The Aquaplot API allows developers to retrieve the distance between two coordinates on water. The database contains 3,500 ports. The API uses JSON for responses, and GeoJSON support will be available in the near future.
Date Updated: 2016-08-22
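To make the distance call concrete, here is a deliberately hypothetical TypeScript sketch; the endpoint, query parameters, and response field are illustrative guesses, since the real names live in Aquaplot's documentation.

    // Hypothetical sketch only -- route and field names are assumptions.
    const from = { lon: 4.0, lat: 52.0 };  // roughly off Rotterdam
    const to = { lon: -74.0, lat: 40.5 };  // roughly off New York
    const url =
      "https://api.aquaplot.com/v1/distance" + // assumed endpoint
      `?from=${from.lon},${from.lat}&to=${to.lon},${to.lat}`;

    const res = await fetch(url, {
      headers: { Authorization: "Bearer <API_KEY>" }, // placeholder credential
    });
    const { distance_nm } = await res.json(); // assumed response field
    console.log(`sea distance: ${distance_nm} nm`);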

ProgrammableWeb: APIsNASA TechPort OpenData Support REST

This REST API can be used to export NASA's TechPort data in XML format so that technology project data can be further processed and analyzed in machine-readable form. NASA open data supports NASA's scientists and engineers with information technology such as infusion, procurement, and future IT workforce development.
Date Updated: 2016-08-22

ProgrammableWeb: APIsNASA Vesta Trek REST

This API is part of a collection of APIs that power Vesta Trek, one of NASA's web-based portals for exploration. Map layers are available through the OGC RESTful WMTS protocol. NASA open data supports NASA’s scientists and engineers with information technology such as infusion, procurement, and future IT workforce development.
Date Updated: 2016-08-22

ProgrammableWeb: APIsVictorOps

The VictorOps API provides integration of incidents, on-call schedules, and reports across the IT project life cycle. It follows a REST architecture, with GET and POST requests that return JSON responses.
Date Updated: 2016-08-22
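As an illustration of the GET side, here is a TypeScript sketch that lists incidents; the api-public path and the X-VO-* header names follow VictorOps' public REST API as I recall it, so treat them as assumptions.

    // List current incidents (sketch; Node 18+ global fetch assumed).
    const res = await fetch("https://api.victorops.com/api-public/v1/incidents", {
      headers: {
        "X-VO-Api-Id": process.env.VO_API_ID!,   // issued per organization
        "X-VO-Api-Key": process.env.VO_API_KEY!,
      },
    });
    const { incidents } = await res.json();
    console.log(`${incidents.length} incident(s) currently tracked`);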

ProgrammableWeb: APIsWalgreens Digital Offers

The Digital Offers API enables third-party applications to search for digital coupons and clip them directly to Walgreens customers' Balance Rewards cards. Supported platforms include iPhone, iPad, Android, Android tablets, BlackBerry, and more. Walgreens is a retail outlet with both online and physical locations.
Date Updated: 2016-08-22

ProgrammableWeb: APIsW3C Media Recorder

This W3C API allows you to make basic recordings in the browser using the MediaRecorder object. The World Wide Web Consortium (W3C) is an international community where Member organizations, a full-time staff, and the public work together to develop Web standards.
Date Updated: 2016-08-22
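The MediaRecorder object is a standard browser API, so a basic recording needs no server at all. A minimal TypeScript sketch for the browser:

    // Record five seconds of microphone audio and collect it as a Blob.
    async function recordClip(): Promise<void> {
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const recorder = new MediaRecorder(stream);
      const chunks: Blob[] = [];

      recorder.ondataavailable = (e) => chunks.push(e.data);
      recorder.onstop = () => {
        const recording = new Blob(chunks, { type: recorder.mimeType });
        console.log(`recorded ${recording.size} bytes`);
      };

      recorder.start();
      setTimeout(() => recorder.stop(), 5000);
    }

    recordClip();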

ProgrammableWeb: APIsSkyscanner Referrals Service

This API is a referral service that gives you access to the relevant Skyscanner web page for a given query, including date, origin, and/or destination. Skyscanner is a global travel search engine that lets you plan and book directly from millions of travel options.
Date Updated: 2016-08-22

ProgrammableWeb: APIsSkyscanner Location Autosuggest Service

This API allows you to get a list of places in Skyscanner that match a query string. It also allows you to get information about a specific place given its ID. Skyscanner is a global travel search engine that lets you plan and book directly from millions of travel options.
Date Updated: 2016-08-22
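A hedged TypeScript sketch of a place lookup; the market/currency/locale URL shape follows Skyscanner's 2016-era documentation as I recall it, and the API key is a placeholder.

    // Look up places matching "edin" in the UK market (sketch).
    const apiKey = process.env.SKYSCANNER_API_KEY!;
    const url =
      "http://partners.api.skyscanner.net/apiservices/autosuggest/v1.0" +
      `/UK/GBP/en-GB?query=edin&apiKey=${apiKey}`;

    const res = await fetch(url);
    const { Places } = await res.json(); // response key per the 2016-era docs
    for (const p of Places) console.log(p.PlaceName, p.PlaceId);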

ProgrammableWebHow Open Financial APIs Will Lead to Integrated Banking

If there were ever a time to develop a banking app, it would be now. It is estimated that a quarter of the top 50 global banks will have a banking app store within the next two years. A plethora of third-party banking apps has emerged in recent years, causing trouble for slow-to-adapt banks. Banking apps, and their ecosystems, offer banks a multitude of new revenue streams, and help to broaden partner and user bases. For those less convinced of app viability, a cursory glance at market trends should be convincing enough.

ProgrammableWebDaily API RoundUp: Pokémon Go Slack Integration, Basecamp 3, EdX, W3C Screen Orientation

Every day, the ProgrammableWeb team is busy updating its three primary directories for APIs, clients (language-specific libraries or SDKs for consuming or providing APIs), and source code samples.

ProgrammableWebHow and Why Etsy Adopted an API-First Architecture

Despite currently boasting an architecture that has a reputation for flexibility, continuous experimentation, and regular daily deploys, Etsy’s system was suffering from significant performance problems just a few years ago.

ProgrammableWebHow to Deploy On-Premise File Sharing and Sync with Pydio

These days, many projects rely on cloud-based file-sharing tools like Dropbox, Google Drive, iCloud, and OneDrive. These solutions often sport a user-friendly interface and offer a huge storage quota, but they are ‘free with limitations’. The lack of control and integration into existing infrastructures can drive organizations toward alternatives, many of which are found in the open source realm.

Amazon Web ServicesAmazon WorkSpaces Update – Hourly Usage and Expanded Root Volume

In my recent post, I Love My Amazon WorkSpace, I shared the story of how I became a full-time user and big fan of Amazon WorkSpaces. Since writing the post I have heard similar sentiments from several other AWS customers.

Today I would like to tell you about some new and recent developments that will make WorkSpaces more economical, more flexible, and more useful:

  • Hourly WorkSpaces – You can now pay for your WorkSpace by the hour.
  • Expanded Root Volume – Newly launched WorkSpaces now have an 80 GB root volume.

Let’s take a closer look at these new features.

Hourly WorkSpaces
If you only need part-time access to your WorkSpace, you (or your organization, to be more precise) will benefit from this feature. In addition to the existing monthly billing, you can now use and pay for a WorkSpace on an hourly basis, allowing you to save money on your AWS bill. If you are a part-time employee, a road warrior, share your job with another part-timer, or work on multiple short-term projects, this feature is for you. It is also a great fit for corporate training, education, and remote administration.

There are now two running modes – AlwaysOn and AutoStop:

  • AlwaysOn – This is the existing mode. You have instant access to a WorkSpace that is always running, billed by the month.
  • AutoStop – This is new. Your WorkSpace starts running and billing when you log in, and stops automatically when you remain disconnected for a specified period of time.

A WorkSpace that is running in AutoStop mode will automatically stop a predetermined amount of time after you disconnect (1 to 48 hours). Your WorkSpaces Administrator can also force a running WorkSpace to stop. When you next connect, the WorkSpace will resume, with all open documents and running programs intact. Resuming a stopped WorkSpace generally takes less than 90 seconds.

Your WorkSpaces Administrator has the ability to choose your running mode when launching your WorkSpace:

[Screenshot: WorkSpaces running-mode configuration]

The Administrator can change the AutoStop time and the running mode at any point during the month. They can also track the number of working hours that your WorkSpace accumulates during the month using the new UserConnected CloudWatch metric, and switch from AutoStop to AlwaysOn when this becomes more economical. Switching from hourly to monthly billing takes place upon request; however, switching the other way takes effect at the start of the following month.
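For administrators scripting this rather than clicking through the console, here is a hedged TypeScript sketch using the AWS SDK for JavaScript; the parameter names below follow the ModifyWorkspaceProperties API action, but verify them against the current SDK, and the WorkSpace ID is a placeholder.

    // Switch a WorkSpace to AutoStop with a two-hour timeout (sketch).
    import AWS from "aws-sdk";

    const workspaces = new AWS.WorkSpaces({ region: "us-east-1" });

    await workspaces
      .modifyWorkspaceProperties({
        WorkspaceId: "ws-xxxxxxxxx", // placeholder WorkSpace ID
        WorkspaceProperties: {
          RunningMode: "AUTO_STOP", // or "ALWAYS_ON" for monthly billing
          RunningModeAutoStopTimeoutInMinutes: 120,
        },
      })
      .promise();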

All new Amazon WorkSpaces can take advantage of hourly billing today. If you’re using a custom image for your WorkSpaces, you’ll need to refresh your custom images from the latest Amazon WorkSpaces bundles. The ability for existing WorkSpaces to switch to hourly billing will be added in the future.

To learn more about pricing for hourly WorkSpaces, visit the WorkSpaces Pricing page.

Expanded Root Volume
By popular demand we have expanded the size of the root volume for newly launched WorkSpaces to 80 GB, allowing you to run more applications and store more data at no additional cost. Your WorkSpaces Administrator can rebuild existing WorkSpaces in order to upgrade them to the larger root volumes (read Rebuild a WorkSpace to learn more). Rebuilding a WorkSpace will restore the root volume (C:) to the most recent image of the bundle that was used to create the WorkSpace. It will also restore the data volume (D:) from the last automatic snapshot.

Some WorkSpaces Resources
While I have your attention, I would like to let you know about a couple of other important WorkSpaces resources:

Available Now
The features that I described above are available now and you can start using them today!

Jeff;

ProgrammableWeb: APIsSkygear

Skygear provides a backend that can integrate functionality such as chat, push notifications, user management, geolocation, sync capabilities, and bots into applications. Additionally, Skygear offers deployment, data storage, and reusable plugins, so developers will not need to start from scratch to build applications like social platforms or messenger bots. For pricing, Skygear offers three hosting plans, and a free option is available as well.
Date Updated: 2016-08-19

ProgrammableWeb: APIsSkyscanner Markets Service

This API allows you to return a list of the countries (markets) supported by Skyscanner, a global travel search engine that lets you plan and book directly from millions of travel options.
Date Updated: 2016-08-19

ProgrammableWeb: APIsSkyscanner Locales Service

This API allows you to return the list of localizations supported by Skyscanner, a global travel search engine that lets you plan and book directly from millions of travel options.
Date Updated: 2016-08-19

ProgrammableWeb: APIsSkyscanner Currencies Service

This API is used to get a list of the currencies supported by Skyscanner, a global travel search engine that lets you plan and book directly from millions of travel options. A sketch of the shared lookup pattern for the three reference services follows below.
Date Updated: 2016-08-19
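The markets, locales, and currencies entries above are all simple reference lookups, so one hedged TypeScript sketch covers the pattern; the reference/v1.0 path and the Currencies response key are assumptions from the 2016-era documentation.

    // Fetch the supported-currencies reference list (sketch).
    const apiKey = process.env.SKYSCANNER_API_KEY!;
    const res = await fetch(
      "http://partners.api.skyscanner.net/apiservices/reference/v1.0" +
        `/currencies?apiKey=${apiKey}`
    );
    const { Currencies } = await res.json(); // assumed response key
    console.log(`${Currencies.length} supported currencies`);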

ProgrammableWeb: APIsEredivisie Live Scores

The Eredivisie Live Scores API provides access to analytical summaries of previous, ongoing, and upcoming matches in the top-flight football league of the Netherlands. The API generates team and player profiles as well as live updates of match results and league table standings. It is a useful reference for pre-match and post-match analyses for users seeking records of teams and players in the Dutch football league. Such information includes season-long and match-day details of starting lineups, substitute players, player fitness, match events, match-day scorers, and the league’s top scorers. The Eredivisie Live Scores API sends requests and responses in JSON format. It is a fully released and well-documented version supported by Curl, Java, Node.js, PHP, Python, Objective-C, Ruby, and .NET wrappers. It is available in Basic, Pro, Ultra, and Mega subscription plans with fixed monthly costs of between $0 and $600, plus prorated costs for extra content exceeding the fixed limits.
Date Updated: 2016-08-19

ProgrammableWeb: APIsRestpack

The Restpack API is a collection of RESTful utilities delivered as a service (microservices). Developers send requests and receive responses in JSON format and authenticate with a token.
Date Updated: 2016-08-19

ProgrammableWeb: APIsChannel PEAR

The Channel PEAR RESTful API is available to interact with the PEAR library. It supports JSON responses and HTTP authentication. As a service, Channel PEAR provides a collaborative, cloud-based live stream platform.
Date Updated: 2016-08-19

ProgrammableWebHow to Connect Klipfolio to RESTful APIs like Google Analytics

If you’ve ever worked in Google Analytics, you’re probably used to exporting data into Excel sheets to do a deeper analysis. I’ll bet you do it regularly, and manually.

If you’re responsible for monitoring and reporting on Web performance, you’re probably also making use of Google Analytics dashboards. Do you find them frustratingly limiting? How many steps does it take you to get to the information you’re looking for?

ProgrammableWeb: APIsSlack Events Webhooks

The Events API allows developers to build applications that respond to activities in Slack. User- and bot-based event subscriptions can be received in JSON format. Once event types are specified, Slack will provide a stream of data via this RESTful API. The API uses OAuth 2.0 for authentication; developers need to register their applications to obtain a Client ID and Client Secret.
Date Updated: 2016-08-18
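The one mechanical detail worth showing is Slack's URL verification handshake: when you register your Request URL, Slack POSTs a url_verification payload and expects the challenge value echoed back, after which real events arrive as event_callback payloads. A minimal TypeScript sketch:

    import { createServer } from "http";

    createServer((req, res) => {
      let raw = "";
      req.on("data", (chunk) => (raw += chunk));
      req.on("end", () => {
        const payload = JSON.parse(raw);
        // In production, also compare payload.token against your app's
        // verification token before trusting the request.
        if (payload.type === "url_verification") {
          // Slack verifies your Request URL by asking you to echo this value.
          res.writeHead(200, { "Content-Type": "text/plain" });
          res.end(payload.challenge);
        } else if (payload.type === "event_callback") {
          console.log("event:", payload.event.type); // e.g. "message"
          res.writeHead(200);
          res.end(); // acknowledge promptly or Slack will retry delivery
        } else {
          res.writeHead(200);
          res.end();
        }
      });
    }).listen(3000);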

ProgrammableWeb: APIsCycle

The Cycle API integrates platform-as-a-service features for businesses that manage containers for software transport. It is served over SSL, with a JSON architecture and OAuth2 as the authentication method. Cycle.io is an IT container deployment tool.
Date Updated: 2016-08-18

ProgrammableWeb: APIsIBM Bluemix Customer Agreement

This BIAN REST API allows you to maintain a structured legal customer agreement and is powered by IBM's API Management solution in Bluemix. The customer agreement is linked to as many Sales Product Agreements as needed for all in-force products. Code examples are available for cURL, Ruby, Python, PHP, Java, Node, and Go. BIAN is the Banking Industry Architecture Network, which defines SOA and semantic definitions for IT services in the banking industry.
Date Updated: 2016-08-18

ProgrammableWeb: APIsIBM Bluemix Sales Product Agreement

This BIAN REST API captures the legal terms and conditions in force for a sold product and is powered by IBM's API Management solution in Bluemix. These can include many details that influence product and service fulfillment, such as applicable fees, rates, and selected features and options. Code examples are available for cURL, Ruby, Python, PHP, Java, Node, and Go. BIAN is the Banking Industry Architecture Network, which defines SOA and semantic definitions for IT services in the banking industry.
Date Updated: 2016-08-18

ProgrammableWeb: APIsIBM Bluemix Card Authorization

This BIAN REST API executes the decision-based authorization and recording of proposed card transactions through the merchant network, and is powered by IBM's API Management solution in Bluemix. The card authorization can trigger a verbal check of customer details for security before the authorization is given. Code samples are available in cURL, Ruby, Python, PHP, Java, Node, and Go. BIAN is the Banking Industry Architecture Network, which defines SOA and semantic definitions for IT services in the banking industry.
Date Updated: 2016-08-18
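The three BIAN entries above share one access pattern, so a single hedged TypeScript sketch suffices; the X-IBM-Client-Id/X-IBM-Client-Secret headers are IBM API Management's usual credential scheme, while the host and path below are placeholders rather than real Bluemix routes.

    // Hypothetical sketch for the BIAN services above (Node 18+ fetch).
    const res = await fetch(
      "https://api.example.ibm.com/bian/v1/customer-agreements/12345", // placeholder
      {
        headers: {
          "X-IBM-Client-Id": process.env.IBM_CLIENT_ID!,
          "X-IBM-Client-Secret": process.env.IBM_CLIENT_SECRET!,
          Accept: "application/json",
        },
      }
    );
    console.log(await res.json());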
