Planet Mozilla: Beer and Tell – August 2016

Once a month, web developers from across the Mozilla Project get together to talk about our side projects and drink, an occurrence we like to call “Beer and
Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Osmose: PyJEXL

First up was Osmose (that’s me!), who shared PyJEXL, an implementation of JEXL in Python. JEXL is an expression language based on JavaScript that computes the value of an expression when applied to a given context. For example, the statement 2 + foo, when applied to a context where foo is 7, evaluates to 9. JEXL also allows for defining custom functions and operators to be used in expressions.

JEXL is implemented as a JavaScript library; PyJEXL allows you to parse and evaluate JEXL expressions in Python, as well as analyze and validate expressions.
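
Here is a minimal sketch of what this might look like from Python, assuming the pyjexl package exposes a JEXL class with an evaluate(expression, context) method (check the project's README for the exact API):

import pyjexl  # hypothetical usage; see the PyJEXL README for the exact API

jexl = pyjexl.JEXL()

# The expression "2 + foo", applied to a context where foo is 7, evaluates to 9.
print(jexl.evaluate("2 + foo", {"foo": 7}))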

pmac: rtoot.org

Next up was pmac, who shared rtoot.org, the website for “The Really Terrible Orchestra of the Triangle”. It’s a static site for pmac’s local orchestra built using lektor, an easy-to-use static site generator. One of the more interesting features pmac highlighted was lektor’s data model. He showed a lektor plugin he wrote that added support for CSV fields as part of the data model, and used it to describe the orchestra’s rehearsal schedule.

The site is deployed as a Docker container on a Dokku instance. Dokku replicates the Heroku style of deployments by accepting Git pushes to trigger deploys. Dokku also has a great Let’s Encrypt plugin for setting up and renewing HTTPS certificates for your apps.

groovecoder: Scary/Tracky JS

Next was groovecoder, who previewed a lightning talk he’s been working on called “Scary JavaScript (and other Tech) that Tracks You Online”. The talk gives an overview of various methods that sites can use to track what sites you’ve visited without your consent. Some examples given include:

  • Reading the color of a link to a site to see if it is the “visited link” color (this has been fixed in most browsers).
  • Using requestAnimationFrame to time how long it took the browser to draw a link; visited links will take longer as the browser has to change their color and then remove it to avoid the previous vulnerability.
  • Embedding a resource from another website as a video, and timing how long it takes for the browser to attempt and fail to parse the video; a short time indicates the resource was served from the cache and you’ve seen it before.
  • Cookie Syncing

groovecoder plans to give the talk at a few local conferences, leading up to the Hawaii All Hands where he will give the talk to Mozilla.

rdalal: Therapist

rdalal was next with Therapist, a tool to help create robust, easily-configurable pre-commit hooks for Git. It is configured via a YAML file at your project root, and can run any command you want to run before committing code, such as code linting or build commands. Therapist is also able to detect which files were changed by the commit and only run commands on those to help save time.
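
Conceptually, the "only run on changed files" part behaves like a Git pre-commit hook that asks Git for the staged file list and then lints just those files. Here is a rough, hypothetical Python sketch of that idea (not Therapist's actual implementation or configuration format; it assumes flake8 is installed):

import subprocess
import sys

# Ask Git for the files staged for the current commit.
changed = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

# Only lint the staged Python files instead of the whole tree.
py_files = [f for f in changed if f.endswith(".py")]
if py_files:
    sys.exit(subprocess.run(["flake8", *py_files]).returncode)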

gregtatum: River – Paths Over Time

Last up was gregtatum, who shared River – Paths Over Time. Inspired by the sight of rivers from above during the flight back from the London All Hands, River is a simulation of rivers flowing and changing over time. The animation is drawn on a 2D canvas using rectangles that are painted and then slowly faded over time; this slow fade creates the illusion of “drips” seeping into the ground as rivers shift position.

The source code for the simulation is available on Github, and is configured with several parameters that can be tweaked to change the behavior of the rivers.


If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Planet Mozilla: Reps Program Objectives – Q3 2016

With “RepsNext” we increased alignment between the Reps program and the Participation team. The main outcome is setting quarterly Objectives and Key Results together. This enables all Reps to see which contributions have a direct link to the Participation team. Of course, Reps are welcome to keep doing amazing work in other areas.

Objective 1: Reps are fired up about the Activate campaign and lead their local community to success by supporting that campaign with the help of the Review Team.
KR1: 80+ core Mozillians coached by new mentors contribute to at least one focus activity.
KR2: 75+ Reps get involved in the Activity Campaign, participating in at least one activity and/or mobilizing others.
KR3: The Review Team is working effectively by decreasing the budget review time by 30%.


Objective 2: New Reps coaches and regional coaches bring leadership within the Reps program to a new level and increase the trust of local communities in the Reps program.
KR1: 40 mentors and new mentors take coaching training and start at least 2 coaching relationships.
KR2: 40% of the communities we act on report better shape/effectiveness thanks to the regional coaches.
KR3: At least 25% of the communities Regional Coaches work with report feeling represented by the Reps program.

Which of the above objectives are you most interested in? What key result would you like to hear more about? What do you find intriguing? Which thoughts cross your mind upon reading this?

Let’s keep the conversation going! Please provide your comments in Discourse.

Planet Mozilla: TenFourFox 45.3.0b2 available

TenFourFox 45.3 beta 2 is now available (downloads, hashes, release notes). The issue with video frame skipping gone awry seems to be due to Mozilla's MediaSource implementation, which is (charitably) somewhat incomplete on 45 and not well-tested on systems of our, uh, vintage. Because audio is easier to decode than video the audio decoder thread will already have a speed advantage, but in addition the video decoder thread appears to only adjust itself for the time spent in decoding, not the time spent pushing the frames to the compositor or blitting them to the screen (which are also fairly costly in our software-rendered world). A frame that's simple to decode but requires a lot of painting will cause the video thread to get further and further behind queueing frames that the compositor can't keep up with. Even if you keep skipping keyframes, you'll end up more than one keyframe behind and you'll never catch up, and when this occurs the threads irrevocably go out of sync. The point at which this happens for many videos is almost deterministic and correlates well with time spent in compositing and drawing; you can invariably trigger it by putting the video into full screen mode which dramatically increases the effort required to paint frames. In a few cases the system would just give up and refuse to play further (hitting an assertion in debug mode).

After a few days of screaming and angry hacking (backporting some future bug fixes, new code for dumping frames that can never be rendered, throwing out the entire queue if we get so far behind we'll never reasonably catch up) I finally got the G5 to be able to hold its own in Reduced Performance but it's just too far beyond the G4 systems to do so. That's a shame since when it does work it appears to work pretty well. Nevertheless, for release I've decided to turn MediaSource off entirely and it now renders mostly using the same pipeline as 38 (fortunately, the other improvements to pixel pushing do make this quicker than 38 was; my iBook G4/1.33 now maintains a decent frame rate for most standard definition videos in Automatic Performance mode). If the feature becomes necessary in the future I'm afraid it will only be a reasonable option for G5 machines in its current state, but at least right now disabling MediaSource is functional and sufficient for the time being to get video support back to where it was.

The issue with the Amazon Music player is not related to or fixed by this change, however. I'm thinking that I'll have to redo our minimp3 support as a platform MP3 decoder so that we use more of Mozilla's internal media machinery; that refactoring is not likely to occur by release, so we'll ship with the old decoder that works most other places and deal with it in a future update. I don't consider this a showstopper unless it's affecting other sites.

Other changes in 45.3 beta 2 are some additional performance improvements. Most of these are trivial (including some minor changes to branching and proper use of single-precision math in the JIT), but two are of note, one backport that reduces the amount of allocation with scrolling and improves overall scrolling performance as a result, and another one I backported that reduces the amount of scanning required for garbage collection (by making garbage collection only run in the "dirty" zones after cycle collection instead of effectively globally). You'll notice that RAM use is a bit higher at any given moment, but it does go back down, and the browser has many fewer random pauses now because much less work needs to be done. Additionally in this beta unsigned extensions should work again and the update source is now set to the XML for 45. One final issue I want to repair before launch is largely server-side, hooking up the TenFourFox help button redirector to use Mozilla's knowledge base articles where appropriate instead of simply redirecting to our admittedly very sparse set of help documents on Tenderapp. This work item won't need a browser update; it will "just work" when I finish the code on Floodgap's server.

At this point barring some other major crisis I consider the browser shippable as 45.4 on time on 13 September, with Amazon Music being the highest priority issue to fix, and then we can start talking about what features we will add while we remain on feature parity. There will also be a subsequent point release of TenFourFoxBox to update foxboxes for new features in 45; stay tuned for that. Meanwhile, we could sure use your help on a couple other things:

  • Right now, our localized languages for the 45 launch are German, Spanish, French and Japanese, with an Italian translation in progress. We can launch with that, but we're losing a few languages there we had with 38. You can help.

  • Even minor releases of the browser get a small bump in support tickets on Tenderapp; a major release (such as 31 to 38, and now 38 to 45) generally brings lots of things out of the woodwork that we've never seen during the beta cycle. Most of these reports are spurious, but sometimes serious bugs have been uncovered even though they looked obviously bogus at first glance, and all of them need to be triaged regardless. If you're a reasonably adept user and you've got a decent understanding of how the browser works, help us help your fellow users by being a volunteer on Tenderapp. There's no minimum time commitment required; any help you can offer is appreciated, even if just temporary. Right now it's just Chris Trusch and me trying to field these reports, which is unsatisfactory to cover all of them (Theo used to but we haven't seen him in awhile). If you'd like to help, post in the comments and we'll make you a "blue name" on Tenderapp too.

That's it for now. See what you think.

Planet WebKit: WebDriver Support in Safari 10

As web content has become increasingly interactive, responsive, and complicated, ensuring a good user experience across multiple platforms and browsers is a huge challenge for web developers and QA organizations. Not only must web content be loaded, executed, and rendered correctly, but a user must be able to interact with the content as intended, no matter the browser, platform, hardware, screen size, or other conditions. If your shopping cart application has a clean layout and beautiful typography, but the checkout button doesn’t work because of a configuration mishap in production—or an incorrect CSS rule hides the checkout button in landscape mode—then your users are not going to have a great experience.

Ensuring a good user experience via automation is difficult in part because each browser has its own way of interpreting keyboard and mouse inputs, performing navigations, organizing windows and tabs, and so on. Of course, a human could manually test every interaction and workflow of their website on every browser prior to deploying an update to production, but this is slow, costly, and impractical for all but the largest companies. The Selenium open source project was created to address this gap in automation capabilities for browsers by providing a common API for automation. WebDriver is Selenium’s cross-platform, cross-browser automation API. Using WebDriver, a developer can write an automated test that exercises their web content, and run the test against any browser with a WebDriver-compliant driver.

Safari + WebDriver

I am thrilled to announce that Safari now provides native support for the WebDriver API. Starting with Safari 10 on OS X El Capitan and macOS Sierra, Safari comes bundled with a new driver implementation that’s maintained by the Web Developer Experience team at Apple. Safari’s driver is launchable via the /usr/bin/safaridriver executable, and most client libraries provided by Selenium will automatically launch the driver this way without further configuration.

I’ll first introduce WebDriver by example and explain a little bit about how it’s implemented and how to use Safari’s new driver implementation. I will introduce some important safeguards that are unique to Safari’s driver implementation, and discuss how these may affect you when retargeting an existing WebDriver test suite to run using Safari’s driver.

WebDriver By Example

WebDriver is a browser automation API that enables people to write automated tests of their web content using Python, Java, PHP, JavaScript, or any other language with a library that understands the de facto Selenium Wire Protocol or upcoming W3C WebDriver protocol.

Below is a WebDriver test suite for the WebKit feature status page written in Python. The first test case searches the feature status page for “CSS” and ensures that at least one result appears. The second test case counts the number of visible results when various filters are applied to make sure that each result shows up in at most one filter. Note that in the Python WebDriver library each method call synchronously blocks until the operation completes (such as navigating or executing JavaScript), but some client libraries provide a more asynchronous API.

from selenium.webdriver.common.by import By
from selenium import webdriver
import unittest

class WebKitFeatureStatusTest(unittest.TestCase):

  def test_feature_status_page_search(self):
    self.driver.get("https://webkit.org/status/")

    # Enter "CSS" into the search box.
    search_box = self.driver.find_element_by_id("search")
    search_box.send_keys("CSS")
    value = search_box.get_attribute("value")
    self.assertTrue(len(value) > 0)
    search_box.submit()

    # Count the results.
    feature_count = self.shown_feature_count()
    self.assertTrue(feature_count > 0)

  def test_feature_status_page_filters(self):
    self.driver.get("https://webkit.org/status/")

    filters = self.driver.find_elements(By.CSS_SELECTOR, "ul#status-filters li input[type=checkbox]")
    self.assertTrue(len(filters) == 7)

    # Make sure every filter is turned off.
    for checked_filter in filter(lambda f: f.is_selected(), filters):
      checked_filter.click()

    # Count up the number of items shown when each filter is checked.
    unfiltered_count = self.shown_feature_count()
    running_count = 0
    for filt in filters:
      filt.click()
      self.assertTrue(filt.is_selected())
      running_count += self.shown_feature_count()
      filt.click()

    self.assertTrue(running_count == unfiltered_count)

  def shown_feature_count(self):
    return len(self.driver.execute_script("return document.querySelectorAll('li.feature:not(.is-hidden)')"))

def setup_module(module):
  WebKitFeatureStatusTest.driver = webdriver.Safari()

def teardown_module(module):
  WebKitFeatureStatusTest.driver.quit()

if __name__ == '__main__':
  unittest.main()

Note that this code example is for illustration purposes only. This test is contemporaneous with a specific version of the Feature Status page and uses page-specific DOM classes and selectors. So, it may or may not work on later versions of the Feature Status page. As with any test, it may need to be modified to work with later versions of the software it tests.

WebDriver Under the Hood

WebDriver is specified in terms of a REST API; Safari’s driver provides its own local web server that accepts REST-style HTTP requests. Under the hood, the Python Selenium library translates each method call on self.driver into a REST API command and sends the corresponding HTTP request to the local web server hosted by Safari’s driver. The driver validates the contents of the request and forwards the command to the appropriate browser instance. Typically, the low-level details of performing most of these commands are delegated to WebAutomationSession and related classes in WebKit’s UIProcess and WebProcess layers. When a command has finished executing, the driver finally sends an HTTP response for the REST API command. The Python client library interprets the HTTP response and returns the result back to the test code.
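
To get a feel for what the client library does on your behalf, here is a rough sketch using only Python's standard library. It assumes W3C-style endpoint shapes and an example port of 4444; the exact wire format and response shapes depend on the protocol version the driver speaks, so treat this as an illustration rather than a reference:

import json
import urllib.request

def send_command(url, payload):
    # POST a JSON command to the driver's local web server and decode the reply.
    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

base = "http://localhost:4444"  # example port; the driver chooses its own at launch

# Roughly what webdriver.Safari() does: create a session.
session = send_command(base + "/session", {"capabilities": {}})
session_id = session["value"]["sessionId"]  # response shape assumed from the W3C spec

# Roughly what driver.get(...) does: issue a navigation command for that session.
send_command("{}/session/{}/url".format(base, session_id),
             {"url": "https://webkit.org/status/"})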

Running the Example in Safari

To run this WebDriver test using Safari, first you need to grab a recent release of the Selenium open source project. Selenium’s Java and Python client libraries offer support for Safari’s native driver implementation starting in the 3.0.0-beta1 release. Note that the Apple-developed driver is unrelated to the legacy SafariDriver mentioned in the Selenium project. The old SafariDriver implementation is no longer maintained and should not be used. You do not need to download anything besides Safari 10 to get the Apple-developed driver.

Once you have obtained and installed the correct Selenium library version and configured the test to use it, you need to configure Safari to allow automation. As a feature intended for developers, Safari’s WebDriver support is turned off by default. To turn on WebDriver support, do the following:

  • Ensure that the Develop menu is available. It can be turned on by opening Safari preferences (Safari > Preferences in the menu bar), going to the Advanced tab, and ensuring that the Show Develop menu in menu bar checkbox is checked.
  • Enable Remote Automation in the Develop menu. This is toggled via Develop > Allow Remote Automation in the menu bar.
  • Authorize safaridriver to launch the webdriverd service which hosts the local web server. To permit this, run /usr/bin/safaridriver once manually and complete the authentication prompt.

Once you have enabled Remote Automation, copy and save the test as test_webkit.py. Then run the following:

python test_webkit.py

Safari-exclusive Safeguards

While it’s awesome to be able to write automated tests that run in Safari, a REST API for browser automation introduces a potential vector for malicious software. To support a valuable automation API without sacrificing a user’s privacy or security, Safari’s driver implementation introduces extra safeguards that no other browser or driver has implemented to date. These safeguards also double as extra insurance against “flaky tests” by providing enhanced test isolation. Some of these safeguards are described below.

When running a WebDriver test in Safari, test execution is confined to special Automation windows that are isolated from normal browsing windows, user settings, and preferences. Automation windows are easy to recognize by their orange Smart Search field. Similar to browsing in a Private Browsing window, WebDriver tests that run in an Automation window always start from a clean slate and cannot access Safari’s normal browsing history, AutoFill data, or other sensitive information. Aside from the obvious privacy benefits, this restriction also helps to ensure that tests are not affected by a previous test session’s persistent state such as local storage or cookies.

The Smart Search field in Automation windows is bright orange to warn users that the window is not meant for normal browsing.

As a WebDriver test is executing in an Automation window, any attempts to interact with the window or web content could derail the test by unexpectedly changing the state of the page. To prevent this from happening, Safari installs a “glass pane” over the Automation window while the test is running. This blocks any stray interactions (mouse, keyboard, resizing, and so on) from affecting the Automation window. If a running test gets stuck or fails, it’s possible to interrupt the running test by “breaking” the glass pane. When an automation session is interrupted, the test’s connection to the browser is permanently severed and the automation window remains open until closed manually.

If a user interacts with an active Automation Window, a dialog will appear that allows the user to end the automation session and interact with the test page.

Along with being able to interact with a stopped test, Safari’s driver implementation also allows for the full use of Web Inspector during and after the execution of a WebDriver test. This is very useful when trying to figure out why a test has hung or is providing unexpected results. The safaridriver launcher has two special options just for debugging. The --inspect option will preload the Web Inspector and script debugger in the background; to pause test execution and bring up Web Inspector’s Debugger tab, you can simply evaluate a debugger; statement in the test page. The --profile option will preload the Web Inspector and start a timeline recording in the background; if you later want to see details of what happened during the test, you can open the Timelines tab in Web Inspector to see the captured timeline recording in its entirety.

Safari’s driver restricts the number of active WebDriver sessions. Only one Safari browser instance can be running at any given time, and only one WebDriver session can be attached to the browser instance at a time. This ensures that the user behavior being simulated (mouse, keyboard, touch, etc.) closely matches what a user can actually perform with real hardware in the macOS windowing environment. For example, the system’s text insertion caret can only be active in one window and form control at a time; if Safari’s driver were to allow multiple tests to run concurrently in separate windows or browsers, the text insertion behavior (and resulting DOM focus/blur events) would not be representative of what a user could perform themselves.

Summary

With the introduction of native WebDriver support in Safari 10, it’s now possible to run the same automated tests of web content on all major browsers equally, without writing any browser-specific code. I hope WebDriver support will help web developers catch user-visible regressions in their web content much more quickly and with less manual testing. Safari’s support comes with new, exclusive safeguards to simultaneously protect user security and privacy and also help you write more stable and consistent tests. You can try out Safari’s WebDriver support today by installing a beta of macOS Sierra or Safari 10.

This is our first implementation of WebDriver for Safari. If you encounter unexpected behavior in Safari’s WebDriver support, or have other suggestions for improvement, please file a bug at https://bugreport.apple.com/. For quick questions or feedback, email web-evangelist@apple.com or tweet @webkit or @brrian on Twitter. Happy testing!

Planet Mozilla: Outreachy Participant Presentations Summer 2016

Outreachy program participants from the summer 2016 cohort present their contributions and learnings. Mozilla has hosted 15 Outreachy participants this summer, working on technical projects...

Planet Mozilla: Songsearch – using ServiceWorker to make a 4 MB CSV easily searchable in a browser

A few days ago I got an email about an old project of mine that uses YQL to turn a CSV into a “web services” interface. The person asking me for help was a Karaoke DJ who wanted to turn his available songs into a search interface. Maintenance of the dataset happens in Excel, so all that was needed was a way to display it on a laptop.

The person had a need and wanted a simple solution. He was OK with doing some HTML and CSS, but felt pretty lost at anything database related or server side. Yes, the best solution for that would be a relational DB, as it would also speed up searches a lot. As I don’t know for how much longer YQL will be reliably usable, I thought I’d use this opportunity to turn this into an offline app using ServiceWorker and do the whole work in client side JavaScript. Enter SongSearch – with source available on GitHub.

the songsearch interface in action

Here’s how I built it:

  • The first step was to parse a CSV into an array in JavaScript. This has been discussed many a time and I found this solution on StackOverflow to be adequate to my uses.
  • I started by loading the file in and covering the interface with a loading message. This is not pretty, and had I had my own way and more time I’d probably look into streams instead.
  • I load the CSV with AJAX and parse it. From there on it is memory for me to play with.
  • One thing that was fun to find out was how to get the currently selected radio button in a group. This used to be a loop. Nowadays it is totally fine to use document.querySelector('input[type="radio"][name="what"]:checked').value where the name is the name of your radio group :)
  • One thing I found is that searching can be slow – especially when you enter just one character and the whole dataset gets returned. Therefore I wanted to show a searching message. This is not possible when you do it in the main thread as it is blocked during computation. The solution was to use a WebWorker. You show the searching message before starting the worker, and hide it once it messaged back. One interesting bit I did was to include the WebWorker code also as a resource in the page, to avoid me having to repeat the songsearch method.
  • The final part was to add a ServiceWorker to make it work offline and cache all the resources. The latter could even be done using AppCache, but this may soon be deprecated.

As you can see the first load gets the whole batch, but subsequent reloads of the app are almost immediate as all the data is cached by the ServiceWorker.

Loading before and after the serviceworker cached the content

It’s not rocket science, it is not pretty, but it does the job and I made someone very happy. Take a look, maybe there is something in there that inspires you, too. I’m especially “proud” of the logo, which wasn’t at all done in Keynote :)

Planet Mozilla: Mitigating MIME Confusion Attacks in Firefox

Scanning the content of a file allows web browsers to detect the format of a file regardless of the Content-Type specified by the web server. For example, if Firefox requests a script from a web server and that web server sends that script using a Content-Type of “image/jpg”, Firefox will successfully detect the actual format and will execute the script. This technique, colloquially known as “MIME sniffing”, compensates for incorrect, or even completely absent, metadata browsers need to interpret the contents of a page resource. Firefox uses contextual clues (the HTML element that triggered the fetch) or inspects the initial bytes of media type loads to determine the correct content type. While MIME sniffing improves the web experience for the majority of users, it also opens up an attack vector known as a MIME confusion attack.

Consider a web application which allows users to upload image files but does not verify that the user actually uploaded a valid image, e.g., the web application just checks for a valid file extension. This lack of verification allows an attacker to craft and upload an image which contains scripting content. The browser then renders the content as HTML, opening the possibility for a Cross-Site Scripting attack (XSS). Even worse, some files can be polyglots, which means their content satisfies two content types. E.g., a GIF can be crafted in a way to be a valid image and also valid JavaScript, and the correct interpretation of the file solely depends on the context.

Starting with Firefox 50, Firefox will reject stylesheets, images or scripts if their MIME type does not match the context in which the file is loaded and the server sends the response header “X-Content-Type-Options: nosniff” (view specification). More precisely, if the Content-Type of a file does not match the context (see the detailed list of accepted Content-Types for each format underneath), Firefox will block the file, hence preventing such MIME confusion attacks, and will display the following message in the console:

The resource from “https://example.com/bar.jpg” was blocked due to MIME type mismatch (X-Content-Type-Options: nosniff).

Valid Content-Types for Stylesheets:
– “text/css”

Valid Content-Types for images:
– have to start with “image/”

Valid Content-Types for Scripts:
– “application/javascript”
– “application/x-javascript”
– “application/ecmascript”
– “application/json”
– “text/ecmascript”
– “text/javascript”
– “text/json”
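
On the server side, opting in is just a matter of adding the header to every response. As an illustration, here is a minimal sketch using Python's standard http.server module (module and class names are from the Python 3 standard library; the port is arbitrary):

from http.server import HTTPServer, SimpleHTTPRequestHandler

class NoSniffHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Ask browsers not to MIME-sniff our responses.
        self.send_header("X-Content-Type-Options", "nosniff")
        super().end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), NoSniffHandler).serve_forever()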

Planet Mozilla: 0039 Firefox V2




0039 Firefox V2 - this is a remake of a maze that I originally drew on a flight from Oregon to the Mozilla All Hands meeting in Orlando, Florida in 2015.  I wanted to redo it as the original was numbered 0003 and my skills had advanced so far that I felt I needed to revisit this subject. 

Because it is a corporate logo, this maze is not for sale in the same manner as my other mazes.  I've not worked out a plan on how to distribute this maze with the company. 

The complete image posted here is only 1/4 of the full resolution of the original from which I make prints.  In full resolution it prints spectacularly well at 24" x 24".

Planet Mozilla: What’s Up with SUMO – 25th August

Hello, SUMO Nation!

Another hot week behind us, and you kept us going – thank you for that! Here is a portion of the latest and greatest news from the world of SUMO – your world :-)

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on the 31st of August!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

  • All quiet, keep up the awesome work, people :-)

Knowledge Base & L10n

Firefox

  • for Android
    • Version 49 will offer offline caching for some web pages. Take a bit of the net with you outside of the network’s range!
  • for iOS
    • Nothing new to report about iOS for now… Stay tuned for news in the future!

… and we’re done! We hope you have a great week(end) and that you come back soon to keep rocking the helpful web with us!

P.S. Just in case you missed it, here’s a great read about the way we want you to make Mozilla look better in the future.

Planet Mozilla: Parsing milestoned XML in Python

I am trying to write a tool in Python (using Python 3.4, so that I can rely on the latest Python standard library available on Windows without using any external libraries there) for some manipulation of the source code of Bible texts.

Let me first explain what milestoned XML is, because many normal Python programmers dealing with normal XML documents may not be familiar with it. There is a problem with using XML markup for documents with a complicated structure. One rather complete article on this topic is DeRose (2016).

Briefly [1], the problem in many areas (especially in document processing) is that multiple possible hierarchies overlap each other (e.g., in Bibles there are divisions of text which run across verse and chapter boundaries and sometimes terminate in the middle of a verse; many Bibles, especially English ones, mark Jesus’ sayings with a special element, and of course this can span several verses, etc.). One of the ways to overcome the obvious problem that XML doesn’t allow overlapping elements is to use milestones. So, for example, a book of the Bible could be divided not like

<book>
<chapter>
<verse>text</verse>
...
</chapter>
...
</book>

but instead by just putting milestones in the text, i.e.:

<book>
<chapter n="1" />
<verse sID="ID1.1" />text of verse 1.1
<verse eID="ID1.1" /> ....
</book>
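
In ElementTree terms, the key point is that the text following each empty milestone element ends up in that element’s .tail attribute, which is what the iterators below have to collect. A quick illustration (standard library only):

from xml.etree import ElementTree as ET

root = ET.fromstring('<book><chapter n="1"/>'
                     '<verse sID="ID1.1"/>text of verse 1.1'
                     '<verse eID="ID1.1"/> ....</book>')
for child in root:
    # prints: chapter None / verse 'text of verse 1.1' / verse ' ....'
    print(child.tag, repr(child.tail))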

So, in my case the part of the document may look like

text text
<verse/>
textB textB <czap> textC textC <verse/> textD textD </czap>

And I would like to get from some kind of iterator this series of outputs:

[(1, 1, "text text", ['text text']),
 (1, 2, "textB textB textC textC",
  ['<verse/>', 'textB textB', '<czap>', 'textC textC']),
 (1, 3, "textD textD", ['<verse/>', 'textD textD', '</czap>'])]

(the first two numbers should be the chapter and verse numbers, respectively).

My first attempt was, at its core, this iterator:

def __iter__(self) -> Tuple[int, int, str]:
    """
    iterate through the first level elements

    NOTE: this iterator assumes only all milestoned elements on the first
    level of depth. If this assumption fails, it might be necessary to
    rewrite this function (or perhaps ``text`` method) to be recursive.
    """
    collected = None

    for child in self.root:
        if child.tag in ['titulek']:
            continue
        if child.tag in ['kap', 'vers']:
            if collected and collected.strip():
                yield self.cur_chapter, self.cur_verse, \
                    self._list_to_clean_text(collected)
            if child.tag == 'kap':
                self.cur_chapter = int(child.get('n'))
            elif child.tag == 'vers':
                self.cur_verse = int(child.get('n'))
            collected = child.tail or ''
        else:
            if collected is not None:
                if child.text is not None:
                    collected += child.text
                for sub_child in child:
                    collected += self._recursive_get_text(sub_child)
                if child.tail is not None:
                    collected += child.tail

(self.root is a product of ElementTree.parse(file_name).getroot()). The problem with this code lies in the note: when the <verse/> element is inside a <czap> one, it is ignored. So, obviously, we have to make our iterator recursive. My first idea was to make this script parse and regenerate the XML:

#!/usr/bin/env python3
from xml.etree import ElementTree as ET
from typing import List

def start_element(elem: ET.Element) -> str:
    outx = ['<{} '.format(elem.tag)]
    for attr, attval in elem.items():
        outx.append('{}={} '.format(attr, attval))
    outx.append('>')
    return ''.join(outx)


def recursive_parse(elem: ET.Element) -> List[str]:
    col_xml = []
    col_txt = ''
    cur_chapter = chap

    if elem.text is None:
        col_xml.append(ET.tostring(elem))
        if elem.tail is not None:
            col_txt += elem.tail
    else:
        col_xml.extend([start_element(elem), elem.text])
        col_txt += elem.text
        for subch in elem:
            subch_xml, subch_text = recursive_parse(subch)
            col_xml.extend(subch_xml)
            col_txt += subch_text
        col_xml.append('</{}>'.format(elem.tag))

        if elem.tail is not None:
            col_xml.append(elem.tail)
            col_txt += elem.tail

    return col_xml, col_txt


if __name__ == '__main__':
    # write result XML to CRLF-delimited file with
    # ET.tostring(ET.fromstringlist(result), encoding='utf8')
    # or encoding='unicode'? Better for testing?
    xml_file = ET.parse('tests/data/Mat-old.xml')

    collected_XML, collected_TEXT = recursive_parse(xml_file.getroot())
    with open('test.xml', 'w', encoding='utf8', newline='\r\n') as outf:
       print(ET.tostring(ET.fromstringlist(collected_XML),
                         encoding='unicode'), file=outf)

    with open('test.txt', 'w', encoding='utf8', newline='\r\n') as outf:
        print(collected_TEXT, file=outf)

This works correctly in the sense that the generated file test.xml is identical to the original XML file (after reformatting both files with tidy -i -xml -utf8). However, it is not an iterator, so I would like to somehow combine the virtues of both snippets of code into one. Obviously, the problem is that return in my ideal code should serve two purposes: once it should actually yield a nicely formatted result from the iterator, and the second time it should just provide the content of the inner elements (or not, if the inner element contains a <verse/> element). In my ideal world I would like recursive_parse() to function as an iterator capable of something like this:

if __name__ == '__main__':
    xml_file = ET.parse('tests/data/Mat-old.xml')
    parser = ET.XMLParser(target=ET.TreeBuilder())

    with open('test.txt', 'w', newline='\r\n') as out_txt, \
            open('test.xml', 'w', newline='\r\n') as out_xml:
        for ch, v, verse_txt, verse_xml in recursive_parse(xml_file):
            print(verse_txt, file=out_txt)
            # or directly parser.feed(verse_xml)
            # if verse_xml is not a list
            parser.feed(''.join(verse_xml))

        print(ET.tostring(parser.close(), encoding='unicode'),
              file=out_xml)

So, here is my first attempt to rewrite the iterator (so far without the XML-collecting part):

def __iter__(self) -> Tuple[CollectedInfo, str]:
    """
    iterate through the first level elements
    """
    cur_chapter = 0
    cur_verse = 0
    collected_txt = ''
    # collected XML is NOT directly convertable into Element objects,
    # it should be treated more like a list of SAX-like events.
    #
    # xml.etree.ElementTree.fromstringlist(sequence, parser=None)
    # Parses an XML document from a sequence of string fragments.
    # sequence is a list or other sequence containing XML data fragments.
    # parser is an optional parser instance. If not given, the standard
    # XMLParser parser is used. Returns an Element instance.
    #
    # sequence = ["<html><body>", "text</bo", "dy></html>"]
    # element = ET.fromstringlist(sequence)
    # self.assertEqual(ET.tostring(element),
    #         b'<html><body>text</body></html>')
    # FIXME: also collect the XML fragments
    # collected_xml = None

    for child in self.root.iter():
        if child.tag in ['titulek']:
            collected_txt += '\n{}\n'.format(child.text)
            collected_txt += child.tail or ''
        if child.tag in ['kap', 'vers']:
            if collected_txt and collected_txt.strip():
                yield CollectedInfo(cur_chapter, cur_verse,
                                    re.sub(r'[\s\n]+', ' ', collected_txt,
                                           flags=re.DOTALL).strip()), \
                    child.tail or ''

            if child.tag == 'kap':
                cur_chapter = int(child.get('n'))
            elif child.tag == 'vers':
                cur_verse = int(child.get('n'))
        else:
            collected_txt += child.text or ''

            for sub_child in child:
                for sub_info, sub_tail in MilestonedElement(sub_child):
                    if sub_info.verse == 0 or sub_info.chap == 0:
                        collected_txt += sub_info.text + sub_tail
                    else:
                        # FIXME what happens if sub_element contains
                        # multiple <verse/> elements?
                        yield CollectedInfo(
                            sub_info.chap, sub_info.verse,
                            collected_txt + sub_info.text), ''
                        collected_txt = sub_tail

            collected_txt += child.tail or ''

    yield CollectedInfo(0, 0, collected_txt), ''

Am I going the right way, or did I still not get it?

[1] From the discussion of the topic on the XSL list.
DeRose, Steven. 2016. “Proceedings of Extreme Markup Languages®.” Accessed August 25. http://conferences.idealliance.org/extreme/html/2004/DeRose01/EML2004DeRose01.html.

Planet Mozilla: Connected Devices Weekly Program Update, 25 Aug 2016

Weekly project updates from the Mozilla Connected Devices team.

Planet Mozilla: WebExtensions in Firefox 50

Firefox 50 landed in Developer Edition this week, so we have another update on WebExtensions for you! Please use the WebExtensions API for any new add-on development, and consider porting your existing add-ons as soon as possible.

It’s also a great time to port because WebExtensions is compatible with multiprocess Firefox, which began rolling out in Firefox 48 to people without add-ons installed. When Firefox 49 reaches the release channel in September, we will begin testing multiprocess Firefox with add-ons. The goal is to turn it on for everyone in January 2017 with the release of Firefox 51.

If you need help porting to WebExtensions, please start with the compatibility checker, and check out these resources.

Since the last release, more than 79 bugs were closed on WebExtensions alone.

API Changes

In Firefox 50, a few more history APIs landed: the getVisits function, and two events – onVisited and onVisitRemoved.

Content scripts in WebExtensions now gain access to a few export helpers that existed in SDK add-ons: cloneInto, createObjectIn and exportFunction.

The webNavigation API has gained event filtering. This allows users of the webNavigation API to filter events based on some criteria. Details on the URL Filtering option are available here.

There’s been a change to debugging WebExtensions. If you go to about:debugging and click on debug you now get all the Firefox Developer Tools features that are available to you on a regular webpage.

Why is this significant? Besides providing more developer features, this will work across add-on reloads and allows the debugging of more parts of WebExtensions. More importantly, it means that we are now using the same debugger that the rest of the Firefox Dev Tools team is using. Reducing duplicated code is a good thing.

As mentioned in an earlier blog post, native messaging is now available. This allows you to communicate with other processes on the host’s operating system. It’s a commonly used API for password managers and security software, which need to communicate with external processes.
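
A native messaging host is typically a small executable that exchanges length-prefixed JSON messages with the browser over stdin/stdout. Here is a rough Python sketch of that framing (the 32-bit, native-byte-order length prefix is the documented format; the echo behaviour is just for illustration):

import json
import struct
import sys

def read_message():
    # Each incoming message is preceded by a 32-bit native-endian length.
    raw_length = sys.stdin.buffer.read(4)
    if not raw_length:
        return None
    length = struct.unpack("@I", raw_length)[0]
    return json.loads(sys.stdin.buffer.read(length).decode("utf-8"))

def send_message(message):
    encoded = json.dumps(message).encode("utf-8")
    sys.stdout.buffer.write(struct.pack("@I", len(encoded)))
    sys.stdout.buffer.write(encoded)
    sys.stdout.buffer.flush()

while True:
    message = read_message()
    if message is None:
        break
    # Echo the message back to the extension (illustration only).
    send_message({"received": message})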

Documentation

The documentation for WebExtensions has been updated with some amazing resources over the last few months. This has included the addition of a few new areas:

The documentation is hosted on MDN and updates or improvements to the documentation are always welcome.

There are now 17 example WebExtensions on github. Recently added are history-deleter and cookie-bg-picker.

What’s coming

We are currently working on the proxy API. The intent is to ship a slightly different API than the one Chrome provides because we have access to better APIs in Firefox.

The ability to write WebExtensions APIs in an add-on has now landed in Firefox 51 through the implementation of WebExtensions Experiments. This means that you don’t need to build and compile all of Firefox in order to add in new APIs and get involved in WebExtensions. The policy for this functionality is currently under discussion and we’ll have more details soon.

There are also lots of other ways to get involved with WebExtensions, so please check them out!

Planet Mozilla: Web QA Team Meeting, 25 Aug 2016

They say a Mozilla Web QA team member is the most fearless creature in the world. They say their jaws are powerful enough to crush...

Planet Mozilla: Reps weekly, 25 Aug 2016

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet Mozilla: Professional Development - Strategic Leadership

This week I spent some time on the Harvard campus taking a professional development class in Strategic Leadership. It was an interesting experience, so I thought I would share some of the takeaways from the two days.
The 32 participants in the class represented many different countries and cultures - we had participants from the U.S., Europe, Asia, Africa and Latin America. They spanned many different industries - the Military, Education, BioTech, and Government. The networking event the first evening afforded us an opportunity to meet the diverse array of people from all different countries and industries (there was also another class being held with another 25 students from all over the world). I let many of the participants know that we had community members in their respective areas and described what Mozilla was all about. Some of the participants were quite excited about the fact they got to meet someone that works for a company that makes the browser they use on a daily basis. They were also interested to learn about how community contributions help drive the Mozilla project.
In addition to reviewing some case studies, the content on the first day explored the realms of leadership - including its psychological and emotional dimensions and styles. On Day 2 we took a deep dive into decision making - using emotional intelligence as a foundation and exploring the challenges of decision making.
The great thing about these types of courses is they leave ample time for reflection. You actually get to think about your style, and learn about yourself and your leadership styles and how you apply them to different situations in your life. And since the case studies are taken from real life situations, taking the time to think about how you would have handled the situation is definitely a good process to go through.
Both the recent TRIBE class I took and this one helped me gather some tools that I should be able to use when working with both my co-workers and community members. The important thing to remember is that it is a process - although you won't see changes overnight, if you employ some of the tools they outline, you may be able to hone your leadership skills as time passes.

Planet Mozilla: EU Copyright Law Undermines Innovation and Creativity on the Internet. Mozilla is Fighting for Reform

Mozilla has launched a petition — and will be releasing public education videos — to reform outdated copyright law in the EU

 

The internet is an unprecedented platform for innovation, opportunity and creativity. It’s where artists create; where coders and entrepreneurs build game-changing technology; where educators and researchers unlock progress; and where everyday people live their lives.

The internet brings new ideas to life everyday, and helps make existing ideas better. As a result, we need laws that protect and enshrine the internet as an open, collaborative platform.

But in the EU, certain laws haven’t caught up with the internet. The current copyright legal framework is outdated. It stifles opportunity and prevents — and in many cases, legally prohibits — artists, coders and everyone else from creating and innovating online. This framework was enacted before the internet changed the way we live. As a result, these laws clash with life in the 21st century.

Copyright

Here are just a few examples of outdated copyright law in the EU:

  • It’s illegal to share a picture of the Eiffel Tower light display at night. The display is copyrighted — and tourists don’t have the artists’ express permission.
  • In some parts of the EU, making a meme is technically unlawful. There is no EU-wide fair use exception.
  • In some parts of the EU, educators can’t screen films or share teaching materials in the classroom due to restrictive copyright law.

It’s time our laws caught up with our technology. Now is the time to make a difference: This fall, the European Commission plans to reform the EU copyright framework.

Mozilla is calling on the EU Commission to enact reform. And we’re rallying and educating citizens to do the same. Today, Mozilla is launching a campaign to bring copyright law into the 21st century. Citizens can read and sign our petition. When you add your name, you’re supporting three big reforms:

1. Update EU copyright law for the 21st century.

Copyright can be valuable in promoting education, research, and creativity — if it’s not out of date and excessively restrictive. The EU’s current copyright laws were passed in 2001, before most of us had smartphones. We need to update and harmonise the rules so we can tinker, create, share, and learn on the internet. Education, parody, panorama, remix and analysis shouldn’t be unlawful.

2. Build in openness and flexibility to foster innovation and creativity.

Technology advances at a rapid pace, and laws can’t keep up. That’s why our laws must be future-proof: designed so they remain relevant in 5, 10 or even 15 years. We need to allow new uses of copyrighted works in order to expand growth and innovation. We need to build into the law flexibility — through a User Generated Content (UGC) exception and a clause like an open norm, fair dealing, or fair use — to empower everyday people to shape and improve the internet.

3. Don’t break the internet.

A key part of what makes the internet awesome is the principle of innovation without permission — that anyone, anywhere, can create and reach an audience without anyone standing in the way. But that key principle is under threat. Some people are calling for licensing fees and restrictions on internet companies for basic things like creating hyperlinks or uploading content. Others are calling for new laws that would mandate monitoring and filtering online. These changes would establish gatekeepers and barriers to entry online, and would risk undermining the internet as a platform for economic growth and free expression.

At Mozilla, we’re committed to an exceptional internet. That means fighting for laws that make sense in the 21st century. Are you with us? Voice your support for modern copyright law in the EU and sign the petition today.

Planet Mozilla: Add-ons Update – Week of 2016/08/24

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues

In the past 3 weeks, 1228 listed add-on submissions were reviewed:

  • 1224 (92%) were reviewed in fewer than 5 days.
  • 57 (4%) were reviewed between 5 and 10 days.
  • 46 (3%) were reviewed after more than 10 days.

There are 203 listed add-ons awaiting review.

You can read about the improvements we’ve made in the review queues here.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers are critical for our success, and can earn cool gear for their work. Visit our wiki page for more information.

Compatibility

The compatibility blog post for Firefox 49 is up, and the bulk validation was run. The blog post for Firefox 50 should be published in the next week.

Going back to Firefox 48, there are a couple of changes that are worth keeping in mind: (1) release and beta builds no longer have a preference to deactivate signing enforcement, and (2) multiprocess Firefox is now enabled for users without add-ons, and add-ons will be gradually phased in, so make sure you’ve tested your add-on and either use WebExtensions or set the multiprocess compatible flag in your add-on manifest.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank fiveNinePlusR for their recent contributions to the add-ons world. You can read more about their contributions in our recognition page.

Planet Mozilla: Going from "Drive-Thru" Bug Filers to New Mozillians

Earlier this month at PyCon AU, VM Brasseur gave this talk on maximizing the contributions from "drive-thru" participants in your open source project. Her thesis is that most of your contributors are going to report one bug, fix one thing, and move on, so it's in your best interest to set up your project so that these one-and-done contributors have a great experience: a process which is "easy to understand, easy to follow, and which makes it easy to contribute" will make new contributors successful, and more likely to return.

Meanwhile, the Firefox team's trying to increase the number of repeat contributors, in particular, people filing quality bugs. We know that a prompt reply to a first code contribution encourages that contributor to do more, and believe the same is true of bug filing: that quickly triaging incoming bugs will encourage those contributors to file more, and better bugs, making for a healthy QA process and a higher quality product.

For now, it's rare that the first few bugs someone files in Bugzilla are "high quality" bugs - clearly described, reproducible and actionable - which means they're more likely to stall, or be closed as invalid or incomplete, which in turn means that contributor is less likely file another.

In order to have more repeat contributors, we have to work on the "drive-thru" bug filer's experience that Brasseur describes in her talk, to make new contributors successful, and make them repeat quality bug filers. Here's some of the things we're doing:

  • Michael Hoye's asking new contributors about their bug-filing experience with a quick survey.
  • We're measuring contributor behavior in a dashboard, to find out how often contributors, new and old are filing bugs.
  • Hamilton Ulmer from the Data team is looking at an extract of bugs from Bugzilla to characterize good vs. bad bugs (that is bugs that stall, or get closed as invalid.)
  • The BMO and Firefox teams have been working on UI improvements to Bugzilla (Modal View and Readable Bug Statuses) to make it easier to file and follow up on bugs.

These steps don't guarantee success, but will guide us going forward. Thanks to Michael Hoye for review and comments.




Planet WebKit: Celebrating 15 Years of WebKit

15 years ago today, on August 24th, 2001, the first check-in was made to the very same WebKit repository we have been using ever since. This was only a few weeks after the WebKit project was born on June 25th, 2001. Nowadays, WebKit is a well-established open source project. Our 15-year history is in the public repository.

WebKit has come a long way since those early days. It is a true open source project with a public repository. It supports multiple platforms. It has a bunch of exciting modern Web platform features, including many invented by the WebKit team. It has taken performance to far greater levels. And it is the de facto standard for the mobile web.

All of those developments were incredibly exciting to work on. The occasion today is an opportunity to pause and reflect on WebKit’s humble beginnings, and how far it has come. Let’s keep building on this history to make WebKit even more awesome.

Planet Mozilla: It's time to upgrade from Test Pilot to Test Pilot

Were you one of the brave pioneers who participated in the original Test Pilot program? You deserve a hearty thanks!

Are you also someone who never bothered to uninstall the old Test Pilot add-on which has been sitting around in your browser ever since? Or maybe you were hoping the program would be revived and another experiment would appear? Or maybe you forgot all about it?

If that's you, I've got news: I'm releasing an upgrade for the old Test Pilot add-on today which will:

  • Open a new tab inviting you to join the new Test Pilot program
  • Uninstall itself and all related settings and data

If you're interested in joining the new program, please click through and install the new add-on when you're prompted. If you're not interested, just close the tab and be happy your browser has one less add-on it needs to load every time you start it up.

Random FAQs:

  • The old add-on is, well, old, so it can only be uninstalled after you restart Firefox. It's all automatic when you do, but if you're waiting for a prompt and don't see it, that could be the culprit.

  • If you already have the new Test Pilot add-on installed, you can just close the new tab. The old add-on will be uninstalled and your existing Test Pilot add-on (and any experiments you have running) will be unaffected.

A special thanks to Philipp Kewisch and Andreas Wagner for their help writing and reviewing the add-on.

Planet MozillaGoogle Cardboard 360° Photosphere Viewer with A-Frame

In my previous post "Embedding Google Cardboard Camera VR Photosphere with A-Frame", I wrote that some talented programmer would probably create a better solution for embedding a Google Cardboard camera photosphere using A-Frame.

I didn't know that Chris Car had already created a sophisticated solution for this problem. You can view it here on A-Frame blog.

You might first have to use the Google Cardboard camera converter tool to make your Google Cardboard photosphere.


Planet MozillaRandom Thoughts On Rust: crates.io And IDEs

I've always liked the idea of Rust, but to tell the truth until recently I hadn't written much Rust code. Now I've written several thousand lines of Rust code and I have some more informed comments :-). In summary: Rust delivers, with no major surprises, but of course some aspects of Rust are better than I expected, and others worse.

cargo and crates.io

cargo and crates.io are awesome. They're probably no big deal if you've already worked with a platform that has similar infrastructure for distributing and building libraries, but I'm mainly a systems programmer which until now meant C and C++. (This is one note in a Rust theme: systems programmers can have nice things.) Easy packaging and version management encourages modularising and publishing code. Knowing that publishing to crates.io gives you some protection against language/compiler breakage is also a good incentive.

There's been some debate about whether Rust should have a larger standard library ("batteries included"). IMHO that's unnecessary; my main issue with crates.io is discovery. Anyone can claim any unclaimed name and it's sometimes not obvious what the "best" library is for any given task. An "official" directory matching common tasks to blessed libraries would go a very long way. I know from browser development that an ever-growing centrally-supported API surface is a huge burden, so I like the idea of keeping the standard library small and keeping library development decentralised and reasonably independent of language/compiler updates. It's really important to be able to stop supporting an unwanted library, letting its existing users carry on using it without imposing a burden on anyone else. However, it seems likely that in the long term crates.io will accumulate a lot of these orphaned crates, which will make searches increasingly difficult until the discovery problem is addressed.

IDEs

So far I've been using Eclipse RustDT. It's better than nothing, and good enough to get my work done, but unfortunately not all that good. I've heard that others are better but none are fantastic yet. It's a bit frustrating because in principle Rust's design enables the creation of extraordinarily good IDE support! Unlike C/C++, Rust projects have a standard, sane build system and module structure. Rust is relatively easy to parse and is a lot simpler than C++. Strong static typing means you can do code completion without arcane heuristics. A Rust IDE could generate code for you in many situations, e.g.: generate a skeleton match body covering all cases of a sum type; generate a skeleton trait implementation; one-click #[derive] annotation; automatically add use statements and update Cargo.toml; automatically insert conversion trait calls and type coercions; etc. Rust has quite comprehensive style and naming guidelines that an IDE can enforce and assist with. (I don't like a couple of the standard style decisions --- the way rustfmt sometimes breaks very().long().method().call().chains() into one call per line is galling --- but it's much better to have them than a free-for-all.) Rust is really good at warning about unused cruft up to crate boundaries --- a sane module system at work! --- and one-click support for deleting it all would be great. IDEs should assist with semantic versioning --- letting you know if you've changed stable API but haven't revved the major version. All the usual refactorings are possible, but unlike mainstream languages you can potentially do aggressive code motion without breaking semantics, by leveraging Rust's tight grip on side effects. (More about this in another blog post.)

I guess one Rust feature that makes an IDE's job difficult is type inference. For C and C++ an IDE can get adequate information for code-completion etc by just parsing up to the user's cursor and ignoring the rest of the containing scope (which will often be invalid or non-existent). That approach would not work so well in Rust, because in many cases the types of variables depend on code later in the same function. The IDE will need to deal with partially-known types and try very hard to quickly recover from parse errors so later code in the same function can be parsed. It might be a good idea to track text changes and reuse previous parses of unchanged text instead of trying to reparse it with invalid context. On the other hand, IDEs and type inference have some synergy because the IDE can display inferred types.

Planet MozillaPracticing Open: Expanding Participation in Hiring Leadership

Last fall I came across a hiring practice that surprised me. We were hiring for a pretty senior position. When I looked into the interview schedule I realized that we didn’t have a clear process for the candidate to meet a broad cross-section of Mozillians.  We had a good clear process for the candidate to meet peers and people in the candidate’s organization.  But we didn’t have a mechanism to go broader.

This seemed inadequate to me, for two reasons. First, the more senior the role, the broader a part of Mozilla we expect someone to be able to lead, and the broader a sense of representing the entire organization we expect that person to have. Our hiring process should reflect this by giving the candidate and a broader section of people a chance to interact.

Second, Mozilla’s core DNA is from the open source world, where one earns leadership by first demonstrating one’s competence to one’s peers. That makes Mozilla a tricky place to be hired as a leader. So many roles don’t have ways to earn leadership through demonstrating competence before being hired. We can’t make this paradox go away. So we should tune our hiring process to do a few things:

  • Give candidates a chance to demonstrate their competence to the same set of people who hope they can lead. The more senior the role, the broader a part of Mozilla we expect someone to be able to lead.
  • Expand the number of people who have a chance to make at least a preliminary assessment of a candidate’s readiness for a role. This isn’t the same as the open source ideal of working with someone for a while. But it is a big difference from never knowing or seeing or being consulted about a candidate. We want to increase the number of people who are engaged in the selection and then helping the newly hired person succeed.

We made a few changes right away, and we’re testing out how broadly these changes might be effective.  Our immediate fix was to organize a broader set of people to talk to the candidate through a panel discussion. We aimed for a diverse group, from role to gender to geography. We don’t yet have a formalized way to do this, and so we can’t yet guarantee that we’re getting a representational group or that other potential criteria are met. However, another open source axiom is that “the perfect is the enemy of the good.” And so we started this with the goal of continual improvement. We’ve used the panel for a number of interviews since then.

We looked at this in more detail during the next senior leadership hire. Jascha Kaykas-Wolff, our Chief Marketing Officer, jumped on board, suggesting we try this out with the Vice President of Marketing Communications role he had open. Over the next few months Jane Finette (executive program manager, Office of the Chair) worked closely with Jascha to design and pilot a program of extending participation in the selection of our next VP of MarComm. Jane will describe that work in the next post. Here, I’ll simply note that the process was well received. Jane is now working on a similar process for the Director level.

Planet MozillaSending VAPID identified WebPush Notifications via Mozilla’s Push Service


Introduction

The Web Push API provides the ability to deliver real time events (including data) from application servers (app servers) to their client-side counterparts (applications), without any interaction from the user. In other parts of our Push documentation we provide a general reference for the application API and a basic client usage tutorial. This document addresses the app server side portion in detail, including integrating VAPID and Push message encryption into your server effectively, and how to avoid common issues.

Note: Much of this document presumes you’re familiar with programming as well as have done some light work in cryptography. Unfortunately, since this is new technology, there aren’t many libraries available that make sending messages painless and easy. As new libraries come out, we’ll add pointers to them, but for now, we’re going to spend time talking about how to do the encryption so that folks who need it, or want to build those libraries can understand enough to be productive.

Bear in mind that Push is not meant to replace richer messaging technologies like Google Cloud Messaging (GCM), Apple Push Notification system (APNs), or Microsoft’s Windows Notification System (WNS). Each has their benefits and costs, and it’s up to you as developers or architects to determine which system solves your particular set of problems. Push is simply a low cost, easy means to send data to your application.


Push Summary


The Push system looks like:
A diagram of the push process flow

Application — The user facing part of the program that interacts with the browser in order to request a Push Subscription, and receive Subscription Updates.
Application Server — The back-end service that generates Subscription Updates for delivery across the Push Server.
Push — The system responsible for delivery of events from the Application Server to the Application.
Push Server — The server that handles the events and delivers them to the correct Subscriber. Each browser vendor has their own Push Server to handle subscription management. For instance, Mozilla uses autopush.
Subscription — A user request for timely information about a given topic or interest, which involves the creation of an Endpoint to deliver Subscription Updates to. Sometimes also referred to as a “channel”.
Endpoint — A specific URL that can be used to send a Push Message to a specific Subscriber.
Subscriber — The Application that subscribes to Push in order to receive updates, or the user who instructs the Application to subscribe to Push, e.g. by clicking a “Subscribe” button.
Subscription Update — An event sent to Push that results in a Push Message being received from the Push Server.
Push Message — A message sent from the Application Server to the Application, via a Push Server. This message can contain a data payload.

The main parts that are important to Push from a server-side perspective (identifying yourself, receiving subscription information, and sending subscription updates) are covered in detail below.


Identifying Yourself


Mozilla goes to great lengths to respect privacy, but sometimes, identifying your feed can be useful.

Mozilla offers the ability for you to identify your feed content, which is done using the Voluntary Application server Identification for web Push (VAPID) specification. This is a set of header values you pass with every subscription update. One value is a VAPID key that validates your VAPID claim, and the other is the VAPID claim itself — a set of metadata describing and defining the current subscription and where it has originated from.

VAPID is only useful between your servers and our push servers. If we notice something unusual about your feed, VAPID gives us a way to contact you so that things can go back to running smoothly. In the future, VAPID may also offer additional benefits like reports about your feeds, automated debugging help, or other features.

In short, VAPID is a bit of JSON that contains an email address to contact you, an optional URL that’s meaningful about the subscription, and a timestamp. I’ll talk about the timestamp later, but really, think of VAPID as the stuff you’d want us to have to help you figure out something went wrong.

It may be that you only send one feed, and just need a way for us to tell you if there’s a problem. It may be that you have several feeds you’re handling for customers of your own, and it’d be useful to know if maybe there’s a problem with one of them.

Generating your VAPID key

The easiest way to do this is to use an existing library for your language. VAPID is a new specification, so not all languages may have existing libraries.
Currently, we’ve collected several libraries under https://github.com/web-push-libs/vapid and are very happy to learn about more.

Fortunately, the method to generate a key is fairly easy, so you could implement your own library without too much trouble.

The first requirement is an Elliptic Curve Diffie Hellman (ECDH) library capable of working with Prime 256v1 (also known as “p256” or similar) keys. For many systems, the OpenSSL package provides this feature; check that your version supports ECDH and Prime 256v1. If not, you may need to download, compile and link the library yourself.

At this point you should generate an EC key for your VAPID identification. Please remember that you should NEVER reuse the VAPID key as the data encryption key you’ll need later. To generate an EC key using OpenSSL, enter the following command in your Terminal:

openssl ecparam -name prime256v1 -genkey -noout -out vapid_private.pem

This will create an EC private key and write it into vapid_private.pem. It is important to safeguard this private key. While you can always generate a replacement key that will work, Push (or any other service that uses VAPID) will recognize the different key as a completely different user.

You’ll need to send the public key as one of the headers. This can be extracted from the private key with the following terminal command:

openssl ec -in vapid_private.pem -pubout -out vapid_public.pem
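
If you would rather stay in Python, the same two steps could look roughly like the sketch below. This is only an illustration; it assumes a reasonably recent version of the cryptography package, and any library capable of generating prime256v1 (SECP256R1) keys will do.

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    # Generate a new prime256v1 (SECP256R1) key, used only for VAPID.
    private_key = ec.generate_private_key(ec.SECP256R1())

    # Save the private key, roughly equivalent to vapid_private.pem above.
    with open("vapid_private.pem", "wb") as pem_file:
        pem_file.write(private_key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.PKCS8,
            serialization.NoEncryption()))

    # Save the matching public key, roughly equivalent to vapid_public.pem.
    with open("vapid_public.pem", "wb") as pem_file:
        pem_file.write(private_key.public_key().public_bytes(
            serialization.Encoding.PEM,
            serialization.PublicFormat.SubjectPublicKeyInfo))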

Creating your VAPID claim

VAPID uses JWT to contain a set of information (or “claims”) that describe the sender of the data. JWTs (or JSON Web Tokens) are a pair of JSON objects, turned into base64 strings, and signed with the EC private key you just made. A JWT element contains three parts separated by “.”, and may look like:

eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJzdWIiOiAibWFpbHRvOmFkbWluQGV4YW1wbGUuY29tIiwgImV4cCI6ICIxNDYzMDg3Njc3In0.uyVNHws2F3k5jamdpsH2RTfhI3M3OncskHnTHnmdo0hr1ZHZFn3dOnA-42YTZ-u8_KYHOOQm8tUm-1qKi39ppA

  1. The first element is a “header” describing the JWT object. This JWT header is always the same — the static string {typ:"JWT",alg:"ES256"} — which is URL safe base64 encoded to eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9. For VAPID, this string should always be the same value.
  2. The second element is a JSON dictionary containing a set of claims. For our example, we’ll use the following claims:
    {
        "sub": "mailto:admin@example.com",
        "exp": "1463001340"
    }
    

    The required claims are as follows:

    1. sub : The “Subscriber” — a mailto link for the administrative contact for this feed. It’s best if this email is not a personal email address, but rather a group email so that if a person leaves an organization, is unavailable for an extended period, or otherwise can’t respond, someone else on the list can. Mozilla will only use this if we notice a problem with your feed and need to contact you.
    2. exp : “Expires” — this is an integer that is the date and time that this VAPID header should remain valid until. It doesn’t reflect how long your VAPID signature key should be valid, just this specific update. Normally this value is fairly short, usually the current UTC time + no more than 24 hours. A long lived “VAPID” header does introduce a potential “replay” attack risk, since the VAPID headers could be reused for a different subscription update with potentially different content.

     

    Feel free to add additional items to your claims. This info really should be the sort of things you want to get at 3AM when your server starts acting funny. For instance, you may run many AWS S3 instances, and one might be acting up. It might be a good idea to include the AMI-ID of that instance (e.g. “aws_id”:”i-5caba953″). You might be acting as a proxy for some other customer, so adding a customer ID could be handy. Just remember that you should respect privacy and should use an ID like “abcd-12345” rather than “Mr. Johnson’s Embarrassing Bodily Function Assistance Service”. Just remember to keep the data fairly short so that there aren’t problems with intermediate services rejecting it because the headers are too big.
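
    For example, a claim set with an expiration of the current time plus 24 hours might be assembled like this (a sketch only; the contact address and the extra “aws_id” field are placeholders):

        import time

        claims = {
            "sub": "mailto:admin@example.com",            # group contact address
            "exp": str(int(time.time()) + 24 * 60 * 60),  # now + 24 hours
            "aws_id": "i-5caba953",                       # optional debugging hint
        }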

Once you’ve composed your claims, you need to convert them to a JSON formatted string, ideally with no padding space between elements [1], for example:

{"sub":"mailto:admin@example.com","exp":"1463087677"}

Then convert this string to a URL-safe base64-encoded string, with the padding ‘=’ removed. For example, if we were to use python:


   import base64
   import json

   # These are the claims
   claims = {"sub":"mailto:admin@example.com",
             "exp":"1463087677"}

   # Convert the claims to JSON, encode as URL safe base64, strip the
   # trailing "=" padding, and decode back to text. (The .encode() and
   # .decode() calls keep this working on both Python 2 and 3.)
   body = base64.urlsafe_b64encode(
       json.dumps(claims).encode('utf-8')).rstrip(b'=').decode('ascii')
   print(body)

would give us

eyJzdWIiOiAibWFpbHRvOmFkbWluQGV4YW1wbGUuY29tIiwgImV4cCI6ICIxNDYzMDg3Njc3In0

This is the “body” of the JWT base string.

The header and the body are separated with a ‘.’ making the JWT base string.

eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJzdWIiOiAibWFpbHRvOmFkbWluQGV4YW1wbGUuY29tIiwgImV4cCI6ICIxNDYzMDg3Njc3In0

  • The final element is the signature. This is an ECDSA signature of the JWT base string, created using your VAPID private key. This signature is URL safe base64 encoded, “=” padding removed, and again joined to the base string with a ‘.’ delimiter.

    Generating the signature depends on your language and library, but is done by the ecdsa algorithm using your private key. If you’re interested in how it’s done in Python or Javascript, you can look at the code in https://github.com/web-push-libs/vapid.

    Since your private key will not match the one we’ve generated, the signature you see in the last part of the following example will be different.

    eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJzdWIiOiAibWFpbHRvOmFkbWluQGV4YW1wbGUuY29tIiwgImV4cCI6ICIxNDYzMDg3Njc3In0.uyVNHws2F3k5jamdpsH2RTfhI3M3OncskHnTHnmdo0hr1ZHZFn3dOnA-42YTZ-u8_KYHOOQm8tUm-1qKi39ppA
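
    As an illustration only (not the one true implementation), signing the base string with the cryptography package might look like the sketch below. It assumes jwt_base holds the “header.body” bytes assembled above, and packs the DER signature into the raw r||s form that ES256 expects.

        import base64
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import ec
        from cryptography.hazmat.primitives.asymmetric.utils import decode_dss_signature

        # Load the private key generated earlier.
        with open("vapid_private.pem", "rb") as pem_file:
            private_key = serialization.load_pem_private_key(pem_file.read(), password=None)

        der_signature = private_key.sign(jwt_base, ec.ECDSA(hashes.SHA256()))

        # JWT signatures are the raw 64 octet r||s concatenation, not DER.
        r, s = decode_dss_signature(der_signature)
        raw_signature = r.to_bytes(32, "big") + s.to_bytes(32, "big")

        signature = base64.urlsafe_b64encode(raw_signature).rstrip(b"=")
        token = jwt_base + b"." + signature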


  • Forming your headers


    The VAPID claim you assembled in the previous section needs to be sent along with your Subscription Update as an Authorization header Bearer token — the complete token should look like so:

    Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJzdWIiOiAibWFpbHRvOmFkbWluQGV4YW1wbGUuY29tIiwgImV4cCI6ICIxNDYzMDg3Njc3In0.uyVNHws2F3k5jamdpsH2RTfhI3M3OncskHnTHnmdo0hr1ZHZFn3dOnA-42YTZ-u8_KYHOOQm8tUm-1qKi39ppA

    Note: the header should not contain line breaks. Those have been added here to aid in readability.
    Important! The Authorization header ID will be changing soon from “Bearer” to “WebPush”. Some Push Servers may accept both, but if your request is rejected, you may want to try changing the tag. This, of course, is part of the fun of working with draft specifications.

    You’ll also need to send a Crypto-Key header along with your Subscription Update — this includes a p256ecdsa element that takes the VAPID public key as its value — formatted as a URL safe, base64 encoded string of the raw public key.

    An example follows:
    Crypto-Key: p256ecdsa=BA5vkyMXVfaKuehJuecNh30-NiC7mT9gM97Op5d8LiAKzfIezLzCZMwrY7OypBBNwEnusGkdg9F84WqW1j5Ymjk

    Note: If you like, you can cheat here and use the content of “vapid_public.pem”. You’ll need to remove the “-----BEGIN PUBLIC KEY------” and “-----END PUBLIC KEY-----” lines, remove the newline characters, and convert all “+” to “-” and “/” to “_”.
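
    If you would rather not hand-edit PEM files, one possible way (again assuming the cryptography package) is to serialize the public key as the raw uncompressed EC point, which appears to match the “BA…” style value shown above:

        import base64
        from cryptography.hazmat.primitives import serialization

        # private_key is the VAPID key loaded earlier.
        raw_public = private_key.public_key().public_bytes(
            serialization.Encoding.X962,
            serialization.PublicFormat.UncompressedPoint)

        p256ecdsa = base64.urlsafe_b64encode(raw_public).rstrip(b"=").decode("ascii")
        print("Crypto-Key: p256ecdsa=" + p256ecdsa)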

    You can validate your work against the VAPID test page — this will tell you if your headers are properly encoded. In addition, the VAPID repo contains libraries for JavaScript and Python to handle this process for you.

    We’re happy to consider PRs to add libraries covering additional languages.


    Receiving Subscription Information


    Your Application will receive an endpoint and key retrieval functions that contain all the info you’ll need to successfully send a Push message. See
    Using the Push API for details about this. Your application should send this information, along with whatever additional information is required, securely to the Application Server as a JSON object.

    Such a post back to the Application Server might look like this:

    
    {
        "customerid": "123456",
        "subscription": {
            "endpoint": "https://updates.push.services.mozilla.com/push/v1/gAAAA…",
            "keys": {
                "p256dh": "BOrnIslXrUow2VAzKCUAE4sIbK00daEZCswOcf8m3TF8V…",
                "auth": "k8JV6sjdbhAi1n3_LDBLvA"
             }
        },
        "favoritedrink": "warm milk"
    }
    

    In this example, the “subscription” field contains the elements returned from a fulfilled PushSubscription. The other elements represent additional data you may wish to exchange.

    How you decide to exchange this information is completely up to your organization. You are strongly advised to protect this information. If an unauthorized party gained this information, they could send messages pretending to be you. This can be made more difficult by using a “Restricted Subscription”, where your application passes along your VAPID public key as part of the subscription request. A restricted subscription can only be used if the subscription carries your VAPID information signed with the corresponding VAPID private key. (See the previous section for how to generate VAPID signatures.)

    Subscription information is subject to change and should be considered “opaque”. You should consider the data to be a “whole” value and associate it with your user. For instance, attempting to retain only a portion of the endpoint URL may lead to future problems if the endpoint URL structure changes. Key data is also subject to change. The app may receive an update that changes the endpoint URL or key data. This update will need to be reflected back to your server, and your server should use the new subscription information as soon as possible.


    Sending a Subscription Update Without Data


    Subscription Updates come in two varieties: data free and data bearing. We’ll look at these separately, as they have differing requirements.

    Data Free Subscription Updates

    Data free updates require no additional App Server processing, however your Application will have to do additional work in order to act on them. Your application will simply get a “push” message containing no further information, and it may have to connect back to your server to find out what the update is. It is useful to think of Data Free updates like a doorbell — “Something wants your attention.”

    To send a Data Free Subscription, you POST to the subscription endpoint. In the following example, we’ll include the VAPID header information. Values have been truncated for presentation readability.

    
    curl -v -X POST\
      -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJhdWQiOiJodHR…"\
      -H "Crypto-Key: p256ecdsa=BA5vkyMXVfaKuehJuecNh30-NiC7mT9gM97Op5d8LiAKzfIezLzC…"\
      -H "TTL: 0"\
      "https://updates.push.services.mozilla.com/push/v1/gAAAAABXAuZmKfEEyPbYfLXqtPW…"
    

    This should result in an Application getting a “push” message, with no data associated with it.

    To see how to store the Push Subscription data and send a Push Message using a simple server, see our Using the Push API article.

    Data Bearing Subscription Updates


    Data Bearing updates are a lot more interesting, but do require significantly more work. This is because we treat our own servers as potentially hostile and require “end-to-end” encryption of the data. The message you send across the Mozilla Push Server cannot be read. To ensure privacy, however, your application will not receive data that cannot be decoded by the User Agent.

    There are libraries available for several languages at https://github.com/web-push-libs, and we’re happy to accept or link to more.

    The encryption method used by Push is Elliptic Curve Diffie Hellman (ECDH) encryption, which uses a key derived from two pairs of EC keys. If you’re not familiar with encryption, or the brain twisting math that can be involved in this sort of thing, it may be best to wait for an encryption library to become available. Encryption is often complicated to understand, but it can be interesting to see how things work.

    If you’re familiar with Python, you may want to just read the code for the http_ece package. If you’d rather read the original specification, that is available. While the code is not commented, it’s reasonably simple to follow.

    Data encryption summary

    • Octet — An 8 bit byte of data (between \x00 and \xFF)
    • Subscription data — The subscription data to encode and deliver to the Application.
    • Endpoint — the Push service endpoint URL, received as part of the Subscription data.
    • Receiver key — The p256dh key received as part of the Subscription data.
    • Auth key — The auth key received as part of the Subscription data.
    • Payload — The data to encrypt, which can be any streamable content between 2 and 4096 octets.
    • Salt — 16 octet array of random octets, unique per subscription update.
    • Sender key — A new ECDH key pair, unique per subscription update.

    Web Push limits the size of the data you can send to between 2 and 4096 octets. You can send larger data as multiple segments, however that can be very complicated. It’s better to keep segments smaller. Data, whatever the original content may be, is also turned into octets for processing.

    Each subscription update requires two unique items — a salt and a sender key. The salt is a 16 octet array of random octets. The sender key is a ECDH key pair generated for this subscription update. It’s important that neither the salt nor sender key be reused for future encrypted data payloads. This does mean that each Push message needs to be uniquely encrypted.

    The receiver key is the public key from the client’s ECDH pair. It is base64, URL safe encoded and will need to be converted back into an octet array before it can be used. The auth key is a shared “nonce”, or bit of random data like the salt.
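
    As a small sketch (repad() is our own helper name, and subscription refers to the JSON object shown earlier), generating the per-update values and restoring the stripped padding could look like:

        import base64
        import os
        import pyelliptic

        def repad(value):
            # Restore the stripped "=" padding so standard base64 decoding works.
            return value + "=" * ((4 - len(value) % 4) % 4)

        # A fresh 16 octet salt, unique to this subscription update.
        salt = os.urandom(16)

        # A fresh ECDH ("sender") key pair, also unique to this update.
        server_key = pyelliptic.ECC(curve="prime256v1")

        # The auth key arrives base64 URL safe encoded with its padding stripped.
        auth = base64.urlsafe_b64decode(repad(subscription['keys']['auth']))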

    Emoji Based Diagram (legend)
    Subscription Data: 🎯 Endpoint, 🔒 Receiver key (‘p256dh’), 💩 Auth key (‘auth’)
    Per Update Info: 🔑 Private Server Key, 🗝 Public Server Key, 📎 Salt, 🔐 Private Sender Key, ✒️ Public Server Key
    Update: 📄 Payload
    🏭 Build using / derive
    🔓 message encryption key
    🎲 message nonce

    Encryption uses a fabricated key and nonce. We’ll discuss how the actual encryption is done later, but for now, let’s just create these items.

    Creating the Encryption Key and Nonce

    The encryption makes heavy use of HKDF (HMAC-based Key Derivation Function) with a SHA256 hash.

    Creating the secret

    The first HKDF function you’ll need will generate the common secret (🙊), which is a 32 octet value derived from a salt of the auth (💩) and run over the string “Content-Encoding: auth\x00”.
    So, in emoji =>
    🔐 = 🔑(🔒);
    🙊 = HKDF(💩, “Content-Encoding: auth\x00”).🏭(🔐)

    An example function in Python could look like so:

    
    import base64
    import pyelliptic
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # receiver_key must have "=" padding added back before it can be decoded;
    # here we reuse the repad() helper sketched earlier.
    # 🔒
    receiver_key = subscription['keys']['p256dh']
    # 🔑
    server_key = pyelliptic.ECC(curve="prime256v1")
    # 🔐
    sender_key = server_key.get_ecdh_key(base64.urlsafe_b64decode(repad(receiver_key)))

    # 🙊
    secret = HKDF(
        hashes.SHA256(),
        length=32,
        salt=auth,
        info=b"Content-Encoding: auth\0").derive(sender_key)
    
    The encryption key and encryption nonce

    The next items you’ll need to create are the encryption key and encryption nonce.

    An important component of these is the context, which is:

    • A string comprised of ‘P-256’
    • Followed by a NULL (“\x00”)
    • Followed by a network ordered, two octet integer of the length of the decoded receiver key
    • Followed by the decoded receiver key
    • Followed by a network ordered, two octet integer of the length of the public half of the sender key
    • Followed by the public half of the sender key.

    As an example, if we have an example, decoded, completely invalid public receiver key of ‘RECEIVER’ and a sample sender key public key example value of ‘sender’, then the context would look like:

    # ⚓ (because it's the base and because there are only so many emoji)
    root = "P-256\x00\x00\x08RECEIVER\x00\x06sender"

    The “\x00\x08” is the length of the bogus “RECEIVER” key, likewise the “\x00\x06” is the length of the stand-in “sender” key. For real, 32 octet keys, these values will most likely be “\x00\x20” (32), but it’s always a good idea to measure the actual key rather than use a static value.
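
    A sketch of building the context in Python, assuming receiver_key_bytes and sender_pub_bytes already hold the decoded octet arrays, could be:

        import struct

        def build_context(receiver_key_bytes, sender_pub_bytes):
            # "P-256" plus a NULL, then each key prefixed with its length as a
            # network ordered (big endian), two octet integer.
            return (b"P-256\x00" +
                    struct.pack("!H", len(receiver_key_bytes)) + receiver_key_bytes +
                    struct.pack("!H", len(sender_pub_bytes)) + sender_pub_bytes)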

    The context string is used as the base for two more HKDF derived values, one for the encryption key, and one for the encryption nonce. In python, these functions could look like so:

    In emoji:
    🔓 = HKDF(💩 , “Content-Encoding: aesgcm\x00” + ⚓).🏭(🙊)
    🎲 = HKDF(💩 , “Content-Encoding: nonce\x00” + ⚓).🏭(🙊)

    
    # 🔓
    encryption_key = HKDF(
        hashes.SHA256(),
        length=16,
        salt=salt,
        info=b"Content-Encoding: aesgcm\x00" + context).derive(secret)

    # 🎲
    encryption_nonce = HKDF(
        hashes.SHA256(),
        length=12,
        salt=salt,
        info=b"Content-Encoding: nonce\x00" + context).derive(secret)
    

    Note that the encryption_key is 16 octets and the encryption_nonce is 12 octets. Also note the null (\x00) character between the “Content-Encoding” string and the context.

    At this point, you can start working your way through encrypting the data 📄, using your secret 🙊, encryption_key 🔓, and encryption_nonce 🎲.

    Encrypting the Data

    The function that does the encryption (encryptor) uses the encryption_key 🔓 to initialize the Advanced Encryption Standard (AES) function, and derives the Galois/Counter Mode (G/CM) Initialization Vector (IV) off of the encryption_nonce 🎲, plus the data chunk counter. (If you didn’t follow that, don’t worry. There’s a code snippet below that shows how to do it in Python.) For simplicity, we’ll presume your data is less than 4096 octets (4K bytes) and can fit within one chunk.
    The IV takes the encryption_nonce and XOR’s the chunk counter against the final 8 octets.

    
    def generate_iv(nonce, counter):
        (mask,) = struct.unpack("!Q", nonce[4:])  # get the final 8 octets of the nonce
        iv = nonce[:4] + struct.pack("!Q", counter ^ mask)  # the first 4 octets of nonce,
                                                         # plus the XOR'd counter
        return iv
    

    The encryptor prefixes a “\x00\x00” to the data chunk, processes it completely, and then concatenates its encryption tag to the end of the completed chunk. The encryption tag is a fixed-length value produced by the encryptor that lets the receiver verify the data. See your language’s documentation for AES encryption for further information.

    
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_chunk(chunk, counter, encryption_nonce, encryption_key):
        # Build an AES-GCM encryptor with the derived key and a per-chunk IV.
        encryptor = Cipher(algorithms.AES(encryption_key),
                           modes.GCM(generate_iv(encryption_nonce,
                                                 counter))).encryptor()
        # Prefix the two NULL padding octets, then append the GCM tag.
        return (encryptor.update(b"\x00\x00" + chunk) +
                encryptor.finalize() +
                encryptor.tag)

    def encrypt(payload, encryption_nonce, encryption_key):
        result = b""
        counter = 0
        for i in range(0, len(payload) + 2, 4096):
            result += encrypt_chunk(
                payload[i:i + 4096],
                counter,
                encryption_nonce,
                encryption_key)
            counter += 1
        return result
    


    Sending the Data

    Encrypted payloads need several headers in order to be accepted.

    The Crypto-Key header is a composite field, meaning that different things can store data here. There are some rules about how things should be stored, but we can simplify and just separate each item with a semicolon “;”. In our case, we’re going to store three things, a “keyid”, “p256ecdsa” and “dh”.

    “keyid” is the string “p256dh”. Normally, “keyid” is used to link keys in the Crypto-Key header with the Encryption header. It’s not strictly required, but some push servers may expect it and reject subscription updates that do not include it. The value of “keyid” isn’t important, but it must match between the headers. Again, there are complex rules about these that we’re safely ignoring, so if you want or need to do something complex, you may have to dig into the Encrypted Content Encoding specification a bit.

    “p256ecdsa” is the public key used to sign the VAPID header (See Forming your Headers). If you don’t want to include the optional VAPID header, you can skip this.

    The “dh” value is the public half of the sender key we used to encrypt the data. It’s the same value contained in the context string, so we’ll use the same fake, stand-in value of “sender”, which has been encoded as a base64, URL safe value. For our example, the base64 encoded version of the string ‘sender’ is ‘c2VuZGVy’

    Crypto-Key: p256ecdsa=BA5v…;dh=c2VuZGVy;keyid=p256dh

    The Encryption Header contains the salt value we used for encryption, which is a random 16 byte array converted into a base64, URL safe value.

    Encryption: keyid=p256dh;salt=cm5kIDE2IGJ5dGUgc2FsdA
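
    Putting those two headers together in Python might look like the following sketch; p256ecdsa is the public key value derived earlier, sender_pub_bytes and salt are the per-update values (the names are ours), and b64url() is our own helper:

        import base64

        def b64url(value):
            # URL safe base64 with the "=" padding stripped, as the headers expect.
            return base64.urlsafe_b64encode(value).rstrip(b"=").decode("ascii")

        crypto_key = "p256ecdsa=%s;dh=%s;keyid=p256dh" % (p256ecdsa, b64url(sender_pub_bytes))
        encryption_header = "keyid=p256dh;salt=%s" % b64url(salt)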

    The TTL Header is the number of seconds the notification should stay in storage if the remote user agent isn’t actively connected. “0” (Zed/Zero) means that the notification is discarded immediately if the remote user agent is not connected; this is the default. This header must be specified, even if the value is “0”.

    TTL: 0

    Finally, the Content-Encoding Header specifies that this content is encoded to the aesgcm standard.

    Content-Encoding: aesgcm

    The encrypted data is set as the Body of the POST request to the endpoint contained in the subscription info. If you have requested that this be a restricted subscription and passed your VAPID public key as part of the request, you must include your VAPID information in the POST.

    As an example, in python:

    import requests

    headers = {
        'crypto-key': 'p256ecdsa=BA5v…;dh=c2VuZGVy;keyid=p256dh',
        'content-encoding': 'aesgcm',
        'encryption': 'keyid=p256dh;salt=cm5kIDE2IGJ5dGUgc2FsdA',
        'ttl': '0',
    }
    requests.post(subscription_info['subscription']['endpoint'],
                  data=encrypted_data,
                  headers=headers)
    

    A successful POST will return a response of 201, however, if the User Agent cannot decrypt the message, your application will not get a “push” message. This is because the Push Server cannot decrypt the message so it has no idea if it is properly encoded. You can see if this is the case by:

    • Going to about:config in Firefox
    • Setting the dom.push.loglevel pref to debug
    • Opening the Browser Console (located under the Tools > Web Developer > Browser Console menu).

    When your message fails to decrypt, you’ll see a message similar to the following:
    The Browser Console displaying “The service worker for scope 'https://mozilla-services.github.io/WebPushDataTestPage/' encountered an error decrypting a push message”, with a message and where to look for more info

    You can use values displayed in the Web Push Data Encryption Page to audit the values you’re generating to see if they’re similar. You can also send messages to that test page and see if you get a proper notification pop-up, since all the key values are displayed for your use.

    You can find out what errors and error responses we return, and their meanings by consulting our server documentation.


    Subscription Updates


    Nothing (other than entropy) lasts forever. There may come a point where, for various reasons, you will need to update your user’s subscription endpoint. There are any number of reasons for this, but your code should be prepared to handle them.

    Your application’s service worker will get a onpushsubscriptionchange event. At this point, the previous endpoint for your user is now invalid and a new endpoint will need to be requested. Basically, you will need to re-invoke the method for requesting a subscription endpoint. The user should not be alerted of this, and a new endpoint will be returned to your app.

    Again, how your app identifies the customer, joins the new endpoint to the customer ID, and securely transmits this change request to your server is left as an exercise for the reader. It’s worth noting that the Push server may return an error of 410 with an errno of 103 when the push subscription expires or is otherwise made invalid. (If a push subscription expired several months ago, the server may return a different errno value.)

    Conclusion

    Push Data Encryption can be very challenging, but worthwhile. Harder encryption means that it is more difficult for someone to impersonate you, or for your data to be read by unintended parties. Eventually, we hope that much of this pain will be buried in libraries that allow you to simply call a function, and as this specification is more widely adopted, it’s fair to expect multiple libraries to become available for every language.

    See also:

    1. WebPush Libraries: A set of libraries to help encrypt and send push messages.
    2. VAPID lib for python or javascript can help you understand how to encode VAPID header data.

    Footnotes:

    1. Technically, you don’t need to strip the whitespace from JWS tokens. In some cases, JWS libraries take care of that for you. If you’re not using a JWS library, it’s still a very good idea to make headers and header lines as compact as possible. Not only does it save bandwidth, but some systems will reject overly lengthy header lines. For instance, the library that autopush uses limits header line length to 16,384 bytes. Minus things like the header, signature, base64 conversion and JSON overhead, you’ve got about 10K to work with. While that seems like a lot, it’s easy to run out if you do things like add lots of extra fields to your VAPID claim set.

    Planet MozillaNew blog design – I let the browser do most of the work…

    Unless you are reading the RSS feed or the AMP version of this blog, you’ll see that some things changed here. Last week I spent an hour redesigning this blog from scratch. No, I didn’t move to another platform (WordPress does the job for me so far), but I fixed a few issues that annoyed me.

    So now this blog is fully responsive, has no dependencies on any CSS frameworks or scripts and should render much nicer on mobile devices where a lot of my readers are.

    It all started with my finding Dan Klammer’s Bytesize Icons – the icons which are now visible in the navigation on top if your screen is wide enough to allow for them. I loved their simplicity and that I could embed them, thus having a visual menu that doesn’t need any extra HTTP overhead. So I copied and pasted, coloured in the lines and that was that.

    The next thing that inspired me incredibly was the trick to use a font-size of 1em + 1vw on the body of the document to ensure a readable text regardless of the resolution. It was one of the goodies in Heydon Pickering’s On writing less damn code post. He attributed this trick to Vasilis who of course is too nice and told the whole attribution story himself.

    Next was creating the menu. For this, I used the power of flexbox and a single media query to ensure that my logo stays but the text links wrap into a few lines next to it. You can play with the code on JSBin.

    The full CSS of the blog is now about 340 lines of code and has no dependency on any libraries or frameworks. There is no JavaScript except for ads.

    The rest was tweaking some font sizes and colours and adding some enhancements like skip links to jump over the navigation. These are visible when you tab into the document, which seems a good enough solution seeing that we do not have a huge navigation as it is.

    Other small fixes:

    • The code display on older posts is now fixed. I used an older plugin not compatible with the current one in the past. The fix was to write yet another plugin to un-do what the old one needed and giving it the proper HTML structure.
    • I switched the ad to a responsive one, so there should be no problems with this breaking the layout. Go on, test it out, click it a few hundred times to give it a thorough test.
    • I’ve stopped fixed image sizes for quite a while now and used 100% as width. With this new layout I also gave them a max width to avoid wasted space and massive blurring.
    • For videos I will now start using Embed responsively not to break the layout either.

    All in all this was the work of an hour, live in my browser and without any staging. This is a blog, it is here for words, not to do amazing feats of code.

    Here are few views of the blog on different devices (courtesy of the Chrome Devtools):

    iPad:
    Blog on iPad

    iPhone 5:
    Blog on iPhone 5

    iPhone 6:
    Blog on iPhone 6

    Nexus 5:
    Blog on Nexus 5

    Hope you like it.

    All in all I love working on the web these days. Our CSS toys are incredibly powerful, browsers are much more reliable and the insights you get and tweaks you can do in developer tools are amazing. When I think back when I did the first layout here in 2006, I probably wouldn’t go through these pains nowadays. Create some good stuff, just do as much as is needed.

    Planet MozillaFirefox 49 Beta 7 Testday, August 26th

    Hello Mozillians,

    We are happy to announce that Friday, August 26th, we are organizing Firefox 49 Beta 7 Testday. We will be focusing our testing on WebGL Compatibility and Exploratory Testing. Check out the detailed instructions via this etherpad.

    No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

    Join us and help us make Firefox better! See you on Friday!

    Planet MozillaThis Week in Rust 144

    Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

    This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

    Updates from Rust Community

    News & Blog Posts

    New Crates & Project Updates

    Crate of the Week

    No crate was selected for this week for lack of votes. Ain't that a pity?

    Submit your suggestions and votes for next week!

    Call for Participation

    Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

    Some of these tasks may also have mentors available, visit the task page for more information.

    If you are a Rust project owner and are looking for contributors, please submit tasks here.

    Updates from Rust Core

    167 pull requests were merged in the last two weeks.

    New Contributors

    • Alexandre Oliveira
    • Amit Levy
    • clementmiao
    • DarkEld3r
    • Dustin Bensing
    • Erik Uggeldahl
    • Jacob
    • JessRudder
    • Michael Layne
    • Nazım Can Altınova
    • Neil Williams
    • pliniker

    Approved RFCs

    Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

    Final Comment Period

    Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

    New RFCs

    Upcoming Events

    If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

    fn work(on: RustProject) -> Money

    Tweet us at @ThisWeekInRust to get your job offers listed here!

    Quote of the Week

    No quote was selected for QotW.

    Submit your quotes for next week!

    This Week in Rust is edited by: nasa42, llogiq, and brson.

    Planet MozillaIntroducing es7-membrane: A new ECMAScript 2016 Membrane implementation

    I have a new ECMAScript membrane implementation, which I will maintain and use in a professional capacity, and which I’m looking for lots of help with in the form of code reviews and API design advice.

    For those of you who don’t remember what a membrane is, Tom van Cutsem wrote about membranes a few years ago, including the first implementations in JavaScript. I recently answered a StackOverflow question on why a membrane might be useful in the first place.

    Right now, the membrane supports “perfect” mirroring across object graphs:  as far as I can tell, separate object graphs within the same membrane never see objects or functions from another object graph.

    The word “perfect” is in quotes because there are probably bugs, facets I haven’t yet tested for (“What happens if I call Object.freeze() on a proxy from the membrane?”, for example).  There is no support yet for the real uses of proxies, such as hiding properties, exposing new ones, or special handling of primitive values.  That support is forthcoming, as I do expect I will need a membrane in my “Verbosio” project (an experimental XML editor concept, irrelevant to this group) and another for the company I work for.

    The good news is the tests pass in Node 6.4, the current Google Chrome release, and Mozilla Firefox 51 (trunk, debug build).  I have not tested any other browser or ECMAScript environment.  I also will be checking in lots of use cases over the next few weeks which will guide future work on the module.

    With all that said, I’d love to get some help.  That’s why I moved it to its own GitHub repository.

    • None of this code has been reviewed yet.  My colleagues at work are not qualified to do a code or API review on this.  (This isn’t a knock on them – proxies are a pretty obscure part of the ECMAScript universe…)  I’m looking for some volunteers to do those reviews.
    • I have two or three different options, design-wise, for making a Membrane’s proxies customizable while still obeying the rules of the Membrane.  I’m assuming there’s some supremely competent people from the es-discuss mailing list who could offer advice through the GitHub project’s wiki pages.
    • I’d like also to properly wrap the baseline code as ES6 modules using the import, export statements – but I’m not sure if they’re safe to use in current release browsers or Node.  (I’ve been skimming Dr. Axel Rauschmeyer’s “Exploring ES6” chapter on ES6 modules.)
      • Side note:  try { import /* … */ } catch (e) { /* … */ } seems to be illegal syntax, and I’d really like to know why.  (The error from trunk Firefox suggested import needed to be on the first line, and I had the import block on the second line, after the try statement.)
    • This is my first time publishing a serious open-source project to GitHub, and my absolute first attempt at publishing to NPM:
    • I’m not familiar with Node, nor with “proper” packaging of modules pre-ES6.  So my build-and-test systems need a thorough review too.
    • I’m having trouble properly setting up continuous integration.  Right now, the build reports as passing but is internally erroring out…
    • Pretty much any of the other GitHub/NPM-specific goodies (a static demo site, wiki pages for discussions, keywords for the npm package, a Tonic test case, etc.) don’t exist yet.
    • Of course, anyone who has interest in membranes is welcome to offer their feedback.

    If you’re not able to comment here for some reason, I’ve set up a GitHub wiki page for that purpose.

    Planet MozillaPy Bay 2016 - a First Report

    Py Bay 2016 - a First Report

    PyBay held their first local Python conference this last weekend (Friday, August 19 through Sunday, August 21). What a great event! I just wanted to get down some first impressions - I hope to do more after the slides and videos are up.

    First, the venue and arrangements were spot on. Check the twitter traffic for #PyBay2016 and @Py_Bay and you’ll see numerous comments confirming that. And, I must say the food was worthy of San Francisco - very tasty. And healthy. With the weather cooperating to be nicely sunny around noon, the outdoor seating was appreciated by all who came from far away. The organizers even made it a truly Bay Area experience by arranging for both Yoga and Improv breakouts. The people were great - volunteers, organizers, speakers, and attendees. Props to all.

    The technical program was well organized, and I’m really looking forward to the videos for the talks I couldn’t attend. Here’s some quick highlights that I hope to backfill.

    • OpenTracing - a talk by Ben Sigelman - big one for distributed systems, it promises a straightforward way to identify critical path issues across a micro-service distributed architecture. Embraced by a number of big companies (Google, Uber), it builds on real world experience with distributed systems.

      Programs just need to add about 5 lines of setup, and one call per ‘traceable action’ (whatever that means for your environment). The output can be directed anywhere - one of the specialized UI’s or traditional log aggregation services.

      There are open source, commercial offerings, and libraries for many languages (Python, Go, etc.) & frameworks (SQLAlchemy, Flask, etc.). As an entry level, you can insert the trace calls and render to existing logging. The framework adds guids to simplify tracking across multiple hosts & processes.

    • Semantic logging - a lightning talk by Mahmoud Hashemi was a brief introduction to the lithoxyl package. The readme contains the selling point. (Especially since I took Raymond Hettinger‘s intermediate class on Python Friday, and he convincingly advocated for the ratio of business logic lines to “admin lines” as a metric of good code.)

    • Mahmoud Hashemi also did a full talk on profiling Python performance in enterprise applications, and ways to improve that performance. (And, yes, we write enterprise applications.)

    And there was lots more that I’ll try to cover later. And add in some links for the above as they become available.

    Planet Mozilla[worklog] Edition 032. The sento chimney

    From the seventh floor, I see the chimney of a local sento. It's not always on. Whether or not the smoke is coming out gives me good visual feedback about the opening hours. It's here. It doesn't have notifications. It doesn't have ics or atom feeds. It's just there. And it's fine as-is. The digital world sometimes seems to create complicated UI and UX over things which are just working. They become disruptive but not helpful.

    Tune of the week: Leo Ferré - Avec le temps

    Webcompat Life

    Progress this week:

    Today: 2016-08-22T13:46:36.150030
    300 open issues
    ----------------------
    needsinfo       4
    needsdiagnosis  86
    needscontact    18
    contactready    28
    sitewait        156
    ----------------------
    

    You are welcome to participate

    Webcompat issues

    (a selection of some of the bugs worked on this week).

    WebCompat.com dev

    Reading List

    • Return of the 10kB Web design contest. Good stuff. I spend a good chunk of my time in the network panel of devtools… And it's horrific.

    Follow Your Nose

    TODO

    • Document how to write tests on webcompat.com using test fixtures.
    • ToWrite: Amazon prefetching resources with <object> for Firefox only.

    Otsukare!

    Planet Mozillapyvideo last thoughts

    What is pyvideo?

    pyvideo.org is an index of Python-related conference and user-group videos on the Internet. Saw a session you liked and want to share it? It's likely you can find it, watch it, and share it with pyvideo.org.

    This is my last update. pyvideo.org is now in new and better hands and will continue going forward.


    Planet MozillaCaching Async Operations via Promises

    I was working on a bug in Normandy the other day and remembered a fun little trick for caching asynchronous operations in JavaScript.

    The bug in question involved two asynchronous actions happening within a function. First, we made an AJAX request to the server to get an "Action" object. Next, we took an attribute of the action, the implementation_url, and injected a <script> tag into the page with the src attribute set to the URL. The JavaScript being injected would then call a global function and pass it a class function, which was the value we wanted to return.

    The bug was that if we called the function multiple times with the same action, the function would make multiple requests to the same URL, even though we really only needed to download data for each Action once. The solution was to cache the responses, but instead of caching the responses directly, I found it was cleaner to cache the Promise returned when making the request instead:

    export function fetchAction(recipe) {
      const cache = fetchAction._cache;
    
      if (!(recipe.action in cache)) {
        cache[recipe.action] = fetch(`/api/v1/action/${recipe.action}/`)
          .then(response => response.json());
      }
    
      return cache[recipe.action];
    }
    fetchAction._cache = {};
    

    Another neat trick in the code above is storing the cache as a property on the function itself; it helps avoid polluting the namespace of the module, and also allows callers to clear the cache if they wish to force a re-fetch (although if you actually needed that, it'd be better to add a parameter to the function instead).

    After I got this working, I puzzled for a bit on how to achieve something similar for the <script> tag injection. Unlike an AJAX request, the only thing I had to work with was an onload handler for the tag. Eventually I realized that nothing was stopping me from wrapping the <script> tag injection in a Promise and caching it in exactly the same way:

    export function loadActionImplementation(action) {
      const cache = loadActionImplementation._cache;
    
      if (!(action.name in cache)) {
        cache[action.name] = new Promise((resolve, reject) => {
          const script = document.createElement('script');
          script.src = action.implementation_url;
          script.onload = () => {
            if (!(action.name in registeredActions)) {
              reject(new Error(`Could not find action with name ${action.name}.`));
            } else {
              resolve(registeredActions[action.name]);
            }
          };
          document.head.appendChild(script);
        });
      }
    
      return cache[action.name];
    }
    loadActionImplementation._cache = {};
    

    From a nitpicking standpoint, I'm not entirely happy with this function:

    • The name isn't really consistent with the "fetch" terminology from the previous function, but I'm not convinced they should use the same verb either.
    • The Promise code could probably live in another function, leaving this one to concern itself only with the caching.
    • I'm pretty sure this does nothing to handle the case of the script failing to load, like a 404.

    But these are minor, and the patch got merged, so I guess it's good enough.
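
    That said, the third point could probably be handled by wiring up the script element's onerror callback and evicting the failed Promise from the cache so a later call can retry. Here's a rough sketch of that idea (a guess at a fix, not the merged patch):

    export function loadActionImplementation(action) {
      const cache = loadActionImplementation._cache;

      if (!(action.name in cache)) {
        cache[action.name] = new Promise((resolve, reject) => {
          const script = document.createElement('script');
          script.src = action.implementation_url;
          script.onload = () => {
            if (!(action.name in registeredActions)) {
              reject(new Error(`Could not find action with name ${action.name}.`));
            } else {
              resolve(registeredActions[action.name]);
            }
          };
          // Reject on load failures (e.g. a 404) and clear the cached Promise
          // so that a subsequent call can try again.
          script.onerror = () => {
            delete cache[action.name];
            reject(new Error(`Failed to load ${action.implementation_url}.`));
          };
          document.head.appendChild(script);
        });
      }

      return cache[action.name];
    }
    loadActionImplementation._cache = {};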

    Planet MozillaTenFourFox 45 beta 2: not yet

    So far the TenFourFox 45 beta is doing very well. There have been no major performance regressions overall (Amazon Music notwithstanding, which I'll get to presently) and actually overall opinion is that 45 seems much more responsive than 38. On that score, you'll like beta 2 even more when I get it out: I raided some more performance-related Firefox 46 and 47 patches and backported them too, further improving scrolling performance by reducing unnecessary memory allocation and also substantially reducing garbage collection overhead. There are also some minor tweaks to the JavaScript JIT, including a gimme I should have done long ago where we use single-precision FPU instructions instead of double-precision plus a rounding step. This doesn't make a big difference for most scripts, but a few, particularly those with floating point arrays, will benefit from the reduction in code size and slight improvement in speed.

    Unfortunately, while this also makes Amazon Music's delay on starting and switching tracks better, it's still noticeably regressed compared to 38. In addition, I'm hitting an assertion with YouTube on certain, though not all, videos over a certain minimum length; in the release builds it just seizes up at that point and refuses to play the video further. Some fiddling in the debugger indicates it might be related to what Chris Trusch was reporting about frameskipping not working right in 45. If I'm really lucky all this is related and I can fix them all by fixing that root problem. None of these are showstoppers but that YouTube assertion is currently my highest priority to fix for the next beta; I have not solved it yet though I may be able to wallpaper it. I'm aiming for beta 2 this coming week.

    On the localization front we have French, German and Japanese translations, the latter from the usual group that operates separately from Chris T's work in issue 42. I'd like to get a couple more in the can before our planned release on or about September 13. If you can help, please sign up.

    Planet MozillaNow we’re talking!

    The responses we’ve received to posting the initial design concepts for an updated Mozilla identity have exceeded our expectations. The passion from Mozillians in particular is coming through loud and clear. It’s awesome to witness—and exactly what an open design process should spark.

    Some of the comments also suggest that many people have joined this initiative in progress and may not have enough context for why we are engaging in this work.

    Since late 2014, we’ve periodically been fielding a global survey to understand how well Internet users recognize and perceive Mozilla and Firefox. The data has shown that while we are known, we’re not understood.

    • Less than 30% of people polled know we do anything other than make Firefox.
    • Many confuse Mozilla with our Firefox browser.
    • Firefox does not register any distinct attributes from our closest competitor, Chrome.

    We can address these challenges by strengthening our core with renewed focus on Firefox, prototyping the future with agile Internet of Things experiments and revolutionary platform innovation, and growing our influence by being clear about the Internet issues we stand for. All efforts now underway.

    Supporting these efforts and helping to clarify what Mozilla stands for requires visual assets that reinforce our purpose and vision. Our current visual toolkit for Mozilla is limited to a rather generic wordmark and a small color palette. And so naturally, we’ve all filled that void with our own interpretations of how the brand should look.

    Our brand will always need to be expressed across a variety of communities, projects, programs, events, and more. But without a strong foundation of a few clearly identifiable Mozilla assets that connect all of our different experiences, we are at a loss. The current proliferation of competing visual expressions contributes to the confusion that Internet users have about us.

    For us to be able to succeed at the work we need to do, we need others to join us. To get others to join us, we need to be easier to find, identify and understand. That’s the net impact we’re seeking from this work.

    Doubling down on open.

    The design concepts being shared now are initial directions that will continue to be refined and that may spark new derivations. It’s highly unusual for work at this phase of development to be shown publicly and for a forum to exist to provide feedback before things are baked. Since we’re Mozilla, we’re leaning into our open-source ethos of transparency and participation.

    It’s also important to remember that we’re showing entire design systems here, not just logos. We’re pressure-testing which of these design directions can be expanded to fit the brilliant variety of programs, projects, events, and more that encompass Mozilla.

    If you haven’t had a chance to read earlier blog posts about the formation of the narrative territories, have a look. This design work is built from a foundation of strategic thinking about where Mozilla is headed over the next five years, the makeup of our target audience, and what issues we care about and want to amplify. All done with an extraordinary level of openness.

    We’re learning from the comments, especially the constructive ones, and are grateful that people are taking the time to write them. We’ll continue to share work as it evolves. Thanks to everyone who has engaged so far, and here’s to keeping the conversation going.

    Planet MozillaSomething You Know And… Something You Know

    The email said:

    To better protect your United MileagePlus® account, later this week, we’ll no longer allow the use of PINs and implement two-factor authentication.

    This is united.com’s idea of two-factor authentication:

    united.com screenshot asking two security questions because my device is unknown

    It doesn’t count as proper “Something You Have”, if you can bootstrap any new device into “Something You Have” with some more “Something You Know”.

    Planet MozillaFoundation Demos August 19 2016

    Planet MozillaWebdev Beer and Tell: August 2016

    Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

    Planet MozillaImproving Pytest-HTML: My Outreachy Project

    Ana Ribero of the Outreachy Summer 2016 program cohort describes her experience in the program and what she did to improve Mozilla's Pytest-HTML QA tools.

    Planet MozillaA Simpler Add-on Review Process

    In 2011, we introduced the concept of “preliminary review” on AMO. Developers who wanted to list add-ons that were still being tested or were experimental in nature could opt for a more lenient review with the understanding that they would have reduced visibility. However, having two review levels added unnecessary complexity for the developers submitting add-ons, and the reviewers evaluating them. As such, we have implemented a simpler approach.

    Starting on August 22nd, there will be one review level for all add-ons listed on AMO. Developers who want to reduce the visibility of their add-ons will be able to set an “experimental” add-on flag in the AMO developer tools. This flag won’t have any effect on how an add-on is reviewed or updated.

    All listed add-on submissions will either get approved or rejected based on the updated review policy. For unlisted add-ons, we’re also unifying the policies into a single set of criteria. They will still be automatically signed and post-reviewed at our discretion.

    We believe this will make it easier to submit, manage, and review add-ons on AMO. Review waiting times have been consistently good this year, and we don’t expect this change to have a significant impact on them. It should also make it easier to work on AMO code, setting up a simpler codebase for future improvements. We hope this makes the lives of our developers and reviewers easier, and we thank you for your continued support.

    Planet MozillaAuditing the Trump Campaign

    When we opened our web form to allow people to make suggestions for open source projects that might benefit from a Secure Open Source audit, some joker submitted an entry as follows:

    • Project Name: Donald J. Trump for President
    • Project Website: https://www.donaldjtrump.com/
    • Project Description: Make America great again
    • What is the maintenance status of the project? Look at the polls, we are winning!
    • Has the project ever been audited before? Its under audit all the time, every year I get audited. Isn’t that unfair? My business friends never get audited.

    Ha, ha. But it turns out it might have been a good idea to take the submission more seriously…

    If you know of an open source project (as opposed to a presidential campaign) which meets our criteria and might benefit from a security audit, let us know.

    Planet MozillaServo homebrew nightly builds

    Servo binaries available via Homebrew

    See github.com/servo/homebrew-servo

    $ brew install servo/servo/servo-bin
    $ servo -w http://servo.org # See `servo --help`
    

    Update (every day):

    $ brew update && brew upgrade servo-bin
    

    Switch to older version (earliest version being 2016.08.19):

    $ brew switch servo-bin YYYY.MM.DD
    

    File issues specific to the Homebrew package here, and Servo issues here.

    This package comes without browserhtml.

    Planet MozillaRemoving the PowerShell curl alias?

    PowerShell is a spiced up command line shell made by Microsoft. According to some people, it is a really useful and good shell alternative.

    Already a long time ago, we got bug reports from confused users who couldn’t use curl from their PowerShell prompts, and it didn’t take long until we figured out that Microsoft had added aliases for both curl and wget. The aliases made the shell invoke its own command, called “Invoke-WebRequest”, whenever curl or wget was entered. Invoke-WebRequest is PowerShell’s own version of a command line tool for fiddling with URLs.

    Invoke-WebRequest is of course not anywhere near similar to either curl or wget, and it doesn’t support any of their command line options or anything. The aliases really don’t help users. No user who would want the actual curl or wget is helped by these aliases, and users who don’t know about the real curl and wget won’t use the aliases. They were and remain pointless. But they’ve remained a thorn in my side ever since: I know that they are there, confusing users every now and then – not confusing me personally, since I’m not really a Windows guy.

    Fast forward to modern days: Microsoft released PowerShell as open source on github yesterday. Without much further ado, I filed a Pull-Request, asking the aliases to be removed. It is a minuscule, 4 line patch. It took way longer to git clone the repo than to make the actual patch and submit the pull request!

    It took 34 minutes for them to close the pull request:

    “Those aliases have existed for multiple releases, so removing them would be a breaking change.”

    To be honest, I didn’t expect them to merge it easily. I figure they added those aliases for a reason back in the day and it seems unlikely that I as an outsider would just make them change that decision just like this out of the blue.

    But the story didn’t end there. Obviously more Microsoft people gave the PR some attention and more comments were added. Like this:

    “You bring up a great point. We added a number of aliases for Unix commands but if someone has installed those commands on WIndows, those aliases screw them up.

    We need to fix this.”

    So, maybe it will trigger a change anyway? The story is ongoing…

    Planet MozillaCulture Shock

    I’ve been meaning to get around to posting this for… maybe fifteen years now? Twenty? At least I can get it off my desk now.

    As usual, it’s safe to assume that I’m not talking about only one thing here.

    I got this document about navigating culture shock from an old family friend, an RCMP negotiator now long retired. I understand it was originally prepared for Canada’s Department of External Affairs, now Global Affairs Canada. As the story made it to me, the first duty posting of all new RCMP recruits used to (and may still?) be to a detachment stationed outside their home province, where the predominant language spoken wasn’t their first, and this was one of the training documents intended to prepare recruits and their families for that transition.

    It was old when I got it 20 years ago, a photocopy of a mimeograph of something typeset on a Selectric years before; even then, the RCMP and External Affairs had been collecting information about the performance of new hires in high-stress positions in new environments for a long time. There are some obviously dated bits – “writing letters back home” isn’t really a thing anymore in the stamped-envelope sense they mean and “incurring high telephone bills”, well. Kids these days, they don’t even know, etcetera. But to a casual search the broad strokes of it are still valuable, and still supported by recent data.

    Traditionally, the stages of cross-cultural adjustment have been viewed as a U curve. What this means is, that the first months in a new culture are generally exciting – this is sometimes referred to as the “honeymoon” or “tourist” phase. Inevitably, however, the excitement wears off and coping with the new environment becomes depressing, burdensome, anxiety provoking (everything seems to become a problem; housing, neighbors, schooling, health care, shopping, transportation, communication, etc.) – this is the down part of the U curve and is precisely the period of so-called “culture shock”. Gradually (usually anywhere from 6 months to a year) an individual learns to cope by becoming involved with, and accepted by, the local people. Culture shock is over and we are back, feeling good about ourselves and the local culture.

    Spoiler alert: It doesn’t always work out that way. But if you know what to expect, and what you’re looking for, you can recognize when things are going wrong and do something about it. That’s the key point, really: this slow rollercoaster you’re on isn’t some sign of weakness or personal failure. It’s an absolutely typical human experience, and like a lot of experiences, being able to point to it and give it a name also gives you some agency over it you may not have thought you had.

    I have more to say about this – a lot more – but for now here you go: “Adjusting To A New Environment”, date of publication unknown, author unknown (likely Canada’s Department of External Affairs.) It was a great help to me once upon a time, and maybe it will be for you.

    Planet MozillaIntern Presentations 2016, 18 Aug 2016

    Group 5 of Mozilla's 2016 Interns presenting what they worked on this summer. Click the Chapters Tab for a topic list. Nathanael Alcock- MV Dimitar...

    Planet MozillaWhat’s Up with SUMO – 18th August

    Hello, SUMO Nation!

    It’s good to be back and know you’re reading these words :-) A lot more happening this week (have you heard about Activate Mozilla?), so go through the updates if you have not attended all our meetings – and do let us know if there’s anything else you want to see in the blog posts – in the comments!

    Welcome, new contributors!

    If you just joined us, don’t hesitate – come over and say “hi” in the forums!

    Contributors of the week

    Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

    Most recent SUMO Community meeting

    The next SUMO Community meeting

    • …is happening on the 24th of August!
    • If you want to add a discussion topic to the upcoming meeting agenda:
      • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
      • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
      • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

    Community

    Social

    Support Forum

    • The SUMO Firefox 48 Release Report is open for feedback: please add your links, tweets, bugs, threads and anything else that you would like to have highlighted in the report.
    • More details about the audit for the Support Forum:
      • The audit will only be happening this week and next
      • It will determine the forum contents that will be kept and/or refreshed
      • The Get Involved page will be rewritten and designed as it goes through a “Think-Feel-Do” exercise.
      • Please take a few minutes this week to read through the document and make a comment or edit.
      • One of the main questions is “what are the things that we cannot live without in the new forum?” – if you have an answer, write more in the thread!
    • Join Rachel in the SUMO Vidyo room on Friday between noon and 14:00 PST for answering forum threads and general hanging out!

    Knowledge Base & L10n

    Firefox

    • for Android
      • Version 49 will not have many features, but will include bug and security fixes.
    • for iOS
      • Version 49 will not have many features, but will include bug and security fixes.

    … and that’s it for now, fellow Mozillians! We hope you’re looking forward to a great weekend and we hope to see you soon – online or offline! Keep rocking the helpful web!

    Planet MozillaThe Future of Programming

    Here’s a talk I watched some months ago, and could’ve sworn I’d written a blogpost about. Ah well, here it is:

    Bret Victor – The Future of Programming from Bret Victor on Vimeo.

    It’s worth the 30min of your attention if you have interest in programming or computer history (which you should have an interest in if you are a developer). But here it is in sketch:

    The year is 1973 (well, it's 2013, but the speaker pretends it is 1973), and the future of programming is bright. Instead of programming in procedures typed sequentially in text files, we are at the cusp of directly manipulating data with goals and constraints that are solved concurrently in spatial representations.

    The speaker (Bret Victor) highlights recent developments in the programming of automated computing machines, and uses it to suggest the inevitability of a very different future than we currently live and work in.

    It highlights how much was ignored in my world-class post-secondary CS education. It highlights how much is lost by hiding research behind paywalled journals. It highlights how many times I’ve had to rewrite the wheel when, more than a decade before I was born, people were prototyping hoverboards.

    It makes me laugh. It makes me sad. It makes me mad.

    …that’s enough of that. Time to get back to the wheel factory.

    :chutten


    Planet MozillaConnected Devices Weekly Program Update, 18 Aug 2016

    Weekly project updates from the Mozilla Connected Devices team.

    Planet MozillaReps weekly, 18 Aug 2016

    This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

    Planet Mozilla'Tootsie Pop' Followup

    A little while back, I wrote up a tentative proposal I called the Tootsie Pop model for unsafe code. It’s safe to say that this model was not universally popular. =) There was quite a long and fruitful discussion on discuss. I wanted to write a quick post summarizing my main take-away from that discussion and to talk a bit about the plans to push the unsafe discussion forward.

    The importance of the unchecked-get use case

    For me, the most important lesson was the importance of the unchecked get use case. Here the idea is that you have some (safe) code which is indexing into a vector:

    fn foo() {
        let vec: Vec<i32> = vec![...];
        ...
        vec[i]
        ...
    }

    You have found (by profiling, but of course) that this code is kind of slow, and you have determined that the bounds-check caused by indexing is a contributing factor. You can’t rewrite the code to use iterators, and you are quite confident that the index will always be in-bounds, so you decide to dip your toe into unsafe by calling get_unchecked:

    fn foo() {
        let vec: Vec<i32> = vec![...];
        ...
        unsafe { vec.get_unchecked(i) }
        ...
    }

    Now, under the precise model that I proposed, this means that the entire containing module is considered to be within an unsafe abstraction boundary, and hence the compiler will be more conservative when optimizing, and as a result the function may actually run slower when you skip the bounds check, not faster. (A very similar example is invoking str::from_utf8_unchecked, which skips over the utf-8 validation check.)

    Many people were not happy about this side-effect, and I can totally understand why. After all, this code isn’t mucking about with funny pointers or screwy aliasing – the unsafe block is a kind of drop-in replacement for what was there before, so it seems odd for it to have this effect.

    Where to go from here

    Since posting the last blog post, we’ve started a longer-term process for settling and exploring a lot of these interesting questions about the proper use of unsafe. At this point, we’re still in the data gathering phase. The idea here is to collect and categorize interesting examples of unsafe code. I’d prefer at this point not to be making decisions per se about what is legal or not – although in some cases something may be quite unambiguous – but rather just try to get a good corpus with which we can evaluate different proposals.

    While I haven’t given up on the Tootsie Pop model, I’m also not convinced it’s the best approach. But whatever we do, I still believe we should strive for something that is safe and predictable by default – something where the rules can be summarized on a postcard, at least if you don’t care about getting every last bit of optimization. But, as the unchecked-get example makes clear, it is important that we also enable people to obtain full optimization, possibly with some amount of opt-in. I’m just not yet sure what’s the right setup to balance the various factors.

    As I wrote in my last post, I think that we have to expect that whatever guidelines we establish, they will have only a limited effect on the kind of code that people write. So if we want Rust code to be reliable in practice, we have to strive for rules that permit the things that people actually do: and the best model we have for that is the extant code. This is not to say we have to achieve total backwards compatibility with any piece of unsafe code we find in the wild, but if we find we are invalidating a common pattern, it can be a warning sign.

    Planet MozillaHTTP/2 connection coalescing

    Section 9.1.1 in RFC7540 explains how HTTP/2 clients can reuse connections. This is my lengthy way of explaining how this works in reality.

    Many connections in HTTP/1

    With HTTP/1.1, browsers typically use 6 connections per origin (host name + port). They do this to overcome the problems in HTTP/1 and how it uses TCP – as each connection will do a fair amount of waiting. Plus, each connection is slow at start and therefore limited in how much data you can get and send quickly; you multiply that data amount with each additional connection. This makes the browser get more data faster (than just using one connection).

    6 connections

    Add sharding

    Web sites with many objects also regularly invent new host names to trigger browsers to use even more connections. A practice known as “sharding”. 6 connections for each name. So if you instead make your site use 4 host names you suddenly get 4 x 6 = 24 connections instead. Mostly all those host names resolve to the same IP address in the end anyway, or the same set of IP addresses. In reality, some sites use many more than just 4 host names.

    24 connections

    The sad reality is that a very large percentage of connections used for HTTP/1.1 are only ever used for a single HTTP request, and a very large share of the connections made for HTTP/1 are so short-lived they actually never leave the slow start period before they’re killed off again. Not really ideal.

    One connection in HTTP/2

    With the introduction of HTTP/2, the HTTP clients of the world are going toward using a single TCP connection for each origin. The idea being that one connection is better in packet loss scenarios, it makes priorities/dependencies work and reusing that single connection for many more requests will be a net gain. And as you remember, HTTP/2 allows many logical streams in parallel over that single connection so the single connection doesn’t limit what the browsers can ask for.

    Unsharding

    The sites that created all those additional host names to make the HTTP/1 browsers use many connections now work against the HTTP/2 browsers’ desire to decrease the number of connections to a single one. Sites don’t want to switch back to using a single host name because that would be a significant architectural change and there are still a fair number of HTTP/1-only browsers still in use.

    Enter “connection coalescing”, or “unsharding” as we sometimes like to call it. You won’t find either term used in RFC7540, as it merely describes this concept in terms of connection reuse.

    Connection coalescing means that the browser tries to determine which of the remote hosts it can reach over the same TCP connection. The different browsers have slightly different heuristics here and some don’t do it at all, but let me try to explain how they work – as far as I know and at this point in time.

    Coalescing by example

    Let’s say that this cool imaginary site “example.com” has two name entries in DNS: A.example.com and B.example.com. When resolving those names over DNS, the client gets a list of IP addresses back for each name. A list that very well may contain a mix of IPv4 and IPv6 addresses. One list for each name.

    You must also remember that HTTP/2 is also only ever used over HTTPS by browsers, so for each origin speaking HTTP/2 there’s also a corresponding server certificate with a list of names, or a wildcard pattern, for which that server is authorized to respond.

    In our example we start out by connecting the browser to A. Let’s say resolving A returns the IPs 192.168.0.1 and 192.168.0.2 from DNS, so the browser goes on and connects to the first of those addresses, the one ending with “1”. The browser gets the server cert back in the TLS handshake and as a result of that, it also gets a list of host names the server can deal with: A.example.com and B.example.com. (it could also be a wildcard like “*.example.com”)

    If the browser then wants to connect to B, it’ll resolve that host name too to a list of IPs. Let’s say 192.168.0.2 and 192.168.0.3 here.

    Host A: 192.168.0.1 and 192.168.0.2
    Host B: 192.168.0.2 and 192.168.0.3

    Now hold it. Here it comes.

    The Firefox way

    Host A has two addresses, host B has two addresses. The lists of addresses are not the same, but there is an overlap – both lists contain 192.168.0.2. And the host A has already stated that it is authoritative for B as well. In this situation, Firefox will not make a second connect to host B. It will reuse the connection to host A and ask for host B’s content over that single shared connection. This is the most aggressive coalescing method in use.

    one connection

    The Chrome way

    Chrome features a slightly less aggressive coalescing. In the example above, when the browser has connected to 192.168.0.1 for the first host name, Chrome will require that the IPs for host B contain that specific IP for it to reuse that connection. If the returned IPs for host B really are 192.168.0.2 and 192.168.0.3, that list clearly doesn’t contain 192.168.0.1 and so Chrome will create a new connection to host B.

    Chrome will reuse the connection to host A if resolving host B returns a list that contains the specific IP of the connection host A is already using.
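
    To make the difference concrete, here is a rough JavaScript sketch of the two reuse checks (illustrative pseudocode with made-up names, not actual browser source). It assumes we already know the IP the existing connection uses, the DNS answers involved, and whether that connection's certificate covers the new host name:

    // conn.ip: the IP the existing connection is using, e.g. '192.168.0.1'
    // conn.dnsAddresses: the DNS answer for the host the connection was made for
    // newHostAddresses: the DNS answer for the new host we want to reach
    // certCoversNewHost: does the connection's certificate list the new host name?

    function firefoxCanReuse(conn, newHostAddresses, certCoversNewHost) {
      // Firefox: the cert must cover the new name, and the two DNS answers
      // only need to overlap on at least one IP address.
      return certCoversNewHost &&
             newHostAddresses.some(ip => conn.dnsAddresses.includes(ip));
    }

    function chromeCanReuse(conn, newHostAddresses, certCoversNewHost) {
      // Chrome: the cert must cover the new name, and the new host's DNS answer
      // must contain the exact IP the existing connection already uses.
      return certCoversNewHost && newHostAddresses.includes(conn.ip);
    }

    With the example above (a connection to 192.168.0.1, host B resolving to 192.168.0.2 and 192.168.0.3), the Firefox check passes thanks to the shared 192.168.0.2 while the Chrome check fails, so Chrome opens a new connection.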

    The Edge and Safari ways

    They don’t do coalescing at all, so each host name will get its own single connection. Better than the 6 connections from HTTP/1 but for very sharded sites that means a lot of connections even in the HTTP/2 case.

    curl also doesn’t coalesce anything (yet).

    Surprises and a way to mitigate them

    Given some comments in the Firefox bugzilla, the aggressive coalescing sometimes causes some surprises. Especially when you have for example one IPv6-only host A and a second host B with both IPv4 and IPv6 addresses. Asking for data on host A can then still use IPv4 when it reuses a connection to B (assuming that host A covers host B in its cert).

    In the rare case where a server gets a resource request for an authority (or scheme) it can’t serve, there’s a dedicated error code 421 in HTTP/2 that it can respond with and the browser can then  go back and retry that request on another connection.

    Starts out with 6 anyway

    Before the browser knows that the server speaks HTTP/2, it may fire up 6 connection attempts so that it is prepared to get the remote site at full speed. Once it figures out that it doesn’t need all those connections, it will kill off the unnecessary unused ones and over time trickle down to one. Of course, on subsequent connections to the same origin the client may have the version information cached so that it doesn’t have to start off presuming HTTP/1.

    Planet Mozilla360/VR Meet-Up

    We explore the potential of this evolving medium and update you on the best new tools and workflows, covering: -Preproduction: VR pre-visualization, Budgeting, VR Storytelling...

    Planet MozillaPracticing “Open” at Mozilla

    Mozilla works to bring openness and opportunity for all into the Internet and online life.  We seek to reflect these values in how we operate.  At our founding it was easy to understand what this meant in our workflow — developers worked with open code and project management through bugzilla.  This was complemented with an open workflow through the social media of the day — mailing lists and the chat or “messenger” element, known as Internet Relay Chat (“irc”).  The tools themselves were also open-source and the classic “virtuous circle” promoting openness was pretty clear.

    Today the setting is different. We were wildly successful with the idea of engineers working in open systems. Today open source code and shared repositories are mainstream, and in many areas they are the expected default and best practice. On the other hand, the newer communication and workflow tools vary in their openness, with some particularly open and some closed proprietary code. Access and access control is a constant variable. In addition, at Mozilla we’ve added a bunch of new types of activities beyond engineering, we’ve increased the number of employees dramatically and we’re a bit behind on figuring out what practicing open in this setting means.

    I’ve decided to dedicate time to this and look at ways to make sure our goals of building open practices into Mozilla are updated and more fully developed.  This is one of the areas of focus I mentioned in an earlier post describing where I spend my time and energy.  

    So far we have three early stage pilots underway sponsored by the Office of the Chair:

    • Opening up the leadership recruitment process to more people
    • Designing inclusive decision-making practices
    • Fostering meaningful conversations and exchange of knowledge across the organization, with a particular focus on bettering communications between Mozillians and leadership.

    Follow-up posts will have more info about each of these projects.  In general the goal of these experiments is to identify working models that can be adapted by others across Mozilla. And beyond that, to assist other Mozillians figure out new ways to “practice open” at Mozilla.

    Planet MozillaAnnouncing Rust 1.11

    The Rust team is happy to announce the latest version of Rust, 1.11. Rust is a systems programming language focused on safety, speed, and concurrency.

    As always, you can install Rust 1.11 from the appropriate page on our website, and check out the detailed release notes for 1.11 on GitHub. 1109 patches were landed in this release.

    What’s in 1.11 stable

    Much of the work that went into 1.11 was with regards to compiler internals that are not yet stable. We’re excited about features like MIR becoming the default and the beginnings of incremental compilation, and the 1.11 release has laid the groundwork.

    As for user-facing changes, last release, we talked about the new cdylib crate type.

    The existing dylib dynamic library format will now be used solely for writing a dynamic library to be used within a Rust project, while cdylibs will be used when compiling Rust code as a dynamic library to be embedded in another language. With the 1.10 release, cdylibs are supported by the compiler, but not yet in Cargo. This format was defined in RFC 1510.

    Well, in Rust 1.11, support for cdylibs has landed in Cargo! By adding this to your Cargo.toml:

    [lib]
    crate-type = ["cdylib"]
    

    You’ll get one built.

    In the standard library, the default hashing function was changed, from SipHash 2-4 to SipHash 1-3. We have been thinking about this for a long time, as far back as the original decision to go with 2-4:

    we proposed SipHash-2-4 as a (strong) PRF/MAC, and so far no attack whatsoever has been found, although many competent people tried to break it. However, fewer rounds may be sufficient and I would be very surprised if SipHash-1-3 introduced weaknesses for hash tables.

    See the detailed release notes for more.

    Library stabilizations

    See the detailed release notes for more.

    Cargo features

    See the detailed release notes for more.

    Contributors to 1.11

    We had 126 individuals contribute to 1.11. Thank you so much!

    • Aaklo Xu
    • Aaronepower
    • Aleksey Kladov
    • Alexander Polyakov
    • Alexander Stocko
    • Alex Burka
    • Alex Crichton
    • Alex Ozdemir
    • Alfie John
    • Amanieu d’Antras
    • Andrea Canciani
    • Andrew Brinker
    • Andrew Paseltiner
    • Andrey Tonkih
    • Andy Russell
    • Ariel Ben-Yehuda
    • bors
    • Brian Anderson
    • Carlo Teubner
    • Carol (Nichols || Goulding)
    • CensoredUsername
    • cgswords
    • cheercroaker
    • Chris Krycho
    • Chris Tomlinson
    • Corey Farwell
    • Cristian Oliveira
    • Daan Sprenkels
    • Daniel Firth
    • diwic
    • Eduard Burtescu
    • Eduard-Mihai Burtescu
    • Emilio Cobos Álvarez
    • Erick Tryzelaar
    • Esteban Küber
    • Fabian Vogt
    • Felix S. Klock II
    • flo-l
    • Florian Berger
    • Frank McSherry
    • Georg Brandl
    • ggomez
    • Gleb Kozyrev
    • Guillaume Gomez
    • Hendrik Sollich
    • Horace Abenga
    • Huon Wilson
    • Ivan Shapovalov
    • Jack O’Connor
    • Jacob Clark
    • Jake Goulding
    • Jakob Demler
    • James Alan Preiss
    • James Lucas
    • James Miller
    • Jamey Sharp
    • Jeffrey Seyfried
    • Joachim Viide
    • John Ericson
    • Jonas Schievink
    • Jonathan L
    • Jonathan Price
    • Jonathan Turner
    • Joseph Dunne
    • Josh Stone
    • Jupp Müller
    • Kamal Marhubi
    • kennytm
    • Léo Testard
    • Liigo Zhuang
    • Loïc Damien
    • Luqman Aden
    • Manish Goregaokar
    • Mark Côté
    • marudor
    • Masood Malekghassemi
    • Mathieu De Coster
    • Matt Kraai
    • Mátyás Mustoha
    • M Farkas-Dyck
    • Michael Necio
    • Michael Rosenberg
    • Michael Woerister
    • Mike Hommey
    • Mitsunori Komatsu
    • Morten H. Solvang
    • Ms2ger
    • Nathan Moos
    • Nick Cameron
    • Nick Hamann
    • Nikhil Shagrithaya
    • Niko Matsakis
    • Oliver Middleton
    • Oliver Schneider
    • Paul Jarrett
    • Pavel Pravosud
    • Peter Atashian
    • Peter Landoll
    • petevine
    • Reeze Xia
    • Scott A Carr
    • Sean McArthur
    • Sebastian Thiel
    • Seo Sanghyeon
    • Simonas Kazlauskas
    • Srinivas Reddy Thatiparthy
    • Stefan Schindler
    • Steve Klabnik
    • Steven Allen
    • Steven Burns
    • Tamir Bahar
    • Tatsuya Kawano
    • Ted Mielczarek
    • Tim Neumann
    • Tobias Bucher
    • Tshepang Lekhonkhobe
    • Ty Coghlan
    • Ulrik Sverdrup
    • Vadim Petrochenkov
    • Vincent Esche
    • Wangshan Lu
    • Will Crichton
    • Without Boats
    • Wojciech Nawrocki
    • Zack M. Davis
    • 吴冉波

    Planet MozillaHerding Automation Infrastructure

    For every commit to Firefox, we run a battery of builds and automated tests on the resulting source tree to make sure that the result still works and meets our correctness and performance quality criteria. This is expensive: every new push to our repository implies hundreds of hours of machine time. However, this type of quality control is essential to ensure that the product that we’re shipping to users is something that we can be proud of.

    But what about evaluating the quality of the product which does the building and testing? Who does that? And by what criteria would we say that our automation system is good or bad? Up to now, our procedures for this have been rather embarrassingly ad hoc. With some exceptions (such as OrangeFactor), our QA process amounts to motivated engineers doing a one-off analysis of a particular piece of the system, filing a few bugs, then forgetting about it. Occasionally someone will propose turning build and test automation for a specific platform on or off in mozilla.dev.planning.

    I’d like to suggest that the time has come to take a more systemic approach to this class of problem. We spend a lot of money on people and machines to maintain this infrastructure, and I think we need a more disciplined approach to make sure that we are getting good value for that investment.

    As a starting point, I feel like we need to pay closer attention to the following characteristics of our automation:

    • End-to-end times from push submission to full completion of all build and test jobs: if this gets too long, it makes the lives of all sorts of people painful — tree closures become longer when they happen (because it takes longer to either notice bustage or find out that it’s fixed), developers have to wait longer for try pushes (making them more likely to just push directly to an integration branch, causing the former problem…)
    • Number of machine hours consumed by the different types of test jobs: our resources are large (relatively speaking), but not unlimited. We need proper accounting of where we’re spending money and time. In some cases, resources used to perform a task that we don’t care that much about could be redeployed towards an underresourced task that we do care about. A good example of this was linux32 talos (performance tests) last year: when the question was raised of why we were doing performance testing on this specific platform (in addition to Linux64), no one could come up with a great justification. So we turned the tests off and reconfigured the machines to do Windows performance tests (where we were suffering from a severe lack of capacity).

    Over the past week, I’ve been prototyping a project I’ve been calling “Infraherder” which uses the data inside Treeherder’s job database to try to answer these questions (and maybe some others that I haven’t thought of yet). You can see a hacky version of it on my github fork.

    Why implement this in Treeherder, you might ask? Two reasons. First, Treeherder already stores the job data in a historical archive that’s easy to query (using SQL). Using this directly makes sense over creating a new data store. Second, Treeherder provides a useful set of front-end components with which to build a UI to visualize this information. I actually did my initial prototyping inside an ipython notebook, but it quickly became obvious that for my results to be useful to others at Mozilla we needed some kind of real dashboard that people could dig into.

    On the Treeherder team at Mozilla, we’ve found the New Relic software to be invaluable for diagnosing and fixing quality and performance problems for Treeherder itself, so I took some inspiration from it (unfortunately the problem space of our automation is not quite the same as that of a web application, so we can’t just use New Relic directly).

    There are currently two views in the prototype, a “last finished” view and a “total” view. I’ll describe each of them in turn.

    Last finished

    This view shows the counts of which scheduled automation jobs were the “last” to finish. The hypothesis is that jobs that are frequently last indicate blockers to developer productivity, as they are the “long pole” in being able to determine if a push is good or bad.

    Right away from this view, you can see the mochitest devtools 9 test is often the last to finish on try, with Windows 7 mochitest debug a close second. Assuming that the reasons for this are not resource starvation (they don’t appear to be), we could probably get results into the hands of developers and sheriffs faster if we split these jobs into two separate ones. I filed bugs 1294489 and 1294706 to address these issues.

    Total Time

    This view just shows which jobs are taking up the most machine hours.

    Probably unsurprisingly, it seems like it’s Android test jobs that are taking up most of the time here: these tests are running on multiple layers of emulation (AWS instances to emulate Linux hardware, then the already slow QEMU-based Android simulator) so are not expected to have fast runtime. I wonder if it might not be worth considering running these tests on faster instances and/or bare metal machines.

    Linux32 debug tests seem to be another large consumer of resources. Market conditions make turning these tests off altogether a non-starter (see bug 1255890), but how much value do we really derive from running the debug version of linux32 through automation (given that we’re already doing the same for 64-bit Linux)?

    Request for comments

    I’ve created an RFC for this project on Google Docs, as a sort of test case for a new process we’re thinking of using in Engineering Productivity for these sorts of projects. If you have any questions or comments, I’d love to hear them! My perspective on this vast problem space is limited, so I’m sure there are things that I’m missing.

    Planet MozillaThe Future Of The Planet

    I’m not sure who said it first, but I’ve heard a number of people say that RSS solved too many problems to be allowed to live.

    I’ve recently become the module owner of Planet Mozilla, a venerable communication hub and feed aggregator here at Mozilla. Real talk here: I’m not likely to get another chance in my life to put “seize control of planet” on my list of quarterly deliverables, much less cross it off in less than a month. Take that, high school guidance counselor and your “must try harder”!

    I warned my boss that I’d be milking that joke until sometime early 2017.

    On a somewhat more serious note: We have to decide what we’re going to do with this thing.

    Planet Mozilla is a bastion of what’s by now the Old Web – Anil Dash talks about it in more detail here, the suite of loosely connected tools and services that made the 1.0 Web what it was. The hallmarks of that era – distributed systems sharing information streams, decentralized and mutually supportive without codependency – date to a time when the economics of software, hardware, connectivity and storage were very different. I’ve written a lot more about that here, if you’re interested, but that doesn’t speak to where we are now.

    Please note that when talk about “Mozilla’s needs” below, I don’t mean the company that makes Firefox or the non-profit Foundation. I mean the mission and people in our global community of communities that stand up for it.

    I think the following things are true, good things:

    • People still use Planet heavily, sometimes even to the point of “rely on”. Some teams and community efforts definitely rely heavily on subplanets.
    • There isn’t a better place to get a sense of the scope of Mozilla as a global, cultural organization. The range and diversity of articles on Planet is big and weird and amazing.
    • The organizational and site structure of Planet speaks well of Mozilla and Mozilla’s values in being open, accessible and participatory.
    • Planet is an amplifier giving participants and communities an enormous reach and audience they wouldn’t otherwise have to share stories that range from technical and mission-focused to human and deeply personal.

    These things are also true, but not all that good:

    • It’s difficult to say what or who Planet is for right now. I don’t have and may not be able to get reliable usage metrics.
    • The egalitarian nature of feeds is a mixed blessing: On one hand, Planet as a forum gives our smallest and most remote communities the same platform as our executive leadership. On the other hand, headlines ranging from “Servo now self-aware” and “Mozilla to purchase Alaska” to “I like turnips” are all equal citizens of Planet, sorted only by time of arrival.
    • Looking at Planet via the Web is not a great experience; if you’re not using a reader even to skim, you’re getting a dated user experience and missing a lot. The mobile Web experience is nonexistent.
    • The Planet software is, by any reasonable standards, a contraption. A long-running and proven contraption, for sure, but definitely a contraption.

    Maintaining Planet isn’t particularly expensive. But it’s also not free, particularly in terms of opportunity costs and user-time spent. I think it’s worth asking what we want Planet to accomplish, whether Planet is the right tool for that, and what we should do next.

    I’ve got a few ideas about what “next” might look like; I think there are four broad categories.

    1. Do nothing. Maybe reskin the site, move the backing repo from Subversion to Github (currently planned) but otherwise leave Planet as is.
    2. Improve Planet as a Planet, i.e: as a feed aggregator and communication hub.
    3. Replace Planet with something better suited to Mozilla’s needs.
    4. Replace Planet with nothing.

    I’m partial to the “Improve Planet as a Planet” option, but I’m spending a lot of time thinking about the others. Not (or at least not only) because I’m lazy, but because I still think Planet matters. Whatever we choose to do here should be a use of time and effort that leaves Mozilla and the Web better off than they are today, and better off than if we’d spent that time and effort somewhere else.

    I don’t think Planet is everything Planet could be. I have some ideas, but also don’t think anyone has a sense of what Planet is to its community, or what Mozilla needs Planet to be or become.

    I think we need to figure that out together.

    Hi, Internet. What is Planet to you? Do you use it regularly? Do you rely on it? What do you need from Planet, and what would you like Planet to become, if anything?

    These comments are open and there’s a thread open at the Mozilla Community discourse instance where you can talk about this, and you can always email me directly if you like.

    Thanks.

    * – Mozilla is not to my knowledge going to purchase Alaska. I mean, maybe we are and I’ve tipped our hand? I don’t get invited to those meetings but it seems unlikely. Is Alaska even for sale? Turnips are OK, I guess.

    Planet WebKitRelease Notes for Safari Technology Preview Release 11

    Safari Technology Preview Release 11 is now available for download for both macOS Sierra betas and OS X El Capitan. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 203771–204289.

    JavaScript

    • Updated behavior for a class that extends null to follow ECMAScript 2016 specifications (r204058)
    • Made the second argument for Function.prototype.apply allow null, undefined or array-like data (r203790)
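
    As a quick illustration of the second item (an informal example, not taken from the release notes):

    function add(a, b) { return a + b; }

    // The second argument may be any array-like object, not just an Array:
    add.apply(null, { length: 2, 0: 1, 1: 2 });  // 3

    // It may also be null or undefined, which means "call with no arguments":
    add.apply(null, undefined);  // NaN, since a and b are both undefined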

    Web APIs

    • Added support for DOMTokenList.replace() (r204161)
    • Added support for Element.getAttributeNames() (r203852)
    • Made the first parameter mandatory for canvas.getContext() and probablySupportsContext() (r203845)
    • Made the first two parameters mandatory for window.postMessage() (r203846)
    • Made the first parameter mandatory for Document.execCommand() and queryCommand*() (r203784)
    • Made the first parameter mandatory for HTMLMediaElement.canPlayType() (r203806)
    • Made the first parameter mandatory for indexed property getters (r203788)
    • Made the first parameter mandatory for Range.createContextualFragment() (r203796)
    • Made the first parameter mandatory for setTimeout() and setInterval() (r203805)
    • Made the first parameter mandatory for SVGDocument.createEvent() (r203821)
    • Made the parameter mandatory for a named property getter (r203797)
    • Made the parameter mandatory for table.deleteRow() and body.deleteRow() (r203840)
    • Made the parameter mandatory for tr.deleteCell() (r203833)
    • Made the parameters mandatory for CanvasGradient.addColorStop() (r203820)
    • Made the parameters mandatory for DOMParser.parseFromString() (r203800)
    • Made the parameters mandatory for Event.initEvent() (r203848)
    • Made the parameters mandatory for insertAdjacentText() and insertAdjacentHTML() (r203803)
    • Enabled strict type checking for nullable attribute setters for wrapper types (r203949)
    • Enabled strict type checking for operations’ nullable parameters for wrapper types (r203941)
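
    For context, the first two items above are used roughly like this (an informal sketch; the element and class names are made up):

    // DOMTokenList.replace(): swap one token for another in a single call.
    const card = document.querySelector('.card');   // hypothetical element
    card.classList.replace('theme-light', 'theme-dark');

    // Element.getAttributeNames(): enumerate an element's attribute names.
    for (const name of card.getAttributeNames()) {
      console.log(name, card.getAttribute(name));
    }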

    ApplePay

    • Stopped accepting the deprecated requiredShippingAddressFields and requiredBillingAddressFields properties (r203789)

    Web Inspector

    • Improved the appearance of the Timeline tab when editing the enabled timelines (r204125)
    • Changed the Jump to Line keyboard shortcut to Control-G (⌃G) (r204099)
    • Fixed the grid column resizing element positions (r203991)
    • Added waterfall view to the Network tab and Network Timeline (r203843)
    • Fixed positioning of popovers when the window resizes (r204264)
    • Fixed popover drawing (r204136)
    • Polished UI for the Edit Breakpoint, Open Quickly, and Goto Line dialogs (r204152, r204124)
    • Updated the Visual Styles Sidebar to use a one column layout when narrow (r203807)
    • Fixed the behavior of Home and End keys to match system behavior (r203804)

    MathML

    CSS

    • Aligned the CSS @supports rule with the specification (r203782)
    • Addressed incorrect results for the color-gamut media query (r203903)

    Rendering

    • Fixed performance issues with <marquee> with the truespeed attribute (r204197)

    Media

    • Fixed captions not rendering in the PiP window for a hidden video element (r203799)
    • Addressed media controls not displaying for some autoplaying videos at certain browser dimensions (r203928)
    • Fixed media stream video element to show black when all video tracks disabled (r203826)

    Accessibility

    • Added localizable strings when inserting list types (r203801)
    • Improved the accessibility of media controls (r203913)

    Content Blockers

    • Enabled Content Blockers to block WebSocket connections (r204127)

    Planet MozillaThe Joy of Coding - Episode 68

    mconley livehacks on real Firefox bugs while thinking aloud.

    Planet MozillaNow for the fun part.

    On our open design journey together, we’ve arrived at an inflection point. Today our effort—equal parts open crit, performance art piece, and sociology experiment—takes its logical next step, moving from words to visuals. A roomful of reviewers lean forward in their chairs, ready to weigh in on what we’ve done so far. Or so we hope.

    We’re ready. The work with our agency partner, johnson banks, has great breadth and substantial depth for first-round concepts (possibly owing to our rocket-fast timeline). Our initial response to the work has, we hope, helped make it stronger and more nuanced. We’ve jumped off this cliff together, holding hands and bracing for the splash.

    Each of the seven concepts we’re sharing today leads with and emphasizes a particular facet of the Mozilla story. From paying homage to our paleotechnic origins to rendering us as part of an ever-expanding digital ecosystem, from highlighting our global community ethos to giving us a lift from the quotidian elevator open button, the concepts express ideas about Mozilla in clever and unexpected ways.

    There are no duds in the mix. The hard part will be deciding among them, and this is a good problem to have.

    We have our opinions about these paths forward, our early favorites among the field. But for now we’re going to sit quietly and listen to what the voices from the concentric rings of our community—Mozillians, Mozilla fans, designers, technologists, and beyond—have to say in response about them.

    Tag, you’re it.

    Here’s what we’d like you to do, if you’re up for it. Have a look at the seven options and tell us what you think. To make comments about an individual direction and to see its full system, click on its image below.

    Which of these initial visual expressions best captures what Mozilla means to you? Which will best help us tell our story to a youthful, values-driven audience? Which brings to life the Mozilla personality: Gutsy, Independent, Buoyant, For Good?

    If you want to drill down a level, also consider which design idea:

    • Would resonate best around the world?
    • Has the potential to show off modern digital technology?
    • Is most scalable to a variety of Mozilla products, programs, and messages?
    • Would stand the test of time (well…let’s say 5-10 years)?
    • Would make people take notice and rethink Mozilla?

    This is how we’ve been evaluating each concept internally over the past week or so. It’s the framework we’ll use as we share the work for qualitative and quantitative feedback from our key audiences.

    How you deliver your feedback is up to you: writing comments on the blog, uploading a sketch or a mark-up, shooting a carpool karaoke video… bring it on. We’ll be taking feedback on this phase of work for roughly the next two weeks.

    If you’re new to this blog, a few reminders about what we’re not doing. We are not crowdsourcing the final design, nor will there be voting. We are not asking designers to work on spec. We welcome all feedback but make no promise to act on it all (even if such a thing were possible).

    From here, we’ll reduce these seven concepts to three, which we’ll refine further based partially on feedback from people like you, partially on what our design instincts tell us, and very much on what we need our brand identity to communicate to the world. These three concepts will go through a round of consumer testing and live critique in mid-September, and we’ll share the results here. We’re on track to have a final direction by the end of September.

    We trust that openness will prevail over secrecy and that we’ll all learn something in the end. Thanks for tagging along.

    [Images of the seven design concepts]
