Planet Mozilla: Element interactability checks with geckodriver and Firefox 58

If you use Selenium and geckodriver to automate your tests in Firefox, you might see a behavior change with Firefox 58 when using the Element Click or Element Send Keys commands. For both commands, we have now enabled the interactability checks by default. That means that before either operation is performed on an element, we first check whether clicking it or sending keys to it would work from a normal user's perspective at all. If not, a not interactable error is thrown.

If you are wondering why this change was necessary, the answer is that it makes us more conformant with the WebDriver specification.

While enabling this change by default, we are aware of corner cases where we might accidentally throw such a not interactable error, or falsely assume an element is interactable. If you hit such a condition, it would be fantastic to let us know about it, ideally by filing a geckodriver issue with all the required information so that it is reproducible for us.

If the problem causes issues for your test suites but you still want to use Firefox 58, you can use the moz:webdriverClick capability to turn those checks off. Simply set it to false, and the former behavior is restored. But please note that this workaround will only work for Firefox 58, and maybe Firefox 59, after which the old, legacy behavior will be removed.
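As a sketch, here is what a capabilities payload with that opt-out could look like. Only the `moz:webdriverClick` name and its boolean value come from the post; the surrounding dictionary and how you pass it to your driver depend on your Selenium bindings and version.

```python
# Hedged sketch: capabilities that opt out of the new interactability
# checks. Exactly how capabilities are passed varies between Selenium
# client versions; this only shows the payload shape.
capabilities = {
    "browserName": "firefox",
    # false restores the legacy, pre-Firefox-58 click/send-keys behavior
    "moz:webdriverClick": False,
}
```

You would then hand this dictionary to your WebDriver session setup as desired capabilities.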

That’s why we ask you to let us know about any misbehavior when using Firefox 58, so that we have enough time to get it fixed for Firefox 59, or even 58.


Planet Mozilla: Firefox Developer Edition 58 Beta 12 Testday, December 22nd

Hello Mozillians,

We are happy to announce that on Friday, December 22nd, we are organizing the Firefox 58 Beta 12 Testday. We will be focusing our testing on Graphics and Web Compatibility. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Please note that the contribution deadline for this Testday is 27th of December!

Join us and help us make Firefox better!

See you on Friday!

Planet Mozilla: Today’s net neutrality vote – an unsurprising, unfortunate disappointment

We are incredibly disappointed that the FCC voted this morning – along partisan lines – to remove protections for the open internet. This is the result of broken processes, broken politics, and broken policies. As we have said over and over, we’ll keep fighting for the open internet, and hope that politicians decide to protect their constituents rather than increase the power of ISPs.

This fight isn’t over. With our allies and our users, we will turn to Congress and the courts to fix the broken policies.

The partisan divide only exists in Washington.  The internet is a global, public resource and if closed off — with only some content and services available unless you pay more to your ISP — the value of that resource declines. According to polls from earlier this year, American internet users agree. Three-quarters of the public support net neutrality. This isn’t a partisan issue.

We’ll keep fighting. We’re encouraged by net neutrality victories in India and elsewhere.  Americans deserve and need better than this.

The post Today’s net neutrality vote – an unsurprising, unfortunate disappointment appeared first on The Mozilla Blog.

Planet Mozilla: A new Firefox and a new Firefox icon

Firefox Quantum is all about being fast, responsive and modern. All the work we did in making this browser fast under the hood, we also put into making it look … Read more

The post A new Firefox and a new Firefox icon appeared first on The Firefox Frontier.

Planet Mozilla: 5 Things That Make Firefox Focus Great at Saving Mobile Data

Here’s a new view on 2018: start the year with more Focus—focus in your online life, on what you’re doing, and away from the distractions. At least that’s going to … Read more

The post 5 Things That Make Firefox Focus Great at Saving Mobile Data appeared first on The Firefox Frontier.

Planet Mozilla: Firefox Focus Adds Quick Access Without Sacrificing Users’ Privacy

It’s been a little over a year since we launched Firefox Focus, and we’ve had tremendous success since then: we launched in 27+ languages, brought Focus to Android, and hit over 1 million downloads on Android within the first month of launch.

Today, we’re introducing a new feature: quicker access to your most visited sites, as well as the ability to add any search engine to your Focus app. These were the most requested items from our users and are aligned with our goals for what makes Focus so great.

We know our users want choice and miss the convenience of having their favorite websites and search engines at their fingertips, but they don’t want to sacrifice their privacy. From the moment we built Focus, our goal has been to get our users quickly to their information and sites, all while keeping their data safe from unwanted targeting.

We all have our popular go-to sites that we visit regularly — whether it’s checking the latest news on your favorite news site or checking the scores of your beloved sports team. Now, you can add the sites you visit frequently to your personal autocomplete list within the app. This means that only you can see the sites’ URLs in this list. So, when you’re ready to check your favorite sports team’s scores, you simply type a couple of letters and autocomplete will finish the job.

Autocomplete on Android (Left) and iOS (Right)



There’s also something new for users: you can now add a search engine from any site that has a search field. If you want to search from somewhere outside our list of suggested search engines, go ahead and add it! For example, if you want to see a movie this weekend but don’t want to waste hours on a bad one, you can add your favorite review site’s search. We know that choice is important to our power users, so this new function allows them to set up their preferred way of searching the web.

One of the reasons users love Focus is its faster load times, thanks to our automatic blocking of ads and trackers. It quickly gets you to the places where you want to go and sets us apart from other browsers. We built Focus to be the quickest and easiest privacy browser, designed with you in mind.

The latest version of Firefox Focus can be downloaded on Google Play and in the App Store.

The post Firefox Focus Adds Quick Access Without Sacrificing Users’ Privacy appeared first on The Mozilla Blog.

Planet Mozilla: New Council Members – Autumn 2017

We are very happy to announce that our two new council members, Prathamesh Chavan and Mayur Patil, are fully onboarded and already working on moving the Mozilla Reps program forward.

Of course, we would also like to warmly thank the two outgoing members: Michael Kohler and Alex Lakatos.

Michael and Alex: you have worked extremely hard to move the program forward and your input and strategic thinking have inspired the rest of the Reps.

Prathamesh and Mayur: a warm welcome. We are very excited to have you and can’t wait to build the program together.

The Mozilla Reps Council is the governing body of the Mozilla Reps Program. It provides the general vision of the program and oversees day-to-day operations globally. Currently, 7 volunteers and 2 paid staff sit on the council. Find out more on the Reps wiki.

Don’t forget to congratulate the new Council members on the Discourse topic!


Planet Mozilla: Actual Input Latency: cross-browser measurement and the Hasal testing framework

Editor’s Note: This post is also featured on the 2017 Performance Calendar.

This is a story about an engineering team at Mozilla, based in Taipei, that was tasked with measuring performance and solving some specific performance bottlenecks in Firefox. It is also a story about user-reported performance issues that were turned into actionable insights. It is the story of how we developed Hasal, a framework for testing web performance between different browsers. Hasal takes a new approach to testing that considers user perception in the process.

Back in 2016, Firefox performance was problematic. We had many issues to address. One of the critical issues had to do with user reports about performance problems of Firefox on office web applications — regarding slow page-loads as well as sluggish response times.

We took these user reports seriously, not least because we experienced many of these issues during our daily work ourselves, and identifying the root cause of these performance problems became a top priority for us. Unfortunately, user sentiment is very often unspecific, which makes identifying concrete problems and evaluating their impact hard. So, we decided to take a new approach.

We decided to implement a framework without using WebDriver, which relies on specialized browser APIs and JavaScript injection to manipulate the browser. Instead, we used Sikuli, which sends native-like I/O signals to the computer to simulate user actions. We built a framework that lets you define precise user interactions on a web application and measure input latency using image recognition and real-time measurement instead of DOM events.

After working hard for about a year, we successfully implemented Hasal to compare the Actual Input Latency between different browsers on various web applications.

What is Actual Input Latency?

Based on Google’s “Measure Performance with the RAIL Model”, Actual Input Latency is one of the key metrics that extends measurements of web performance beyond page load and focuses on responsiveness and in-app navigation.

In typical web applications, browsers spend most of their time waiting for network resources to be downloaded and running JavaScript. The goal of the Hasal implementation was to focus on the latter: we measure the time elapsed between an I/O event and the web application’s first response as perceived by the user. In other words, we measure only the time from user input to the screen’s initial change. We define this measure of time as the Actual Input Latency.


Implementation Concepts

Basic concept

Our goal was to implement a tool built from the perspective of user perception. We designed our workflow so that each and every simulated user step would be based on image recognition, and every numeric measurement would be based on visual analysis.

Basic flow


Our framework is based on the PyUnit framework in combination with Sikuli. You can see our workflow in the figure above. First, we run some prerequisite tasks in the setup() function. Next, the framework executes the simulated user steps in the run() function of a designated test. Last, we get the final output from the teardown() function.
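The setup()/run()/teardown() flow described above can be sketched on top of PyUnit like this. The class and event names are illustrative, not the actual Hasal code:

```python
# Minimal sketch of a Hasal-style test case on PyUnit (unittest).
# The bodies are placeholders for the real work described in the post.
import unittest

class LatencyTestCase(unittest.TestCase):
    def setUp(self):
        # Prerequisite tasks, e.g. launching the browser and the recorder.
        self.events = ["setup"]

    def runTest(self):
        # Simulated user steps; in Hasal these are driven by Sikuli's
        # image recognition rather than DOM events.
        self.events.append("run")

    def tearDown(self):
        # Stop the desktop recording and analyze the captured video.
        self.events.append("teardown")
```

PyUnit guarantees the setUp, runTest, tearDown ordering, which is what lets the framework bracket every simulated interaction with recording and analysis.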

Each simulated user interaction is screen-captured for analysis. Hasal relies on video recording and image extraction to get the desired results. The details of how we do this will be explained in the next few paragraphs.

Run details

When entering the run() function, we send simulated I/O events to specific icons, images, or areas of the screen through the JVM (Java Virtual Machine). This is how we simulate people interacting with the web application. At the same time, we write an event log entry to the terminal via the JVM; this serves as a marker identifying when the I/O event was triggered.

Video analysis

In the teardown() function, Hasal finishes the desktop video recording and starts to analyze the video. The basic idea for getting the measured time is to count the actual frames played between two key frames. The first key frame is marked when the indicator shows up in the terminal; we assume that there is only a very small delay between the indicator appearing in the terminal and the submission of the I/O event. The second key frame is the first screen change in a certain area of the browser; in other words, it marks the web application’s first visible response.

By counting the frames between the first and second key frames, we get the web application’s real response time to the user action. For example, if we record at 90 fps (frames per second) and there are 10 frames between the two key frames, we get an Actual Input Latency of 111.11 ms.
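The conversion from frame counts to milliseconds is simple enough to write down directly. The function name is ours, not Hasal's:

```python
# Actual Input Latency from the two key frames: the number of frames
# played between them, converted to milliseconds at the recording FPS.
def actual_input_latency_ms(first_key_frame, second_key_frame, fps):
    frames_played = second_key_frame - first_key_frame
    return frames_played * 1000.0 / fps
```

At 90 fps, 10 frames between the key frames correspond to roughly 111.11 ms, matching the example above.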

An Example of Actual Input Latency

To better illustrate the idea of how to get from a defined user interaction on a web application to Actual Input Latency measurements with Hasal, here is a small example from one of our test cases from our “social” category (as described in a previous post about performance testing).

In one testing scenario that looks at latency for opening an online chat, we measure the Actual Input Latency from the initial click to when the chat screen shows up. Here are the testing steps. These steps are the ones that are translated into the linked script:

The user wants to open a chat window on Facebook. Therefore, they select a friend from their list and click on the respective friend to launch the window and wait for it to load.


  • Open the browser and enter the URL in URL bar
  • Login to Facebook and wait for the page to be fully loaded
  • Make sure the friend list is not hidden


  • Move mouse cursor to one of the avatars in the friend list
  • Take snapshot (Snapshot 1)
  • Send the MouseDown Event
  • Send the MouseUp Event and simultaneously send the Message to the Terminal Console and take snapshot (Snapshot 2)
  • Wait for the chat window to be launched


  • Close the chat screen
  • Close the browser

The input latency result is based on Snapshots 1 and 2, together with the frames extracted from the video, which yield the actual number of frames played.

More details on how each web application was tested can be found in the repository.

Current Results

Performance improvements in Firefox Quantum

This framework was initially targeted at finding the performance differences between Firefox and other browsers in specific web applications. After we finished examining the targeted applications, we started to help find performance gaps in other major web applications.

<figure>Comparing response times for targeted web apps in Firefox<figcaption>The test result is based on a snapshot of each web app.</figcaption></figure>

We have dedicated the Hasal framework to measuring and improving Firefox Quantum performance. Over time, we have seen great improvements in Actual Input Latency for different web applications. In the figures above, we can see that Firefox Quantum’s response time has improved by up to 6x. Based on Hasal results, we have filed more than 200 bugs, of which more than 80% were fixed for our Firefox Quantum release.

Other findings

From time to time, we’ve seen some performance regressions in our tests without any real changes to our browser. After confirming with the respective third-party web service providers, we found out that we have been able to detect performance regressions on their side through our testing.

Limitations of the Hasal framework

The work on Hasal has been a constant iterative approach of implementation and testing. During the whole time of its development, we worked closely with other engineering teams at Mozilla to make Hasal as useful as possible. However, some limitations still exist which are important to keep in mind:

Measured time is limited by FPS

Since our measurement is based on captured frames, any action that completes within one frame can’t be measured by this framework. In our laboratory environment, where we record at 90 fps, this threshold is 11.11 ms, and any response faster than that cannot be detected.

Overhead can vary in different browsers

Since the framework heavily relies on capturing desktop video, there is potential overhead introduced. We have tried to choose a recorder with little overhead that records from hardware directly. However, this approach could introduce different overhead in the different browsers due to different implementations for leveraging graphics card computation power.

JVM versioning can also affect the measured time

As the Hasal approach relies heavily on the assumption that sending an indicator to the terminal has only a very short delay compared to sending I/O events to the browser, we have done a lot of profiling to ensure that this assumption holds. According to our profiling results, we are fairly confident that it does. However, we still found that different JVM versions could break the assumption in certain environments: sometimes a newer JVM version increases the delay between sending the indicator to the terminal and sending the I/O events. In one case, upgrading Java introduced a delay of 20 ms.


While the current Hasal implementation has proven useful to guide engineering work on fixing critical performance issues, there are some open issues that we will need to target next to make Hasal useful as a general framework for testing performance.


This framework is a combination of many tools, so it requires a time-consuming installation script to deploy. That raises the barrier to entry, due to the difficulty of installing the framework and reproducing our test results in other environments. Therefore, making it simpler and easier to install in other environments will be our next step.


The overall concept of our framework should apply to mobile devices as well. However, we might need to change a few things before we can proceed. First of all, the video recorder might need to be replaced by snapshot or screen-capture software to minimize CPU power consumption and improve efficiency. Also, the host connected to the mobile device should be responsible for calculating the results.

Reducing overhead

We already talked about the issue of potentially introducing non-negligible overhead in some scenarios by relying on a software desktop video recorder. So, we’ve also considered alternative ways to record the whole desktop screen. For example, an external HDMI recorder or an external high-speed camera would be a potential choice for us to investigate further.


When you find a scenario with a significant performance issue, you typically file a bug for it. However, a bug does not carry the detailed information gathered during testing, such as the profiler data needed to take the next action; that’s missing from our current framework. It’s not easy to combine the actual visual representation with the code stack, but we are trying to integrate them via indicators and markers in profiles. This would let us view both on the same timeline and help engineers understand the situation better.

Let us know what you think. Thanks!

<figure>Bobby Chien, Fu-Hung Yen, Mike Lien, Shako Ho, and Walter Chen of the Hasal team<figcaption>Bobby Chien, Fu-Hung Yen, Mike Lien, Shako Ho, and Walter Chen of the Hasal team.</figcaption></figure>

Planet Mozilla: Notes sync feature is now available with end-to-end encryption

We are happy to announce that the Notes sync feature is now being rolled out with the latest update of the add-on. Notes sync allows you to access your notepad from different computers and back up Notes onto Mozilla’s servers. Start using the sync feature by pressing the sync button at the bottom of the Notes toolbar:

<figure><figcaption>Notes sync powered by Firefox Accounts & Kinto</figcaption></figure>

This feature is powered by Firefox Accounts for authentication and Kinto for notes storage. Your notes are end-to-end encrypted using a key derived from your Firefox Accounts password. The sync encryption feature is based on the Firefox Sync encryption model. For example, a note string like “I can sync now!” is locally encrypted with A256GCM content encryption before being sent to the Kinto server instance.


Keep in mind that if you reset your Firefox Accounts password then you will not be able to decrypt your Notes data that was stored on the server. The data that was encrypted with your old password will be overwritten by your local Notes content.

Don’t forget to provide feedback using the “Give Feedback” button in Notes and report any bugs you find in the Notes issue tracker.


Notes sync feature is now available with end-to-end encryption was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Planet Mozilla: Mozilla Joins Net Neutrality Blackout for ‘Break the Internet’ Day

Today, our homepage is blacked out to support net neutrality.

We’re joining with others across the web — from Github and Reddit to Etsy and Imgur — for a Break the Internet Day of Action. The idea: to show how broadly we all value an open internet. And to ask Americans to call their members of Congress and urge them to stop the FCC’s plan to end net neutrality.

In two days, the FCC will vote on an order that would gut current net neutrality protections, allowing internet service providers (ISPs) to create fast lanes and slow lanes online. ISPs like Verizon and AT&T would be able to block, throttle, and prioritize internet access for all Americans.

Says Ashley Boyd, Mozilla’s VP of Advocacy: “Right now, ISPs can’t discriminate between the different websites where Americans shop, socialize, and read the news. But without net neutrality, Americans’ favorite online services, marketplaces, or news websites could load far slower — or not at all.”

Says Denelle Dixon, Mozilla’s Chief Business and Legal Officer: “Net neutrality is about free speech, competition, and innovation. If the FCC votes to roll back net neutrality, the decision would harm everyday internet users and small businesses — and end the internet as we know it.”

Net neutrality is something we can all agree on: our recent public opinion poll shows overwhelming support — across party lines — for net neutrality, with over three quarters of Americans (76%) supporting net neutrality.

Mozilla is a nonprofit committed to net neutrality and a healthy internet for all. If you are seeking additional ways to help, a donation to Mozilla allows us to continue our work.

The post Mozilla Joins Net Neutrality Blackout for ‘Break the Internet’ Day appeared first on The Mozilla Blog.

Planet Mozilla: See it, understand it, click it: The new Firefox is organized better

We’ve released an all-new Firefox to the world, and it includes a lot of new browser technology that’s super fast. We’ve made many of these changes over the past year … Read more

The post See it, understand it, click it: The new Firefox is organized better appeared first on The Firefox Frontier.

Planet Mozilla: Early Returns on Firefox Quantum Point to Growth

When we set out to launch Firefox Quantum earlier this year, we knew we had a hugely improved product. It not only felt faster — with a look and feel that tested off the charts — it was measurably faster. Thanks to multiple changes under the hood, we doubled Firefox’s speed while using 30% less memory than Chrome.

In less than a month, Firefox Quantum has already been installed by over 170M people around the world. We’re just getting started and early returns are super encouraging.

Here’s a look at the top initial indicators we are seeing for Firefox Quantum since our launch:

  1. Our biggest release to date. Firefox Quantum is Firefox’s fastest path to 100M+ profiles and half a billion daily hours of use on a new release in recent memory – a product of both seamless release and strong uptick in new profiles. And, millions of users are still getting their first introduction to Firefox Quantum every day.
  2. More users are coming from Chrome. We’ve seen a 44% growth in downloads from people who are using the Chrome browser compared to the same time last year.
  3. Mobile is growing too. Our four core mobile apps are experiencing strong growth resulting from the launch of Firefox Quantum. Firefox for iOS and Android has shown a 24% lift in installs, and Firefox Focus for Android and iOS showed a 48% lift in installs.
  4. The Add-ons ecosystem remains strong. Over 1,000 Firefox Quantum-ready extensions have been added since Firefox Quantum was released.
  5. Screenshots is a breakout hit. It’s seeing very strong early traction. Users have taken over 30 million screenshots since the feature launched in late September.                                                                                           
  6. Developer Support for Firefox Quantum:  Improved browser speed and stability, as well as high quality developer tools are giving developers more reasons to try and continue to use Firefox developer tools. Since the September 26th developer edition of Firefox Quantum was released, we saw a 10% increase in daily use of developer tools and a 53% increase in Developer edition downloads.

Here’s what the world is saying about Firefox Quantum:

7.  Social media is buzzing. While the fun part was reading what new users had to say, the data also showed a significant positive shift in sentiment across Facebook, Twitter, Reddit, Tumblr, YouTube, and blogs. (Source: Crimson Hexagon social media analytics)

8.  Our ads are taking off. We took a unique approach to marketing the new Firefox browser, focusing on what it feels like — and even sounds like — to use our blazingly fast new browser. This came to life in television spots and promoted videos featuring our “Wait Face” spot and the Reggie Watts video, which, combined, had more than half a billion impressions; a third of those views were watched from start to finish.

9.  Firefox in the news. We’ve received an overwhelming amount of positive coverage for the new Firefox browser. We’ve seen hundreds of headlines across the globe from Wired’s “Ciao, Chrome: Firefox Quantum Is The Browser Built for 2017, ” to El Pais’ “Firefox wants to reinvent browsers with Quantum” and Usbek & Rica’s “Firefox Quantum: ‘We’re back’.”

We’re incredibly heartened by the positive response since the launch. We’ll continue to watch these numbers, and plan to check back in the new year with an update!


The post Early Returns on Firefox Quantum Point to Growth appeared first on The Mozilla Blog.

Planet Mozilla: JavaScript Startup Bytecode Cache

(Firefox 58)

We want to make Firefox load pages as fast as possible, to make sure that you can get all the goods from the loaded pages available as soon as possible on your screen.

The JavaScript Startup Bytecode Cache (JSBC) is an optimization made to improve the time to fully load a page. As with many optimizations, this is a trade-off between speed and memory. In exchange for faster page-load, we store extra data in a cache.

The Context

When Firefox loads a web page, it is likely that this web page will need to run some JavaScript code. This code is transferred to the browser from the network, from the network cache, or from a service worker.

JavaScript is a general purpose programming language that is used to improve web pages. It makes them more interactive, it can request dynamically loaded content, and it can improve the way web pages are programmed with libraries.

JavaScript libraries are collections of code that are quite wide in terms of scope and usage. Most of the code of these libraries is not used (~70%) while the web page is starting up. The web page’s startup lasts beyond the first paint, it goes beyond the point when all resources are loaded, and can even last a few seconds longer after the page feels ready to be used.

When all the bytes of one JavaScript source are received we run a syntax parser. The goal of the syntax parser is to raise JavaScript syntax errors as soon as possible. If the source is large enough, it is syntax parsed on a different thread to avoid blocking the rest of the web page.

As soon as we know that there are no syntax errors, we can start the execution by doing a full parse of the executed functions to generate bytecode. The bytecode is a format that simplifies the execution of the JavaScript code by an interpreter, and then by the Just-In-Time compilers (JITs). The bytecode is much larger than the source code, so Firefox only generates the bytecode of executed functions.
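The two phases can be illustrated with Python's own tooling as a stand-in for SpiderMonkey: a syntax-only pass that raises errors early, then a full compile to bytecode. Note that Python compiles eagerly, so this does not model SpiderMonkey's per-function laziness, only the two-phase idea:

```python
# Illustration of syntax parsing vs. bytecode generation, using Python's
# stdlib as an analogy for the SpiderMonkey pipeline described above.
import ast

source = "def used():\n    return 1\n"

ast.parse(source)                         # syntax pass: raises SyntaxError early
code = compile(source, "<page>", "exec")  # full parse: generates bytecode
```

In SpiderMonkey the second step is deferred per function, so only executed functions pay the bytecode generation cost.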

Sequence diagram describing a script request, with the overhead of the syntax parser and the full parser in the content process: the content process requests a script through IPC, the cache replies with the source, and the source is syntax parsed, then fully parsed, and executed.

The Design

The JSBC aims at improving the startup of web pages by saving the bytecode of used functions in the network cache.

Saving the bytecode in the cache removes the need for the syntax-parser and replaces the full parser with a decoder. A decoder has the advantages of being smaller and faster than a parser. Therefore, when the cache is present and valid, we can run less code and use less memory to get the result of the full parser.

Having a bytecode cache, however, causes two problems. The first concerns the cache itself: as JavaScript can be updated on the server, we have to ensure that the bytecode stays up to date with the current source code. The second concerns the serialization and deserialization of JavaScript: as we have to render the page at the same time, we have to ensure that we never block the main loop used to render web pages.

Alternate Data Type

While designing the JSBC, it became clear that we should not re-implement a cache.

At first sight a cache sounds like something that maps a URL to a set of bytes. In reality, due to invalidation rules, disk space, the mirroring of the disk in RAM, and user actions, handling a cache can become a full time job.

Another way to implement a cache, as we currently do for Asm.js and WebAssembly, is to map the source content to the decoded / compiled version of the source. This is impractical for the JSBC for two reasons: invalidation and user actions would have to be propagated from the first cache to this other one; and we need the full source before we can get the bytecode, so this would race between parsing and a disk load, which due to Firefox’s sandbox rules will need to deal with interprocess communication (IPC).

The approach chosen for the JSBC wasn’t to implement any new caching mechanism, but to reuse the one already available in the network cache. The network cache is used to handle most URLs, except those handled by a service worker or those using some of Firefox’s internal privileged protocols.

The bytecode is stored in the network cache alongside the source code as “alternate content”; the user of the cache can request either one.

To request a resource, a page that is sandboxed in a content process creates a channel. This channel is then mirrored through IPC in the parent process, which resolves the protocol and dispatches it to the network. If the resource is already available in the cache, the cached version is used after verifying the validity of the resource using the ETag provided by the server. The cached content is transferred through IPC back to the content process.

To request bytecode, the content process annotates the channel with a preferred data type. When this annotation is present, the parent process, which has access to the cache, will look for an alternate content with the same data type. If there is no alternate content or if the data type differs, then the original content (the JavaScript source) is transferred. Otherwise, the alternate content (the bytecode) is transferred back to the content process with an extra flag repeating the data type.
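The alternate-content lookup described above can be modeled as a small decision function. This is a toy model, not Firefox code; the entry layout and type strings are ours:

```python
# Toy model of the alternate-content lookup: when the request is
# annotated with a preferred data type and the cache entry holds a
# matching alternate, return it flagged with its type; otherwise fall
# back to the original JavaScript source.
def fetch_from_cache(entry, preferred_type=None):
    alt = entry.get("alternate")
    if preferred_type is not None and alt and alt["type"] == preferred_type:
        return alt["data"], alt["type"]        # e.g. the cached bytecode
    return entry["source"], "text/javascript"  # fall back to the source
```

The key property is that a missing or mismatched alternate never fails the request; the source is always a valid answer.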

Sequence diagram describing a script request when cached bytecode is available: the content process requests the script with a preferred data type through IPC, the cache replies with the bytecode as alternate content, and the bytecode is decoded and executed, skipping both the syntax parser and the full parser.

To save the bytecode, the content process has to keep the channel alive after having requested an alternate data type. When the bytecode is ready to be encoded, it opens a stream to the parent process. The parent process will save the given stream as the alternate data for the resource that was initially requested.

This API was implemented in Firefox’s network stack by Valentin Gosu and Michal Novotny; their work was necessary to make this project possible. The first advantage of this interface is that it can also be implemented by Service Workers, support for which is currently being added in Firefox 59 by Eden Chuang and the service worker team. The second advantage of this interface is that it is not specific to JavaScript at all, and we could also save other forms of cached content, such as decoded images or precompiled WebAssembly scripts.

Serialization & Deserialization

SpiderMonkey already had a serialization and deserialization mechanism named XDR. This part of the code was used in the past to encode and decode Firefox’s internal JavaScript files, to improve Firefox startup. Unfortunately, XDR serialization and deserialization cannot handle lazily-parsed JavaScript functions, and they block the main thread of execution.

Saving Lazy Functions

Since 2012, Firefox has parsed functions lazily. XDR was meant to encode fully parsed files, but with the JSBC we need to avoid parsing unused functions. Most of the functions that are shipped to users are either never used, or not used during page startup. We therefore added support for encoding functions the way they are represented when they are unused: as start and end offsets into the source. Unused functions thus consume only a minimal amount of space in addition to that taken by the source code.
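A sketch of the idea, with an invented record format (the real XDR encoding is a binary stream and far more involved):

```python
# Invented record format: a function is encoded either fully (with its
# bytecode) or lazily (just two offsets into the source).

def encode_function(fn):
    if fn.get("bytecode") is not None:
        return {"kind": "full", "bytecode": fn["bytecode"]}
    # Unused function: two integers instead of bytecode.
    return {"kind": "lazy", "start": fn["start"], "end": fn["end"]}

def decode_function(record, source):
    if record["kind"] == "lazy":
        # Delazification can later re-parse this slice of the source.
        return source[record["start"]:record["end"]]
    return record["bytecode"]

source = "function used() { work(); } function unused() { rare(); }"
start = source.index("function unused")
unused = {"start": start, "end": len(source), "bytecode": None}

record = encode_function(unused)
assert record == {"kind": "lazy", "start": start, "end": len(source)}
assert decode_function(record, source) == "function unused() { rare(); }"
```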

Once a web page has started up, or in case of a different execution path, the source might be needed to delazify functions that were not cached. As such, the source must be available without blocking. The solution is to embed the source within the bytecode cache content. Instead of storing the raw source, the same way it is served by the network cache, it is stored in UCS2 encoding as compressed chunks¹, the same way we represent it in memory.
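As a rough illustration of chunked compression of the 16-bit source (zlib stands in for whatever compression SpiderMonkey actually uses, and the chunk size here is arbitrary):

```python
import zlib  # stand-in for SpiderMonkey's actual compression scheme

CHUNK = 64  # bytes per chunk; must be even to align with 16-bit units
            # (the real chunk size is much larger)

def compress_source(source):
    raw = source.encode("utf-16-le")  # UCS2-like 16-bit representation
    return [zlib.compress(raw[i:i + CHUNK]) for i in range(0, len(raw), CHUNK)]

def decompress_chunk(chunks, n):
    # Chunking means delazification only inflates the region it needs,
    # not the whole file.
    return zlib.decompress(chunks[n]).decode("utf-16-le")

src = "function f() { return 1; } " * 10
chunks = compress_source(src)
assert len(chunks) > 1
assert "".join(decompress_chunk(chunks, i) for i in range(len(chunks))) == src
```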

Non-Blocking Transcoding

XDR is a main-thread only blocking process that can serialize and deserialize bytecode. Blocking the main thread is problematic on the web, as it hangs the browser, making it unresponsive until the operation finishes. Without rewriting XDR, we managed to make this work such that it does not block the event loop. Unfortunately, deserialization and serialization could not both be handled the same way.

Deserialization was the easier of the two. As we already support parsing JavaScript sources off the main thread, decoding is just a special case of parsing, which produces the same output with a different input. So if the decoded content is large enough, we will transfer it to another thread in order to be decoded without blocking the main thread.

Serialization was more difficult. As it uses resources handled by the garbage collector, it must remain on the main thread. Thus, we cannot use another thread as with deserialization. In addition, the garbage collector might reclaim the bytecode of unused functions, and some objects attached to the bytecode might be mutated after execution, such as the object held by the JSOP_OBJECT opcode. To work around these issues, we are incrementally encoding the bytecode as soon as it is created, before its first execution.

To incrementally encode with XDR without rewriting it, we encode each function separately, along with location markers where the encoded function should be stored. Before the first execution we encode the JavaScript file with all functions encoded as lazy functions. When the function is requested, we generate the bytecode and replace the encoded lazy functions with the version that contains the bytecode. Before saving the serialization in the cache, we replace all location markers with the actual content, thus linearizing the content as if we had produced it in a single pass.
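The marker mechanism can be modeled like this (invented representation; the real output is a binary XDR stream):

```python
# Invented representation: parts of the output are either raw bytes or
# integer marker ids whose content can be patched until linearization.

class IncrementalEncoder:
    def __init__(self):
        self.parts = []    # bytes and marker ids, in file order
        self.patches = {}  # marker id -> current encoding for that slot

    def write(self, data):
        self.parts.append(data)

    def write_marker(self, marker_id, lazy_encoding):
        # Reserve a slot for a function, initially holding its lazy form.
        self.parts.append(marker_id)
        self.patches[marker_id] = lazy_encoding

    def patch(self, marker_id, full_encoding):
        # Called when the function's bytecode is generated, replacing
        # the lazy encoding recorded earlier.
        self.patches[marker_id] = full_encoding

    def linearize(self):
        # Produce the content as if it had been encoded in a single pass.
        return b"".join(
            self.patches[p] if isinstance(p, int) else p for p in self.parts
        )

enc = IncrementalEncoder()
enc.write(b"<file header>")
enc.write_marker(0, b"<lazy f>")
enc.write(b"<file footer>")
assert enc.linearize() == b"<file header><lazy f><file footer>"

enc.patch(0, b"<bytecode of f>")  # f was executed, so it got bytecode
assert enc.linearize() == b"<file header><bytecode of f><file footer>"
```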

The Outcome

The Threshold

The JSBC is a trade-off between encoding/decoding time and memory usage, where the right balance is determined by the number of times the page is visited. As this is a trade-off, we have the ability to choose where to set the cursor, based on heuristics.

To find the threshold, we measured the time needed to encode, the time gained by decoding, and the distribution of cache hits. The best threshold is the value that minimizes the cost function over all page loads. Thus we are comparing the cost of loading a page without any optimization (x1), the cost of loading a page and encoding the bytecode (x1.02 — x1.1), and the cost of decoding the bytecode (x0.52 — x0.7). In the best/worst case, the cache hit threshold would be 1 or 2, if we only considered the time aspect of the equation.
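A toy cost model makes the trade-off concrete. The multipliers below are illustrative midpoints of the ranges quoted above, and the function names are invented:

```python
# Relative load costs, taken from the midpoints of the ranges above.
PLAIN, ENCODE, DECODE = 1.0, 1.05, 0.6

def total_cost(visits, encode_on):
    """Total relative cost of `visits` page loads if the bytecode is
    encoded on visit `encode_on` and decoded on every later visit."""
    cost = 0.0
    for visit in range(1, visits + 1):
        if visit < encode_on:
            cost += PLAIN
        elif visit == encode_on:
            cost += ENCODE
        else:
            cost += DECODE
    return cost

# A page visited only once: a threshold of 4 never pays the encoding
# cost, while a threshold of 1 always does.
assert total_cost(1, 4) == 1.0
assert total_cost(1, 1) == 1.05

# A frequently revisited page: the earlier we encode, the better.
assert round(total_cost(10, 1), 2) == 6.45
assert round(total_cost(10, 4), 2) == 7.65
```

Considering time alone, this model always favors encoding on the first or second visit; the higher threshold described below reflects the disk-usage side of the trade-off.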

Intuitively, it seems we should not penalize the first visit to a website by saving content to disk. You might read a one-time article on a page that you will never visit again, and saving the cached bytecode to disk for future visits sounds like a net loss. For this reason, the current threshold is set to encode the bytecode only on the 4th visit, thus making it available on the 5th and subsequent visits.

The Results

The JSBC is surprisingly effective, and instead of going deeper into explanations, let’s see how it behaves on real websites that frequently load JavaScript, such as Wikipedia, Google Search results, Twitter, Amazon and the Facebook Timeline.

This graph represents the average time between the start of navigation and when the onload event for each website is fired, with and without the JavaScript Startup Bytecode Cache (JSBC). The error bars are the first quartile, median and third quartile values, over the set of approximately 30 page loads for each configuration. These results were collected on a new profile with the apptelemetry addon and with the tracking protection enabled.

While this graph shows the improvement for all pages (wikipedia: 7.8%, google: 4.9%, twitter: 5.4%, amazon: 4.9% and facebook: 12%), this does not account for the fact that these pages continue to load even after the onload event. The JSBC is configured to capture the execution of scripts until the page becomes idle.

Telemetry results gathered during an experiment on Firefox Nightly’s users reported that when the JSBC is enabled, page loads are on average 43ms faster, while being effective on only half of the page loads.

The JSBC neither improves benchmarks nor regresses them. Benchmarks’ behaviour does not represent what users actually do when visiting a website — they would not reload the pages 15 times to check the number of CPU cycles. The JSBC is tuned to capture everything until the page becomes idle. Benchmarks are tuned to avoid having an impatient developer watching a blank screen for ages, and thus they do not wait for the bytecode to be saved before starting over.

Thanks to Benjamin Bouvier and Valentin Gosu for proofreading this blog post and suggesting improvements, and a special thank you to Steve Fink and Jérémie Patonnier for improving this blog post.

¹ compressed chunk: Due to a recent regression this is no longer the case, and it might be interesting for a new contributor to fix.

Planet Mozilla: The curl year 2017

I’m about to take an extended vacation for the rest of the year and into the beginning of the next, so I decided I’d sum up the year from a curl angle already now, a few weeks early. (So some numbers will grow a bit more after this post.)


So what did we do this year in the project, how did curl change?

The first curl release of the year was version 7.53.0 and the last one was 7.57.0. In the separate blog posts on 7.55.0, 7.56.0 and 7.57.0 you’ll note that we kept up adding new goodies and useful features. We produced a total of 9 releases containing 683 bug fixes. We announced twelve security problems. (Down from 24 last year.)

At least 125 different authors wrote code that was merged into curl this year, in the 1500 commits that were made. We never had this many different authors during a single year before in the project’s entire lifetime! (The 114 authors during 2016 was the previous all-time high.)

We added more than 160 new names to the THANKS document for their help in improving curl. The total amount of contributors is now over 1660.

This year we truly started to use Travis for CI builds and grew from a mere two builds per commit and PR up to nineteen (with additional ones run on AppVeyor and elsewhere). The current build set is a very good verification that most things still compile and work after a PR is merged. (See also the testing curl article.)

Mozilla announced that they too will use colon-slash-slash in their logo. Of course we all know who had it in their logo first… =)


In March 2017, we had our first ever curl get-together as we arranged curl up 2017 over a weekend in Nuremberg, Germany. It was very inspiring and meeting parts of the team in real life was truly a blast. This was so good we intend to do it again: curl up 2018 will happen.

curl turned 19 years old in March. In May it surpassed 5,000 stars on GitHub.

Also in May, we moved over the official curl site (and my personal site) to get hosted by Fastly. We were beginning to get problems to handle the bandwidth and load, and in one single step all our worries were graciously taken care of!

We got curl entered into the OSS-fuzz project, and Max Dymond even got a reward from Google for his curl-fuzzing integration work. Thanks to that project throwing heaps of junk at libcurl’s APIs, we’ve found and fixed many issues.

The source code (for the tool and library only) is now at about 143,378 lines of code. It grew around 7,057 lines during the year. The primary reasons for the code growth were:

  1. the new libssh-powered SSH backend (not yet released)
  2. the new mime API (in 7.56.0) and
  3. the new multi-SSL backend support (also in 7.56.0).

Your maintainer’s view

Oh what an eventful year it has been for me personally.

The first interim meeting for QUIC took place in Japan, and I participated from remote. After all, I’m all set on having curl support QUIC and I’ll keep track of where the protocol is going! I’ve participated in more interim meetings after that, all from remote so far.

I talked curl on the main track at FOSDEM in early February (and about HTTP/2 in the Mozilla devroom). I’ve then followed that up and have also had the pleasure to talk in front of audiences in Stockholm, Budapest, Jönköping and Prague through-out the year.


I went to London and “represented curl” in the third edition of the HTTP workshop, where HTTP protocol details were discussed and disassembled, and new plans for the future of HTTP were laid out.


In late June I meant to go to San Francisco to a Mozilla “all hands” conference, but instead I was denied boarding on the flight. That event got a crazy amount of attention and I received massive amounts of love from new and old friends. I have not yet tried to enter the US again, but my plan is to try again in 2018…

I wrote and published my h2c tool, meant to help developers convert a set of HTTP headers into a working curl command line.

The single occasion that overshadows all other events and happenings for me this year by far, was without doubt when I was awarded the Polhem Prize and got a gold medal from none other than His Majesty the King of Sweden himself. For all my work and years spent on curl no less.

Not really curl related, but in November I was also glad to be part of the huge Firefox Quantum release. The biggest Firefox release ever, and one that has been received really well.

I’ve managed to commit over 800 changes to curl through the year, which is 54% of the total and more commits than I’ve done in curl during a single year since 2005 (in which I did 855 commits). I attribute this increase mostly to inspiration from curl up and the prize, but I think it also happened thanks to excellent feedback and motivation brought by my fellow curl hackers.

We’re running towards the end of 2017 with me being the individual who did most commits in curl every single month for the last 28 months.


More things to come!

Planet Mozilla: Some northwest area tech conferences and their approximate dates


Somebody asked me recently about what conferences a developer in the pacific northwest looking to attend more FOSS events should consider. Here’s an incomplete list of conferences I’ve attended or hear good things about, plus the approximate times of year to expect their CFPs.

The Southern California Linux Expo (SCaLE) is a large, established Linux and FOSS conference in Pasadena, California. Look for its CFP in September, and expect the conference to be scheduled in late February or early March each year.

If you don’t mind a short flight inland, OpenWest is a similar conference held in Utah each year. Look for its CFP in March, and expect the conference to happen around July. I especially enjoy the way that OpenWest brings the conference scene to a bunch of fantastic technologists who don’t always make it onto the national or international conference circuit.

Moving northward, there are a couple DevOps Days conferences in the area: Look for a PDX DevOps Days CFP around March and conference around September, and keep an eye out in case Boise DevOps Days returns.

If you’re into a balance of intersectional community and technical content, consider OSBridge, held in Portland around June, and OSFeels, held around July in Seattle.

In Washington state, LinuxFest Northwest (CFP around December, conference around April) in Bellingham, and SeaGL (CFP around June, conference around October) in Seattle are solid grass-roots FOSS conferences. For infosec in the area, consider toorcamp (registration around March, conference around June) in the San Juan Islands.

And finally, if a full conference seems like overkill, consider attending a BarCamp event in your area. Portland has CAT BarCamp at Portland State University around October, and Corvallis has Beaver BarCamp each April.

This is by no means a complete list of conferences in the area, and I haven’t even tried to list the myriad specialized events that spring up around any technology. Meetup, plus local event calendars for the Portland area, are also great places to find out about meetups and events.

Planet Mozilla: This Week in Rust 212

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is printpdf, a pure Rust PDF-writing library that already has a lot of features (though I note a lot of bool-taking methods). Thanks to Felix Schütt for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

105 pull requests were merged in the last week

New Contributors

  • Agustin Chiappe Berrini
  • Jonathan Strong
  • JRegimbal
  • Timo

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Although rusting is generally a negative aspect of iron, a particular form of rusting, known as “stable rust,” causes the object to have a thin coating of rust over the top, and if kept in low relative humidity, makes the “stable” layer protective to the iron below

Wikipedia on rusting of iron.

Thanks to leodasvacas for the suggestion!

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Planet Mozilla: High-level Problems with Git and How to Fix Them

I have a... complicated relationship with Git.

When Git first came onto the scene in the mid 2000's, I was initially skeptical because of its horrible user interface. But once I learned it, I appreciated its speed and features - especially the ease at which you could create feature branches, merge, and even create commits offline (which was a big deal in the era when Subversion was the dominant version control tool in open source and you needed to speak with a server in order to commit code). When I started using Git day-to-day, it was such an obvious improvement over what I was using before (mainly Subversion and even CVS).

When I started working for Mozilla in 2011, I was exposed to the Mercurial version control tool, which then - and still today - hosts the canonical repository for Firefox. I didn't like Mercurial initially. Actually, I despised it. I thought it was slow and its features lacking. And I frequently encountered repository corruption.

My first experience learning the internals of both Git and Mercurial came when I found myself hacking on hg-git - a tool that allows you to convert Git and Mercurial repositories to/from each other. I was hacking on hg-git so I could improve the performance of converting Mercurial repositories to Git repositories. And I was doing that because I wanted to use Git - not Mercurial - to hack on Firefox. I was trying to enable an unofficial Git mirror of the Firefox repository to synchronize faster so it would be more usable. The ulterior motive was to demonstrate that Git is a superior version control tool and that Firefox should switch its canonical version control tool from Mercurial to Git.

In what is a textbook definition of irony, what happened instead was I actually learned how Mercurial worked, interacted with the Mercurial Community, realized that Mozilla's documentation and developer practices were... lacking, and that Mercurial was actually a much, much more pleasant tool to use than Git. It's an old post, but I summarized my conversion four and a half years ago. This started a chain of events that somehow resulted in me contributing a ton of patches to Mercurial, taking stewardship of hg-git, and becoming a member of the Mercurial Steering Committee - the governance group for the Mercurial Project.

I've been an advocate of Mercurial over the years. Some would probably say I'm a Mercurial fanboy. I reject that characterization because fanboy has connotations that imply I'm ignorant of realities. I'm well aware of Mercurial's faults and weaknesses. I'm well aware of Mercurial's relative lack of popularity, I'm well aware that this lack of popularity almost certainly turns away contributors to Firefox and other Mozilla projects because people don't want to have to learn a new tool. I'm well aware that there are changes underway to enable Git to scale to very large repositories and that these changes could threaten Mercurial's scalability advantages over Git, making choices to use Mercurial even harder to defend. (As an aside, the party most responsible for pushing Git to adopt architectural changes to enable it to scale these days is Microsoft. Could anyone have foreseen that?!)

I've achieved mastery in both Git and Mercurial. I know their internals and their command line interfaces extremely well. I understand the architecture and principles upon which both are built. I'm also exposed to some very experienced and knowledgeable people in the Mercurial Community. People who have been around version control for much, much longer than me and have knowledge of random version control tools you've probably never heard of. This knowledge and exposure allows me to make connections and see opportunities for version control that quite frankly most do not.

In this post, I'll be talking about some high-level, high-impact problems with Git and possible solutions for them. My primary goal of this post is to foster positive change in Git and the services around it. While I personally prefer Mercurial, improving Git is good for everyone. Put another way, I want my knowledge and perspective from being part of a version control community to be put to good use wherever it can.

Speaking of Mercurial, as I said, I'm a heavy contributor and am somewhat influential in the Mercurial Community. I want to be clear that my opinions in this post are my own and I'm not speaking on behalf of the Mercurial Project or the larger Mercurial Community. I also don't intend to claim that Mercurial is holier-than-thou. Mercurial has tons of user interface failings and deficiencies. And I'll even admit to being frustrated that some systemic failings in Mercurial have gone unaddressed for as long as they have. But that's for another post. This post is about Git. Let's get started.

The Staging Area

The staging area is a feature that should not be enabled in the default Git configuration.

Most people see version control as an obstacle standing in the way of accomplishing some other task. They just want to save their progress towards some goal. In other words, they want version control to be a save file feature in their workflow.

Unfortunately, modern version control tools don't work that way. For starters, they require people to specify a commit message every time they save. This in and of itself can be annoying. But we generally accept that as the price you pay for version control: that commit message has value to others (or even your future self). So you must record it.

Most people want the barrier to saving changes to be effortless. A commit message is already too annoying for many users! The Git staging area establishes a higher barrier to saving. Instead of just saving your changes, you must first stage your changes to be saved.

If you requested save in your favorite GUI application, text editor, etc and it popped open a select the changes you would like to save dialog, you would rightly think just save all my changes already, dammit. But this is exactly what Git does with its staging area! Git is saying I know all the changes you made: now tell me which changes you'd like to save. To the average user, this is infuriating because it works in contrast to how the save feature works in almost every other application.

There is a counterargument to be made here. You could say that the editor/application/etc is complex - that it has multiple contexts (files) - that each context is independent - and that the user should have full control over which contexts (files) - and even changes within those contexts - to save. I agree: this is a compelling feature. However, it isn't an appropriate default feature. The ability to pick which changes to save is a power-user feature. Most users just want to save all the changes all the time. So that should be the default behavior. And the Git staging area should be an opt-in feature.

If intrinsic workflow warts aren't enough, the Git staging area has a horrible user interface. It is often referred to as the cache for historical reasons. Cache of course means something to anyone who knows anything about computers or programming. And Git's use of cache doesn't at all align with that common definition. Yet the terminology in Git persists. You have to run commands like git diff --cached to examine the state of the staging area. Huh?!

But Git also refers to the staging area as the index. And this terminology also appears in Git commands! git help commit has numerous references to the index. Let's see what git help glossary has to say::

index
    A collection of files with stat information, whose contents are
    stored as objects. The index is a stored version of your working tree.
    Truth be told, it can also contain a second, and even a third
    version of a working tree, which are used when merging.

index entry
    The information regarding a particular file, stored in the index.
    An index entry can be unmerged, if a merge was started, but not
    yet finished (i.e. if the index contains multiple versions of that
    file).
In terms of end-user documentation, this is a train wreck. It tells the lay user absolutely nothing about what the index actually is. Instead, it casually throws out references to stat information (requires the user know what the stat() function call and struct are) and objects (a Git term for a piece of data stored by Git). It even undermines its own credibility with that truth be told sentence. This definition is so bad that it would probably improve user understanding if it were deleted!

Of course, git help index says No manual entry for gitindex. So there is literally no hope for you to get a concise, understandable definition of the index. Instead, it is one of those concepts that you think you learn from interacting with it all the time. Oh, when I git add something it gets into this state where git commit will actually save it.

And even if you know what the Git staging area/index/cached is, it can still confound you. Do you know the interaction between uncommitted changes in the staging area and working directory when you git rebase? What about git checkout? What about the various git reset invocations? I have a confession: I can't remember all the edge cases either. To play it safe, I try to make sure all my outstanding changes are committed before I run something like git rebase because I know that will be safe.

The Git staging area doesn't have to be this complicated. A re-branding away from index to staging area would go a long way. Adding an alias from git diff --staged to git diff --cached and removing references to the cache from common user commands would make a lot of sense and reduce end-user confusion.

Of course, the Git staging area doesn't really need to exist at all! The staging area is essentially a soft commit. It performs the save progress role - the basic requirement of a version control tool. And in some aspects it is actually a better save progress implementation than a commit because it doesn't require you to type a commit message! Because the staging area is a soft commit, all workflows using it can be modeled as if it were a real commit and the staging area didn't exist at all! For example, instead of git add --interactive + git commit, you can run git commit --interactive. Or if you wish to incrementally add new changes to an in-progress commit, you can run git commit --amend or git commit --amend --interactive or git commit --amend --all. If you actually understand the various modes of git reset, you can use those to uncommit. Of course, the user interface to performing these actions in Git today is a bit convoluted. But if the staging area didn't exist, new high-level commands like git amend and git uncommit could certainly be invented.
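The claim that the staging area is just a soft commit can be made concrete with a toy model. This is an argument sketch, not Git's implementation: a commit is modeled as a message plus a set of changes, and amend folds more changes into the latest commit.

```python
# Toy model of the argument, not Git's implementation.

class Repo:
    def __init__(self):
        self.commits = []  # list of (message, frozenset of changes)

    def commit(self, message, changes):
        self.commits.append((message, frozenset(changes)))

    def amend(self, extra_changes):
        # Fold additional changes into the most recent commit.
        message, changes = self.commits.pop()
        self.commits.append((message, changes | frozenset(extra_changes)))

# "git add A; git add B; git commit" with a staging area...
staged = Repo()
staged.commit("my change", ["A", "B"])

# ...ends in the same state as commit + amend without one.
amended = Repo()
amended.commit("my change", ["A"])
amended.amend(["B"])

assert staged.commits == amended.commits
```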

To the average user, the staging area is a complicated concept. I'm a power user. I understand its purpose and how to harness its power. Yet when I use Mercurial (which doesn't have a staging area), I don't miss the staging area at all. Instead, I learn that all operations involving the staging area can be modeled as other fundamental primitives (like commit amend) that you are likely to encounter anyway. The staging area therefore constitutes an unnecessary burden and cognitive load on users. While powerful, its complexity and incurred confusion does not justify its existence in the default Git configuration. The staging area is a power-user feature and should be opt-in by default.

Branches and Remotes Management is Complex and Time-Consuming

When I first used Git (coming from CVS and Subversion), I thought branches and remotes were incredible because they enabled new workflows that allowed you to easily track multiple lines of work across many repositories. And ~10 years later, I still believe the workflows they enable are important. However, having amassed a broader perspective, I also believe their implementation is poor and this unnecessarily confuses many users and wastes the time of all users.

My initial zen moment with Git - the time when Git finally clicked for me - was when I understood Git's object model: that Git is just a content-indexed key-value store consisting of different object types (blobs, trees, and commits) that have a particular relationship with each other. Refs are symbolic names pointing to Git commit objects. And Git branches - both local and remote - are just refs having a well-defined naming convention (refs/heads/<name> for local branches and refs/remotes/<remote>/<name> for remote branches). Even tags and notes are defined via refs.

Refs are a necessary primitive in Git because the Git storage model is to throw all objects into a single, key-value namespace. Since the store is content indexed and the key name is a cryptographic hash of the object's content (which for all intents and purposes is random gibberish to end-users), the Git store by itself is unable to locate objects. If all you had was the key-value store and you wanted to find all commits, you would need to walk every object in the store and read it to see if it is a commit object. You'd then need to buffer metadata about those objects in memory so you could reassemble them into say a DAG to facilitate looking at commit history. This approach obviously doesn't scale. Refs short-circuit this process by providing pointers to objects of importance. It may help to think of the set of refs as an index into the Git store.

Refs also serve another role: as guards against garbage collection. I won't go into details about loose objects and packfiles, but it's worth noting that Git's key-value store also behaves in ways similar to a generational garbage collector like you would find in programming languages such as Java and Python. The important thing to know is that Git will garbage collect (read: delete) objects that are unused. And the mechanism it uses to determine which objects are unused is to iterate through refs and walk all transitive references from that initial pointer. If there is an object in the store that can't be traced back to a ref, it is unreachable and can be deleted.
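The reachability rule can be sketched in a few lines. The commit graph below is hypothetical, and real Git also walks trees and blobs, not just commit parents:

```python
# Hypothetical commit graph: commit id -> list of parent ids.
parents = {"c3": ["c2"], "c2": ["c1"], "c1": [], "orphan": []}

# Refs are the garbage-collection roots, using the naming convention
# described above.
refs = {
    "refs/heads/master": "c3",            # local branch
    "refs/remotes/origin/master": "c2",   # remote-tracking branch
}

def reachable(refs, parents):
    """Every object transitively reachable from some ref."""
    seen, stack = set(), list(refs.values())
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(parents[node])
    return seen

live = reachable(refs, parents)
assert live == {"c1", "c2", "c3"}
assert "orphan" not in live  # unreachable, so it may be deleted
```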

Reflogs maintain the history of a value for a ref: for each ref they contain a log of what commit it was pointing to, when that pointer was established, who established it, etc. Reflogs serve two purposes: facilitating undoing a previous action and holding a reference to old data to prevent it from being garbage collected. The two use cases are related: if you don't care about undo, you don't need the old reference to prevent garbage collection.

This design of Git's store is actually quite sensible. It's not perfect (nothing is). But it is a solid foundation to build a version control tool (or even other data storage applications) on top of.

The title of this section has to do with sub-optimal branches and remotes management. But I've hardly said anything about branches or remotes! And this leads me to my main complaint about Git's branches and remotes: that they are very thin veneer over refs. The properties of Git's underlying key-value store unnecessarily bleed into user-facing concepts (like branches and remotes) and therefore dictate sub-optimal practices. This is what's referred to as a leaky abstraction.

I'll give some examples.

As I stated above, many users treat version control as a save file step in their workflow. I believe that any step that interferes with users saving their work is user hostile. This even includes writing a commit message! I already argued that the staging area significantly interferes with this critical task. Git branches do as well.

If we were designing a version control tool from scratch (or if you were a new user to version control), you would probably think that a sane feature/requirement would be to update to any revision and start making changes. In Git speak, this would be something like git checkout b201e96f, make some file changes, git commit. I think that's a pretty basic workflow requirement for a version control tool. And the workflow I suggested is pretty intuitive: choose the thing to start working on, make some changes, then save those changes.

Let's see what happens when we actually do this:

$ git checkout b201e96f
Note: checking out 'b201e96f'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at b201e96f94... Merge branch 'rs/config-write-section-fix' into maint

$ echo 'my change' >>
$ git commit -a -m 'my change'
[detached HEAD aeb0c997ff] my change
 1 file changed, 1 insertion(+)

$ git push indygreg
fatal: You are not currently on a branch.
To push the history leading to the current (detached HEAD)
state now, use

    git push indygreg HEAD:<name-of-remote-branch>

$ git checkout master
Warning: you are leaving 1 commit behind, not connected to
any of your branches:

  aeb0c997ff my change

If you want to keep it by creating a new branch, this may be a good time
to do so with:

 git branch <new-branch-name> aeb0c997ff

Switched to branch 'master'
Your branch is up to date with 'origin/master'.

I know what all these messages mean because I've mastered Git. But if you were a newcomer (or even a seasoned user), you might be very confused. Just so we're on the same page, here is what's happening (along with some commentary).

When I run git checkout b201e96f, Git is trying to tell me that I'm potentially doing something that could result in the loss of my data. A golden rule of version control tools is don't lose the user's data. When I run git checkout, Git should be stating the risk for data loss very clearly. But instead, the If you want to create a new branch sentence is hiding this fact by instead phrasing things around retaining commits you create rather than the possible loss of data. It's up to the user to make the connection that retaining commits you create actually means don't eat my data. Preventing data loss is critical and Git should not mince words here!

The git commit seems to work like normal. However, since we're in a detached HEAD state (a phrase that is likely gibberish to most users), that commit isn't referred to by any ref, so it can be lost easily. Git should be telling me that I just committed something it may not be able to find in the future. But it doesn't. Again, Git isn't being as protective of my data as it needs to be.

The failure in the git push command is essentially telling me I need to give things a name in order to push. Pushing is effectively remote save. And I'm going to apply my reasoning about version control tools not interfering with save to pushing as well: Git is adding an extra barrier to remote save by refusing to push commits without a branch attached and by doing so is being user hostile.

Finally, we git checkout master to move to another commit. Here, Git is actually doing something halfway reasonable. It is telling me I'm leaving commits behind, which commits those are, and the command to use to keep those commits. The warning is good but not great. I think it needs to be stronger to reflect the risk of data loss if that suggested Git command isn't executed. (Of course, the reflog for HEAD will ensure that data isn't immediately deleted. But users shouldn't need to involve reflogs to not lose data that wasn't rewritten.)
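As a concrete sketch of that reflog escape hatch (throwaway repository; the branch name keep-my-work is made up), here is how a commit left behind in detached HEAD state can be rescued:

```shell
git init -q demo-reflog && cd demo-reflog
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m initial
git checkout -q --detach HEAD
git commit -q --allow-empty -m 'my change'
lost=$(git rev-parse HEAD)
git checkout -q -                 # back on the branch; the commit is now dangling
git reflog -3                     # HEAD's reflog still records the lost commit
git branch keep-my-work "$lost"   # give it a ref so it can't be garbage collected
```

The point stands, though: the user shouldn't have to know this incantation to avoid losing work.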

The point I want to make is that Git doesn't allow you to just update and save. Because its dumb store requires pointers to relevant commits (refs) and because that requirement isn't abstracted away or paved over by user-friendly features in the frontend, Git is effectively requiring end-users to define names (branches) for all commits. If you fail to define a name, it gets a lot harder to find your commits, exchange them, and Git may delete your data. While it is technically possible to not create branches, the version control tool is essentially unusable without them.

When local branches are exchanged, they appear as remote branches to others. Essentially, you give each instance of the repository a name (the remote). And branches/refs fetched from a named remote appear as a ref in the ref namespace for that remote. e.g. refs/remotes/origin holds refs for the origin remote. (Git allows you to not have to specify the refs/remotes part, so you can refer to e.g. refs/remotes/origin/master as origin/master.)

Again, if you were designing a version control tool from scratch or you were a new Git user, you'd probably think remote refs would make good starting points for work. For example, if you know you should be saving new work on top of the master branch, you might be inclined to begin that work by running git checkout origin/master. But like our specific-commit checkout above:

$ git checkout origin/master
Note: checking out 'origin/master'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 95ec6b1b33... RelNotes: the eighth batch

This is the same message we got for a direct checkout. But we did supply a ref/remote branch name. What gives? Essentially, Git tries to enforce that the refs/remotes/ namespace is read-only and only updated by operations that exchange data with a remote, namely git fetch, git pull, and git push.

For this to work correctly, you need to create a new local branch (which initially points to the commit that refs/remotes/origin/master points to) and then switch/activate that local branch.
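A minimal sketch of that dance (repository and branch names here are hypothetical):

```shell
# Upstream with one commit, then a clone of it
git init -q upstream-repo && git -C upstream-repo -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m initial
git clone -q upstream-repo clone-repo && cd clone-repo
# refs/remotes/origin/* is read-only; create a local branch off it, then work there
git checkout -q -b my-feature origin/HEAD
git symbolic-ref --short HEAD    # prints: my-feature
```

Only after that extra branch-creation step can you commit and push normally.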

I could go on talking about all the subtle nuances of how Git branches are managed. But I won't.

If you've used Git, you know you need to use branches. You may or may not recognize just how frequently you have to type a branch name into a git command. I guarantee that if you are familiar with version control tools and workflows that aren't based on having to manage refs to track data, you will find Git's forced usage of refs and branches a bit absurd. I half jokingly refer to Git as Game of Refs. I say that because coming from Mercurial (which doesn't require you to name things), Git workflows feel to me like all I'm doing is typing the names of branches and refs into git commands. I feel like I'm wasting my precious time telling Git the names of things only because this is necessary to placate the leaky abstraction of Git's storage layer which requires references to relevant commits.

Git and version control doesn't have to be this way.

As I said, my Mercurial workflow doesn't rely on naming things. Unlike Git, Mercurial's store has an explicit (not shared) storage location for commits (changesets in Mercurial parlance). And this data structure is ordered, meaning a changeset later always occurs after its parent/predecessor. This means that Mercurial can open a single file/index to quickly find all changesets. Because Mercurial doesn't need pointers to commits of relevance, names aren't required.

My Zen of Mercurial moment came when I realized you didn't have to name things in Mercurial. Having used Git before Mercurial, I was conditioned to always be naming things. This is the Git way after all. And, truth be told, it is common to name things in Mercurial as well. Mercurial's named branches were the way to do feature branches in Mercurial for years. Some used the MQ extension (essentially a port of quilt), which also requires naming individual patches. Git users coming to Mercurial were missing Git branches and Mercurial's bookmarks were a poor port of Git branches.

But recently, more and more Mercurial users have been coming to the realization that names aren't really necessary. If the tool doesn't actually require naming things, why force users to name things? As long as users can find the commits they need to find, do you actually need names?

As a demonstration, my Mercurial workflow leans heavily on the hg show work and hg show stack commands. You will need to enable the show extension by putting the following in your hgrc config file to use them:

[extensions]
show =

Running hg show work (I have also set up a config alias so I can type hg swork) finds all in-progress changesets and other likely-relevant changesets (those with names and DAG heads). It prints a concise DAG of those changesets:

hg show work output
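As an aside, the swork shorthand mentioned above can be defined with a command alias in the same hgrc file (the alias name is just my own choice):

```ini
[alias]
swork = show work
```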

And hg show stack shows just the current line of work and its relationship to other important heads:

hg show stack output

Aside from the @ bookmark/name set on that top-most changeset, there are no names! (That @ comes from the remote repository, which has set that name.)

Outside of code archeology workflows, hg show work shows the changesets I care about 95% of the time. With all I care about (my in-progress work and possible rebase targets) rendered concisely, I don't have to name things because I can just find whatever I'm looking for by running hg show work! Yes, you need to run hg show work, visually scan for what you are looking for, and copy a (random) hash fragment into a number of commands. This sounds like a lot of work. But I believe it is far less work than naming things. Only when you practice this workflow do you realize just how much time you actually spend finding and then typing names into hg and - especially - git commands! The ability to just hg update to a changeset and commit without having to name things is just so liberating. It feels like my version control tool is putting up fewer barriers and letting me work quickly.

Another benefit of hg show work and hg show stack is that they present a concise DAG visualization to users. This helps educate users about the underlying shape of repository data. When you see connected nodes on a graph and how they change over time, it makes it a lot easier to understand concepts like merge and rebase.

This nameless workflow may sound radical. But that's because we're all conditioned to naming things. I initially thought it was crazy as well. But once you have a mechanism that gives you rapid access to data you care about (hg show work in Mercurial's case), names become very optional. Now, a pure nameless workflow isn't without its limitations. You want names to identify the main targets for work (e.g. the master branch). And when you exchange work with others, names are easier to work with, especially since names survive rewriting. But in my experience, most of my commits are only exchanged with me (synchronizing my in-progress commits across devices) and with code review tools (which don't really need names and can operate against raw commits). My most frequent use of names comes when I'm in repository maintainer mode and I need to ensure commits have names for others to reference.

Could Git support nameless workflows? In theory it can.

Git needs refs to find relevant commits in its store. And the wire protocol uses refs to exchange data. So refs have to exist for Git to function (assuming Git doesn't radically change its storage and exchange mechanisms to mitigate the need for refs, but that would be a massive change and I don't see this happening).

While there is a fundamental requirement for refs to exist, this doesn't necessarily mean that user-facing names must exist. The reason that we need branches today is because branches are little more than a ref with special behavior. It is theoretically possible to invent a mechanism that transparently maps nameless commits onto refs. For example, you could create a refs/nameless/ namespace that was automatically populated with DAG heads that didn't have names attached. And Git could exchange these refs just like it can branches today. It would be a lot of work to think through all the implications and to design and implement support for nameless development in Git. But I think it is possible.
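For illustration only, the hypothetical refs/nameless/ namespace could be emulated with today's plumbing commands; nothing here is a real Git feature, just a sketch of the idea:

```shell
git init -q demo-nameless && cd demo-nameless
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m initial
git checkout -q --detach
git commit -q --allow-empty -m 'anonymous work'
# Pin the anonymous DAG head under a hypothetical refs/nameless/ namespace,
# keyed by its own hash so no user-chosen name is required
head=$(git rev-parse HEAD)
git update-ref "refs/nameless/$head" "$head"
git for-each-ref refs/nameless/
```

The real work would be making Git populate, exchange, and expire such refs automatically; the plumbing above only shows that the storage layer could accommodate them.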

I encourage the Git community to investigate supporting nameless workflows. Having adopted this workflow in Mercurial, Git's workflow around naming branches feels heavyweight and restrictive to me. Put another way, nameless commits are actually lighter-weight branches than Git branches! To the common user who just wants version control to be a save feature, requiring names establishes a barrier towards that goal. So removing the naming requirement would make Git simpler and more approachable to new users.

Forks aren't the Model You are Looking For

This section is more about hosted Git services (like GitHub, Bitbucket, and GitLab) than Git itself. But since hosted Git services are synonymous with Git and interaction with a hosted Git service is a regular part of a common Git user's workflow, I feel like I need to cover it. (For what it's worth, my experience at Mozilla tells me that a large percentage of people who say I prefer Git or we should use Git actually mean I like GitHub. Git and GitHub/Bitbucket/GitLab are effectively the same thing in the minds of many and anyone finding themselves discussing version control needs to keep this in mind because Git is more than just the command line tool: it is an ecosystem.)

I'll come right out and say it: I think forks are a relatively poor model for collaborating. They are light years better than what existed before. But they are still so far from the turn-key experience that should be possible. The fork hasn't really changed much since the current implementation of it was made popular by GitHub many years ago. And I view this as a general failure of hosted services to innovate.

So we have a shared understanding, a fork (as implemented on GitHub, Bitbucket, GitLab, etc) is essentially a complete copy of a repository (a git clone if using Git) and a fresh workspace for additional value-added services the hosting provider offers (pull requests, issues, wikis, project tracking, release tracking, etc). If you open the main web page for a fork on these services, it looks just like the main project's. You know it is a fork because there are cosmetics somewhere (typically next to the project/repository name) saying forked from.

Before service providers adopted the fork terminology, fork was used in open source to refer to a splintering of a project. If someone or a group of people didn't like the direction a project was taking, wanted to take over ownership of a project because of stagnation, etc, they would fork it. The fork was based on the original (and there may even be active collaboration between the fork and original), but the intent of the fork was to create distance between the original project and its new incarnation. A new entity that was sufficiently independent of the original.

Forks on service providers mostly retain this old school fork model. The fork gets a new copy of issues, wikis, etc. And anyone who forks establishes what looks like an independent incarnation of a project. It's worth noting that the execution varies by service provider. For example, GitHub won't enable Issues for a fork by default, thereby encouraging people to file issues against the upstream project it was forked from. (This is good default behavior.)

And I know why service providers (initially) implemented things this way: it was easy. If you are building a product, it's simpler to just say a user's version of this project is a git clone and they get a fresh database. On a technical level, this meets the traditional definition of fork. And rather than introduce a new term into the vernacular, they just re-purposed fork (albeit with softer connotations, since the traditional fork commonly implied there was some form of strife precipitating a fork).

To help differentiate flavors of forks, I'm going to define the terms soft fork and hard fork. A soft fork is a fork that exists for purposes of collaboration. The differentiating feature between a soft fork and hard fork is whether the fork is intended to be used as its own project. If it is, it is a hard fork. If not - if all changes are intended to be merged into the upstream project and be consumed from there - it is a soft fork.

I don't have concrete numbers, but I'm willing to wager that the vast majority of forks on Git service providers which have changes are soft forks rather than hard forks. In other words, these forks exist purely as a conduit to collaborate with the canonical/upstream project (or to facilitate a short-lived one-off change).

The current implementation of fork - which borrows a lot from its predecessor of the same name - is a good - but not great - way to facilitate collaboration. It isn't great because it technically resembles what you'd expect to see for hard fork use cases even though it is used predominantly with soft forks. This mismatch creates problems.

If you were to take a step back and invent your own version control hosted service and weren't tainted by exposure to existing services and were willing to think a bit beyond making it a glorified frontend for the git command line interface, you might realize that the problem you are solving - the product you are selling - is collaboration as a service, not a Git hosting service. And if your product is collaboration, then implementing your collaboration model around the hard fork model with strong barriers between the original project and its forks is counterproductive and undermines your own product. But this is how GitHub, Bitbucket, GitLab, and others have implemented their product!

To improve collaboration on version control hosted services, the concept of a fork needs to be significantly curtailed. Replacing it should be a UI and workflow that revolves around the central, canonical repository.

You shouldn't need to create your own clone or fork of a repository in order to contribute. Instead, you should be able to clone the canonical repository. When you create commits, those commits should be stored and/or more tightly affiliated with the original project - not inside a fork.

One potential implementation is doable today. I'm going to call it workspaces. Here's how it would work.

There would exist a namespace for refs that can be controlled by the user. For example, on GitHub (where my username is indygreg), if I wanted to contribute to some random project, I would git push my refs somewhere under refs/users/indygreg/ directly to that project's repository. No forking necessary. To contribute to a project, I would just clone its repo then push to my workspace under it. You could do this today by configuring your Git refspec properly. For pushes, it would look something like refs/heads/*:refs/users/indygreg/* (that tells Git to map local refs under refs/heads/ to refs/users/indygreg/ on that remote repository). If this became a popular feature, presumably the Git wire protocol could be taught to advertise it such that Git clients automatically configured themselves to push to user-specific workspaces attached to the original repository.
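Here is roughly what that refspec configuration could look like with today's Git (a local bare repo stands in for the hosted project; the username indygreg comes from the example above):

```shell
git init -q --bare hosted-project.git
git clone -q hosted-project.git contributor && cd contributor
git config user.email dev@example.com && git config user.name dev
# Map every local branch into a per-user workspace namespace on push
git config remote.origin.push 'refs/heads/*:refs/users/indygreg/*'
git commit -q --allow-empty -m 'workspace commit'
git push -q origin
# The server now holds the work under refs/users/indygreg/, no fork involved
git ls-remote origin 'refs/users/indygreg/*'
```

What's missing from current services is the server-side half: access control on the namespace and a UI that surfaces these workspace refs.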

There are several advantages to such a workspace model. Many of them revolve around eliminating forks.

At initial contribution time, no server-side fork is necessary in order to contribute. You would be able to clone and contribute without waiting for or configuring a fork. Or if you can create commits from the web interface, the clone wouldn't even be necessary! Lowering the barrier to contribution is a good thing, especially if collaboration is the product you are selling.

In the web UI, workspaces would also revolve around the source project and not be off in their own world like forks are today. People could more easily see what others are up to. And fetching their work would require typing in their username as opposed to configuring a whole new remote. This would bring communities closer and hopefully lead to better collaboration.

Not requiring forks also eliminates the need to synchronize your fork with the upstream repository. I don't know about you, but one of the things that bothers me about the Game of Refs that Git imposes is that I have to keep my refs in sync with the upstream refs. When I fetch from origin and pull down a new master branch, I need to git merge that branch into my local master branch. Then I need to push that new master branch to my fork. This is quite tedious. And it is easy to merge the wrong branches and get your branch state out of whack. There are better ways to map remote refs into your local names to make this far less confusing.

Another win here is not having to push and store data multiple times. When working on a fork (which is a separate repository), after you git fetch changes from upstream, you need to eventually git push those into your fork. If you've ever worked on a large repository and didn't have a super fast Internet connection, you may have been stymied by having to git push large amounts of data to your fork. This is quite annoying, especially for people with slow Internet connections. Wouldn't it be nice if that git push only pushed the data that was truly new and didn't already exist somewhere else on the server? A workspace model where development all occurs in the original repository would fix this. As a bonus, it would make the storage problem on servers easier because you would eliminate thousands of forks and you probably wouldn't have to care as much about data duplication across repos/clones because the version control tool solves a lot of this problem for you, courtesy of having all data live alongside or in the original repository instead of in a fork.

Another win from workspace-centric development would be the potential to do more user-friendly things after pull/merge requests are incorporated in the official project. For example, the ref in your workspace could be deleted automatically. This would ease the burden on users to clean up after their submissions are accepted. Again, instead of mashing keys to play the Game of Refs, this would all be taken care of for you automatically. (Yes, I know there are scripts and shell aliases to make this more turn-key. But user-friendly behavior shouldn't have to be opt-in: it should be the default.)

But workspaces aren't all rainbows and unicorns. There are access control concerns. You probably don't want users able to mutate the workspaces of other users. Or do you? You can make a compelling case that project administrators should have that ability. And what if someone pushes bad or illegal content to a workspace and you receive a cease and desist? Can you take down just the offending workspace while complying with the order? And what happens if the original project is deleted? Do all its workspaces die with it? These are not trivial concerns. But they don't feel impossible to tackle either.

Workspaces are only one potential alternative to forks. And I can come up with multiple implementations of the workspace concept, although many of them are constrained by current features in the Git wire protocol. But Git is (finally) getting a more extensible wire protocol, so hopefully this will enable nice things.

I challenge Git service providers like GitHub, Bitbucket, and GitLab to think outside the box and implement something better than how forks are implemented today. It will be a large shift. But I think users will appreciate it in the long run.


Git is a ubiquitous version control tool. But it is frequently lampooned for its poor usability and documentation. We even have research papers telling us which parts are bad. Nobody I know has had a pleasant initial experience with Git. And it is clear that few people actually understand Git: most just know the command incantations they need to know to accomplish a small set of common activities. (If you are such a person, there is nothing to be ashamed about: Git is a hard tool.)

Popular Git-based hosting and collaboration services (such as GitHub, Bitbucket, and GitLab) exist. While they've made strides to make it easier to commit data to a Git repository (I purposefully avoid saying use Git because the most usable tools seem to avoid the git command line interface as much as possible), they are often a thin veneer over Git itself (see forks). And Git is a thin veneer over a content indexed key-value store (see forced usage of branches).

As an industry, we should be concerned about the lousy usability of Git and the tools and services that surround it. Some may say that Git - with its near monopoly over version control mindset - is a success. I have a different view: I think it is a failure that a tool with a user experience this bad has achieved the success it has.

The cost to Git's poor usability can be measured in tens if not hundreds of millions of dollars in time people have wasted because they couldn't figure out how to use Git. Git should be viewed as a source of embarrassment, not a success story.

What's really concerning is that the usability problems of Git have been known for years. Yet it is as popular as ever and there have been few substantial usability improvements. We do have some alternative frontends floating around. But these haven't caught on.

I'm at a loss to understand how an open source tool as popular as Git has remained so mediocre for so long. The source code is out there. Anybody can submit a patch to fix it. Why is it that so many people get tripped up by the same poor usability issues years after Git became the common version control tool? It certainly appears that as an industry we have been unable or unwilling to address systemic deficiencies in a critical tool. Why this is, I'm not sure.

Despite my pessimism about Git's usability and its poor track record of being attentive to the needs of people who aren't power users, I'm optimistic that the future will be brighter. While the ~7000 words in this post pale in comparison to the aggregate word count that has been written about Git, hopefully this post strikes a nerve and causes positive change. Just because one generation has toiled with the usability problems of Git doesn't mean the next generation has to suffer through the same. Git can be improved and I encourage that change to happen. The three issues above and their possible solutions would be a good place to start.

Planet MozillaFirefox 58 Beta 10 Testday Results

Hello everyone,

Last Friday – December 8th – we held a Testday event, for Firefox 58 Beta 10.

Thank you all for helping us make Mozilla a better place!

From India team: Mohammed Adam, Surentharan.R.A, B.Krishnaveni, Aishwarya Narasimhan, Nagarajan Rajamanickam, Baranitharan, Fahima Zulfath, Andal_Narasimhan, Amit Kumar Singh.

From Bangladesh team: Rezaul Huque Nayeem, Tanvir Mazharul, Tanvir Rahman, Maruf Rahman, Sontus Chandra Anik.

– several test cases executed for Media Recorder Refactor and Tabbed Browser;

– 4 bugs verified: 1393237, 1399397, 1403593, and 1405319;

Thanks for another successful testday! 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Planet MozillaTest Pilot in 2018: Lessons Learned from Screenshots

Test Pilot in 2018: Lessons learned from graduating Screenshots into Firefox

Wil and I were talking about the Bugzilla vs Github question for Screenshots a couple of days ago, and I have to admit that I’ve come around to “let’s just use Bugzilla for everything, and just ride the trains, and work with the existing processes as much as possible.”

I think it’s important to point out that this is a complete departure from my original point of view. Getting an inside view of how and why Firefox works the way it does has changed my mind. Everything just moves slower, with so many good reasons for doing so (good topic for another blog post). Given that our goal is to hand Screenshots off, just going with the existing processes, minimizing friction, is the way to go.

If Test Pilot’s goals include landing stuff in Firefox, what does this mean for the way that we run earlier steps in the product development process?

Suggestions for experiments that the Test Pilot team will graduate into Firefox

Ship maximal, not minimal, features

I don’t think we should plan on meaningful iteration once a feature lands in Firefox. It’s just fundamentally too slow to iterate rapidly, and it’s way too hard for a very small team to ship features faster than that default speed (again, many good reasons for that friction).

The next time we graduate something into Firefox, we should plan to ship much more than a minimum viable product, because we likely won’t get far past that initial landing point.

When in Firefox, embrace the Firefox ways

Everything we do, once an experiment gets approval to graduate, should be totally Firefox-centric. Move to Bugzilla for bugs (except, maybe, for server bugs). Mozilla-central for code, starting with one huge import from Github (again, except for server code). Git-cinnabar works really well, if you prefer git over mercurial. We have committers within the team now, and relationships with reviewers, so the code side of things is pretty well sorted.

Similarly for processes: we should just go with the existing processes to the best of our ability, which is to say, constantly ask the gatekeepers if we’re doing it right. Embrace the fact that everything in Firefox is high-touch, and use existing personal relationships to ask early and often if we’ve missed any important steps. We will always miss something, whether it’s new rules for some step, or neglecting to get signoff from some newly-created team, but we can plan for that in the schedule. I think we’ve hit most of the big surprises in shipping Screenshots.

Aim for bigger audiences in Test Pilot

Because it’s difficult to iterate on features when code is inside Firefox, we should make our Test Pilot audience as close to a release audience as possible. We want to aim for the everyday users, not the early adopters. I think we can do this by just advertising Test Pilot experiments more heavily.

By gathering data from a larger audience, our data will be more representative of the release audience, and give us a better chance of feature success.

Aim for web-flavored features / avoid dramatic changes to the Firefox UI

Speaking as the person who did “the weird Gecko stuff” on both Universal Search and Min-Vid, doing novel things with the current Firefox UI is hard, for all the reasons Firefox things are hard: the learning curve is significant and it’s high-touch. Knowledge of how the code works is confined to a small group of people, the docs aren’t great, and learning how things work requires reading the source code, plus asking for help on IRC.

Given that our team’s strengths lie in web development, we will be more successful if our features focus on the webby things: cross-device, mobile, or cloud integration; bringing the ‘best of the web’ into the browser. This is stuff we’re already good at, stuff that could be a differentiator for Firefox just as much as new Firefox UI, and we can iterate much more quickly on server code.

This said, if Firefox Product wants to reshape the browser UI itself in powerful, unexpected, novel ways, we can do it, but we should have some Firefox or Platform devs committed to the team for a given experiment.

Planet MozillaExperimenting with AR and the Web on iOS


Today, we’re happy to announce that the WebXR Viewer app is available for download on iTunes. In our recent announcement of our Mixed Reality program, we talked about some explorations we were doing to extend WebVR to include AR and MR technology. In that post, we pointed at an iOS WebXR Viewer app we had developed to allow us to experiment with these ideas on top of Apple’s ARKit. While the app is open source, we wanted to let developers experiment with web-based AR without having to build the app themselves.

The WebXR Viewer app lets you view web pages created with the webxr-polyfill Javascript library, an experimental library we created as part of our explorations. This app is not intended to be a full-fledged web browser, but rather a way to test, demonstrate and share AR experiments created with web technology.

Code written with the webxr-polyfill runs in this app, as well as Google’s experimental WebARonARCore APK on Android. We are working on supporting other AR and VR browsers, including WebVR on desktop.


We’ve also been working on integrating the webxr-polyfill into the popular three.js graphics library and the A-Frame framework to make it easy for three.js and A-Frame developers to try out these ideas. We are actively working on these libraries and using them in our own projects; while they are works-in-progress, each contains some simple examples to help you get started with them. We welcome feedback and contributions!


What’s Next?

We are not the only company interested in how WebVR could be extended to support AR and MR; for example, Google released a WebAR extension to WebVR with the WebARonARCore application mentioned above, and discussions on the WebVR standards documents have been lively around these issues.

As a result, the companies developing the WebVR API (including us) recently decided to rename the WebVR 2.0 proposal to the WebXR Device API and rename the WebVR Community Group to the Immersive Web Community Group, to reflect broad agreement that AR and VR devices should be exposed through a common API. The WebXR API we created was based on WebVR 2.0; we will be aligning it with the WebXR Device API as it develops and continue using it to explore ideas for exposing additional AR concepts into WebXR.

We’ve been working on this app since earlier this fall, before the WebVR community decided to move from WebVR to WebXR, and we look forward to continuing to update the app and libraries as the WebXR Device API is developed. We will continue to use this app as a platform for our experiments with WebXR on iOS using ARKit, and welcome others (both inside and outside the Immersive Web Community Group) to work with us on the app, the javascript libraries, and demonstrations of how the web can support AR and VR moving forward.

Planet Mozilla: What’s on the new Firefox menus? Speed

If you’ve downloaded the new Firefox, you’ve probably noticed that we did some redecorating. Our new UI design (we call it Photon) is bright, bold, and inspired by the speed … Read more

The post What’s on the new Firefox menus? Speed appeared first on The Firefox Frontier.

Planet Mozilla: html5lib-python 1.0 released!


Yesterday, Geoffrey released html5lib 1.0 [1]! The changes aren't wildly interesting.

The more interesting part for me is how the release happened. I'm going to spend the rest of this post talking about that.

[1]Technically there was a 1.0 release followed by a 1.0.1 release because the 1.0 release had issues.

The story of Bleach and html5lib

I work on Bleach which is a Python library for sanitizing and linkifying text from untrusted sources for safe usage in HTML. It relies heavily on another library called html5lib-python. Most of the work that I do on Bleach consists of figuring out how to make html5lib do what I need it to do.

Over the last few years, maintainers of the html5lib library have been working towards a 1.0. Those well-meaning efforts got them into a versioning model which had some unenthusing properties. I would often talk to people about how I was having difficulties with Bleach and html5lib 0.99999999 (8 9s) and I'd have to mentally count how many 9s I had said. It was goofy [2].

In an attempt to deal with the effects of the versioning, there's a parallel set of versions that start with 1.0b. Because there are two sets of versions, it was a total pain in the ass to correctly specify which versions of html5lib that Bleach worked with.

While working on Bleach 2.0, I bumped into a few bugs and upstreamed a patch for at least one of them. That patch sat in the PR queue for months. That's what got me wondering--is this project dead?

I tracked down Geoffrey and talked with him a bit on IRC. He seems to be the only active maintainer. He was really busy with other things, html5lib doesn't pay at all, there's a ton of stuff to do, he's burned out, and recently there have been spates of negative comments in the issues and PRs. Generally the project had a lot of stop energy.

Some time in August, I offered to step up as an interim maintainer and shepherd html5lib to 1.0. The goals being:

  1. land or close as many old PRs as possible
  2. triage, fix, and close as many issues as possible
  3. clean up testing and CI
  4. clean up documentation
  5. ship 1.0 which ends the versioning issues
[2]Many things in life are goofy.

Thoughts on being an interim maintainer

I see a lot of open source projects that are in trouble in the sense that they don't have a critical mass of people and energy. When the sole part-time volunteer maintainer burns out, the project languishes. Then the entitled users show up, complain, demand changes, and talk about how horrible the situation is and everyone should be ashamed. It's tough--people are frustrated and then do a bunch of things that make everything so much worse. How do projects escape the raging inferno death spiral?

For a while now, I've been thinking about a model for open source projects where someone else pops in as an interim maintainer for a short period of time with specific goals and then steps down. Maybe this alleviates users' frustrations? Maybe this gives the part-time volunteer burned-out maintainer a breather? Maybe this can get the project moving again? Maybe the temporary interim maintainer can make some of the hard decisions that a regular long-term maintainer just can't?

I wondered if I should try that model out here. In the process of convincing myself that stepping up as an interim maintainer was a good idea [3], I looked at projects that rely on html5lib [4]:

  • pip vendors it
  • Bleach relies upon it heavily, so anything that uses Bleach uses html5lib (jupyter, hypermark, readme_renderer, tensorflow, ...)
  • most web browsers (Firefox, Chrome, servo, etc) have it in their repositories because web-platform-tests uses it

I talked with Geoffrey and offered to step up with these goals in mind.

I started with cleaning up the milestones in GitHub. I bumped everything from the 0.9999999999 (10 9s) milestone which I determined will never happen into a 1.0 milestone. I used this as a bucket for collecting all the issues and PRs that piqued my interest.

I went through the issue tracker and triaged all the issues. I tried to get steps to reproduce and any other data that would help resolve the issue. I closed some issues I didn't think would ever get resolved.

I triaged all the pull requests. Some of them had been open for a long time. I apologized to people who had spent their time to upstream a fix that sat around for years. In some cases, the changes had bitrotted severely and had to be redone [5].

Then I plugged away at issues and pull requests for a couple of months and pushed anything out of the milestone that wasn't well-defined or something we couldn't fix in a week.

At the end of all that, Geoffrey released version 1.0 and here we are today!

[3]I have precious little free time, so this decision had sweeping consequences for my life, my work, and people around me.
[4]Recently, I discovered's pretty amazing project. They have a page for html5lib. I had written a (mediocre) tool that does vaguely similar things.
[5]This is what happens on projects that don't have a critical mass of energy/people. It sucks for everyone involved.

Conclusion and thoughts

I finished up as interim maintainer for html5lib. I don't think I'm going to continue actively as a maintainer. Yes, Bleach uses it, but I've got other things I should be doing.

I think this was an interesting experiment. I also think it was a successful experiment in regards to achieving my stated goals, but I don't know if it gave the project much momentum to continue forward.

I'd love to see other examples of interim maintainers stepping up, achieving specific goals, and then stepping down again. Does it bring in new people to the community? Does it affect the raging inferno death spiral at all? What kinds of projects would benefit from this the most? What kinds of projects wouldn't benefit at all?

Planet Mozilla: Mozilla Awards Research Grants to Fund Top Research Projects

We are happy to announce the results of the Mozilla Research Grant program for the second half of 2017. This was a competitive process, with over 70 applicants. After three rounds of judging, we selected a total of fourteen proposals, ranging from building tools to support open web platform projects like Rust and WebAssembly to designing digital assistants for low- and middle- income families and exploring decentralized web projects in the Orkney Islands. All these projects support Mozilla’s mission to make the Internet safer, more empowering, and more accessible.

The Mozilla Research Grants program is part of Mozilla’s Emerging Technologies commitment to being a world-class example of inclusive innovation and impact culture, and reflects Mozilla’s commitment to open innovation, continuously exploring new possibilities with and for diverse communities.

  • Zhendong Su, University of California, Davis: Practical, Rigorous Testing of the Mozilla Rust and bindgen Compilers
  • Ross Tate, Cornell University: Inferable Typed WebAssembly
  • Laura Watts, IT University of Copenhagen: Shaping community-based managed services (‘Orkney Cloud Saga’)
  • Svetlana Yarosh, University of Minnesota: Children & Parent Using Speech Interfaces for Informational Queries
  • Serge Egelman, UC Berkeley / International Computer Science Institute: Towards Usable IoT Access Controls in the Home
  • Alexis Hiniker, University of Washington: Understanding Design Opportunities for In-Home Digital Assistants for Low- and Middle-Income Families
  • Blase Ur, University of Chicago: Improving Communication About Privacy in Web Browsers
  • Wendy Ju, Cornell Tech: Video Data Corpus of People Reacting to Chatbot Answers to Enable Error Recognition and Repair
  • Katherine Isbister, University of California Santa Cruz: Designing for VR Publics: Creating the right interaction infrastructure for pro-social connection, privacy, inclusivity, and co-mingling in social VR
  • Sanjeev Arora, Princeton University and the Institute for Advanced Study: Compact representations of meaning of natural language: Toward a rigorous and interpretable study
  • Rachel Cummings, Georgia Tech: Differentially Private Analysis of Growing Datasets
  • Tongping Liu, University of Texas at San Antonio: Guarder: Defending Heap Vulnerabilities with Flexible Guarantee and Better Performance


The Mozilla Foundation will also be providing grants in support of two additional proposals:


  • J. Nathan Matias, CivilServant (incubated by Global Voices): Preventing online harassment with Community A/B Test Systems
  • Donghee Yvette Wohn, New Jersey Institute of Technology: Dealing with Harassment: Moderation Practices of Female and LGBT Live Streamers


Congratulations to all successfully funded applicants! The 2018H1 round of grant proposals will open in the Spring; more information is available at

Jofish Kaye, Principal Research Scientist, Emerging Technologies, Mozilla

The post Mozilla Awards Research Grants to Fund Top Research Projects appeared first on The Mozilla Blog.

Planet Mozilla: Good Riddance to AUFS

For over a year, AUFS - a layering filesystem for Linux - has been giving me fits.

As I initially measured last year, AUFS has... suboptimal performance characteristics. The crux of the problem is that AUFS obtains a global lock in the Linux kernel (at least version 3.13) for various I/O operations, including stat(). If you have more than a couple of active CPU cores, the overhead from excessive kernel locking inside _raw_spin_lock() can add more overhead than extra CPU cores add capacity. That's right: under certain workloads, adding more CPU cores actually slows down execution due to cores being starved waiting for a global lock in the kernel!
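One rough way to observe this kind of contention is to hammer stat() from a growing number of threads and watch whether wall time scales. This is only a sketch, not the methodology used for the original measurements; the path and iteration counts are arbitrary:

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

def hammer_stat(path, calls):
    # Each os.stat() is a syscall; on AUFS (Linux 3.13) it contends on a
    # global kernel lock, so adding threads stops helping.
    for _ in range(calls):
        os.stat(path)

def timed(threads, calls_per_thread=50_000, path="."):
    """Wall time for `threads` workers each issuing `calls_per_thread` stats."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        for _ in range(threads):
            pool.submit(hammer_stat, path, calls_per_thread)
    return time.perf_counter() - start

if __name__ == "__main__":
    for n in (1, 2, 4, 8):
        print(f"{n} thread(s): {timed(n):.2f}s")
```

On a healthy filesystem, wall time should stay roughly flat as threads increase (the work parallelizes); under a global kernel lock it grows with thread count instead.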

If that weren't enough, AUFS can also violate POSIX filesystem guarantees under load. It appears that AUFS sometimes forgets about created files or has race conditions that prevent created files from being visible to readers until many seconds later! I think this issue only occurs when there are concurrent threads creating files.

These two characteristics of AUFS have inflicted a lot of hardship on Firefox's continuous integration. Large parts of Firefox's CI execute in Docker. And the host environment for Docker has historically used Ubuntu 14.04 with Linux 3.13 and Docker using AUFS. AUFS was/is the default storage driver for many versions of Docker. When this storage driver is used, all files inside Docker containers are backed by AUFS unless a Docker volume (a directory bind mounted from the host filesystem - EXT4 in our case) is in play.

When we started using EC2 instances with more CPU cores, we weren't getting a linear speedup for CPU bound operations. Instead, CPU cycles were being spent inside the kernel. Stack profiling showed AUFS as the culprit. We were thus unable to leverage more powerful EC2 instances because adding more cores would only provide marginal to negative gains against significant cost expenditure.

We worked around this problem by making heavy use of Docker volumes for tasks incurring significant I/O. This included version control clones and checkouts.

Somewhere along the line, we discovered that AUFS volumes were also the cause of several random file not found errors throughout automation. Initially, we thought many of these errors were due to bugs in the underlying tools (Mercurial and Firefox's build system were common victims because they do lots of concurrent I/O). When the bugs mysteriously went away after ensuring certain operations were performed on EXT4 volumes, we were able to blame AUFS for the myriad of filesystem consistency problems.

Earlier today, we pushed out a change to upgrade Firefox's CI to Linux 4.4 and switched Docker from AUFS to overlayfs (using the overlay2 storage driver). The improvements exceeded my expectations.

Linux build times have decreased by ~4 minutes, from ~750s to ~510s.

Linux Rust test times have decreased by ~4 minutes, from ~615s to ~380s.

Linux PGO build times have decreased by ~5 minutes, from ~2130s to ~1820s.

And this is just the build side of the world. I don't have numbers off hand, but I suspect many tests also got a nice speedup from this change.

Multiplied by thousands of tasks per day and factoring in the cost to operate these machines, the elimination of AUFS has substantially increased the efficiency (and reliability) of Firefox CI and easily saved Mozilla tens of thousands of dollars per year. And that's just factoring in the savings in the AWS bill. Time is money and people are a lot more expensive than AWS instances (you can run over 3,000 c5.large EC2 instances at spot pricing for what it costs to employ me when I'm on the clock). So the real win here comes from Firefox developers being able to move faster because their builds and tests complete several minutes faster.

In conclusion, if you care about performance or filesystem correctness, avoid AUFS. Use overlayfs instead.

Planet Mozilla: Parameterized Roles

The roles functionality in Taskcluster is a kind of “macro expansion”: given the roles

group:admins -> admin-scope-1
                assume:group:devs
group:devs   -> dev-scope

the scopeset ["assume:group:admins", "my-scope"] expands to

["assume:group:admins", "my-scope", "admin-scope-1", "assume:group:devs", "dev-scope"]
because the assume:group:admins expanded the group:admins role, and that recursively expanded the group:devs role.

However, this macro expansion did not allow any parameters: it was like supporting function calls, but without arguments.

The result is that we have a lot of roles that look the same. For example, project-admin:.. roles all have similar scopes (with the project name included in them), and a big warning in the description saying “DO NOT EDIT”.

Role Parameters

Now we can do better! A role’s scopes can now include <..>. When expanding, this string is replaced by the portion of the scope that matched the * in the roleId. An example makes this clear:

project-admin:* -> assume:hook-id:project-<..>/*

With the above parameterized role in place, we can delete all of the existing project-admin:.. roles: this one will do the job. A client that has assume:project-admin:bugzilla in its scopes will have assume:hook-id:project-bugzilla/* and all the rest in its expandedScopes.

There’s one caveat: a client with assume:project-admin:nss* will have assume:hook-id:project-nss* – note the loss of the trailing /. The * consumes any parts of the scope after the <..>. In practice, as in this case, this is not an issue, but could certainly cause surprise for the unwary.
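The matching and substitution described above can be sketched like this. This is a simplified model, not the actual Taskcluster-Auth code, but it reproduces both the basic substitution and the trailing-* caveat:

```python
def expand_role(role_id, role_scopes, scope):
    """Expand one role for one scope.

    If `scope` (an assume:* scope) matches `role_id`, which may end in
    '*', return the role's scopes with '<..>' replaced by the matched
    parameter; otherwise return None.
    """
    if not scope.startswith("assume:"):
        return None
    target = scope[len("assume:"):]
    if role_id.endswith("*"):
        stem = role_id[:-1]
        if not target.startswith(stem):
            return None
        param = target[len(stem):]
    elif target == role_id:
        param = ""
    else:
        return None

    out = []
    for s in role_scopes:
        i = s.find("<..>")
        if i == -1:
            out.append(s)
        elif param.endswith("*"):
            # a '*' in the parameter consumes the rest of the scope,
            # which is how the trailing '/' gets lost in the nss example
            out.append(s[:i] + param)
        else:
            out.append(s[:i] + param + s[i + len("<..>"):])
    return out

expand_role("project-admin:*", ["assume:hook-id:project-<..>/*"],
            "assume:project-admin:bugzilla")
# → ["assume:hook-id:project-bugzilla/*"]
```

With the scope assume:project-admin:nss*, the matched parameter is nss*, so the sketch yields assume:hook-id:project-nss*, dropping the trailing /* just as described above.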


Parameterized roles seem pretty simple, but they’re not!


Before parameterized roles, the Taskcluster-Auth service would pre-compute the full expansion of every role. That meant that any API call requiring expansion of a set of scopes only needed to combine the expansion of each scope in the set – a linear operation. This avoided a (potentially exponential-time!) recursive expansion, trading some up-front time pre-computing for a faster response to API calls.

With parameterized roles, such pre-computation is not possible. Depending on the parameter value, the expansion of a role may or may not match other roles. Continuing the example above, the scope assume:project:focus:xyz would be expanded when the parameter is focus, but not when the parameter is bugzilla.

The fix was to implement the recursive approach, but in such a way that non-pathological cases have reasonable performance. We use a trie which, given a scope, returns the set of scopes from any matching roles along with the position at which those scopes matched a * in the roleId. In principle, then, we resolve a scopeset by using this trie to expand (by one level) each of the scopes in the scopeset, substituting parameters as necessary, and recursively expand the resulting scopes.

To resolve a scope set, we use a queue to “flatten” the recursion, and keep track of the accumulated scopes as we proceed. We already had some utility functions that allow us to make a few key optimizations. First, it’s only necessary to expand scopes that start with assume: (or, for completeness, things like * or assu*). More importantly, if a scope is already included in the seen scopeset, then we need not enqueue it for recursion – it has already been accounted for.
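Here is a toy version of that queue-based resolution. It does a linear scan over roles rather than using the trie the service actually relies on, but the flattening and seen-set logic is the same idea:

```python
def resolve(scopeset, roles):
    """Expand `scopeset` against `roles`, a dict mapping role ids
    (possibly ending in '*') to scope lists that may contain '<..>'."""
    seen = set(scopeset)
    queue = [s for s in scopeset if s.startswith("assume:")]
    while queue:
        target = queue.pop()[len("assume:"):]
        for role_id, role_scopes in roles.items():
            if role_id.endswith("*") and target.startswith(role_id[:-1]):
                param = target[len(role_id) - 1:]   # portion matched by '*'
            elif target == role_id:
                param = ""
            else:
                continue
            for s in role_scopes:
                s = s.replace("<..>", param)
                if s not in seen:            # already accounted for: skip
                    seen.add(s)
                    if s.startswith("assume:"):
                        queue.append(s)      # "recurse" via the queue
    return sorted(seen)

roles = {
    "group:admins": ["admin-scope-1", "assume:group:devs"],
    "group:devs": ["dev-scope"],
}
resolve(["assume:group:admins", "my-scope"], roles)
# → ['admin-scope-1', 'assume:group:admins', 'assume:group:devs',
#    'dev-scope', 'my-scope']
```

The seen-set is what keeps the loop finite for non-parameterized mutual references: a scope already in the accumulated set is never enqueued again.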

In the end, the new implementation is tens of milliseconds slower for some of the more common queries. While not ideal, in practice that has not been problematic. If necessary, some simple caching might be added, as many expansions repeat exactly.


An advantage of the pre-computation was that it could seek a “fixed point” where further expansion does not change the set of expanded scopes. This allowed roles to refer to one another:

some-role -> assume:another-role
another*  -> assume:some-role

A naïve recursive resolver might loop forever on such an input, but could easily track already-seen scopes and avoid recursing on them again. The situation is much worse with parameterized roles. Consider:

some-role-*    -> assume:another-role-<..>x
another-role-* -> assume:some-role-<..>y

A simple recursive expansion of assume:some-role-abc would result in an infinite set of scopes:

assume:some-role-abc
assume:another-role-abcx
assume:some-role-abcxy
assume:another-role-abcxyx
...
We forbid such constructions using a cycle check, configured to reject only cycles that involve parameters. That permits the former example while prohibiting the latter.
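Such a check can be sketched as a depth-first search that rejects a cycle only if one of its edges is parameterized. The edge-list representation below is hypothetical (the real implementation works directly on role definitions), but it captures the accept/reject distinction between the two examples above:

```python
def has_bad_cycle(edges):
    """`edges` maps a role to a list of (referenced_role, uses_parameter)
    pairs. Return True only for cycles containing a parameterized edge."""
    done = set()

    def dfs(node, pos, flags):
        if node in pos:                  # closed a cycle at index pos[node]
            # the cycle's edges are exactly flags[pos[node]:]
            return any(flags[pos[node]:])
        if node in done:
            return False
        pos = {**pos, node: len(flags)}
        for nxt, param in edges.get(node, ()):
            if dfs(nxt, pos, flags + [param]):
                return True
        done.add(node)
        return False

    return any(dfs(n, {}, []) for n in edges)

# First example above: a cycle, but with no parameters involved -> allowed
ok = {"some-role": [("another*", False)], "another*": [("some-role", False)]}
# Second example: a parameterized cycle -> rejected
bad = {"some-role-*": [("another-role-*", True)],
       "another-role-*": [("some-role-*", True)]}
```

Tracking, per edge, whether a parameter was involved is what lets the check permit the harmless fixed-point case while rejecting the expansion that would grow forever.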

Atomic Modifications

But even that is not enough! The existing implementation of roles stored each role in a row in Azure Table Storage. Azure provides concurrent access to and modification of rows in this storage, so it’s conceivable that two roles which together form a cycle could be added simultaneously. Cycle checks for each row insertion would each see only one of the rows, but the result after both insertions would cause a cycle. Cycles will crash the Taskcluster-Auth service, which will bring down the rest of Taskcluster. Then a lot of people will have a bad day.

To fix this, we moved roles to Azure Blob Storage, putting all roles in a single blob. This service uses ETags to implement atomic modifications, so we can perform a cycle check before committing and be sure that no cyclical configuration is stored.

What’s Next

The parameterized role support is running in production now, but we have not yet updated any roles, aside from a few test roles, to use it. The next steps are to use the support to address a few known weak points in role configuration, including the project administration roles used as an example above.

Planet Mozilla: Add-on recommendations for Firefox users: a prototype recommender system leveraging existing data sources

By: Alessio Placitelli, Ben Miroglio, Jason Thomas, Shell Escalante and Martin Lopatka. With special recognition of the development efforts of Roberto Vitillo who kickstarted this project, Mauro Doglio for massive contributions to the code base during his time at Mozilla, Florian Hartmann, who contributed efforts towards prototyping the ensemble linear combiner, Stuart Colville for coordinating … 

Planet Mozilla: CLI for alerts via Slack

I finally got a chance to scratch an itch today.


When working with bigger ETL jobs, I frequently run into jobs that take hours to run. I usually either step away from the computer or work on something less important while the job runs. I don't have a good …

Planet Mozilla: L10N Report: December Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.


New localizers

  • Amadou Sileymane Guèye has joined the Fulah localization team and has worked relentlessly in helping projects stay up to date in Pontoon. Fun fact: he is known for his high proficiency in the language and also for solving extremely difficult crossword puzzles in Fulah! Welcome!

New community/locales added

Sicilian (scn), Luxembourgish (lb), and Shuar (jiv) started translating Firefox on Pontoon. If you speak any of those languages, get in touch to help!

While Amharic (am) is not a brand new locale, it has recently experienced a revival. The team is going to ship its first localized Mozilla product next week: Focus for Android. Congrats!

New content and projects

What’s new or coming up in Firefox desktop

Wondering why there weren’t new strings to translate for Firefox in the past few weeks? We hit a bit of a roadblock with cross-channel and merge day. Axel is working hard on finding a solution, we’re going to resume exposing new strings as soon as possible.

The current queue consists of about 130 strings, most of them belong to the new about:devtools page. This is the first step in decoupling Firefox Developer Tools from the main application, moving them into an add-on that can be disabled on release and beta channels.

We have also uncovered a couple of widespread (and long-standing) l10n bugs.

PKCS #11 (security) has a peculiar set of limitations for a few strings: once encoded to UTF-8, they need to fit in a 32- or 64-byte buffer. It turns out we had 264 errors in our localizations, accumulated over 15 years!
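Limits like this are about bytes, not characters, which is why non-ASCII localizations overflow so easily. A quick sketch of the check (the 32/64-byte limits are the ones described above; the sample strings are illustrative, not actual Firefox strings):

```python
def fits_pkcs11_buffer(s, limit):
    """True if `s` fits the given byte budget once encoded to UTF-8."""
    return len(s.encode("utf-8")) <= limit

# ASCII is one byte per character, but Cyrillic letters take two and
# many CJK characters take three, so short-looking strings can overflow.
fits_pkcs11_buffer("Software Security Device", 32)    # 24 bytes: fits
fits_pkcs11_buffer("Криптографический модуль", 32)    # 47 bytes: too long
```

A 24-character translation can thus blow a 32-byte budget that the same-length English string fits comfortably.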

A bug has been filed for each of the 71 affected languages, and within 24 hours we were already under 180 errors. Thanks for the quick response!

Incorrect number of plural forms

Plural forms in Firefox and Firefox for Android are obtained using a hack on top of .properties files (plural forms are separated by a semicolon). For example:

#1 tab has arrived from #2;#1 tabs have arrived from #2

English has 2 plural forms, one for singular, and one for all other numbers. The situation is much more complex for other languages, reaching up to 5 or 6 plural forms.

In Russian the same string has 3 forms, each one separated from the other by a semicolon:

С #2 получена #1 вкладка;С #2 получено #1 вкладки;С #2 получено #1 вкладок

It looks like several languages have an incorrect number of plural forms, or a completely wrong plural rule. You should have seen bugs filed if your locale is affected, or fixes made directly in Pontoon when the mistake was easy to identify (e.g. an extra closing semicolon).
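The selection logic behind this semicolon hack can be sketched as follows. This is illustrative rather than the actual Firefox code; the rule functions follow the CLDR plural rules for each language:

```python
def get_plural_form(raw, n, plural_rule):
    """`raw` is the semicolon-separated value from the .properties file;
    `plural_rule` maps a number to a form index for the locale."""
    forms = raw.split(";")
    return forms[plural_rule(n)].replace("#1", str(n))

def english_rule(n):        # 2 forms: one, other
    return 0 if n == 1 else 1

def russian_rule(n):        # 3 forms, per the CLDR plural rules
    if n % 10 == 1 and n % 100 != 11:
        return 0            # 1, 21, 31, ... (вкладка)
    if 2 <= n % 10 <= 4 and not 12 <= n % 100 <= 14:
        return 1            # 2-4, 22-24, ... (вкладки)
    return 2                # everything else (вкладок)

raw = "#1 tab has arrived from #2;#1 tabs have arrived from #2"
get_plural_form(raw, 1, english_rule)   # '1 tab has arrived from #2'
get_plural_form(raw, 5, english_rule)   # '5 tabs have arrived from #2'
```

A locale that ships the wrong number of semicolon-separated forms, or applies the wrong rule function, produces exactly the class of bugs described above: the index either points at the wrong form or falls off the end of the list.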

What’s new or coming up in mobile

Did you notice that Firefox iOS v10 came out, with many more languages added? Come check it out here if you haven’t! Languages added are: Afrikaans (af), Bosnian (bs), Spanish Argentina (es-AR), Galician (gl), Armenian (hy-AM), Interlingua (ia), Khmer (km), Kannada (kn), Malayalam (ml), Odia (or), Punjabi (pa-IN) and Sinhala (si).

If you see that your language is incomplete you can get involved and help! Just reach out to Delphine.

Focus iOS and Android should get a new release some time next week! V4 is right around the corner, and will offer a ton of cool new features. If you don’t have Focus yet, you can find the iOS version here and the Android one here. While usually on a two week release schedule, Focus will take some end of year vacations and come back in January. This means Focus localizers will also have some time off too 🙂

What’s new or coming up in web projects project is finally taking a break! This is after a few weeks of mad rush leading up to the Firefox Quantum launch to add a few more new pages and revisions. Thanks to all of you who have been part of this journey, keeping the site to date despite the tight schedule! site has gone through quite a transformation since the release of the new branding guideline. The logos, color schemes and palettes, layout, and navigation menu all went through major changes. Content style and tone had a major shift to correspond to the visual update. Since June, you have localized and reviewed about 10,000 words, in 20+ new, replacement, and updated pages.

During the “quiet” period, there will be some cleanups in the backend, so obsolete pages will be permanently removed from the dashboards. There might be small updates here and there as a result of the cleanup. Keep an eye on the dashboard for these non-urgent changes.

If you are still catching up with the updates, use the webdashboard for pending projects in your locale.

What’s new or coming up in Foundation projects

On the campaigns side, the Foundation is currently focused on the fundraising and the *Privacy Not Included campaigns.

For fundraising, we should be sending 2 or 3 more emails in December, you should see them appear on the webdashboard. Thanks a lot for all the work you’ve already done for fundraising so far, you’ve helped us a lot! We would be in a far worse position without your help.

This week we’ve worked with the snippets and product teams to launch an improved version of the snippets in Firefox Quantum, making them a bit more prominent and adding back the donate form. Hopefully more people will notice your work and donate eventually!

We are also doing some video testing in English and German, kudos to Michael for his quick help.

Newly published localizer facing documentation

A new document is available with basic instructions on how to use Bugzilla for localizers.


  • We held a Localization Workshop in Kolkata on November 18-19. Here are a few blog posts about this event from Drashti, Bala and Selva.
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Localization impact by the numbers

  • [Snippets]: Since the Quantum launch on Nov 14, and not counting the default onboarding messages all users see when they start 57, the impressions and clicks made possible by L10N are much higher than the numbers below reflect:
    • As of Dec 2nd, non-EN snippets generated 355,676,300 impressions (WOW!)
    • Over 40,000 clicks.
    • Contributors have helped Mozilla raise over $50,000 fundraising dollars, and we expect this number to be much higher by the end of year. Fundraising snippets were mostly turned off between Quantum release and Giving Tuesday.
  • [Firefox Rocket]: Launched in Indonesia on Nov 7; in the four weeks since launch it was installed 100,930 times.

Friends of the Lion

Image by Elio Qoshi

  • Huge congratulations to Rodrigo Guerra, Carmelo Serraino, and the other members of the Interlingua team for doing an outstanding job in localizing Firefox. Target for Firefox desktop release is 59. The site is already live on production!
  • Latgalian is also moving to Beta and Release with Firefox 59. Thanks to Dainis for taking over translation ownership, and Raivis from the Latvian team for helping to kick-start it.
  • Vietnamese is moving to Beta and Release on Firefox Android and riding the 59 train as well. Congratulations to the team who worked hard on testing and making sure the app is in impeccable shape!
  • Bengali (from Bangladesh) and Nepali will be in the Firefox Android 58 release coming up next Jan. Congratulations on this achievement!
  • Wolof shipped with Firefox Android 57. Congrats to this small team that does an incredible effort each time!
  • Thanks to Hari Om Panchal, from the Gujarati team, for doing a great job, translating almost 2,000 strings in a short period of time.
  • Thanks to the whole Malayalam team for reviving their localization effort after a long hiatus.
  • Kudos to all the teams that started cleaning up their bugs in the Mozilla Localization product, without forgetting all those that were already keeping a close eye on their queue.
  • Thanks to Stoyan of the Bulgarian community for identifying issues in AMO and pushing for product changes to make it more localizer friendly. The AMO developers will start this discussion at the All Hands.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Please note that there will be no January l10n report as this is going to be a quiet month (for once) at Mozilla. Happy end of year holidays to those of you that celebrate it!

Planet Mozilla: From Visual Assets to WebVR Experiences: Mozilla Launches Second Part of its WebVR Medieval Fantasy…

From Visual Assets to WebVR Experiences: Mozilla Launches Second Part of its WebVR Medieval Fantasy Challenge!

Mozilla seeks to continually grow a robust community around A-Frame and WebVR. This is why we partnered with Sketchfab to create hundreds of medieval fantasy assets for the WebVR community to use. Today we are moving forward to use these assets in new and innovative ways such as games, scenes and character interaction.

Launching part two of our WebVR Medieval Fantasy Design Challenge, we are calling for developers to create stories and scenes building on the visually stunning collection of assets previously built by talented designers from all around the world.

This challenge started December 5th and will be running until February 28th. We are excited to be able to offer great prizes such as a VR-enabled laptop, an Oculus headset, Pixel phones, Daydream headsets, and even a few Lenovo Jedi Challenge kits.

Through this challenge, we hope to get submissions not only from people who have been using WebVR for a long time but also those who are new to the community and would like to take the plunge into the world of WebVR!

<figure><figcaption>Thor and the Midgard Serpent by MrEmjeR on Sketchfab</figcaption></figure>

Therefore, we have written an extensive tutorial to get new users started in A-Frame and will be using a slack group specifically for this challenge to support new users with quick tips and answers to questions. You can also review the A-Frame blog for more information on building and for project inspiration.

<figure><figcaption>Baba Yaga’s hut by Inuciian on Sketchfab</figcaption></figure>

Check out the Challenge Site for further information and how to submit your work. We can’t wait to see what you come up with!

From Visual Assets to WebVR Experiences: Mozilla Launches Second Part of its WebVR Medieval Fantasy… was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Planet MozillaNew Firefox Preference Center, Feels As Fast As It Runs

We’ve released an all-new Firefox to the world, and it includes a lot of new browser technology that’s super fast. People and press everywhere are loving it. We’ve made many … Read more

The post New Firefox Preference Center, Feels As Fast As It Runs appeared first on The Firefox Frontier.

Planet MozillaMozilla is Funding Art About Online Privacy and Security

Mozilla’s Creative Media Grants support art and artists raising awareness about surveillance, tracking, and hacking


La convocatoria para solicitudes está disponible en Español aquí

The Mozilla Manifesto states that “Individuals’ security and privacy on the Internet are fundamental and must not be treated as optional.”

Today, Mozilla is seeking artists, media producers, and storytellers who share that belief — and who use their art to make a difference.

Mozilla’s Creative Media Grants program is now accepting submissions. The program awards grants ranging from $10,000 to $35,000 for films, apps, storytelling, and other forms of media that explore topics like mass surveillance and the erosion of online privacy.

What we’re looking for

We seek to support producers creating work on the web, about the web, and for a broad public. Producers should share Mozilla’s concern that the private communications of internet citizens are increasingly being monitored and monetized by state and corporate actors.

As we move to an era of ubiquitous and connected digital technology, Mozilla sees a vital role for media produced in the public interest that advocates for internet citizens being informed, empowered and in control of their digital lives.

Imagine: An open-source browser extension that reveals how much Facebook really knows about you. Or artwork and journalism that examine how women’s personal data is tracked and commodified online.

(These are real projects, created by artists who now receive Mozilla Creative Media grants. Learn more about Data Selfie and Chupadados.)

The audiences for this work should be primarily in Europe and Latin America.

While this does not preclude makers from other regions from applying, content and approach must be relevant to one of these two regions.

How to apply

To apply, download the application guide.

Lee la descripción del proyecto en Español aquí.

All applications must be in English, and applicants are encouraged to read the application guide. Applications will be open until midnight Pacific time on December 31st, 2017, and will be reviewed in January 2018.

The post Mozilla is Funding Art About Online Privacy and Security appeared first on The Mozilla Blog.

Planet MozillaOpen Office XML shading patterns for JavaScript

Working on Open Office XML and HTML these days, I ended up reading and implementing section 17.18.78 of the ISO/IEC 29500 spec, the one dedicated to shading patterns: in more familiar words, predefined background images serving as pattern masks. The list is not too long, but the PNGs or data URLs were not available as a public resource, and I found that rather painful. I am therefore making my own JavaScript implementation of ST_Shd public. Feel free to use it under MPL 2.0 if you need it.
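For readers curious what such an implementation boils down to, here is a minimal sketch of the idea (not the published code; the pattern bits below are illustrative, not an exact ST_Shd mask): expand an 8×8 one-bit mask into RGBA pixel data, which a canvas can then turn into a data URL.

```javascript
// An 8x8 one-bit mask: 1 = shading color, 0 = background. The bits below are
// illustrative, not an exact ST_Shd pattern.
const SAMPLE_MASK = [
  0b10001000,
  0b00000000,
  0b00100010,
  0b00000000,
  0b10001000,
  0b00000000,
  0b00100010,
  0b00000000,
];

// Expand the mask into row-major RGBA pixel data (4 bytes per pixel).
function maskToRGBA(mask, fg, bg) {
  const size = mask.length; // assume a square mask
  const out = new Uint8ClampedArray(size * size * 4);
  for (let y = 0; y < size; y++) {
    for (let x = 0; x < size; x++) {
      const on = (mask[y] >> (size - 1 - x)) & 1;
      const [r, g, b] = on ? fg : bg;
      const i = (y * size + x) * 4;
      out[i] = r;
      out[i + 1] = g;
      out[i + 2] = b;
      out[i + 3] = 255; // fully opaque
    }
  }
  return out;
}

const pixels = maskToRGBA(SAMPLE_MASK, [0, 0, 0], [255, 255, 255]);
```

In a browser you would wrap the result in `new ImageData(pixels, 8, 8)`, draw it to a canvas, and call `toDataURL()` to obtain the reusable pattern image.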

Planet MozillaExperiments are releases

Mission Control was a major 2017 initiative for the Firefox Data team. The goal is to provide release managers with near-real-time release-health metrics, available minutes after a release goes public. Will has a great write-up here if you want to read more.

The key here is that the data has to be …

Planet Mozillathree kinds of open source metrics

Some random notes about open source metrics, related to work on CHAOSS, where Mozilla is a member and I'm on the Governing Board.

As far as I can tell, there are three kinds of open source metrics.

Impact metrics cover how much value the software creates. Possible good ones include the count of projects dependent on this one; mentions of this project in job postings, books, papers, and conference talks; and, of course, sales of products that bundle this project.

Contributor reward metrics cover how the software is a positive experience for the people who contribute to it. Job postings are a contributor reward metric as well as an impact metric. Contributor retention metrics and positive results on contributor experience surveys are some other examples.

But impact metrics and contributor reward metrics tend to be harder to collect, or slower-moving, than other kinds of metrics, which I'll lump together as activity metrics. Activity metrics include most of the things you see on open source project dashboards, such as pull request counts, time to respond to bug reports, and many others. Other activity metrics can be the output of natural language processing on project discussions. An example of that is FOSS Heartbeat, which does sentiment analysis, but you could also do other kinds of metrics based on text.

IMHO, the most interesting questions in the open source metrics area are all about: how do you predict impact metrics and contributor reward metrics from activity metrics? Activity metrics are easy to automate, and make a nice-looking dashboard, but there are many activity metrics to choose from—so which ones should you look at?

Which activity metrics are correlated to any impact metrics?

Which activity metrics are correlated to any contributor reward metrics?

Those questions are key to deciding which of the activity metrics to pay attention to. I'm optimistic that we'll be seeing some interesting correlations soon.
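As a toy illustration of that correlation question (the numbers below are made up, and Pearson correlation is only one simple choice among many), one could line up monthly samples of an activity metric and an impact metric and compute:

```javascript
// Pearson correlation coefficient between two equal-length series.
function pearson(xs, ys) {
  const mean = a => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(xs), my = mean(ys);
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < xs.length; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

// Hypothetical monthly samples: an activity metric vs. an impact metric.
const pullRequestsPerMonth = [12, 18, 9, 25, 30, 22];
const dependentProjects = [40, 46, 41, 55, 60, 52];
const r = pearson(pullRequestsPerMonth, dependentProjects);
```

A value of `r` near 1 would suggest this particular activity metric is worth watching; near 0, that it is mostly dashboard noise.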

Planet MozillaBugzilla Project Meeting, 06 Dec 2017

Bugzilla Project Meeting: the Bugzilla Project Developers meeting.

Planet WebKitRelease Notes for Safari Technology Preview 45

Safari Technology Preview Release 45 is now available for download for macOS Sierra and macOS High Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 224579-225266.

If you recently updated from macOS Sierra to macOS High Sierra, you may need to install the High Sierra version of Safari Technology Preview manually.


  • Corrected the computed style in pseudo-elements with display:contents (r225049)
  • Changed to reset the SVG scrolling anchor if the fragmentIdentifier does not exist or is not provided (r224973)
  • Fixed the available height for positioned elements with box-sizing:border-box (r225101)
  • Fixed long pressing a phone number with spaces to result in a data detectors sheet instead of a link sheet (r224819)
  • Fixed compositing layers to use accelerated drawing when navigating back to a page (r224796)
  • Fixed Flexbox issues with display:contents by no longer eliminating the whitespace renderer if the previous sibling is a text renderer (r224773)
  • Fixed missing contents of composited overflow-scroll when newly added (r224715)
  • Fixed content not getting painted when scrolling an overflow node inside an iframe (r224618)
  • Fixed FETurbulence SVG filter with stitchTiles (r225018)
  • Fixed skewed colors in feImage as a filter input (r225152)
  • Fixed an issue in the FEGaussianBlur SVG filter where the output of the last blur pass wasn’t copied to the result buffer (r225147)
  • Optimized the FEDisplacementMap SVG filter (r225183)
  • Optimized the FEMorphology SVG filter (r225172)
  • Optimized the FEComponentTransfer SVG filter (r225107)
  • Optimized the FELighting SVG filter (r225088, r225122)
  • Optimized the FETurbulence SVG filter (r224977, r224996, r225009, r225021, r225020, r225035, r225024)
  • Used vImage to optimize alpha premultiplication and un-premultiplication in FilterEffect (r225086)


  • Added recursive tail call optimization for polymorphic calls (r225212)
  • Fixed async iteration to only fetch the next method once (r224787)
  • Updated module fetching to retry if the previous request fails (r224662)


  • Fixed continual style re-resolution due to calc() values always comparing as unequal as seen on (r225141)
  • Fixed display issues in CSS Grid for a child with max-width (r225163)
  • Enabled display:contents by default (r224822)
  • Fixed inserting an image, selecting, underlining, and then deleting to remove the typing style with both -webkit-text-decorations-in-effect and text-decoration (r224649)
  • Prevented mixing stroke-width and stroke-color with their prefixed versions (r224780)


  • Added support for CanvasPattern.setTransform() (r225121)
  • Implemented OffscreenCanvas.getContext("webgl") (r225193)
  • Changed XMLHttpRequest to not treat file URLs as same origin (r224609)
  • Changed FetchLoader to unregister its blob URL (r224954)


  • Added a FairPlay Streaming based CDM for Modern EME (r224707)
  • Changed Web Audio’s AnalyserNode.fftSize to allow up to 32768 to match specifications (r225226)
  • Changed skip back and skip forward buttons to not hard-code their numeric amount in localised strings (r225216)
  • Fixed play and pause when pressing the space bar while watching a fullscreen video (r225265)
  • Prevented captions from moving when <video> with no controls is hovered (r225138)

Web Inspector

  • Added the display of detailed status during canvas recording in the experimental Canvas tab (r224726)
  • Added showing the internal properties of PaymentRequest in the Console (r224606)
  • Cleaned up the backtrace in Canvas details sidebar (r225259)
  • Cleaned up navigation bar dividers and separators (r224807)
  • Gave the DataGrid table header ContextMenu a section header to better describe its functions (r224761)
  • Included Beacon loads in the Network table’s “Other” filter (r225246)
  • Moved console Preserve Log setting from Setting tab to Console navigation bar (r225257)
  • Added a toggle in the Network tab to control automatically clearing or preserving log across loads (r225250)
  • Added a HAR Export button to the Network tab (r224994)
  • Cleaned up the Network tab details view (r224733, r224851, r224769)
  • Fixed the Navigation sidebar that would become broken after closing and re-opening the tab (r224905)
  • Made the connection part thinner in the Network tab waterfall graph (r224727)
  • Updated the Ignore Caches icon in the Network tab (r224989)
  • Updated the Clear icon in the Network tab and Console tab (r225019)
  • Removed Network “Clear on load” from the Settings tab now that Network tab has a toggle for it (r225256)
  • Prevented adding a new property when clicking on inline swatches and property checkboxes in the Styles sidebar (r224651)
  • Changed typing colon in the property name field to advance to the value field in the Styles sidebar (r224906)
  • Fixed the clipped shadow of the selector’s field in the Styles sidebar (r225165)
  • Made the selector field inline-block when editing in the Styles sidebar (r224768)
  • Added undo and redo support for the DOM Tree’s add sibling context menus (r224648)
  • Removed context menu actions for images which did not work (r225234)


  • Fixed VoiceOver in Safari to read the table header for the first cell in the first body row (r224997)
  • Fixed the search predicate to return the text element for plain text instead of the containing group (r224650)
  • Prevented accessibility from triggering a sync layout while building the render tree on (r224899)

Planet MozillaThe Joy of Coding - Episode 123

The Joy of Coding - Episode 123: mconley livehacks on real Firefox bugs while thinking aloud.

Planet MozillaBerlin Admin Meetup – November 2017

Hey there, fellow helpful Mozillians :-)

Last week, three of the administrators and coordinators (Rachel, Madalina, Michał) met in Berlin to talk about our status and plans for 2018. We would like to share some of the most important points and decisions taken after five days of talks and invite you to ask questions or voice your opinions about the state & trajectory of Mozilla’s Support.

To set your expectations before you dig in – we are going to talk about many of the points made below during our upcoming All Hands, so outlining detailed and complete plans at this stage does not make sense, as after next week there could be many more factors “on the table” to consider and include in our 2018 planning. What you see on this page is rather a general outline of where we see Mozilla’s community (when talking about user support) in 2018.

That said, if you have any questions or something to share regarding the contents of this post, please use the Discourse thread.

We’re done with the introduction… Let’s dig into it!

The main focus of the agenda was community, and the organization of the discussion was (more or less) as follows:

  1. Current state of the community and challenges we face
  2. Tools and resources at our disposal
  3. Teams and people to talk to
  4. Defining “community”
  5. Community goals for different aspects of Support
  6. Defining “community health”
  7. Community building
  8. Delegating / sharing tasks & responsibilities with contributors
  9. Reporting
  10. Events
  11. Contributor paths
  12. Reviewing the Firefox Quantum release
  13. Goals for 2018

Let’s start at the end, shall we? ;-) Our main goals for 2018, in no particular order, are to be able to say that we:

  1. …have a way to segment contributors and identify personal profiles that will help us make strategic decisions. We want to have a better idea about who the people in our community are.
    • One of the ways we want to try to achieve this is conducting research with the help of Open Innovation/Participation and setting up a survey for new contributors.
  2. …can cooperate with and within a self-sufficient community or a set of self-sufficient communities.
  3. …have identified and implemented metrics that are accessible and relevant to both other project teams at Mozilla and to anyone outside Mozilla.
    • Talking to people about the numbers they need and crunching numbers is the name of the game here.
  4. …have identified community management tasks that we (as admins) cannot completely take care of and the people who can help us with that. Also, have successfully delegated some (all?) of such tasks to people interested in getting involved at a higher level.
    • We are working on a team list of tasks and we hope to share it with you soon enough. If you’re interested in this part already, let us know through the usual channels.
  5. …have identified neighbouring expert networks that can help us fulfil all the needs of our contributors and users.
    • This means investigating internal and external networks of specialists and experts in areas that enable support. We want to have more smart people out there working with us on making it better for everyone.
  6. …have identified and cleaned up all existing legacy content regarding user support at Mozilla.
    • Expect some centralisation and purging of content across several places. We keep producing too much noise and not enough signal in some areas of our activity as administrators – and we need to clear up our act in order to make it easier for Mozillians old and new to find their way.
  7. …have participated (together with contributors) in at least one large (and impactful) event every quarter (on average), representing Mozilla and our support efforts.
    • Mozilla’s support was presented at different events in the past, with varied degree of success. We want to take this up a notch and involve you, whenever it makes logistical sense.
  8. …have permeated the community with a “one Mozilla” attitude.

We have also identified several “big” issues that have an impact on our part of the project that we want to provide solutions (or a way forward) for – not necessarily before the next year ends. Here are the key ones:

  1. Kitsune is an old (if reliable) platform and it is unlikely we will be able to recruit full-time developers to fully support and develop it in the near future. We need to address this in order to provide continuous support options for users and a platform for contribution for Mozillians.
    • keywords: community developers, community dashboards, Discourse, Pontoon
  2. We do not have enough resources to run a staff only support model. We want to have a self-sufficient community that can function with minimum admin help in order to grow and scale beyond our current numbers, while keeping users supported and contributors engaged.
    • keywords: research, survey, community health, non-coding Mozillians
  3. Due to our team’s size limitations, we need to adopt a global approach to building and maintaining relationships with Mozillians supporting our users, based on the expertise and resources from teams we have historically not been continuously collaborating with.
    • keywords: non-coding Mozillians, one Mozilla, Community Participation Guidelines
  4. Administrators cannot focus on community needs and projects as much as they should, as our main focus is (and will be) user support. Because of that, we will almost always prioritize user support tasks before community management and that will have a negative impact on our community relationships.
    • keywords: Bugzilla reports, escalations, task delegation
  5. We need to improve our communication channels within and outside of Mozilla to avoid bottlenecks and delays in daily and emergency situations.
    • keywords: universal “who is who”, more meet & greet, communication procedures
  6. We do not have enough participation in online discussions from all contributors – the forums feel dominated by a few voices. We need to encourage participation in community building through discussion and avoiding bias.
    • keywords: anonymous team persona, Discourse
  7. We need a central, scalable, and transparent way to track and engage contributors for each release across the functional areas.
    • keywords: release newsletter, readiness check-in, active contributor list, task delegation
  8. We do not feel fully aligned with other Mozilla project teams, especially those that could be directly relying on our performance.
    • keywords: conversations, metrics, expectations, reporting, positioning

Once again – if any of the above sounds terribly cryptic, the Discourse discussion is your best place to ask for clarification.

Finally, we also discussed the Firefox Quantum release from the support perspective, and here are the observations we want to share with you:

“The good”:

  • We have 12 Mozilla staff members active in social support!
  • We had good communication channels with Product, Marketing & other teams involved in the release – no bottlenecks or confusion, no feeling of “going solo” through it.
  • We had great community participation, especially in the first days after the release. The number of contributions grew exponentially with the number of incoming user questions.
  • Our legacy platform performed well, despite the huge increase in traffic.
  • Several new contributors appeared, helped out, and remained very active.
  • The volume was our main challenge; we had no unexpected emergency or reasons to panic – preparation was key here.

“Could have been better”: 

  • Contribution levels matched incoming user traffic – when incoming user traffic dropped, you could see contributions dropping as well. We’d love to keep contributors engaged for longer.
  • The forum structure encouraged multiple contributor replies to questions that have already been replied to. This could be more streamlined in order to improve coverage and decrease frustration.
  • Fringe top 20 (and beyond) locales in some cases did not rally for Quantum (this is a long-standing issue that requires addressing outside of just the support project).
  • We did not have a clear process for managing internal requests, so some team members faced disproportionate workloads and communication overload – we want to address this through smart delegation of tasks for the next release. It was also a challenge to deliver all the data or feedback requested in a timely manner. If we have a defined release requirement list before the next release, we can deliver the requested data more consistently.
  • It was not always easy to clearly understand what was the best solution for a popular issue and address user feedback or set best practices for community contributors to follow (e.g. with chrome.css edits). A point of contact in the Product team or Marketing who could offer a final decision or solution for issues is a necessity – we can’t ignore tough questions only if we lack clear answers; we can’t have clear answers without a confirmation from subject matter experts.
  • We noticed cases of people being on PTO/AFK, who were still working during that period – this should not be happening for obvious reasons – “on call” work has legal/work implications.
  • Emergency internal messaging during the release period was not organized enough and may have caused additional confusion or tension within and around the team.

“Still an issue”:

  • Following up on reported bugs – we could do better through having a clear hand-off procedure with clear expectations from the team we file bug against. We can’t let open bugs stew longer than necessary (e.g. as with a malware issue during previous releases). A clear process with the Product team in which we push for an answer or resolution that can later be taken back to people affected and contributors is definitely necessary. It is still better for us to hear “I don’t know” or “This is not a priority at this moment” from the right people, than to hear nothing and second-guess what is going to happen or what the best response could be.

A final reminder here: use the Discourse thread to discuss any of the above and let us know there if you need any more information. We are happy to provide as much of it as we have at this moment.

(And if you think that this summary is a lot to digest, imagine how many pages of notes we produced over five days…) (Probably more than you’d like to read ;D)

Thank you for getting through this post all the way to the bottom… and thank you for being there for other Mozillians and users around the world! You make Mozilla happen! Keep rocking the helpful web :-)

P.S. On a slightly less serious note, Madalina tried to keep us in Berlin a bit longer, using an escape room as an excuse… but we have successfully escaped thanks to concerted team work!

Planet MozillaA Classic Extension Reborn: Tree Style Tab

Yuki “Piro” Hiroshi is a trailblazer and a true do-it-yourselfer. Whenever the Tokyo-based programmer gets irritated with any aspect of his browsing experience, he builds a workaround for himself and shares it with others. After authoring nearly 100 browser extensions, Piro recently took on his biggest challenge yet: migrating the legacy Tree Style Tab (TST) extension to work with the new WebExtensions API and Firefox Quantum.

TST on the legacy XUL platform was a very complicated extension. So rebuilding it for the new WebExtensions API had a lot of unique challenges. Piro hit a number of snags and frustrations, including undocumented behaviors and bugs. What helped? A positive frame of mind and a laser-sharp focus on the capabilities of the new APIs. Working within the context of the new platform, Piro was able to complete the extension and bring a crowd favorite to the fastest Firefox yet.

Want more technical detail? Check out Piro’s post WebExtensions Migration Story of Tree Style Tab for his strategies, code snippets, and architectural diagrams of the XUL and WebExtensions platforms.

How You Got Started

Q: What browser did you first create an extension for?
I wrote my first extension for Mozilla Application Suite, before Firefox was born.

Q: What extensions have you created for Firefox?
Tree Style Tab, Multiple Tab Handler, Text Link, and many others. My extensions mainly aim to change Firefox’s behavior, rather than offering completely new features.

Q: How many extensions have you created for Firefox?
I’ve written nearly 100 extensions for Firefox and Thunderbird. About half of those are purely for my personal use. I’ve published around 40, mostly legacy XUL extensions, and most of them are not migrated to WebExtensions yet.

Q: Where can we find your extensions?
My published extensions are on You can find older extensions, including unlisted extensions, on my website.

<figure>Tree Style Tab extension for Firefox<figcaption>Drag and drop to group related tabs.</figcaption></figure>

More about Your Project

Q: Why did you create your extensions? What problem were you trying to solve?
My motivation was to make it easier to browse the web. When I encounter any small trouble or inconvenience, I create a new extension as a workaround.

Q: How many years have you been working with extensions?
Sixteen years. My first extension was released in November 2001.

Q: How has the technology changed during that time?
There have been changes to Firefox itself, like improvements in the Gecko rendering engine and JavaScript engine. And there have been new Web standard technologies like CSS3, ES6, and others. Recently, I learned the async-await syntax while developing WebExtensions. Those new technologies help me clean up “dirty” or “hacky” code and maintain it more easily.
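As a small illustration of the kind of cleanup async-await allows (a generic sketch, not Tree Style Tab's actual code; the function names are invented), a chain of dependent asynchronous steps reads top to bottom instead of nesting `.then()` callbacks:

```javascript
// Invented stand-ins for asynchronous WebExtensions calls, which return Promises.
function loadConfig() {
  return Promise.resolve({ indent: 2 });
}
function applyConfig(config) {
  return Promise.resolve(`indent=${config.indent}`);
}

// With async-await the dependent steps read top to bottom; the pre-ES2017
// version would chain .then() callbacks (or nest plain callbacks) instead.
async function init() {
  const config = await loadConfig();
  const result = await applyConfig(config);
  return result;
}
```

The "hacky" chained version and this one behave identically, but the latter is far easier to maintain.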

Q: Did you migrate your extension from XUL to WebExtensions? How difficult was that process?
I migrated Tree Style Tab, a large XUL extension, to a WebExtension. It was very difficult, to be honest. The most difficult part was the feature triage. Due to limitations of the WebExtensions APIs, some features were definitely not migratable. Making those decisions was painful for me.

It was also hard to detect the moment when my extension should cancel and override Firefox’s default behavior. As I mentioned earlier, most of my extensions are created to change Firefox’s behavior. But the WebExtensions API provides only limited entry points, so I often have to guess what’s happening in Firefox at any given moment, based on information like the order of events.

Q: What resources were most helpful when implementing Firefox extensions?
For researching how to implement extensions in XUL, I used DXR, the online search system for source code of Mozilla products. For WebExtensions, I looked at the source code of other existing WebExtensions and Google Chrome extensions.

Q: What sites or tools helped you learn about the WebExtensions API?
MDN Web Docs is still the first entry point. Additionally I think links to the source code of existing extensions will help new extension authors, like an inverted dictionary.

Your Experience with WebExtensions

Q: What new technology in Firefox Quantum helped you with migration?
Mozilla introduced many new WebExtensions APIs in Firefox 57 [Quantum], such as the openerTabId property for tabs. The opener information is highly relevant for Tree Style Tab. If those APIs were not available, Tree Style Tab couldn’t be migrated successfully.
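To see why `openerTabId` matters so much here, the following sketch (not TST's real code) reconstructs a parent/child tree from it. In a real extension the tab objects would come from `browser.tabs.query({})` and be kept current via `browser.tabs.onCreated` and `browser.tabs.onRemoved`; plain objects stand in for tabs below so the logic runs anywhere:

```javascript
// Group tab ids under their opener; tabs without a known opener become roots.
function buildTree(tabs) {
  const children = new Map(tabs.map(t => [t.id, []]));
  const roots = [];
  for (const tab of tabs) {
    if (tab.openerTabId != null && children.has(tab.openerTabId)) {
      children.get(tab.openerTabId).push(tab.id);
    } else {
      roots.push(tab.id);
    }
  }
  return { roots, children };
}

// Minimal stand-ins for tabs.Tab objects.
const tree = buildTree([
  { id: 1 },                 // a root tab
  { id: 2, openerTabId: 1 }, // opened from tab 1
  { id: 3, openerTabId: 1 },
  { id: 4, openerTabId: 2 },
]);
```

Without the opener information exposed through the API, none of this grouping would be reconstructible from within a WebExtension.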

Q: On a scale of 1 to 10, with 1 being the easiest, how difficult was it to write to the WebExtensions APIs?
In general, it was a 2 or 3. WebExtensions APIs are simple enough and clear for extensions that just add a new button, sidebar, or other unique feature to Firefox. However, there are some undocumented behaviors on edge cases, and they might be closely tied to a specific Firefox release you’re working on. If you’re writing extensions that change Firefox’s behavior, you will need to dig into those undocumented behaviors.

Q: Now that you’ve moved to WebExtensions, what do you like about it?
I like the stability of the APIs. Generally, WebExtensions APIs are guaranteed to be compatible with Firefox, now and in the future. So I can develop long-lived extensions with no fear about breakage in future releases.

Q: How long did it take you to write this extension?
I started to develop the WebExtensions-based version of Tree Style Tab in late August 2017 and released it in November 2017. Of course, some parts of the tree management functionality were imported from the legacy XUL version, which I developed between 2007 and 2017.

Q: How difficult is it to debug Firefox extensions?
Today that is very easy to learn. Most extension authors don’t need to become experts of Firefox-specific technologies, especially XUL and XPCOM. Moreover, the about:debugging feature of Firefox itself is very helpful for debugging of extensions. A command-line tool called web-ext is also helpful for debugging and publishing.

Q: How is adoption of your Tree Style Tab extension?
Most people seem to welcome the new WebExtension-based version. Some say that Tree Style Tab is the main reason why they use Firefox. And they are excited now that TST is available for Firefox 57 [Quantum] and future releases.

Q: What problems, if any, did you experience while developing your extension? How were those problems resolved?
Because the WebExtensions API is still under active development, I did find some bugs while developing the new Tree Style Tab extension, on edge cases. I reported those bugs in Bugzilla. With WebExtensions, problems due to API limitations are basically impossible to fix. Workarounds are available only in a few cases. So reporting to Bugzilla and waiting for the fix is the only thing extension authors can do.

However, that is good news when viewed from another perspective. In legacy XUL extensions, authors could do anything, including replacing Firefox’s internal XPCOM components. As a result, the boundaries of an extension and the author’s responsibility could be widened infinitely, and authors would have to make decisions on all the feature requests from users: to do them or not. That troubled me a lot. But, with WebExtensions, authors can just say, “It is impossible due to the API limitations.” I think this is a very helpful and important change in WebExtensions.

Q: What advice would you give to others porting to WebExtensions?
Try to simplify your development tasks. If you want to port a multi-feature extension, think about putting each feature into a separate extension, instead of trying to put them all into one single extension. You can make those extensions interact with each other by using API methods that are either implicit (based on shared information like properties tabs.Tab and openerTabId) or explicit (based on custom messages for runtime.sendMessage()).
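A sketch of the explicit variant mentioned above (the extension ID and message shape are invented for illustration): one extension calls `browser.runtime.sendMessage("tree@example.org", { type: "get-tree" })`, and the receiving extension answers from `runtime.onMessageExternal`. Keeping the handler a plain function keeps it testable outside the browser:

```javascript
// Handle a message from another extension; the message types are hypothetical.
function handleExternalMessage(message) {
  switch (message.type) {
    case "get-tree":
      return { ok: true, tree: [] }; // a real handler would return tree data
    default:
      return { ok: false, error: `unknown message: ${message.type}` };
  }
}

// In the receiving extension this would be wired up as:
// browser.runtime.onMessageExternal.addListener(
//   (message, sender) => Promise.resolve(handleExternalMessage(message)));

const reply = handleExternalMessage({ type: "get-tree" });
const errorReply = handleExternalMessage({ type: "nope" });
```

Splitting features across cooperating extensions this way keeps each one small, at the cost of defining a stable message protocol between them.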

Stay focused on the results you want, instead of worrying how you will get it done. You may not be able to fully reproduce the behavior of your XUL extension in the new WebExtensions version. For instance, old method-based synchronous (blocking) APIs are no longer available, and raw DOM events on Firefox UI are no longer “listenable”. So you need to forget how you used to do things, and learn what is the normal approach in the WebExtensions world. You can probably find a better way to go about it, and still get to the same results. Then you’ll successfully dive into the new world, and you’ll have an awesome experience!

Related Content

WebExtensions Migration Story of Tree Style Tab
Why Firefox Had to Kill Your Favorite Extension
Q&A with Grammarly’s Sergey Yavnyi
Q&A with Add-on Developer Stefan Van Damme
Remaking Lightbeam as a Browser Extension
Cross-browser extensions, available now in Firefox


Planet Mozilla: Some SpiderMonkey optimizations in Firefox Quantum

A few weeks ago we released Firefox 57, also known as Firefox Quantum. I work on SpiderMonkey performance and this year I spent a lot of time analyzing profiles and optimizing as many things as possible.

I can't go through all SpiderMonkey performance improvements here: this year alone I landed almost 500 patches (most of them performance related, most of them pretty boring) and other SpiderMonkey hackers have done excellent work in this area as well. So I just decided to focus on some of the more interesting optimizations I worked on in 2017. Most of these changes landed in Firefox 55-57, some of them will ship in Firefox 58 or 59.

shift/unshift optimizations

Array.prototype.shift removes the first element from an array. We used to implement this by moving all other elements in memory, so shift() had O(n) performance. When JS code uses arrays as queues, it was easy to get quadratic behavior from code that did something like this:

while (arr.length > 0)
    doSomething(arr.shift());

The situation was even worse for us when we were in the middle of an incremental GC (due to pre-barriers).

Instead of moving the elements in memory, we can now use pointer arithmetic to make the object point to the second element (with some bookkeeping for the GC) and this is an order of magnitude faster when there are many elements. I also optimized unshift and splice to take advantage of this shifted-elements optimization. For instance, unshift can now reserve space for extra elements at the start of the array, so subsequent calls to unshift will be very fast.
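
Both directions can be exercised with ordinary array code; here is a minimal sketch (plain JS, nothing engine-specific) that hits both unshift and shift on the front of an array:

```javascript
// Use an array as a double-ended queue: unshift pushes to the front,
// pop removes from the back. With the shifted-elements optimization,
// front operations no longer move every remaining element in memory.
function rotateFront(arr, count) {
  for (let i = 0; i < count; i++) {
    arr.unshift(arr.pop()); // move the last element to the front
  }
  return arr;
}

// rotateFront([1, 2, 3, 4], 1) → [4, 1, 2, 3]
```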

While working on this I noticed some other engines have a similar optimization, but it doesn't always work (for example when the array has many elements). In SpiderMonkey, the shifted-elements optimization fits in very well architecturally and we don't have such performance cliffs (as far as I know!).

Regular expressions

RegExp objects can now be nursery allocated. We can now also allocate them directly from JIT code. These changes improved some benchmarks a lot (the orange line is Firefox):

While working on this, I also moved the RegExpShared table from the compartment to the zone: when multiple iframes use the same regular expression, we will now only parse and compile it once (this matters for ads, Facebook like buttons, etc). I also fixed a performance bug with regular expressions and interrupts: we would sometimes execute regular expressions in the (slow!) regex interpreter instead of running the (much faster) regex JIT code.

Finally, the RegExp constructor could waste a lot of time checking the pattern syntax. I noticed this when I was profiling real-world code, but the fix for this also happened to double our Dromaeo object-regexp score :)

Inline Caches

This year we finished converting our most important ICs (for getting/setting properties) to CacheIR, our new IC architecture. This allowed us to optimize more things, here are a few:

  • I rewrote our IC heuristics. We now have special stubs for megamorphic lookups.
  • We now optimize more property gets and sets on DOM proxies like document or NodeLists. We had some longstanding bugs here that were much easier to fix with our new tools.
  • Similarly, we now optimize more property accesses on WindowProxy (global variable and window property accesses in the browser).
  • Our IC stubs for adding slots and adding elements now support (re)allocating new slots/elements.
  • We can now use ICs in more cases.

The work on CacheIR has really paid off this year: we were able to remove many lines of code while also improving IC coverage and performance a lot.

Property addition

Adding new properties to objects used to be much slower than necessary. I landed at least 20 patches to optimize this.

SpiderMonkey used to support slotful accessor properties (data properties with a getter/setter) and this complicated our object layout a lot. To get rid of this, I first had to remove the internal getProperty and setProperty Class hooks; this turned out to be pretty complicated because I had to fix some ancient code we have in Gecko that relied on these hooks, from NPAPI code to js-ctypes to XPConnect.

After that I was able to remove slotful accessor properties and simplify a lot of code. This allowed us to optimize our property addition/lookup code even more: for instance, we now have separate methods for adding data vs accessor properties. This was impossible to do before because there was simply no clear distinction between data and accessor properties.

Property iteration

Property iteration via for-in or Object.keys is pretty common, so I spent some time optimizing this. We used to have some caches that were fine for micro-benchmarks (read: SunSpider), but didn't work very well on real-world code. I optimized the for-in code, rewrote the iterator cache, and added an IC for this. For-in performance should be much better now.
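
The patterns this speeds up are ordinary enumeration loops like the one below (plain JS, nothing engine-specific):

```javascript
// Enumerate enumerable string-keyed properties with for-in.
// for-in also walks the prototype chain, so filter to own properties;
// Object.keys(obj) returns the own keys directly.
function keysViaForIn(obj) {
  const keys = [];
  for (const key in obj) {
    if (Object.prototype.hasOwnProperty.call(obj, key)) {
      keys.push(key);
    }
  }
  return keys;
}
```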

I also rewrote the enumeration code used by for-in, Object.getOwnPropertyNames, and friends to be much faster and simpler.

MinorGC triggers

In Firefox, when navigating to another page, we have a mechanism to "nuke" chrome -> content wrappers to prevent bad memory leaks. The code for this used to trigger a minor GC to evict the GC's nursery, in case there were nursery-allocated wrappers. These GCs showed up in profiles and it turned out that most of these evict-nursery calls were unnecessary, so I fixed this.

According to Telemetry, this small patch eliminated tons of unnecessary minor GCs in the browser:

The black line shows most of our minor GCs (69%) were EVICT_NURSERY GCs and afterwards (the orange line) this just doesn't show up anymore. We now have other minor GC reasons that are more common and expected (full nursery, full store buffer, etc).


Proxies

After refactoring our object allocation code to be faster and simpler, it was easy to optimize proxy objects: we now allocate ProxyValueArray inline instead of requiring a malloc for each proxy.

Proxies can also have an arbitrary slot layout now (a longstanding request from our DOM team). Accessing certain slots on DOM objects is now faster than before and I was able to shrink many of our proxies (before these changes, all proxy objects had at least 3-4 Value slots, even though most proxies need only 1 or 2 slots).


Builtins

A lot of builtin functions were optimized. Here are just a few of them:

  • I fixed some bad performance cliffs that affected various Array functions.
  • I ported Object.assign to C++. It now uses less memory (we used to allocate an array for the property names) and in a lot of cases is much faster than before.
  • I optimized Function.prototype.toString. It's surprisingly common for websites to stringify the same function repeatedly so we now have a FunctionToString cache for this.
  • Object.prototype.toString is very hot and I optimized it a number of times. We can also inline it in the JIT now and I added a new optimization for lookups of the toStringTag/toPrimitive Symbols.
  • Array.isArray is now inlined in JIT code in a lot more cases.
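
Several of these builtins are easy to exercise from script. For example, Object.assign merges own enumerable properties left to right, and stringifying the same function repeatedly (which is surprisingly common on real sites) now hits the FunctionToString cache:

```javascript
// Object.assign copies own enumerable properties onto the target,
// with later sources overriding earlier ones.
const defaults = { color: 'blue', size: 10 };
const overrides = { size: 20 };
const merged = Object.assign({}, defaults, overrides);
// merged is { color: 'blue', size: 20 }

// Repeated Function.prototype.toString calls return the same string;
// SpiderMonkey now serves the repeats from a cache.
function greet() { return 'hi'; }
const first = greet.toString();
const second = greet.toString();
```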

Other optimizations

  • We unnecessarily delazified (triggered full parsing of) thousands of functions when loading Gmail; this is now fixed.
  • Babel generates code that mutates __proto__ and this used to deoptimize a lot of things. I fixed a number of issues in this area.
  • Cycle detection (for instance for JSON.stringify and Array.prototype.join) now uses a Vector instead of a HashSet. This is much faster in the common cases (and not that much slower in pathological cases).
  • I devirtualized some of our hottest virtual functions in the frontend and in our Ion JIT backend.
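
Cycle detection kicks in whenever a structure references itself; for instance, JSON.stringify must notice the cycle and throw rather than recurse forever:

```javascript
// A self-referencing object: stringifying it must fail with a TypeError
// ("cyclic object value" in Firefox) instead of looping.
const node = { name: 'root' };
node.self = node; // create a cycle

let threw = false;
try {
  JSON.stringify(node);
} catch (e) {
  threw = true;
}

// Acyclic structures stringify normally:
const flat = JSON.stringify({ name: 'root' });
```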


SpiderMonkey performance has improved tremendously the past months and we're not stopping here. Hopefully there will be a lot more of this in 2018 :) If you find some real-world JS code that's much slower in Firefox than in other browsers, please let us know. Usually when we're significantly slower than other browsers it's because we're doing something silly and most of these bugs are not that hard to fix once we are aware of them.

Planet Mozilla: Desirable features of experimentation tools


At Mozilla, we're quickly climbing up our Data Science Hierarchy of Needs 1. I think the next big step for our data team is to make experimentation feel natural. There are a few components to this (e.g. training or culture) but improving the tooling is going to be …

Planet Mozilla: Mozilla Files Cross-Complaint Against Yahoo Holdings and Oath

Yahoo Holdings and Oath filed a complaint against Mozilla on December 1, 2017, claiming that we improperly terminated the agreement between Mozilla and Yahoo. Today, in response, Mozilla filed a cross-complaint against Yahoo Holdings and Oath for breach of contract.

While this is a legal matter and much of it is confidential, as Mozilla, we want to share as much information as we can in the spirit of our values of openness and transparency.

We started a wiki page with links to relevant public court documents – over time we expect to add more content as it becomes public.

Our official statement on this matter is:

“On December 1, Yahoo Holdings and Oath filed a legal complaint against Mozilla in Santa Clara County court claiming that we improperly terminated our agreement. On December 5, Mozilla filed a cross-complaint seeking to ensure that our rights under our contract with Yahoo are enforced.

We recently exercised our contractual right to terminate our agreement with Yahoo based on a number of factors including doing what’s best for our brand, our effort to provide quality web search, and the broader content experience for our users.

Immediately following Yahoo’s acquisition, we undertook a lengthy, multi-month process to seek assurances from Yahoo and its acquirers with respect to those factors. When it became clear that continuing to use Yahoo as our default search provider would have a negative impact on all of the above, we exercised our contractual right to terminate the agreement and entered into an agreement with another provider.

The terms of our contract are clear and our post-termination rights under our contract with Yahoo should continue to be enforced. We enter into all of our relationships with a shared goal to deliver a great user experience and further the web as an open platform. No relationship should end this way – litigation doesn’t further any goals for the ecosystem. Still, we are proud of how we conducted our business and product work throughout the relationship, how we handled the termination of the agreement, and we are confident in our legal positions.

We remain focused on the recent launch of Firefox Quantum and our commitment to protecting the internet as a global public resource, especially at a time when user rights like net neutrality and privacy are under attack.”

The post Mozilla Files Cross-Complaint Against Yahoo Holdings and Oath appeared first on The Mozilla Blog.

Planet Mozilla: These Weeks in Firefox: Issue 29


Friends of the Firefox team

Project Updates


Activity Stream

Browser Architecture

Form Autofill

  • Improved the accuracy of identifying fields related to credit card expiration date.
  • Fixed an issue where the cached “searchString” in the autocomplete module wasn’t consistent with the actual value in the input element.
  • Heartbeat for credit card autofill is live.


  • If you use `git mozreview push`, then lint should run automatically against the files in your push.
    • You may need to run `./mach mercurial-setup` to pick up the latest version-control-tools.
  • Requiring the use of Services.jsm rather than .getService continues to roll out; all of toolkit/ and services/ are now covered, and browser/ is on its way.


  • Firefox for Android can be used as an “Assist App” again! This means that long pressing the home button launches Firefox with a new tab ready for typing a search query or URL.
    • This feature was temporarily lost when the Search Activity / Widget was removed in Firefox for Android 58, and should be fixed in Firefox for Android 59


Platform Audibles


Search and Navigation

Address Bar & Search

Test Pilot

Web Payments

  • Making progress on the payment dialog, using Custom Elements to implement the dialog and the UI components within it
  • Continuing refinement on UX specs

Here are the raw meeting notes that were used to derive this list.

Want to help us build Firefox? Get started here!

Here’s a tool to find some mentored, good first bugs to hack on.

Planet Mozilla: Catching flickering regressions

For Firefox 57, as part of the Photon Performance project, one of the things we worked on was dramatically reducing flickering in the main browser window. We focused especially on what happens when opening the first browser window during startup, and when opening new browser windows.

To identify bugs, we looked frame by frame at video recordings. This was good enough to file bugs for an initial exploration, but was time consuming, and won't help us to keep these interactions flicker free going forward.

I'm happy to announce that these two interactions are now covered by automated flickering tests that will catch any regression there:

These tests currently contain whitelists of known bugs (blocking bug 1421456).

Here is how these tests work:
  • As soon as the window we care about is created, we add a MozAfterPaint event listener to it.
  • For each received MozAfterPaint event, we capture a screenshot of the window using CanvasRenderingContext2D.drawWindow().
  • We remove the event listener after the window is done loading and after several callbacks, to ensure that the window has settled.
  • We compare the pixel data of each of the captured frames to identify areas that have changed.
  • For changed areas, we check whether they are whitelisted; if not, we make the test fail and dump base64-encoded data URLs of the before/after screenshots, so that the failure can be debugged visually.
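
The frame-comparison step boils down to diffing two RGBA pixel buffers. Here is a minimal sketch of that comparison (an illustration, not the actual test harness code):

```javascript
// Compare two same-sized RGBA pixel buffers (Uint8ClampedArray-style,
// 4 bytes per pixel), returning the {x, y} coordinates of changed pixels.
function diffFrames(before, after, width) {
  if (before.length !== after.length) {
    throw new Error('frames must have the same size');
  }
  const changed = [];
  for (let i = 0; i < before.length; i += 4) {
    if (before[i] !== after[i] ||         // R
        before[i + 1] !== after[i + 1] || // G
        before[i + 2] !== after[i + 2] || // B
        before[i + 3] !== after[i + 3]) { // A
      const pixel = i / 4;
      changed.push({ x: pixel % width, y: Math.floor(pixel / width) });
    }
  }
  return changed;
}
```

The real tests additionally cluster changed pixels into rectangles and check those against the whitelist of known bugs.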
We currently cover only the startup and window opening cases, but I expect us to add similar tests to more areas in the future. Each time we spend effort reducing flickering in some area of our UI, we should add tests for it to prevent regression.

Planet Mozilla: Woke up and thought you were in a different reality? Reality Redrawn Challenge launches

Woke up and thought you were in a different reality? Reality Redrawn Challenge launches with a total prize value of $40,000

It’s not often I get to invite artists and developers to collaborate together so I’m excited to see how they respond to the Reality Redrawn Challenge from Mozilla which launches today. The boundaries between truth and fiction are becoming harder to define, in part because of the proliferation of fake news and other forms of misinformation. Mozilla wants to shed light on this by sponsoring public demonstrations, using mixed reality and other art media that make the power of misinformation and its potential impacts visible and visceral.


We live in strange times in which legitimate news organizations such as CNN have to launch advertising campaigns to remind people what real information is. Meanwhile social networks that connect millions more people struggle to help them differentiate truth from fiction and to define their unplanned role as media platforms.

Throughout historic moments of upheaval people have used art to make sense of what appears to be dystopian reality. The west side of the Berlin wall became one of the largest canvases in the world as Berliners attempted to make sense of their divided city, while the east side remained blank as none were allowed to get close enough to paint. I also like to remember that Jazz icon and civil rights activist Nina Simone set an enduring challenge to all artists when she asked “how can you be an artist and not reflect the times?”

Mixed reality includes all the ways in which 3D virtual content can be integrated with someone’s perception of the physical world around them, from simple enhancements of everyday experiences with augmented reality to completely synthesized immersive virtual reality worlds, and all variations in between. You can read more about Mozilla’s contribution to the space here.

For this challenge, traditional as well as digital and mixed media artists are invited to submit applications for three levels of financial support. One semi finalist will receive $15,000. Two will receive $7,500 each and two more will receive $5,000 each. Submissions can be made from anywhere in the world on the challenge website. Judging will begin on January 3 and finish on January 15 2018.

After the winners are announced they will have three months in which to complete their work. We’re excited to partner with The Tech Museum of Innovation in San Jose where the winners’ work will then be exhibited. The Tech’s landmark building welcomes more than 500,000 visitors a year with 135,000 square feet of exhibits and activities that empower people to use technology to solve problems.

The Reality Redrawn Challenge is part of the Mozilla Information Trust Initiative announced August 8 to build a movement to fight misinformation online. The initiative aims to stimulate work towards this goal on products, research, literacy and creative interventions. Today’s announcement marks the first effort within the creative space. I am excited to see how artists respond to the crisis of truth that is infecting more and more people’s experience of the web.

Woke up and thought you were in a different reality? Reality Redrawn Challenge launches was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Planet Mozilla: When an Online Community Hits the Big Time, with Col Needham

When an Online Community Hits the Big Time, with Col Needham In December, IMDb founder Col Needham will come to Mozilla's London office to share how he built up a part-time hobby with several unknown volunteers...

Planet Mozilla: Using Headless Mode in Firefox

If you know the ropes, good news! Firefox now has support for headless mode, making it easier to use as a backend to automated tools. You can jump ahead to learn how to use it.

Browser automation is not a new idea, but is an increasingly important part of how modern websites are built, tested, and deployed. Automation setups range from scripts run on local machines to vast deployments of specialized servers running in the cloud. To this end, browsers have long supported some level of automated control, usually via third-party driver software.

Browsers are at their core a user interface to the web, and a graphical user interface in particular. This poses a few problems for automation. In some environments, there may be no graphical display available, or it may be desirable to not have the browser appear at all when being controlled. Running a browser in such environments has required tools like virtual display software, adding complexity. More recently, tools like Lighthouse have packaged complex automated tests into a simple attractive package. They use the browser as a testing runtime, but there’s no need to display the browser window while the tests run.

For years, the best way to load webpages without displaying UI was PhantomJS, which is based on WebKit. While it remains a fantastic tool, it’s valuable to be able to run automated browser tests in official browsers, and so it’s valuable to have a headless mode available.

In June, Google shipped Chrome 59 featuring a headless mode, and Firefox has followed close behind with headless mode available on all platforms starting with version 56.

Using Firefox in Headless Mode

Launching Firefox in headless mode is simple enough. From the command line, simply add the -headless argument:

/path/to/firefox -headless

Great! Firefox is running in headless mode. How do you control it?


There are multiple options out there, many of which actually pre-date headless mode itself. Let’s review them!


There’s a wealth of information about selenium-webdriver testing on the MDN page for headless mode. Here’s a high-level overview.

Selenium is a venerable tool for browser automation, and it’s all the better with a headless browser. Writing a headless test is just as it was before, and there are some great libraries out there to make it easier. For instance, here is a basic node script to capture a screenshot of a webpage:

const { Builder } = require('selenium-webdriver');
const firefox = require('selenium-webdriver/firefox');
const fs = require('fs');

async function capture(url) {
  const binary = new firefox.Binary(firefox.Channel.RELEASE);
  binary.addArguments('-headless'); // until newer webdriver ships

  const options = new firefox.Options();
  options.setBinary(binary);
  // options.headless(); once newer webdriver ships

  const driver = new Builder()
    .forBrowser('firefox')
    .setFirefoxOptions(options)
    .build();

  try {
    await driver.get(url);
    const data = await driver.takeScreenshot();
    fs.writeFileSync('./screenshot.png', data, 'base64');
  } finally {
    await driver.quit();
  }
}


Really the only difference when using headless mode is to make sure the right argument is passed.

That said, if you just need a screenshot of a webpage, that’s built in:

# -screenshot assumes -headless
firefox -screenshot

# want to pick the resolution?
firefox -screenshot --window-size=480,1000

DevTools Debugging Protocol

Firefox has a debugging protocol that allows scripts to drive its DevTools remotely. There are libraries such as node-firefox and foxdriver that use this protocol to remotely debug websites, fetch their logs, etc. For security reasons, the remote debugging protocol is not enabled by default, but it can be enabled in preferences or from the command line:

/path/to/firefox --start-debugger-server 6000 -headless

In addition, the remote debugging protocol also speaks WebSockets! You can connect a webpage to a remote Firefox and drive it from there:

/path/to/firefox --start-debugger-server ws:6000 -headless

Learn (Lots) More!

This is an overview of what’s possible with headless Firefox and it’s the early days of support, but there’s already great information out there. In particular:

Happy scripting!

Planet Mozillarelease tag the following changes have been pushed to [1418205] Update easy...

release tag

the following changes have been pushed to

  • [1418205] Update easy product selector on Browse and Enter Bug pages
  • [1379607] Reimplement Google Analytics on
  • [1393950] Block users from signing into Phabricator unless they have MFA enabled
  • [1363889] Update BMO to use new Mozilla branding
  • [1420300] Move bug tagging tool from global footer to bug footer
  • [1420295] Remove the Bugzilla version number from homepage
  • [1381296] Buttons on modal UI are not displayed in Fira Sans font, too small compared to other ones
  • [1417980] Fix non-HTTPS links and outdated links where possible

discuss these changes on

Planet Mozilla: Using glTF Models with A-Frame

Using glTF Models with A-Frame

You may have found your way to this blog because Mozilla just announced a new challenge with Sketchfab models, or maybe you just want a way to use glTF (GL Transmission Format) models in your A-Frame application. Either way we’ve got you covered. In this post I’ll show you how to load a glTF model into an A-Frame application, and some tips on how to modify it to create a fully interactive VR experience.

Building a simple scene

A-Frame is the easiest way to build VR content on the Web. With just some HTML and a little bit of JavaScript you can build games, interactive demos, and immersive experiences.

Create your first simple A-Frame scene by pasting the following code into a new file called scene.html:

   <html>
     <head>
       <script src=""></script>
     </head>
     <body>
       <a-scene>
         <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
         <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
         <a-cylinder position="1 0.75 -3" radius="0.5" height="1.5" color="#FFC65D"></a-cylinder>
         <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
         <a-sky color="#ECECEC"></a-sky>
       </a-scene>
     </body>
   </html>

In this code you will see a bunch of strange looking elements with names like a-scene and a-box. These are the A-Frame elements. Each element has attributes for setting the position, rotation, color, and other properties of the element.

Now load this page into your web browser. I would be remiss if I didn’t mention that the newest release of Firefox, Firefox Quantum, has excellent support for 3D and VR. You should see a simple scene that looks like this:

Now remove the cube, sphere, and cylinder lines, leaving just the plane. Next, go find a model on Sketchfab that you want to use. All of the models for the WebVR Medieval Fantasy Experience Challenge should use assets from the previous Real-Time Design Challenge and are tagged with medievalfantasyscene. I’ve chosen one called Mega Scene by VitSh.

Click the Download (Free) link and choose the DOWNLOAD GLTF option (not the original Blender or FBX model). You will need to create a free Sketchfab account to download it. Your browser will download a ZIP file. Put the ZIP file somewhere near your scene.html file and extract the ZIP file.

Open the code to scene.html and add a link to the scene.gltf file using the a-gltf-model element like this.

<a-gltf-model src="mega_scene/scene.gltf"></a-gltf-model>  

You should see a scene similar to this one:

Now you can move it around by adding position and rotation attributes, or change the lighting with some a-light elements. See the A-Frame docs for more details.
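
For example (the position, rotation, and light values below are arbitrary placeholders; adjust them to taste):

```html
<!-- Reposition and rotate the model, then add a basic light setup -->
<a-gltf-model src="mega_scene/scene.gltf"
              position="0 0 -5"
              rotation="0 45 0"></a-gltf-model>
<a-light type="ambient" color="#888"></a-light>
<a-light type="directional" position="1 4 2" intensity="0.6"></a-light>
```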

Note: if you don’t see the model, check in the JavaScript console. Some operating systems set the file permissions of downloads to be readable only by the current user. On my Mac, this created an HTTP 401 error on my local webserver. I had to change the permission to world-readable to be able to view the scene.

For more information about the gltf-model component see the documentation page.

Subsetting Scenes

The scene above looks cool, but to create your own experience you will probably want to use only certain elements from it, not everything. A-Frame works in terms of models, so if we want to use just part of a model we need to subset it. There are a few ways to do this.

If the designer of the original scene has extracted models from the scene, you can just download those directly. Many of the designers on Sketchfab have already done so. Check the main model description for links, or look at galleries owned by the original designer. For example, the imp from Baba Yaga’s hut is available here. Many of the models have also been tagged with medievalfantasyassets.

If the particular scene model you wish to load was marked with individual parts, you can also use the gltf-part component to load only a portion of it. You may need to open the .gltf file to find these names. The file is just JSON, so search for properties called name.
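
As a sketch, assuming you found a node named "Tower_01" in the file (that name is a made-up example, and the exact attribute names depend on the version of the gltf-part component you install):

```html
<!-- Load only the named part of the model -->
<a-entity gltf-part="src: mega_scene/scene.gltf; part: Tower_01"></a-entity>
```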

Finally, if you are brave, you can download the open-source 3D-editor tool Blender for free to edit and subset the original designs directly. You will also need to install a glTF exporter such as this one from the Khronos Group, or this one from Kupoman.

Supporting Assets

3D models are the core of a VR scene, but to create a complete experience you’ll probably want some particle effects, animation, and even audio. The wonderful A-Frame community has built many supplemental components you can use in your application. One of my personal favorites is Diego Goberna’s environment component which can randomly generate terrain, objects, and skyboxes for you. For particles you can use IdeaSpace VR’s particle-system-component and for simple physics you can try out Don McCurdy’s physics component.

A-Frame has built-in support for positional audio, but you still need to provide your own audio files. You can find lots of natural sound effects at freesound, and the Free Music Archive has lots of creative commons-licensed music. You can even find background music designed to loop. Always remember to credit the original author of any assets you use in your creation.

Next Steps

I hope this post gives you a good start at creating cool VR content. If you run into any challenges, feel free to reach out for help. The main A-Frame community lives on the Slack group. We have created a Slack channel specifically for the challenge. You can also read the A-Frame blog for inspiration by seeing what other A-Frame developers have created. And of course use the aframe tag on Stack Overflow. Happy Coding!

Planet Mozilla: Off-Main-Thread Painting

I’m excited to announce Off-Main-Thread painting, our new Firefox graphics performance effort! It’s shipping soon in our next release, Firefox 58 – directly on the heels of Advanced Layers, our new compositor for Firefox 57.

To understand OMTP, and why it’s a big deal for us, it helps to understand how Firefox renders a webpage down to pixels on your screen. There are four main steps involved:

  1. Making a Display List: Here we collect the visible elements on the page and create high-level primitives to encapsulate rendering each one. These primitives are called “display items.”
  2. Assigning Layers: Here we try to group display items together into “layers”, based on how they are scrolled or animated. There are different types of layers. Display items will usually be grouped into “Painted” layers, which have a texture (or bitmap) that is updated when items are added, removed, or changed.
  3. Rasterization: This is where each display item is asked to render itself into its assigned layer. For example, a “table” item might issue a series of API calls to draw borders and lines.
  4. Compositing: Finally, the layers are composited into a single final image, which is then sent to the monitor. This step uses Direct3D or OpenGL when available.
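As a rough mental model, the four steps compose like a small pipeline. This is plain Python, not Gecko code; every name here is invented for illustration:

```python
# Toy model of the four rendering steps described above (illustrative only;
# DisplayItem, PaintedLayer groupings, etc. are stand-ins, not Gecko types).

class DisplayItem:
    def __init__(self, kind, scroll_group):
        self.kind = kind                  # e.g. "table", "text", "banner"
        self.scroll_group = scroll_group  # how the element scrolls/animates

def build_display_list(page):
    # Step 1: one high-level primitive per visible element.
    return [DisplayItem(el["kind"], el["scroll"]) for el in page]

def assign_layers(display_list):
    # Step 2: group items that scroll or animate together into layers.
    layers = {}
    for item in display_list:
        layers.setdefault(item.scroll_group, []).append(item)
    return layers

def rasterize(layers):
    # Step 3: each item renders itself into its assigned layer's texture
    # (modeled here as a list of fake pixel buffers).
    return {group: [f"pixels({item.kind})" for item in items]
            for group, items in layers.items()}

def composite(textures):
    # Step 4: flatten all layer textures into one final image.
    return [px for group in sorted(textures) for px in textures[group]]

page = [{"kind": "table", "scroll": "root"},
        {"kind": "text", "scroll": "root"},
        {"kind": "banner", "scroll": "fixed"}]
frame = composite(rasterize(assign_layers(build_display_list(page))))
print(frame)  # ['pixels(banner)', 'pixels(table)', 'pixels(text)']
```

The key structural point the real pipeline shares with this sketch: each step consumes the previous step's output, so any step can in principle be handed to another thread.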

These steps occur across two threads, like so:

Painting before OMTP

This process is pretty complicated! And it’s expensive. For Firefox to render at 60fps, we have about 16ms to process input events, run JavaScript, perform garbage collection, and render everything to the screen. We want to maximize the time available for input events and JavaScript, and minimize the time we spend crunching pixels. Otherwise… the browser appears to lag, skip, or pause.

The Compositing step already happens off the main thread, but the other major steps do not. And while rasterization is not always expensive, it can be, and it is very much affected by resolution. Rasterizing on a 4K monitor requires computing roughly 10 times as many pixels as, say, a 1024×768 screen.

Off Main Thread Painting is our answer to rasterization costs. As the name suggests – we simply do it on another thread! It turned out to be surprisingly easy – with an asterisk.

Normally, our display items render through an API we call Moz2D. Moz2D was already designed to support multiple backends – Skia, Cairo, Direct2D, et cetera. We added an additional “Capture” backend, where instead of immediately issuing commands, we can record them in a list. That list then gets replayed on a painting thread. Voilà! Rasterization is now asynchronous.
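The record-and-replay idea can be sketched in a few lines of Python. This is only an illustration of the pattern; the class and method names are invented and bear no relation to the real Moz2D interfaces:

```python
import threading

# Sketch of a "capture" drawing target: instead of rasterizing
# immediately, the main thread records (method, args) pairs into a list,
# and a paint thread replays them against a real backend later.

class CaptureTarget:
    """Records draw commands instead of executing them."""
    def __init__(self):
        self.commands = []

    def fill_rect(self, x, y, w, h, color):
        self.commands.append(("fill_rect", (x, y, w, h, color)))

    def draw_line(self, x1, y1, x2, y2):
        self.commands.append(("draw_line", (x1, y1, x2, y2)))

class RealTarget:
    """Stand-in for a real rasterizing backend (Skia, Direct2D, ...)."""
    def __init__(self):
        self.ops = []

    def fill_rect(self, *args):
        self.ops.append(("fill_rect", args))

    def draw_line(self, *args):
        self.ops.append(("draw_line", args))

def replay(capture, backend):
    # The paint thread walks the recording and issues the real draw calls.
    for method, args in capture.commands:
        getattr(backend, method)(*args)

# Main thread: recording is cheap, so we return to the event loop quickly.
capture = CaptureTarget()
capture.fill_rect(0, 0, 100, 50, "white")
capture.draw_line(0, 0, 100, 50)

# Paint thread: replays the recording asynchronously.
backend = RealTarget()
painter = threading.Thread(target=replay, args=(capture, backend))
painter.start()
painter.join()
print(len(backend.ops))  # 2
```

Because the recording is just data, it can cross the thread boundary without the expensive backend ever being touched on the main thread.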

The new diagram looks like this:

Painting with OMTP

Notice that the final rendering step now flows from the paint thread. That’s intentional – as soon as the main thread is done recording, we can return to the event loop and begin processing JavaScript and input events again. The paint thread now has the responsibility of sending new “painted” layer content to the compositor.

What happens if painting takes multiple frames to complete? Say the paint thread is going to take 100ms to rasterize a very complex recording. Will the main thread keep piling up new frames and sending them to the paint thread? The answer is: no. Because Firefox double buffers, we currently cannot allow more than one frame of slack. If we begin rendering a new frame, we will wait for the previous frame to finish. Luckily, since we only render on vertical sync (every 16ms on a 60Hz display), this affords us a full 32ms (minus whatever time we spent preparing and recording the previous frame, of course) before we start delaying work on the main thread.
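The arithmetic in that last sentence can be made concrete with a tiny calculation; the 5ms recording time below is a made-up example:

```python
# Budget arithmetic for the one-frame-of-slack rule described above.
# With double buffering, rasterizing frame N may overlap recording of
# frame N+1, so the paint thread gets up to two vsync intervals, minus
# whatever the main thread spent preparing the recording.
vsync_ms = 16        # one vsync interval on a 60Hz display (approx.)
slack_frames = 1     # double buffering allows one in-flight frame

record_ms = 5        # hypothetical time spent recording the previous frame
paint_budget_ms = vsync_ms * (1 + slack_frames) - record_ms
print(paint_budget_ms)  # 27ms before the paint thread starts
                        # delaying work on the main thread
```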

To see why this is beneficial, imagine a series of frames before and after OMTP. If each frame exceeds the frame budget – even if rasterization was not the biggest component (like it is in the diagram below) – our composite will be delayed until the next vsync. In the diagram below, not only are we missing frames, but we’re spending a good deal of time doing nothing.

Frame Model before OMTP

Now, imagine the same content being rendered with OMTP. The main thread is now recording commands and sending them to the paint thread. We can resume processing the next frame up until another rasterization needs to be queued. As long as neither thread exceeds its frame budget, we’ll always be able to composite on time. And even if we blow the frame budget, at least we’ll get a few more frames in than in the previous diagram.

Frame Model with OMTP

Data-Driven Decision

When we started planning for future Graphics team work last year, we set out by instrumenting Firefox with Telemetry. We wanted to know how much painting was affecting frame time, and in addition, we wanted to know more about slow paints. When painting exceeded a certain threshold (set to 15ms), how was that time divided between different phases of the painting process?

We had a gut feeling that rasterization was less of a cost than expected. Partly because it’s incremental (we rarely have to re-rasterize an entire page), and partly because we use Microsoft’s high-performance rasterization library, Direct2D. And indeed, our gut feelings were confirmed: for most “slow” paints, the costs were in the preparatory steps. Rasterization was sometimes a large chunk, but usually, it was somewhere between 10-20% of the entire paint cycle. Immediately, this data kicked off another project: Retained Display Lists, which the layout team will be talking about soon.

Even though rasterization was usually fast, we had enough evidence that it did consume precious frame cycles, and that was motivation enough to embark on this project.


A nice side effect of having instrumented Firefox is that we were pretty quickly able to see the effects of OMTP. The two graphs below are unfortunately a bit difficult to read or condense, but they are straight from our public Telemetry dashboard. On the left is data from Firefox 57, and on the right is data from Firefox 58. The horizontal axis is how expensive rasterization was as a percentage of the total frame time. The vertical axis is how often that particular weighting occurred.

In Firefox 57, “cheap” rasterizations (those less than ~10% of the paint cycle) occur 51% of the time. In Firefox 58, they occur 80% of the time! That means in Firefox 58, rasterization will consume less of the frame budget on average. Similarly, in Firefox 57, rasterization is a significant slice – 50% of the paint cycle or more – 21% of the time. In Firefox 58, that scenario occurs only 4% of the time!

To get a better feel for absolute effects, we also developed a microbenchmark called “rasterflood_svg”. It tries to simulate painting during a very heavy workload by spending 14ms of each frame spinning the CPU on JavaScript, and then forcing the browser to re-render a complex SVG pattern. In theory, OMTP should improve our framerate on a benchmark like this, because more time is available to render before the next frame begins. It closely matches the vsync diagram from earlier.

And indeed, we do see benefits! With Direct2D, our microbenchmark improved FPS by 30%. And with Skia, our microbenchmark improved FPS by 25%. We expect Skia wins to be even greater in the future as we experiment with parallel painting.

OMTP Microbenchmark


Earlier we mentioned this was easy to implement. If that’s the case, why didn’t we just do it a long time ago? There are a few reasons, but in actuality it was a super long road to get here, and it was only made simple by years of precursor work. This project required Off Main Thread Compositing and significant work to simplify and reduce complexity in both Layers and Moz2D. Some of that work was not even motivated until Electrolysis took off. We even had an earlier OMTP project (spearheaded by Jerry Shih for FirefoxOS), but it found roadblocks in our IPC layer. We were only able to overcome those roadblocks with the knowledge learned from past efforts, combined with later refactorings.

There were also some thread-safety complications, of course. Our 2D API is “copy on write”. You can issue many draw calls to a Moz2D surface, and even create copies of the surface, but usually no actual computations are performed until the contents of a surface will be read. So, a copy of a surface is just a pointer. When the original surface is about to be mutated, any outstanding copies are told to immediately duplicate the underlying pixels, so they reflect the image as it was when the “copy” was created.

Why did this pose problems for OMTP? Well, it turns out we copy Moz2D surfaces a lot. Those copies can be sent from the main thread to the paint thread. If the main thread happens to mutate the original surface while the paint thread tries to read from a shallow copy, there will be a race. We definitely don’t want to deep-copy all of our temporary surfaces on the main thread, so instead, we added per-surface synchronization to Moz2D.
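A toy Python model of copy-on-write with per-surface locking, in the spirit of what is described above (illustrative only, not Moz2D code):

```python
import threading

# Copy-on-write surfaces with per-surface synchronization. A "copy" only
# shares a reference; the pixels are duplicated lazily, under the lock,
# just before the original is mutated. A paint-thread reader therefore
# always sees the image as it was when the copy was created.

class Surface:
    def __init__(self, pixels):
        self.pixels = pixels
        self.lock = threading.Lock()
        self.copies = []              # outstanding shallow copies

    def copy(self):
        with self.lock:
            snap = SurfaceCopy(self)
            self.copies.append(snap)
            return snap               # cheap: no pixels duplicated yet

    def mutate(self, index, value):
        with self.lock:
            # About to change pixels: resolve every shallow copy first.
            for snap in self.copies:
                snap.materialize()
            self.copies.clear()
            self.pixels[index] = value

class SurfaceCopy:
    def __init__(self, source):
        self.source = source
        self.pixels = None            # None = still sharing source pixels

    def materialize(self):
        if self.pixels is None:
            self.pixels = list(self.source.pixels)

    def read(self):
        # The paint thread reads through the snapshot if one was taken.
        with self.source.lock:
            if self.pixels is not None:
                return self.pixels
            return list(self.source.pixels)

surface = Surface([0, 0, 0])
snap = surface.copy()          # just a pointer
surface.mutate(0, 255)         # forces the snapshot to keep the old pixels
print(snap.read(), surface.pixels)  # [0, 0, 0] [255, 0, 0]
```

The per-surface lock is the important part: without it, a main-thread mutation and a paint-thread read of the same shallow copy would race, which is exactly the hazard described above.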

Finally, another issue we ran into was the Direct2D global lock. Rather than completely audit or overhaul how Direct2D is used on both threads, we decided to enable Direct2D thread safety. When this is enabled, Direct2D will hold a global lock during certain critical sections. For example, this lock is held during surface destruction/allocation, when surfaces are “flushed” to the GPU, and when surfaces are copied. A good deal of work was us hitting these contention points for various reasons and addressing them, sometimes by moving more code off the main thread, and sometimes by fixing silly mistakes.

Future Work

What’s left to do? We have a few follow-up projects in mind. Now that we have asynchronous painting, it makes sense to explore parallel painting as well. We already support “tiled” rendering on Mac, and now we can explore both asynchronous tiling and painting tiles and layers in parallel. We also want to explore how well this works on Windows, both with Skia and with Direct2D. Our “slow rasterization” benchmarks suggest that parallel painting will be a huge win for Skia.

There are also just some missing features in OMTP right now. For example, we do not support rasterizing “mask” layers on the paint thread. We would like to move some of this functionality out of the renderer into Advanced Layers, where masking can be done in our new, much-more-intelligent batching compositor.

I’d like to thank Mason Chang, Ryan Hunt, Bas Schouten, Jerry Shih, Matt Woodrow, and our 2017 intern Dominic Farolino for contributing to these projects and getting them out the door!

Planet Mozilla: Desirable features of experimentation tools


At Mozilla, we're quickly climbing up our Data Science Hierarchy of Needs[1]. I think the next big step for our data team is to make experimentation feel natural. There are a few components to this (e.g. training or culture) but improving the tooling is going to be …

Planet Mozilla: This Week in Rust 211

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week is a bit sad for lack of a crate. Look, if you want a weekly crate, submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

146 pull requests were merged in the last week

New Contributors

  • Christian Duerr
  • Irina-Gabriela Popa
  • Julian Kulesh
  • Kenjiro Nakayama
  • Kyle Huey
  • Nikolay Merinov

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Planet Mozilla: MozMEAO SRE Status Report - December 5, 2017

Here’s what happened on the MozMEAO SRE team from November 14th to December 5th.

Current work


Work continues on the SUMO move to AWS. We’ve provisioned a small RDS MySQL instance in AWS for development and tried importing a production snapshot. The import took 30 hours on a db.t2.small instance, so we experimented with temporarily scaling the RDS instance to a db.m4.xlarge. The import is now expected to complete in 5 hours.

We will investigate if incremental backup/restore is an option for the production transition.


MDN had several short downtime events in November, caused by heavy load due to scraping. Our K8s liveness and readiness probes often forced pods to restart when MySQL was slow to respond.

Several readiness and liveness probe changes were issued by @escattone and @jwhitlock to help alleviate the issue.

The November 2017 Kuma report has additional details.
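The general shape of such probe tuning looks like the following sketch; the endpoints and numbers are hypothetical, not MDN's actual settings. Raising timeoutSeconds and failureThreshold makes Kubernetes more patient before acting on a pod that is merely slow (e.g. waiting on MySQL), not dead:

```yaml
livenessProbe:
  httpGet:
    path: /healthz          # hypothetical endpoint
    port: 8000
  timeoutSeconds: 10        # allow slow responses under load
  periodSeconds: 30
  failureThreshold: 6       # require sustained failure before a restart
readinessProbe:
  httpGet:
    path: /readiness        # hypothetical endpoint
    port: 8000
  timeoutSeconds: 10
  failureThreshold: 3       # only stops routing traffic; does not kill the pod
```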

We now have a few load balancer infrastructure tests for MDN, implemented in this pull request.

MDN Interactive-examples

Caching is now more granular, with different cache times for different assets.


Bedrock is transitioning to a local SQLite DB and clock process in every container. This removes the dependency on RDS and makes running Bedrock cheaper. In preparation for this change, S3 buckets have been created for dev, stage, and prod.


Planet Mozilla: Kuma Report, November 2017

Here’s what happened in November in Kuma, the engine of MDN Web Docs:

We’re planning on more of the same for December.

Done in November

Shipped the First Interactive Examples

We’ve launched the new interactive examples on 20+ pages. Try them out on the pages for the CSS property box-shadow and the JavaScript method Array.slice.

box-shadow example

We’re monitoring the page load impact of this limited rollout, and if the results are good, we have another 400 examples ready to go, thanks to Mark Boas and others. Mark also added a JavaScript Interactive Examples Contributing Guide, so that contributors can create even more.

We want the examples to be as fast as possible. Schalk Neethling improved the page load speed of the <iframe> by using preload URLs (PR 4537). Stephanie Hobson and Schalk dived into HTTP/2, and identified require.js as a potential issue for this protocol (Kuma PR 4521 and Interactive Examples PR 329). Josh Mize added appropriate caching headers for the examples and static assets (PR 326).

For the next level of speed gains, we’ll need to speed up the MDN pages themselves. One possibility is to serve from a CDN, which will require big changes to make pages more cacheable. One issue is waffle flags, which allow us to experiment with per-user changes, at the cost of making pages uncacheable. Schalk has made steady progress in eliminating inactive waffle flag experiments, and this work will continue into December.

Continued Migration of Browser Compatibility Data

The Browser Compatibility Data project was the most active MDN project in November. 36.6% of the MDN pages (2284 total) have been converted. Here are some highlights:

  1. Imported more CSS data, such as the huge list of allowed values for the list-style-type property (this list uses georgian). This property alone required 7 PRs, starting with PR 576. Daniel D. Beck submitted 32 CSS PRs that were merged in November, and is making good progress on converting CSS data.
  2. Added browser and version validation, a month-long effort in PR 439 from Florian Scholz and Jean-Yves Perrier.
  3. Added a runtime_flag for features that can be enabled at browser startup (PR 615 from Florian Scholz).
  4. Added the first compatibility data for Samsung Internet for Android (PR 657 from first-time contributor Peter O'Shaughnessy).
  5. Shipped the new compatibility table to beta users. Stephanie Hobson resurrected a design that had been through a few rounds of user testing (PR 4436), and has made further improvements such as augmenting colors with gradients (PR 4511). For more details and to give us feedback, see Beta Testing New Compatability Tables on Discourse.

New Browser Compatibility Table

Sticky Table of Contents and Other Article Improvements

We shipped some additional article improvements in November.

The new table of contents is limited to the top-level headings, and “sticks” to the top of the window at desktop sizes, showing where you are in a document and allowing fast navigation (PR 4510 from Stephanie Hobson).

Sticky Table of Contents

The breadcrumbs (showing where you are in the page hierarchy) have moved to the sidebar, and now have metadata tags. Stephanie also refreshed the style of the sidebar links.

Breadcrumbs and Quick Links

Stephanie also updated the visual hierarchy of article headings. This is most noticeable on <h3> elements, which are now indented with blank space.

New <h3> style

Improved MDN in AWS and Kubernetes

We continued to have performance and uptime issues in AWS in November. We’re prioritizing fixing these issues, and we’re delaying some 2017 plans, such as improving KumaScript translations and upgrading Django, to next year.

We lost GZip compression in the move to AWS. Ryan Johnson added it back in PR 4522. This reduced the average page download time by 71% (0.57s to 0.16s), and contributed to a 6% decrease in page load time (4.2 to 4.0s).

Page Download drop due to GZip

Heavy load due to scraping caused 6 downtimes totaling 35 minutes. We worked to improve the performance of unpopular pages that get high traffic from scrapers, such as document list views (PR 4463 from John Whitlock) and the revisions dashboard (PR 4520 from Josh Mize). This made the system more resilient.

Kubernetes was contributing to the downtimes, by restarting web servers when they started to undergo heavy load and were slow to respond. We’ve adjusted our “readiness” and “liveness” probes so that Kubernetes will be more patient and more gentle (Infra PR 665 from Ryan Johnson).

These changes have made MDN more resilient and reliable, but more work will be needed in December.

Stephanie Hobson fixed the development favicon appearing in production (PR 4530), as well as an issue with lazy-loading web fonts (PR 4533).

Ryan Johnson continues work on our deployment process. Pushing certain branches will cause Jenkins to take specific deployment steps. Pushing master will run tests and publish a Docker image. Pushing stage-push will deploy that image to staging. Pushing stage-integration-tests will run browser and HTTP tests against that deployment. We’ll make these steps more reliable, add production variants, and then link them together into automated deployment pipelines.

Shipped Tweaks and Fixes

There were 260 PRs merged in November:

Many of these were from external contributors, including several first-time contributions. Here are some of the highlights:

Planned for December

Mozilla gathers for the All-Hands event in Austin, TX in December, which gives us a chance to get together, celebrate the year’s accomplishments, and plan for 2018. Mozilla offices will shut down for the last full week of December. This doesn’t leave a lot of time for coding.

We’ll continue working on the projects we worked on in November. We’ll convert more Browser Compatibility data. We’ll tweak the AWS infrastructure. We’ll eliminate and convert more waffle flags. We’ll watch the interactive examples and improved compatibility tables, and ship them when ready.

We’ll also take a step back, and ask if we’re spending time and attention on the most important things. We’ll think about our processes, and how they could better support our priorities.

But mostly, we’ll try not to mess things up, so that we can enjoy the holidays with friends and family, and come back refreshed for 2018.

