Planet Mozilla: Yes C is unsafe, but…

I posted curl is C a few days ago and it took off on Hacker News, Reddit and elsewhere, drawing well over a thousand comments in those forums alone. The blog post has been read more than 130,000 times so far.

Addendum a few days later

Many commenters on my curl is C post pushed back on my claim that most of our security flaws aren’t due to curl being written in C. In some of the threads it turned into a CVE-counting game.

I think that’s missing the point I was trying to make. Even if 75% of them had happened due to us using C, that fact alone would still not be a strong enough reason for me to reconsider our language of choice (at this point in time). We use C for a whole range of reasons, as I tried to lay out there, in spite of the security challenges the language brings. We know C has tricky corners and we know we are likely to make more mistakes going forward.

curl is currently one of the most distributed and most widely used software components in the universe, be it open or proprietary, and there are easily way over three billion instances of it running in appliances, servers, computers and devices across the globe. Right now. In your phone. In your car. In your TV. In your computer. Etc.

If we then have had 40, 50 or even 60 security problems because of our use of C throughout our 19 years of history, that really isn’t a whole lot given the scale and time span we’re talking about here.

Using another language would’ve caused at least some problems due to that language, plus I feel a need to underscore the fact that none of the memory-safe languages anyone would suggest we switch to has been around for 19 years. A portion of our security bugs were introduced in the project before those suggested alternatives were even available, let alone available as stable and functional alternatives.

This is of course no guarantee that there aren’t still more ugly things to discover, or that we won’t mess up royally in the future, but who will throw the first stone when it comes to that? We will continue to work hard on minimizing risks, detecting problems early ourselves, and working closely together with everyone who reports suspected problems to us.

Number of problems as a measurement

The fact that we have 62 CVEs to date (and more will surely follow) is rather proof that we work hard on fixing bugs, that we have an open process that handles problems in the most transparent way we can think of, and that people are on their toes looking for these problems. You should not rate a project in any way purely based on the number of CVEs – you really need to investigate what lies behind the numbers if you want to understand and judge the situation.


Let me clarify this too: I can very well imagine a future where we transition to another language or attempt various other things to enhance the project further – security-wise and more. I’m not really ruling anything out, as I usually only have very vague ideas of what the future might look like. I just don’t expect it to happen within the next few years.

These “you should switch language” remarks come, strangely enough, from the backseat drivers of the Internet: those who can tell us with confidence how to run our project, but who don’t actually show us any code.


What perhaps made me most sad in the aftermath of said previous post is everyone who failed to hold more than one thought in their heads at a time. In my post I wrote 800 words on some of the reasoning behind us sticking to the language C in the curl project. I specifically did not say that I dislike certain other languages, or that any of those alternative languages are bad or should be avoided. Please, friends: I wrote about why curl uses C. There are many fine languages out there and you should all use them as much as you possibly can, and I will too – but not in the curl project (at the moment). So no, I don’t hate language XXXX. I didn’t say so, and I didn’t imply it either. Don’t put that label on me, thanks.

Planet Mozilla: Our (css) discussion is broken

I have seen clever, thoughtful and hardworking people trying to be right about whether CSS is or is not broken. Popcorn. I will not attempt to be right about anything in this post. I'm just sharing a feeling.

Let me steal the image from this provocative tweet/thread. You can also read the thread in these tweets.


I guess that, looking at the image, I understood how and why these discussions would never reach any resolution. Just let me clarify a few things.

A Bit Of An Introduction

CSS means Cascading Style Sheets: "Cascading Style Sheets (CSS) is a simple mechanism for adding style (e.g., fonts, colors, spacing) to Web documents." Words have meanings. Spec-wise, it is 20 years old, but the idea came from a proposal on www-talk on October 10, 1994. Fast forward to now: there is an ongoing effort to formalize the CSS Object Model, aka CSSOM, which "defines APIs (including generic parsing and serialization rules) for Media Queries, Selectors, and of course CSS itself."

Let's look at the very recent CSS Grid, currently in the Candidate Recommendation phase and starting to be well deployed in browsers. Look carefully at the prose: the word DOM is mentioned only twice, in an example. CSS is a language for describing styling rules.

The Controversy

What people are arguing about (not discussing, not dialoguing about) is not CSS itself, but how to apply style rules to a document.

  • Some developers prefer to use style elements and stylesheet files, using the properties of the cascade and specificity (which are useful), aka CSS.
  • Some developers want to apply the style to each individual node in the DOM using JavaScript, to constrain (remove) the cascade and specificity, because they consider them annoying for their use case.
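To make the two approaches concrete, here is a deliberately minimal sketch (the markup and class name are invented for illustration) of the same heading colored red both ways:

```html
<!-- Approach 1: a stylesheet rule, relying on the cascade and specificity. -->
<style>
  .warning h1 { color: red; }
</style>
<div class="warning"><h1>Careful</h1></div>

<!-- Approach 2: JavaScript setting the style directly on one DOM node,
     side-stepping the cascade and specificity entirely. -->
<h1 id="careful">Careful</h1>
<script>
  document.getElementById("careful").style.color = "red";
</script>
```

In the second approach the declaration lives on the element itself, so no stylesheet selector short of !important can override it – which is exactly the constraint some developers are after.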

devtools screenshot

I do not have a clear-cut opinion about it. I don't think CSS is broken. I think it is perfectly usable. But I also completely understand what the developers who want to use JavaScript to set styles on elements are doing. It makes me uncomfortable the same way that Flash (circa the 2000s) or frames (circa 1995) made me uncomfortable. It is something related to la rouille du Web (the rust of the Web): the permanence of what we create on the Web. I guess in this discussion there are sub-discussions about craft and the love of it, the Web's permanence, and the notion of Web industrialization.

One thing that rubs me the wrong way is when people talk about HTML and CSS with the "in JS" term attached. When we manipulate the DOM, create new nodes and associate styles with them, we are not doing HTML and CSS anymore. We are basically modifying the DOM, which is a completely different layer. It's easy to see how the concerns are different. When we open a web site made with React, for example, the HTML semantics are often gone. You could use only div and span and it would be exactly the same.

To better express my feeling, let's rephrase this:

You could use only div and span and it would be exactly the same.

It would become:

pronoun verb verb adverb noun conjunction noun conjunction pronoun verb verb adverb determiner noun.

Then we would apply some JavaScript to convey meaning onto it.


As I said, I don't think I'm adding anything useful to the debates. I'm not a big fan of doing everything as apps through JavaScript, maybe because I'm old or maybe because I value time.

I would also be curious to know whether the advocates of applying styles to nodes in the DOM have tried the experiment of programmatically generating a hierarchical CSS from the DOM. Basically the salmon ladder up the river, back to the source. I'm not talking about creating a snapshot with plenty of style attributes, but about reverse-engineering the cascade. At least as an experiment, to understand the two paths and what we could learn from them. It would minimize the CSS selectors, take advantage of the cascade, avoid !important as much as possible, etc.


Planet WebKit: New Web Features in Safari 10.1

A new version of Safari shipped with the release of iOS 10.3 and macOS Sierra 10.12.4. Safari on iOS 10.3 and Safari 10.1 on macOS add many important web features and improvements from WebKit that we are incredibly excited about.

While this release makes the web platform more capable and powerful, it also makes web development easier, simplifying the ongoing maintenance of your code. We’re excited to see how web developers will translate these improvements into better experiences for users.

Read on for a quick look at the features included in this release:


Fetch

Fetch is a modern replacement for XMLHttpRequest. It provides a simpler approach for requesting resources asynchronously over the network. It also makes use of Promises from ECMAScript 2015 (ES6) for convenient, chainable response handling. Compared to XMLHttpRequest, the Fetch API allows for cleaner, more readable code that is easier to maintain.

let jsonURLEndpoint = "";
fetch(jsonURLEndpoint, {
    method: "get"
}).then(function(response) {
    response.json().then(function(json) {
        // Work with the decoded JSON data here.
    });
}).catch(function(error) {
    // Handle network or decoding errors here.
});
Find out more in the Fetch Living Standard.

CSS Grid Layout

CSS Grid Layout gives web authors a powerful new layout system based on a grid of columns and rows in a container. It is a significant step forward in providing manageable page layout tools in CSS that enable complex graphic designs that respond to viewport changes. Authors can use CSS Grid Layout to more easily achieve designs normally seen in print that previously required layout quirks in existing CSS tools like floats and Flexbox.

Read more in the blog post, CSS Grid Layout: A New Layout Module for the Web.
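As a rough sketch (the class names are invented for illustration), a classic sidebar-plus-content design that previously required floats can be declared directly on the container:

```css
/* A two-column grid: a fixed 200px sidebar and a flexible content area. */
.page {
    display: grid;
    grid-template-columns: 200px 1fr;  /* sidebar width, then remaining space */
    grid-gap: 1em;                     /* gutter between columns and rows */
}

/* Children are placed into the grid by column. */
.sidebar { grid-column: 1; }
.content { grid-column: 2; }
```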

ECMAScript 2016 & ECMAScript 2017

WebKit added support in Safari 10.1 for both ECMAScript 2016 and ECMAScript 2017, the latest standards revisions for the JavaScript language. ECMAScript 2016 adds small incremental improvements, but the 2017 standard brings several substantial improvements to JavaScript.

ECMAScript 2016 includes the exponentiation operator (x ** y instead of Math.pow(x, y)) and Array.prototype.includes. Array.prototype.includes is similar to Array.prototype.indexOf, except it returns a boolean and can find values such as NaN.

ECMAScript 2017 brings async and await syntax, shared memory objects including Atomics and SharedArrayBuffer, String.prototype.padStart, String.prototype.padEnd, Object.values, Object.entries, and allows trailing commas in function parameter lists and calls.
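A few of these additions in one place (a quick sketch; the values are arbitrary):

```javascript
// ECMAScript 2016: exponentiation operator and Array.prototype.includes.
console.log(2 ** 10);                   // 1024, same as Math.pow(2, 10)
console.log([1, NaN, 3].includes(NaN)); // true, whereas indexOf(NaN) gives -1

// ECMAScript 2017: Object.values/Object.entries and string padding.
const config = { host: "example.org", port: 8080 };
console.log(Object.values(config));     // ["example.org", 8080]
console.log("7".padStart(3, "0"));      // "007"

// ECMAScript 2017: async/await syntax on top of Promises.
async function doubled(x) {
    return (await Promise.resolve(x)) * 2;
}
doubled(21).then(function(result) {
    console.log(result);                // 42
});
```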

IndexedDB 2.0

WebKit’s IndexedDB implementation has significant improvements in this release. It’s now faster, standards-compliant, and supports new IndexedDB 2.0 features. IndexedDB 2.0 adds support for binary data types as index keys, so you’ll no longer need to serialize them into strings or array objects. It also brings object store and index renaming, getKey() on IDBObjectStore, and getPrimaryKey() on IDBIndex.

Find out more in the Indexed Database API 2.0 specification.

Custom Elements

Custom Elements enable web authors to create reusable components defined by their own HTML elements, without depending on a JavaScript framework. Like built-in elements, Custom Elements can communicate and receive new values through their attributes, and respond to changes in attribute values using reaction callbacks.

For more information, read the Introducing Custom Elements blog post.
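As a minimal sketch (the element name and its behavior are invented for illustration), a custom element with a reaction callback looks like this:

```html
<script>
// A hypothetical custom element that re-renders when its attribute changes.
class StatusBadge extends HTMLElement {
  static get observedAttributes() { return ["state"]; }
  // Reaction callback: runs whenever an observed attribute changes.
  attributeChangedCallback(name, oldValue, newValue) {
    this.textContent = "Status: " + newValue;
  }
}
customElements.define("status-badge", StatusBadge);
</script>

<status-badge state="online"></status-badge>
```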


Gamepad API

The Gamepad API makes it possible to use game controllers in your web apps. Any gamepad that works on macOS without additional drivers will work on a Mac. All MFi gamepads are supported on iOS.

Read more about the API in the Gamepad specifications.

Pointer Lock

In Safari on macOS, requesting Pointer Lock on an element gives developers the ability to hide the mouse pointer and access the raw mouse movement data. This is particularly helpful for authors creating games on the web. It extends the MouseEvent interface with movementX and movementY properties to provide a stream of information even when the movements are beyond the boundaries of the visible range. In Safari, when the pointer is locked on an element, a banner is displayed notifying the user that the mouse cursor is hidden. Pressing the Escape key once dismisses the banner, and pressing the Escape key again will release the pointer lock on the element.

You can get more information from the Pointer Lock specifications.

Keyboard Input in Fullscreen

WebKit used to restrict keyboard input in HTML5 fullscreen mode. With Safari 10.1 on macOS, WebKit removes those keyboard input restrictions in HTML5 fullscreen mode.

Interactive Form Validation

With support for HTML Interactive Form Validation, authors can create forms with data validation constraints that are checked automatically by the browser when the form is submitted, all without the need for JavaScript. It greatly simplifies the complexity of ensuring good data entry from users on the client-side and minimizes the need for complex JavaScript.

Read more about HTML Interactive Form Validation in WebKit.

Input Events

Input Events simplifies implementing rich text editing experiences on the web in contenteditable regions. The Input Events API adds a new beforeinput event to monitor and intercept default editing behaviors and enhances the input event with new attributes.

You can read more about Enhanced Editing with Input Events.

HTML5 Download Attribute

The download attribute for anchor elements is now available in Safari 10.1 on macOS. It indicates that the link target should be downloaded as a file instead of navigated to. It also enables developers to create links that download blob data as files entirely from JavaScript. The optional value of the download attribute can be used to provide a suggested name for the file.

<a href="" download="webkit-favicon.ico">Download Favicon</a>

Find out more from the Downloading resources section in the HTML specification.

HTML Media Capture

In Safari on iOS, HTML Media Capture extends file input controls in forms to allow users to use the camera or microphone on the device to capture data.

File inputs can be used to capture an image, video, or audio:

<input name="imageCapture" type="file" accept="image/*" capture>
<input name="videoCapture" type="file" accept="video/*" capture>
<input name="audioCapture" type="file" accept="audio/*" capture>

More details are available in the HTML Media Capture specification.

Improved Fixed and Sticky Element Positioning

When using pinch-to-zoom, fixed and sticky element positioning has improved behavior using a “visual viewports” approach. Using the visual viewports model, focusing an input field that triggers the on-screen keyboard no longer disables fixed and sticky positioning in Safari on iOS.

Improved Web Inspector Debugging

The WebKit team added support for debugging Web Worker JavaScript threads in Web Inspector’s Debugger tab. There are also improvements to debugger stepping with highlights for the currently-executing and about-to-execute statements. The highlights make it much clearer what code is going to execute during debugging, especially for JavaScript with complex control flow or many expressions on a single line.

Learn more about JavaScript Debugging Improvements in Web Inspector.

CSS Wide-Gamut Colors

Modern devices support a broader range of colors. Now, web authors can use CSS colors in wide-gamut color spaces, including the Display P3 color space. A new color-gamut media query can be used to test if the display is capable of displaying a given color space. Then, using the new CSS color() function, developers can define a color in a specific color space.

@media (color-gamut:p3) {
    .brightred {
        color: color(display-p3 1.0 0 0);
    }
}
For more information, see the CSS Color Module Level 4 standards specification.

Reduced Motion Media Query

The new prefers-reduced-motion media query allows developers using animation to make accommodations for users with conditions where large areas of motion or drastic movements can trigger physical discomfort. With prefers-reduced-motion, authors can create styles that avoid motion for users who set the reduced motion preference in system settings.

@keyframes decorativeMotion {
    /* Keyframes for a decorative animation */
}

.background {
    animation: decorativeMotion 10s infinite alternate;
}

@media (prefers-reduced-motion) {
    .background {
        animation: none;
    }
}


We’re looking forward to what developers will do with these features to make better experiences for users. These improvements are available to users running iOS 10.3 and macOS Sierra 10.12.4, as well as Safari 10.1 for OS X Yosemite and OS X El Capitan.

Most of these features were also previewed in Safari Technology Preview over the last few months. You can download the latest Safari Technology Preview release to stay on the forefront of future web features.

Finally, we’d love to hear from you! Send a tweet to @webkit or @jonathandavis and let us know which of these features will have the most impact on your design or development work on the web.

Planet Mozilla: on mutex performance and WTF::Lock

One of the things I’ve been doing this quarter is removing Gecko’s dependence on NSPR locks.  Gecko’s (non-recursive) mutexes and condition variables now use platform-specific constructs, rather than indirecting through NSPR.  This change makes things smaller, especially on POSIX platforms, and uses no dynamic memory allocation, so there are fewer untested failure paths.  I haven’t rigorously benchmarked things yet, but I suspect various operations are faster, too.

As I’ve done this, I’ve fielded questions about why we’re not using something like WTF::Lock or the Rust equivalent in parking_lot.  My response has always been some variant of the following: the benchmarks for the WTF::Lock blog post were conducted on OS X.  We have anecdotal evidence that mutex overhead can be quite high on OS X, and that changing locking strategies on OS X can be beneficial.  The blog post also says things like:

One advantage of OS mutexes is that they guarantee fairness: All threads waiting for a lock form a queue, and, when the lock is released, the thread at the head of the queue acquires it. It’s 100% deterministic. While this kind of behavior makes mutexes easier to reason about, it reduces throughput because it prevents a thread from reacquiring a mutex it just released.

This is certainly true for mutexes on OS X, as the measurements in the blog post show.  But fairness is not guaranteed for all OS mutexes; in fact, fairness isn’t even guaranteed in the pthreads standard (which OS X mutexes follow).  Fairness in OS X mutexes is an implementation detail.

These observations are not intended to denigrate the WTF::Lock work: the blog post and the work it describes are excellent.  But it’s not at all clear that the conclusions reached in that post necessarily carry over to other operating systems.

As a partial demonstration of the non-cross-platform applicability of some of the conclusions, I ported WebKit’s lock fairness benchmark to use raw pthreads constructs; the code is available on GitHub.  The benchmark sets up a number of threads that are all contending for a single lock.  The number of lock acquisitions for each thread over a given period of time is then counted.  While both of these qualities are configurable via command-line parameters in WebKit’s benchmark, they are fixed at 10 threads and 100ms in mine, mostly because I was lazy. The output I get on my Mac mini running OS X 10.10.5 is as follows:


Each line indicates the number of lock acquisitions performed by a given thread.  Notice the nearly-identical output for all the threads; this result follows from the fairness of OS X’s mutexes.

The output I get on my Linux box is quite different (aside from each thread performing significantly more lock acquisitions because of differences in processor speed, etc.):


The counts vary significantly between threads: Linux mutexes are not fair by default–and that’s perfectly OK.

What’s more, the developers of OS X have recognized this and added a way to make their mutexes non-fair.  In <pthread_spis.h>, there’s an OS X-only function, pthread_mutexattr_setpolicy_np.  (pthread mutex attributes control various qualities of pthread mutexes: normal, recursively acquirable, etc.)  This particular function, supported since OS X 10.7, enables setting the fairness policy of mutexes to either _PTHREAD_MUTEX_POLICY_FAIRSHARE (the default) or _PTHREAD_MUTEX_POLICY_FIRSTFIT.  The firstfit policy is not documented anywhere, but I’m guessing that it’s something akin to the “barging” locks described in the WTF::Lock blog post: the lock is made available to whatever thread happens to get to it first, rather than enforcing a queue to acquire the lock.  (If you’re curious, the code dealing with the firstfit policy can be found in Apple’s pthread_mutex.c.)

Running the benchmark on OS X with mutexes configured with the firstfit policy yields quite different numbers:


The variation in these numbers is more akin to what we saw with the non-fair locks on Linux, and what’s more, the counts are almost an order of magnitude higher than with the fair locks. Maybe we should start using firstfit locks in Gecko!  I don’t know how firstfit policy locks compare to something like WTF::Lock on my Mac mini, but it’s clear that saying simply “OS mutexes are slow” doesn’t tell the whole story. And of course there are other concerns, such as the size required by locks, that motivated the WTF::Lock work.

I have vague plans of doing more benchmarking, especially on Windows, where we may want to use slim reader/writer locks rather than critical sections, and evaluating Rust’s parking_lot on more platforms.  Pull requests welcome.

Planet Mozilla: These Weeks in Firefox: Issue 13



Friends of the Firefox team

Project Updates


Activity Stream

  • timspurway reports that the team has re-evaluated their schedule for landing in Nightly – new estimate puts Activity Stream in Fx57

Electrolysis (e10s)

Firefox Core Engineering

Form Autofill


  • The team ran user testing of Prox v2, which emphasizes local sights, events, and multiple sources – full conclusions upcoming!
  • Firefox for Android 53 coming soon with RTL support for Urdu, Persian, Hebrew and Arabic!
  • Activity Stream is going live for 50% of the Firefox for Android Nightly audience this week. All Nightly users will see a setting to opt-in / opt-out (Settings -> Advanced -> Experimental Features).

Platform UI and other Platform Audibles


Project Mortar (PDFium)

  • evelyn reports that the front-end work for Mortar is almost done! A few bugs remaining, but it’s getting pretty polish-y.
  • The team is currently dealing with process separation work, and waiting on this bug to land which will allow us to create a special type of JS-implemented plugin
  • The team is also tackling the printing engine as well, as we want to make sure we print PDFs as accurately as possible
  • Blocked on spinning up QA help for manual testing, but we will first add more automated tests and compare the results against pdf.js to understand how much improvement we gain. (Thanks to bsmedberg for the suggestion!)
  • Talking to release team on release to-dos, and how best to keep the system add-on up to date

Quality of Experience

  • New preferences organization should land sometime this week
  • Engineers are now mostly segueing into Photon work (which will probably get its own section in future meetings?).


  • Phase 1 of the hi-res favicons work should land before the next meeting.
  • The last big issue with one-off search buttons in the awesomebar is very close to landing.
  • Various miscellaneous fixes for the search and location bars.

Sync / Firefox Accounts

  • Fixes:
    • Sync will discard folder child order if the local timestamp is newer than the remote. This shows up most frequently on first syncs.
    • First sync for passwords was broken in Aurora and Nightly.
  • Push-driven sign-in confirmation is coming! Design doc in progress; should have more updates in the next meeting.
  • If you’re curious…

Storage Management

  • [fischer] The project target due date is 4/17.
  • [fischer] The implementation is almost done. The remaining 3 bugs are expected to be resolved before the 4/17 target.
  • [fischer] Bug 1312349: Hide the section of Offline Web Content and User Data in about:preferences
    • Because storage management handles appcache as well, the Offline (appcache) group will be hidden once the storage management work is complete.
    • The pref that controls hiding the Offline group is browser.preferences.offlinegroup.enabled

Test Pilot

  • We are trying to track down some performance issues with the Test Pilot addon (“Test Pilot is making FF run slowly”). Any advice is welcome, ping fzzzy in #testpilot
  • First ever Test Pilot QA community event happened in Bangladesh last week!
    • Volunteers installed Test Pilot & did some manual testing of the Test Pilot addon and experiment addons
    • Event page
    • Tweets and photos of the event!

Here are the raw meeting notes that were used to derive this list.

Want to help us build Firefox? Get started here!

Here’s a tool to find some mentored, good first bugs to hack on.

Planet Mozilla: Announcing the Equal Rating Innovation Challenge Winners

Six months ago, we created the Equal Rating Innovation Challenge to add an additional dimension to the important work Mozilla has been leading around the concept of “Equal Rating.” In addition to policy and research, we wanted to push the boundaries and find new ways to provide affordable access to the Internet while preserving net neutrality. An open call for new ideas was the ideal vehicle.

An Open and Engaging Process

The Equal Rating Innovation Challenge was founded on the belief that reaching out to the local expertise of innovators, entrepreneurs, and researchers from all around the world would be the right way for Mozilla to help bring the power of the Internet to the next billion and beyond. It has been a thrilling and humbling experience to see communities engage, entrepreneurs conceive of new ideas, and regulatory, technology, and advocacy groups start new conversations.

Through our Innovation Challenge website, in webinars, at conferences, and at numerous community events within the six-week submission period, we reached thousands of people around the world. Ultimately, we received 98 submissions from 27 countries, which taken together demonstrate the viability and the potential of the Equal Rating approach. Whereas previously many people believed providing affordable access was the domain of big companies and government, we are now experiencing a groundswell of entrepreneurs and ideas celebrating the power of community to bring all of the Internet to all people.

<figure>Moderating a panel discussion with our esteemed Judges at the Equal Rating Conference in New York City</figure>

Our diverse expert Judges selected five teams as semifinalists in January. Mozilla staff from around the world provided six weeks of expert mentorship to help the semifinalists hone their projects, and on 9 March at our Equal Rating Conference in New York City, these teams presented their solutions to our panel of Judges. In keeping with Mozilla’s belief in openness and participation, we then had a one-week round of online public voting, the results of which formed part of the Judges’ final deliberation. Today, we are delighted to share the Judges’ decisions on the Equal Rating Innovation Challenge winners.

The Winners
With an almost unanimous vote, the Overall Winner of the Equal Rating Innovation Challenge, receiving US$125,000 in funding, is Gram Marg Solution for Rural Broadband. This project is spearheaded by Professor Abhay Karandikar, Dean (Faculty Affairs) and Institute Chair Professor of Electrical Engineering and Dr Sarbani Banerjee Belur, Senior Project Research Scientist, at Indian Institute of Technology (IIT) Bombay in Mumbai, India.

<figure>Dr Sarbani Banerjee Belur (India) presenting Gram Marg Solution for Rural Broadband at Mozilla’s Equal Rating Conference in New York City</figure>

Gram Marg, which translates as “roadmap” in Hindi, captured the attention of the Judges and audience by focusing on the urgent need to bring 640,000 rural villages in India online. The team reinforced the incredible potential these communities could achieve if they had online access to e-Governance services, payment and financial services, and general digital information. In order to close the digital divide and empower these rural communities, the Gram Marg team has created an ingenious and “indigenous” technology that utilizes unused white space on the TV spectrum to backhaul data from village wifi clusters to provide broadband access (frugal 5G).

The team of academics and practitioners has created a low-cost, ruggedized TV UHF device that converts a 2.4 GHz signal to connect villages in even the most difficult terrain. Their journey has been one of resilience and perseverance: they have deployed their solution in 25 pilot villages, all while reducing cost and size and perfecting their solution. This top prize of the Innovation Challenge is awarded to a solution the Judges recognize as robustly scalable — Gram Marg is both technology enabler and social partner, and delivered beyond our hopes.

“All five semifinalists were equally competitive and it was really a challenge to pitch our solution among them. We are humbled by the Judges’ decision to choose our solution as the winner,” Professor Karandikar told us. “We will continue to improve our technology solution to make it more efficient. We are also working on a sustainable business model that can enable local village entrepreneurs to deploy and manage access networks. We believe that a decentralized and sustainable model is the key to the success of a technology solution for connecting the unconnected.”

As “Runner-Up” with a funding award of US$75,000, our Judges selected Afri-Fi: Free Public WiFi, led by Tim Genders (South Africa). The project is an extension of the highly awarded and successful Project Isizwe, which offers 500MB of data for free per day, but the key goal of this project is to create a sustainable business model by linking together free wifi networks throughout South Africa and engaging users meaningfully with advertisers so they can “earn” free wifi.

The team presented a compelling and sophisticated way to use consumer data, protect privacy, and bolster entrepreneurship in their solution. “The team has proven how their solution for a FREE internet is supporting thriving communities in South Africa. Their approach towards community building, partnerships, developing local community entrepreneurs and inclusivity, with a goal of connecting some of the most marginalized communities, are all key factors in why they deserve this recognition and are leading the FREE Internet movement in Southern Africa”, concluded Marlon Parker, Founder of Reconstructed Living Labs, on behalf of the jury.

Finally, the “Most Novel” award worth US$30,000 goes to Bruno Vianna (Brazil) and his team from the Free Networks P2P Cooperative. Fueled by citizen science and community technology, this team is building on the energy of the free networks movement in Brazil to tackle the digital divide. Rather than focusing on technology, the Coop has created a financial and logistical model that can be tailored to each village’s norms and community. The team was able to experiment more adventurously with ways to engage communities through “barn-raising” group activities, deploying “open calls” for leadership to reinforce the democratic nature of their approach, and instituting a sense of “play” for the villagers when learning how to use the equipment. The innovative way the team deconstructed the challenge around empowering communities to build their own infrastructure in an affordable and sustainable way proved to be the deciding factor for the Judges.

<figure>Semifinalists from left to right: Steve Song (Canada), Freemium Mobile Internet (FMI), Dr Carlos Rey-Moreno (South Africa), Zenzeleni “Do it for yourselves” Networks (ZN), Bruno Vianna (Brazil), Free Networks P2P Cooperative, Tim Genders (South Africa), Afri-Fi: Free Public WiFi, Dr Sarbani Banerjee Belur (India), Gram Marg Solution for Rural Broadband</figure>

Enormous thanks to all who participated in this Innovation Challenge through their submissions, engagement in meetups and events, as well as to our expert panel of Judges for their invaluable insights and time, and to the Mozilla mentors who supported the semifinalists in advancing their projects. We also want to thank all who took part in our online community voting. During the week-long period, we received almost 6,000 votes, with Zenzeleni and Gram Marg leading as the top two vote-getters.

Mozilla started this initiative because we believe in the power of collaborative solutions to tackle big issues. We wanted to take action and encourage change. With the Innovation Challenge, we not only highlighted a broader set of solutions, and broadened the dialogue around these issues, but built new communities of problem-solvers that have strengthened the global network of people working toward connecting the next billion and beyond.

At Mozilla, our commitment to Equal Rating through policy, innovation, research, and support of entrepreneurs in the space will continue beyond this Innovation Challenge, but it will take a global community to bring all of the internet to all people. As our esteemed Judge Omobola Johnson, the former Communication Technology Minister of Nigeria and partner at venture capital fund TLcom, commented: “it’s not about the issue of the unconnected, it’s about the people on the ground who make the connection.” We couldn’t agree more!

Visit us on, join our community, let your voice be heard. We’re all in this together — and today congratulate our five final teams for their tremendous leadership, vision, and drive. They are the examples of what’s best in all of us!

Announcing the Equal Rating Innovation Challenge Winners was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Planet MozillaAnnouncing the Equal Rating Innovation Challenge Winners

Six months ago, we created the Equal Rating Innovation Challenge to add an additional dimension to the important work Mozilla has been leading around the concept of “Equal Rating.” In addition to policy and research, we wanted to push the boundaries and find new ways to provide affordable access to the Internet while preserving net neutrality. An open call for new ideas was the ideal vehicle.

An Open and Engaging Process

The Equal Rating Innovation Challenge was founded on the belief that reaching out to the local expertise of innovators, entrepreneurs, and researchers from all around the world would be the right way for Mozilla to help bring the power of the Internet to the next billion and beyond. It has been a thrilling and humbling experience to see communities engage, entrepreneurs conceive of new ideas, and regulatory, technology, and advocacy groups start new conversations.

Through our Innovation Challenge website, in webinars, conferences, and numerous community events within the six-week submission period, we reached thousands of people around the world. Ultimately, we received 98 submissions from 27 countries, which taken together demonstrate the viability and the potential of the Equal Rating approach. Whereas previously many people believed providing affordable access was the domain of big companies and government, we are now experiencing a groundswell of entrepreneurs and ideas celebrating the power of community to bring all of the Internet to all people.

Our diverse expert Judges selected five teams as semifinalists in January. Mozilla staff from around the world provided six weeks of expert mentorship to help the semifinalists hone their projects, and on 9 March at our Equal Rating Conference in New York City, these teams presented their solutions to our panel of Judges. In keeping with Mozilla’s belief in openness and participation, we then had a one-week round of online public voting, the results of which formed part of the Judges’ final deliberation. Today, we are delighted to share the Judges’ decisions on the Equal Rating Innovation Challenge winners.

The Winners

With an almost unanimous vote, the Overall Winner of the Equal Rating Innovation Challenge, receiving US$125,000 in funding, is Gram Marg Solution for Rural Broadband. This project is spearheaded by Professor Abhay Karandikar, Dean (Faculty Affairs) and Institute Chair Professor of Electrical Engineering and Dr Sarbani Banerjee Belur, Senior Project Research Scientist, at Indian Institute of Technology (IIT) Bombay in Mumbai, India.

Dr Sarbani Banerjee Belur (India) presenting Gram Marg Solution for Rural Broadband at Mozilla’s Equal Rating Conference in New York City

Gram Marg, which translates as “roadmap” in Hindi, captured the attention of the Judges and audience by focusing on the urgent need to bring 640,000 rural villages in India online. The team reinforced the incredible potential these communities could achieve if they had online access to e-Governance services, payment and financial services, and general digital information. In order to close the digital divide and empower these rural communities, the Gram Marg team has created an ingenious and “indigenous” technology that utilizes unused white space on the TV spectrum to backhaul data from village wifi clusters to provide broadband access (frugal 5G).

The team of academics and practitioners has created a low-cost, ruggedized TV UHF device that converts a 2.4 GHz signal to connect villages in even the most difficult terrains. Their journey has been one of resilience and perseverance: they have deployed their solution in 25 pilot villages, all while reducing its cost and size and perfecting the technology. This top prize of the Innovation Challenge is awarded to a project the Judges recognize as robustly scalable – Gram Marg is both technology enabler and social partner, and delivered beyond our hopes.

“All five semifinalists were equally competitive and it was really a challenge to pitch our solution among them. We are humbled by the Judges’ decision to choose our solution as the winner,” Professor Karandikar told us. “We will continue to improve our technology solution to make it more efficient. We are also working on a sustainable business model that can enable local village entrepreneurs to deploy and manage access networks. We believe that a decentralized and sustainable model is the key to the success of a technology solution for connecting the unconnected.”

As “Runner-Up” with a funding award of US$75,000, our Judges selected Afri-Fi: Free Public WiFi, led by Tim Human (South Africa). The project is an extension of the highly awarded and successful Project Isizwe, which offers 500MB of data for free per day, but the key goal of this project is to create a sustainable business model by linking together free wifi networks throughout South Africa and engaging users meaningfully with advertisers so they can “earn” free wifi.

The team presented a compelling and sophisticated way to use consumer data, protect privacy, and bolster entrepreneurship in their solution. “The team has proven how their solution for a FREE internet is supporting thriving communities in South Africa. Their approach towards community building, partnerships, developing local community entrepreneurs and inclusivity, with a goal of connecting some of the most marginalized communities, are all key factors in why they deserve this recognition and are leading the FREE Internet movement in Southern Africa”, concluded Marlon Parker, Founder of Reconstructed Living Labs, on behalf of the jury.

Finally, the “Most Novel” award worth US$30,000 goes to Bruno Vianna (Brazil) and his team from the Free Networks P2P Cooperative. Fueled by citizen science and community technology, this team is building on the energy of the free networks movement in Brazil to tackle the digital divide. Rather than focusing on technology, the Coop has created a financial and logistical model that can be tailored to each village’s norms and community. The team was able to experiment more adventurously with ways to engage communities through “barn-raising” group activities, deploying “open calls” for leadership to reinforce the democratic nature of their approach, and instituting a sense of “play” for the villagers when learning how to use the equipment. The innovative way the team deconstructed the challenge around empowering communities to build their own infrastructure in an affordable and sustainable way proved to be the deciding factor for the Judges.

From left to right: Steve Song (Canada), Freemium Mobile Internet (FMI), Dr Carlos Rey-Moreno (South Africa), Zenzeleni “Do it for yourselves” Networks (ZN), Bruno Vianna (Brazil), Free Networks P2P Cooperative, Tim Genders (South Africa), Afri-Fi: Free Public WiFi, Dr Sarbani Banerjee Belur (India), Gram Marg Solution for Rural Broadband

Enormous thanks to all who participated in this Innovation Challenge through their submissions, engagement in meetups and events, as well as to our expert panel of Judges for their invaluable insights and time, and to the Mozilla mentors who supported the semifinalists in advancing their projects. We also want to thank all who took part in our online community voting. During the week-long period, we received almost 6,000 votes, with Zenzeleni and Gram Marg leading as the top two vote-getters.

Mozilla started this initiative because we believe in the power of collaborative solutions to tackle big issues. We wanted to take action and encourage change. With the Innovation Challenge, we not only highlighted a broader set of solutions, and broadened the dialogue around these issues, but built new communities of problem-solvers that have strengthened the global network of people working toward connecting the next billion and beyond.

At Mozilla, our commitment to Equal Rating through policy, innovation, research, and support of entrepreneurs in the space will continue beyond this Innovation Challenge, but it will take a global community to bring all of the internet to all people. As our esteemed Judge Omobola Johnson, the former Communication Technology Minister of Nigeria and partner at venture capital fund TLcom, commented: “it’s not about the issue of the unconnected, it’s about the people on the ground who make the connection.” We couldn’t agree more!

Visit us on, join our community, let your voice be heard. We’re all in this together – and today congratulate our five final teams for their tremendous leadership, vision, and drive. They are the examples of what’s best in all of us!

The post Announcing the Equal Rating Innovation Challenge Winners appeared first on The Mozilla Blog.

Planet MozillaFirefox 53 Beta 8 Testday, March 31st

Hello Mozillians,

We are happy to announce that on Friday, March 31st, we are organizing the Firefox 53 Beta 8 Testday. We will be focusing our testing on Compact Themes, Audio Compatibility and Video Compatibility. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Planet MozillaPrint Messages With Nosetests

Let's clear something up: print statements in the code are bad. But sometimes they are useful when you need to quickly debug or understand what is happening. Just do not forget to remove them before committing.

For development, we use unittest and nose for testing the code. Running one set of tests is as easy as:

nosetests tests/

and the result is:

Ran 3 tests in 0.175s


Though usually I prefer to run:

nosetests -v tests/

which reveals our inconsistency in naming. Probably my own mistake.

Check Cache-Control for issues. ... ok
Check ETAG for issues. ... ok
Checks if we receive a 304 Not Modified. ... ok

Ran 3 tests in 0.173s


But there is something which has been bothering me for a long time. Let's say I put a print statement anywhere in the code, be it in the test or in the application code. Even when tests pass, I want to know the content of some variables. I want to make doubly sure I'm testing the right thing.

    def test_cache_control(self):
        '''Check Cache-Control for issues.'''
        rv = self.app.get('/issues/100', environ_base=html_headers)
        print '\n\n{what}:\n{this}\n'.format(what="Headers", this=rv.headers)
        self.assertIn('cache-control', rv.headers)
        self.assertEqual(rv.cache_control.max_age, 0)

nose will swallow everything except when the test fails. I finally discovered the right parameter (it's probably obvious to others): --nocapture. This makes it possible to see the print messages.

nosetests -v --nocapture tests/
('secrets', '/Users/karl/code/')
Check Cache-Control for issues. ... 

Content-Type: text/html; charset=utf-8
Content-Length: 9981
Cache-Control: must-revalidate, private, max-age=0
ETag: "39a3c7d6fda546253a02927272a360db"
Date: Wed, 29 Mar 2017 07:49:19 GMT
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
X-Frame-Options: DENY
Content-Security-Policy-Report-Only: default-src 'none'; connect-src 'self'; font-src 'self'; img-src 'self' https://*; script-src 'self' 'unsafe-eval' 'unsafe-inline'; style-src 'self' 'unsafe-inline'; report-uri /csp-report

Check ETAG for issues. ... ok
Checks if we receive a 304 Not Modified. ... ok

Ran 3 tests in 0.190s


Hope it is useful for someone else. It was in the documentation but was not obvious to me when I read it a couple of times.

Don’t capture stdout (any stdout output will be printed immediately) [NOSE_NOCAPTURE]
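If you want that behavior by default, nose can also read options from a config file instead of the command line. A minimal sketch, assuming a setup.cfg next to your tests (nose also reads a .noserc or nose.cfg in your home directory):

```
[nosetests]
verbosity=2
nocapture=1
```

With this in place, a plain nosetests tests/ behaves like nosetests -v --nocapture tests/.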

And yes I'm removing this print right away before I forget.


Planet MozillaNote on reinstalling httpie for SSLv3 Handshake failure

I ran into this issue recently with httpie (the Python command-line tool for making HTTP requests).

http --verbose GET

I was getting:

http: error: SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:590) while doing GET request to URL:

Huh? Then I tried curl:

curl -v

We get:

*   Trying
* Connected to ( port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* Server certificate: DigiCert ECC Extended Validation Server CA
* Server certificate: DigiCert High Assurance EV Root CA
GET /home HTTP/1.1
User-Agent: curl/7.51.0
Accept: */*

< HTTP/1.1 200 OK
< Server: Cowboy
< Date: Sat, 25 Mar 2017 06:20:39 GMT
< X-Frame-Options: SAMEORIGIN
< X-Xss-Protection: 1; mode=block
< X-Content-Type-Options: nosniff
< Content-Type: text/html; charset=utf-8
< Etag: W/"730ab400103d08252b84003a4e7862ff"
< Cache-Control: max-age=0, private, must-revalidate
< Set-Cookie: _marketing-tito_session=S3NERVovbjJmcEYvTGhkSktzVkQyZ3I0TXBOQ1lxTCs5LzhVMlhBN2ZKL0pTSjZCTi94VUo4b3N5RDNweGt2ZWNZTHc4MitNeGdtbjUvZXQ0RjVic1lCMlE3NVk2V21GNGFnYVpXZzVBMnM1bHg3bXpUOW1MZ1R5MlR5UnY3cVB3bzhzZER1eGRNTlNXTFlUVVMxWEhnPT0tLUZCNWQrcE9JcmU0OXJOeUY4YXJ0QXc9PQ%3D%3D--e96126ac335694d135c6a13b2effb22b486380b3; path=/; HttpOnly; Secure
< X-Request-Id: ce6a6522-1780-4b85-8ff5-4eeea24f8fa4
< X-Runtime: 0.007563
< Transfer-Encoding: chunked
< Via: 1.1 vegur
< Strict-Transport-Security: max-age=15768000
<!doctype html>
<html lang="en">

So I searched a bit and found I needed to upgrade a couple of things.

pip uninstall httpie

Then install requests with its security extras. If you are wondering about --user, I do that all the time: it installs third-party libraries under your own user directory, which keeps them safe when Apple upgrades Python on your machine and destroys all previously installed libraries.

pip install --user --upgrade requests[security]

This installs a lot of stuff. Once done, you should get something like:

Successfully installed appdirs-1.4.3 asn1crypto-0.22.0 cffi-1.10.0 cryptography-1.8.1 idna-2.5 ipaddress-1.0.18 packaging-16.8 pyOpenSSL-16.2.0 pycparser-2.17 pyparsing-2.2.0 setuptools-34.3.2
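As an aside, if you ever wonder where those --user installs actually land, Python itself can tell you (site.getusersitepackages exists in Python 2.7 and later; the exact path varies per machine):

```python
# Print the per-user site-packages directory that `pip install --user` targets.
import site

print(site.getusersitepackages())
```

On a Mac this typically points somewhere under ~/Library/Python/.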


pip install --user httpie

Which gives

Successfully installed httpie-0.9.9

Once done, you can check the state of things with

http --debug

This will spill out

HTTPie 0.9.9
Requests 2.13.0
Pygments 2.2.0
Python 2.7.10 (default, Jul 30 2016, 19:40:32) 
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)]
Darwin 16.4.0

<Environment {  … cut for Brevity… }>

And now retrying the request

http --traceback --print h GET

We get the right thing

HTTP/1.1 200 OK
Cache-Control: max-age=0, private, must-revalidate
Content-Type: text/html; charset=utf-8
Date: Sat, 25 Mar 2017 06:33:23 GMT
Etag: W/"ecdab4c883db75bc7e811a4360f62700"
Server: Cowboy
Set-Cookie: _marketing-tito_session=YVlLeFBWMXhwNml2cW54dC8xMk9DV1ZocWxQZ3BhVUlLUmlNK25SNHVyMHVsVnFGR1FYNGQ5SFRTZUNvMDRUU1VKbkR4Y2J3VStoZUJ0TWpteGIrcTMrYk9PQ21wTk1YRlU3cjVibC9NeEdXUUNlWFNoMHJjT3NWbnllUVMrVDMwZWJOZlZPWWQwSElUcG44WE5rWkhnPT0tLTZQNVZxVTYxYVhGUGNUV1htc1Jqb2c9PQ%3D%3D--d457a62be9a95ddf7ed17c19e2e006ed0a185be2; path=/; HttpOnly; Secure
Strict-Transport-Security: max-age=15768000
Transfer-Encoding: chunked
Via: 1.1 vegur
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 7abb2983-4eee-46e0-8461-2d0b14f24056
X-Runtime: 0.008374
X-Xss-Protection: 1; mode=block


Planet MozillaRecord & Replay: Motion Capture for A-Frame

Record & Replay: Motion Capture for A-Frame

No time for blog posts today? Skip the line and get to the trendiest club of the Metaverse through the VIP access:

  1. Make sure you have a WebVR-enabled browser.
  2. Fix your hair.
  3. Hop right onto the dance floor.
  4. Share your best moves with the world, to the tune of the wonderful music made by joss!

VR devices like the Oculus Rift and the HTC Vive have accurate systems to track the position and orientation of head and hands. On a-saturday-night we wanted to create a fun experience around the idea of recording and reproducing tracking data. The mechanics are simple: Put your headset on, select an avatar and dance. At the end of the countdown and thanks to the magic of the Web you will get a link like this one with the recorded dance that you can share instantly with anybody. The position of head and hands is sampled and persisted in JSON format so it can be reproduced later.

Record & Replay: Motion Capture for A-Frame

Make your own motion capture experience

With a-saturday-night we're also releasing a set of A-Frame components for anybody to record, replay and persist the user's motion and interactions. The data can be saved as a JSON file and reused anywhere. There are plenty of interesting applications.

Interactive animation tools

You can sample the position and orientation of the controller while the application is running and apply the recorded data immediately to any entity in the scene. We can, for instance, build an interactive animation tool for game characters and become a virtual puppeteer.

Record & Replay: Motion Capture for A-Frame

Record & Replay: Motion Capture for A-Frame

There will be two entities in the scene: one that records and one that replays:

  • One entity will generate the recording data. In the example above, it corresponds to one of the user's hands. We apply the motion-capture-recorder component to it and set recordingControls to true so that start/stop recording is driven by pressing and holding the trigger on the controller:
<a-entity id="rightHand" motion-capture-recorder="hand: right; recordingControls: true"></a-entity>  

There are other properties that can be configured:

autoRecord: The component starts recording at page load.
recordingControls: Start / stop of the recording is driven by the controller.
visibleStroke: The motion of the entity is visible while recording. Useful to provide feedback for interactive tools.
  • The other entity will replay the data recorded from the controller. We set up the motion-capture-replayer on it with the recorderEl property pointing to the entity that records the data:
 <a-entity id="ghost" motion-capture-replayer="recorderEl: #rightHand; loop: true"></a-entity>
spectatorPosition: The position of the spectator camera.
src: The recording data can be hosted at a URL.
recorderEl: An entity with a motion-capture-recorder can be used as a source for the component. This allows for interactive cycles of record / replay.

Test automation and development

The motion capture components allow us to emulate the presence of a VR headset and controllers, so we can build test automation for VR experiences. One can replay the recorded user behavior and assert the state of the entities at the end. This can happen with no user intervention at all.

Record & Replay: Motion Capture for A-Frame

We can also record user interactions and develop on the go where there's no VR hardware available. One can iterate over the visual aspect or behavior of the experience using the recorded user input.

Record & Replay: Motion Capture for A-Frame

Recording user interactions

To record the user interactions, just drop the avatar-recorder component in your scene:

 <a-scene avatar-recorder></a-scene>

The component will look for the camera and any entities using the tracked-controls component and apply the motion-capture-recorder to them. Remember to add an id to the controller entities so the recording information can be associated with them and replayed later.
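Putting it together, a minimal scene might look like this (the hand-controls components and entity ids here are illustrative assumptions, not taken verbatim from the post):

```html
<a-scene avatar-recorder>
  <!-- avatar-recorder finds the camera and tracked controllers automatically. -->
  <!-- The ids let the recorded data be associated with each controller on replay. -->
  <a-entity id="leftHand" hand-controls="left"></a-entity>
  <a-entity id="rightHand" hand-controls="right"></a-entity>
</a-scene>
```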

The component can be configured in different ways:

autoRecord: The component starts recording at page load.
autoPlay: Replaying starts automatically when recording ends.
spectatorPlay: The recording replays from a 3rd person perspective camera.
spectatorPosition: The position of the spectator camera.
localStorage: The recording is stored in localStorage.
saveFile: At the end of the recording the user is prompted to download a JSON file with the data.
loop: The recording replays in loops after finishing.

Replaying user interactions

To replay the user's recorded motion, add the component to the scene and pass the URL of the recording data:

 <a-scene avatar-replayer="src: assets/tracked-recording.json"></a-scene>

These are the different options that the component provides:

src: The URL that points to the recorded data.
loop: Whether the recording will be replayed in a loop.
spectatorMode: Whether the recording is replayed using a 3rd person camera.
spectatorPosition: The position of the spectator camera.

Final Words

I hope after reading this blog post you are as excited as we are with the realization that your VR device at home is also a super accurate motion capture system. We cannot wait to hear your feedback about the motion capture API and see what you do with it.

Planet MozillaBrexit: If it looks like racism, if it smells like racism and if it feels like racism, who else but a politician could argue it isn't?

Since the EU referendum got under way in the UK, it has become almost an everyday occurrence to turn on the TV and hear some politician explaining "I don't mean to sound racist, but..." (example)

Of course, if you didn't mean to sound racist, you wouldn't sound racist in the first place, now would you?

The reality is, whether you like politics or not, political leaders have a significant impact on society and the massive rise in UK hate crimes, including deaths of Polish workers, is a direct reflection of the leadership (or profound lack of it) coming down from Westminster. Maybe you don't mean to sound racist, but if this is the impact your words are having, maybe it's time to shut up?

Choosing your referendum

Why choose to have a referendum on immigration issues and not on any number of other significant topics? Why not have a referendum on nuking Mr Putin to punish him for what looks like an act of terrorism against the Malaysian Airlines flight MH17? Why not have a referendum on cutting taxes or raising speed limits, turning British motorways into freeways or an autobahn? Why choose to keep those issues in the hands of the Government, but invite the man-in-a-white-van from middle England to regurgitate Nigel Farage's fears and anxieties about migrants onto a ballot paper?

Even if David Cameron sincerely hoped and believed that the referendum would turn out otherwise, surely he must have contemplated that he was playing Russian Roulette with the future of millions of innocent people?

Let's start at the top

For those who are fortunate enough to live in parts of the world where the press provides little exposure to the antics of British royalty, an interesting fact you may have missed is that the Queen's husband, Prince Philip, Duke of Edinburgh is actually a foreigner. He was born in Greece and has Danish and German ancestry. Migration (in both directions) is right at the heart of the UK's identity.

Queen and Prince Philip

Home office minister Amber Rudd recently suggested British firms should publish details about how many foreign people they employ and in which positions. She argued this is necessary to help boost funding for training local people.

If that is such a brilliant idea, why hasn't it worked for the Premier League? It is a matter of public knowledge how many foreigners play football in England's most prestigious division, so why hasn't this caused local clubs to boost training budgets for local recruits? After all, when you consider that England hasn't won a World Cup since 1966, what have they got to lose?

Kevin Pietersen

All this racism, it's just not cricket. Or is it? One of the most remarkable cricketers to play for England in recent times, Kevin Pietersen, dubbed "the most complete batsman in cricket" by The Times and "England's greatest modern batsman" by the Guardian, was born in South Africa. In the five years he was contracted to the Hampshire county team, he only played one match, because he was too busy representing England abroad. His highest position was nothing less than becoming England's team captain.

Are the British superior to every other European citizen?

One of the implications of the rhetoric coming out of London these days is that the British are superior to their neighbours, entitled to have their cake and eat it too, making foreigners queue up at Paris' Gare du Nord to board the Eurostar while British travelers should be able to walk or drive into European countries unchallenged.

This superiority complex is not uniquely British, you can observe similar delusions are rampant in many of the places where I've lived, including Australia, Switzerland and France. America's Donald Trump has taken this style of politics to a new level.

Look in the mirror Theresa May: after British 10-year old schoolboys Robert Thompson and Jon Venables abducted, tortured, murdered and mutilated 2 year old James Bulger in 1993, why not have all British schoolchildren fingerprinted and added to the police DNA database? Why should "security" only apply based on the country where people are born, their religion or skin colour?

Jon Venables and Robert Thompson

In fact, after Brexit, people like Venables and Thompson will remain in Britain while a Dutch woman, educated at Cambridge and with two British children will not. If that isn't racism, what is?

Running foreigner's off the roads

Theresa May has only been Prime Minister for less than a year but she has a history of bullying and abusing foreigners in her previous role in the Home Office. One example of this was a policy of removing driving licenses from foreigners, which has caused administrative chaos and even taken away the licenses of many people who technically should not have been subject to these regulations anyway.

Shouldn't the DVLA (Britain's office for driving licenses) simply focus on the competence of somebody to drive a vehicle? Bringing all these other factors into licensing creates a hostile environment full of mistakes and inconvenience at best and opportunities for low-level officials to engage in arbitrary acts of racism and discrimination.

Of course, when you are taking your country on the road to nowhere, who needs a driving license anyway?

Run off the road

What does "maximum control" over other human beings mean to you?

The new British PM has said she wants "maximum control" over immigrants. What exactly does "maximum control" mean? Donald Trump appears to be promising "maximum control" over Muslims, Hitler sought "maximum control" over the Jews, hasn't the whole point of the EU been to avoid similar situations from ever arising again?

This talk of "maximum control" in British politics has grown like a weed out of the UKIP. One of their senior figures has been linked to kidnappings and extortion, which reveals a lot about the character of the people who want to devise and administer these policies. Similar people in Australia aspire to jobs in the immigration department where they can extort money out of people for getting them pushed up the queue. It is no surprise that the first member of Australia's parliament ever sent to jail was put there for obtaining bribes and sexual favours from immigrants. When Nigel Farage talks about copying the Australian immigration system, he is talking about creating jobs like these for his mates.

Even if "maximum control" is important, who really believes that a bunch of bullies in Westminster should have the power to exercise that control? Is May saying that British bosses are no longer competent to make their own decisions about who to employ or that British citizens are not reliable enough to make their own decisions about who they marry and they need a helping hand from paper-pushers in the immigration department?

maximum control over Jewish people

Echoes of the Third Reich

Most people associate acts of mass murder with the Germans who lived in the time of Adolf Hitler. These are the stories told over and over again in movies, books and the press.

Look more closely, however, and it appears that the vast majority of Germans were not in immediate contact with the gas chambers. Even Goebbels' secretary writes that she was completely oblivious to it all. Many people were simply small cogs in a big bad machine. The clues were there, but many of them couldn't see the big picture. Even if they did get a whiff of it, many chose not to ask questions, to carry on with their comfortable lives.

Today, with mass media and the Internet, it is a lot easier for people to discover the truth if they look, but many are still reluctant to do so.

Consider, for example, the fingerprint scanners installed in British post offices and police stations to fingerprint foreigners and criminals (as if they have something in common). If all the post office staff refused to engage in racist conduct the fingerprint scanners would be put out of service. Nonetheless, these people carry on, just doing their job, just following orders. It was through many small abuses like this, rather than mass murder on every street corner, that Hitler motivated an entire nation to serve his evil purposes.

Technology like this is introduced in small steps: first it was used for serious criminals, then anybody accused of a crime, then people from Africa and next it appears they will try and apply it to all EU citizens remaining in the UK.

How will a British man married to a French woman explain to their children that mummy has to be fingerprinted by the border guard each time they return from vacation?

The Nazis pioneered biometric technology with the tracking numbers branded onto Jews. While today's technology is electronic and digital, isn't it performing the same function?

There is no middle ground between "soft" and "hard" brexit

An important point for British citizens and foreigners in the UK to consider today is that there is no compromise between a "soft" Brexit and a "hard" Brexit. It is one or the other. Anything less (for example, a deal that is "better" for British companies and worse for EU citizens) would imply that the British are a superior species and it is impossible to imagine the EU putting their stamp on such a deal. Anybody from the EU who is trying to make a life in the UK now is playing a game of Russian Roulette - sure, everything might be fine if it morphs into "soft" Brexit, but if Theresa May has her way, at some point in your life, maybe 20 years down the track, you could be rounded up by the gestapo and thrown behind bars for a parking violation. There has already been a five-fold increase in the detention of EU citizens in British concentration camps and they are using grandmothers from Asian countries to refine their tactics for the efficient removal of EU citizens. One can only wonder what type of monsters Theresa May has been employing to run such inhumane operations.

This is not politics

Edmund Burke's quote "The only thing necessary for the triumph of evil is for good men to do nothing" comes to mind on a day like today. Too many people think it is just politics and they can go on with their lives and ignore it. Barely half the British population voted in the referendum. This is about human beings treating each other with dignity and respect. Anything less is abhorrent and may well come back to bite.

Planet MozillaData Science is Hard: History, or It Seemed Like a Good Idea At the Time

I’m mentoring a Summer of Code project this summer about redesigning the “about:telemetry” interface that ships with each and every version of Firefox.

The minute the first student (:flyingrub) asked me “What is a parent payload and child payload?” I knew I was going to be asked a lot of questions.

To least-effectively answer these questions, I’ll blog the answers as narratives. And to start with this question, here’s how the history of a project makes it difficult to collect data from it.

In the Beginning — or, rather, in the middle of October 2015 when I was hired at Mozilla (so, at my Beginning) — there was single-process Firefox, and all was good. Users had many tabs, but one process. Users had many bookmarks, but one process. Users had many windows, but one process. All this and the web contents themselves were all sharing time within a single construct of electrons and bits and code and pixels: vying with each other for control of the filesystem, the addressable space of RAM, the network resources, and CPU scheduling.

Not satisfied with things being just “good”, we took a page from the book penned by Google Chrome and decided the time was ripe to split the browser into many processes so that a critical failure in one would not trouble the others. To begin with, because our code is venerable, we decided that we would try two processes. One of these twins would be in charge of the browser and the other would be in charge of the web contents.

This project was called Electrolysis after the mechanism by which one might split water into Hydrogen and Oxygen using electricity.

Suddenly the browser became responsive, even in the face of the worst JavaScript written by the least experienced dev at the most privileged startup in Silicon Valley. And out-of-memory errors decreased in frequency because the browser’s memory and the web contents’ memory were able to grow without interfering with each other.

Remember, our code is venerable. Remember, our code hearkens from its single-process past.

Our data-collection code was written in that single-process past. But now we had two processes with input events that needed to be timed to find problems. We had two processes with memory allocations that needed to be examined for regressions.

So the data collection code was made aware that there could be two types of process: parent and child.

Alas, not just one child. There could be many child processes in a row if some webpage were naughty and brought down the child in its anger. So the data collection code was made aware there could be many batches of data from child processes, and one batch of data from parent processes.

The parent data was left looking like single-process data, out in the root of the data collection payload. Child processes’ data were placed in an array of childPayloads where each payload echoed the structure of the parent.

Then, not content with “good”, I had to come along in bug 1218576, a bug whose number I still have locked in my memory, for good or ill.

Firefox needs to have multiple child processes of different types, simultaneously, and many instances of some of those types, also simultaneously. What was going to be a quick way to ensure that childPayloads was always of length 1 turned into a months-long exercise to put data exactly where we wanted it to be.

And so now we have childPayloads where the “weird” content child data that resists aggregation remains, and we also have payload.processes.<process type>.* where the cool and hip data lives: histograms, scalars, and keyed variants of both.
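Expressed as a rough sketch in C++ terms, the two payload layouts described above might look like this (the field names and value types here are illustrative assumptions, not the real Telemetry schema):

```cpp
#include <map>
#include <string>
#include <vector>

// The "cool" per-process data: histograms, scalars, and keyed variants.
// Values are simplified to int for this sketch.
struct ProcessData {
    std::map<std::string, int> histograms;
    std::map<std::string, int> scalars;
};

struct Payload {
    // Parent data sits at the root, still shaped like single-process data.
    ProcessData rootData;
    // Legacy layout: one entry per batch of content-child data, each
    // echoing the structure of the parent.
    std::vector<Payload> childPayloads;
    // Newer layout: payload.processes.<process type>.*, keyed by process
    // type, e.g. "content" or "gpu".
    std::map<std::string, ProcessData> processes;
};
```

The point of the newer layout is that a new process type just becomes a new key under `processes`, rather than another special case at the root.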

Already this approach is showing dividends as some proportions of Nightly users are getting a gpu process, and others are getting some number of content processes. The data files neatly into place with minimal intervention required.

But it means about:telemetry needs to know whether you want the parent’s “weird” data or the child’s. And which child was that, again?

And about:telemetry also needs to know whether you want the parent’s “cool” data, or the content child’s, or the gpu child’s.

So this means that within about:telemetry there are now five places where you can select what process you want. One for “weird” data, and one for each of the four kinds of “cool” data.

Sadly, that brings my storytelling to a close, having reached the present day. Hopefully after this Summer’s Code is done, this will have a happier, more-maintainable, and responsively-designed ending.

But until now, remember that “accident of history” is the answer to most questions. As such it behooves you to learn history well.


Planet MozillaMozilla @RightsCon 2017: Come See How We Are Fighting For An Open Internet

We are at RightsCon 2017 in Brussels this week with many of our friends and colleagues globally to discuss how to keep the Internet open, free, and secure. From Wednesday to Friday there are 1,200+ attendees from 95 countries with 500+ organizations, tech companies, universities, startups, and governments attending. There has never been a more important time for all of us to meet to tackle the issues at stake.

Check out the entire list of speakers and sessions here and for a look into the wide variety of topics on which we are speaking please find the details below.

The Opening Ceremonies featuring Mozilla’s Executive Chairwoman, Mitchell Baker, will be livestreamed here along with other sessions in the Palace Ballroom.

Also, keep an eye on the main Mozilla blog on Wednesday for us to reveal who won the Equal Rating Innovation Challenge to spur innovation in bringing the next wave of people online.

If you still can’t get enough RightsCon coverage, some of our global Mozilla community members will be reporting live on our Mozillagram Instagram channel and you can take part and keep up to date on all the conversations around the event using #RightsCon.

Day 1: Wednesday, March 29 (all times are CET)

9:00 – 10:15
Opening Ceremonies

All We Need is L…Privacy By Design And By Default

Encryption Fights Around The World

Equal Rating Innovation Challenge Winners Announcement

Funding For Digital Rights Organizations

Day 2: Thursday, March 30

Coding With Conscience: The Role Of Ethics In Programming

Modern Rules For All Creators and Users: A Progressive Copyright Framework For Europe

Unfairness by Algorithm: Distilling the Harms of Automated Decision Making

From Ad-Hoc to Prepared: Developing an Anti Online Harassment Infrastructure

Connecting the Unconnected: Innovative Ways to Provide Affordable Access to the Internet

Corporate Sponsorship of NGOs and Conferences: Ethics and Independence

Day 3: Friday, March 31

Privacy for Everyone: Using Libraries to Promote Digital Privacy

Towards a Priva-TISA-tion of Human Rights?

Strengthen the Effectiveness and Connectivity Between Corporations’ and Civil Societies’ Transparency Report

Is The Internet Unhealthy? How Do You Measure It?

OTT Services: Leveling the Field for Digital Rights

Keeping Up With the GAFAs: Competition Policy and Internet Openness

The post Mozilla @RightsCon 2017: Come See How We Are Fighting For An Open Internet appeared first on Open Policy & Advocacy.

Planet MozillaSoftware engineering, responsibility, and ownership

One of the ways to advance as a software engineer is to be in charge of something, such as a one-time project like implementing a new feature or leading a software release, or an ongoing task such as triaging incoming bugs or analyzing crash reports.

One thing that makes it more likely that you'll be in charge of something is if others trust you to be in charge of that. And you're more likely to be trusted if you've consistently behaved like somebody who is responsible for that thing or similar things.

So what does being responsible look like? Largely, it looks like the behavior you'd expect from a project owner, i.e., the way you'd expect the person in charge of the project to behave. In other words, I think it helps to think of yourself as having the responsibility of the project's owner. (But, at the same time, remember that perhaps you don't, and collaborate with others.) Let's look at two specific examples.

First, what do responsibility and ownership look like for somebody doing triage of incoming bugs? One piece is to encourage more and better bug reports by acting in ways that acknowledge the bug reporter's contribution, such as: making the reporter feel their concerns are heard, not making the reporter waste their time, and improving the bug report on the way (making the summary accurate, adding clearer or simpler testcases, etc.). Another is taking responsibility and following up to make sure important things are handled, and to make it clear that you're doing so. When you do this (or many other things), it's important to make appropriate commitments: don't commit to things if you can't honor the commitment, but avoiding committing to anything is a failure to take responsibility.

Second, what do responsibility and ownership mean for somebody writing code? I think one big piece is that you should do the things you'd do if you were the sole maintainer of the code before you submit it for review. That is, submit code for review when you're actually confident it's ready to be part of the codebase. This implies doing many things, from high level tasks like having a clear model of what the code is supposed to do, to having appropriate tests, assertions, and structure that make future modifications easier and reduce their risk, to more low-level things like looking at all the callers of a function when a change you make to what the function does requires doing so.

Another big piece of responsibility when writing code is taking responsibility for and fixing the problems that you cause. (As you take on more responsibility, you might find others to help you do this, but you're still responsible for it.) How to do this depends on the seriousness of the problems. It sometimes means temporarily reverting the changes while figuring out the longer term fix. In other cases it means writing patches for serious problems promptly. And in less serious cases a quick response may not be needed, but it's useful to communicate that you've concluded the problem is lower priority in case others have a different view of the seriousness.

Having engineers exercise responsibility and ownership in this way is important because having more engineers take responsibility makes a project run better. So it's a characteristic that I like to see in software engineers and one of the characteristics that defines what I see as a good engineer.

Planet MozillaMartes Mozilleros, 28 Mar 2017

Martes Mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Planet MozillaThis Week in Rust 175

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's Crate of the Week is pretty_assertions, which replaces the standard assertion macros to make their output shiny. Thanks to willi_kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

120 pull requests were merged in the last week.

New Contributors

  • Adam Ransom
  • Cldfire
  • Irfan Hudda
  • mandeep
  • Manuel
  • omtcyfz
  • Sam Whited

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

Issues in final comment period:

Other significant issues:

Good first issues:

We're happy to mentor these; please reach out to us in #rust-style if you'd like to get involved.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

I had many questions during the example implementations but "where do I find that" was none of them. [...] Thanks, docs team, you are doing great work!

Florian Gilcher in a blog post.

Thanks to Jules Kerssemakers for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Planet MozillaUpdate on Compatibility Milestones

The Road to Firefox 57 has been updated with a couple of changes to the timeline:

  • Firefox won’t run in multiprocess mode unless all enabled add-ons have the multiprocessCompatible flag set to true or are WebExtensions. This means that developers who haven’t set this flag don’t have to worry about multiprocess compatibility and can focus on WebExtensions and the Firefox 57 deadline.
  • The multiprocess compatibility shims will be removed earlier, starting with the Nightly and Developer Edition channels. Their purpose was to ease the transition to multiprocess compatibility, but given the change in the previous point, they aren’t really needed anymore. Removing them will help track performance issues.

None of these changes should require any immediate action from developers, but let us know if you have any questions or concerns.

The post Update on Compatibility Milestones appeared first on Mozilla Add-ons Blog.

Planet MozillaTrip Report: C++ Standards Meeting in Kona, February 2017

Summary / TL;DR

Project | What’s in it? | Status
C++17 | See below | Draft International Standard published; on track for final publication by end of 2017
Filesystems TS | Standard filesystem interface | Part of C++17
Library Fundamentals TS v1 | optional, any, string_view and more | Part of C++17
Library Fundamentals TS v2 | source code information capture and various utilities | Published!
Concepts (“Lite”) TS | Constrained templates | Published! Not part of C++17
Parallelism TS v1 | Parallel versions of STL algorithms | Part of C++17
Parallelism TS v2 | Task blocks, library vector types and algorithms and more | Under active development
Transactional Memory TS | Transaction support | Published! Not part of C++17
Concurrency TS v1 | future.then(), latches and barriers, atomic smart pointers | Published! Not part of C++17
Concurrency TS v2 | See below | Under active development
Networking TS | Sockets library based on Boost.ASIO | Resolution of comments on Preliminary Draft in progress
Ranges TS | Range-based algorithms and views | Resolution of comments on Preliminary Draft in progress
Numerics TS | Various numerical facilities | Under active development
Modules TS | A component system to supersede the textual header file inclusion model | First version based largely on Microsoft’s design; hope to vote out Preliminary Draft at next meeting
Graphics TS | 2D drawing API | Under active design review
Coroutines TS | Resumable functions, based on Microsoft’s await design | Preliminary Draft voted out for balloting by national standards bodies
Reflection | Code introspection and (later) reification mechanisms | Introspection proposal passed core language design review; next stop is design review of the library components. Targeting a Reflection TS.
Contracts | Preconditions, postconditions, and assertions | Proposal passed core language design review; next stop is design review of the library components. Targeting C++20.


A few weeks ago I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Kona, Hawaii. This was the first committee meeting in 2017; you can find my reports on 2016’s meetings here (February 2016, Jacksonville), here (June 2016, Oulu), and here (November 2016, Issaquah). These reports, particularly the Issaquah one, provide useful context for this post.

This meeting was focused on wrapping up C++17, iterating on the various in-progress Technical Specifications (TS), and picking up steam on C++20.

As I described in my previous report, at the end of the Oulu meeting, the C++17 Committee Draft (CD) – a first feature-complete draft of the C++17 spec – was sent out for comment from national standards bodies, and the comment period concluded prior to the Issaquah meeting, which was then heavily focused on addressing the resulting comments.

The aim for this meeting was to resolve the remaining comments, and – thanks to a lot of hard work, particularly from the Library groups which were swamped with a large number of comments to address – we were successful in doing so. At the end of the meeting, a revised draft of the C++17 standard, with the comments addressed – now labelled Draft International Standard (DIS) – was sent out for a second round of voting and comments.

As per ISO procedure, if the DIS vote is successful, the committee can proceed to final publication of C++17 (possibly with further modifications to address DIS comments) without a further round of voting and comments; this was the case for C++14, and it is hoped to be the case again for C++17.


Since C++17 is supposed to be feature-complete at the CD stage, no new features were voted into C++17 at this meeting (see this for a list of features in C++17). The only things that were voted in were minor changes to resolve CD comments. I call out some of the notable ones below:

Changes voted into C++17 at this meeting


With C++17 being wrapped up with final comment resolutions, sights are increasingly set on the next release of the language’s International Standard (IS), C++20.

C++20 doesn’t have a working draft yet (it can’t per ISO procedure until C++17 is published), but a number of features targeted for it have already passed design review (see my previous posts for details), and many more are in the queue.

Notably, at this meeting, in addition to discussing individual proposals targeted for C++20, a plan for an overall direction / set of primary goals for C++20 was also discussed. The discussion was prompted by a paper in the pre-meeting mailing outlining such a proposed vision, which was discussed during an evening session on Thursday.

This paper suggests that we aim to get the following major features into C++20:

  • Concepts
  • Modules
  • Ranges
  • Networking

It does not suggest focusing on these features to the exclusion of others, but does propose prioritizing these over others. The reasoning for why these features merit special attention is well-argued in the paper; read it for details.

It’s worth noting that each of these features is first being standardized in the form of a Technical Specification, and these are at various stages of publication (Concepts is published; Ranges and Networking have had Preliminary Drafts published and comments on them are in the process of being addressed; for Modules, we hope to publish a Preliminary Draft at the next meeting). Getting these features into C++20 will require promptly completing the TS publication in cases where it’s not there yet, and then merging the TS’s into C++20.

The proposal for focusing on these four was received favourably by the committee. It is very important to note that this should not be construed as a promise to get these features into C++20; the committee isn’t in a position to make such a promise ahead of time, nor is anyone on it. As always, a feature only goes into the IS when it’s ready – that is, when the committee has a high confidence in the design, its specification, its implementability, the interaction of the feature with other language and library features, and the appropriateness of the feature for standardization in the context of the language as a whole. We will work hard to get these features into C++20, but there are no promises.

Technical Specifications

The committee has a number of Technical Specifications in flight, and others in the planning stage. I’ll mention the status of each.

Recapping the procedure for publishing a Technical Specification:

  • Once it has completed design and wording review and is considered to be in good enough shape, a Preliminary Draft Technical Specification (PDTS) is published, and sent out for comment by national standards bodies.
  • The comments are reviewed and addressed
  • The revised Technical Specification is sent out for final publication. (Optionally, if the committee believes the TS is not ready for publication and needs a second round of comments, it can publish a Draft Technical Specification (DTS) and send it out for comment, and only subsequently publish the final TS. However, this generally hasn’t been necessary.)

At this meeting, the Coroutines TS was sent out for its PDTS ballot. The Modules TS came close, but didn’t quite make it.

Ranges TS

The Ranges TS was sent out for its PDTS ballot at the end of the last meeting. The ballot comments have since come back, and the library groups have been busy addressing them, resulting in a few changes to the working draft being approved. With the focus being on the C++17 CD comments, however, not all comments have been addressed yet, and the TS is not yet ready for final publication.

Networking TS

The Networking TS was also sent out for its PDTS ballot at the end of the last meeting and, as with the Ranges TS, the ballot comments have come back. The library groups did not have much time to spend on addressing the comments (due to the C++17 focus), so that work will continue at the next meeting.

Coroutines TS

The Coroutines TS – which contains the “stackless coroutines” / co_await proposal based on Microsoft’s design – was approved to be sent out for its PDTS ballot at this meeting. (This was attempted at the last meeting, but it was voted down on the basis that there had not been time for a sufficiently thorough review of the core language wording. Such review has since taken place, so it sailed through this time.)

As mentioned in my previous posts, there are also efforts underway to standardize a different, stackful flavour of coroutines (the latest iteration of that proposal suggests a library interface similar to the call/cc facility found in some functional languages), and a proposal to unify the two flavours (this latter seems to be stalled after a discussion of it at the last meeting convinced some people that unification is not worth it).

These alternative proposals are progressing (or not) independently of the Coroutines TS for the time being. Depending on how they progress, they may still have an impact on the form(s) in which coroutines are ultimately standardized in the C++ IS.

Concepts TS

The Concepts TS was published in late 2015, and is now being eyed for merger into C++20.

Recall that the purpose of standardizing a feature as a Technical Specification first, is to give implementers and users the ability to gain experience with the feature, and provide feedback that may motivate changes to the feature, before standardizing it in its final form.

Now that the Concepts TS has been published for a while, and a compiler that supports it (GCC 6) released almost a year ago, user and implementer feedback (the latter primarily from developers working on an implementation in Clang) has started trickling in.

(There was previously an attempt to merge the Concepts TS into C++17, which failed because at that point, the amount of user and implementer feedback had been small, and the committee felt people should have more time to try the feature out.)

The feedback so far has generated a few proposals for changes to the Concepts TS (P0342, P0464, and P0587). Some of these were looked at this week; I summarize the technical discussion below, but the high-level summary is that, while people agree on the general direction of some changes, other proposed changes remain rather controversial.

One thing that’s not controversial, is that everyone wants Concepts to make it into C++20. Procedurally, the question arose whether we should (1) merge the TS into the IS now, and treat remaining points of contention as “issues” to be tracked and resolved before the publication of C++20; or (2) resolve the issues in the Concepts TS working draft, and only then merge the result into C++20. A proposal to take route #1 (merge now) failed to gain consensus in the Evolution Working Group (EWG), so for now, the plan is to take approach #2 (resolve issues first, merge after).

Modules TS

Modules currently has two implementations – one in MSVC, and one in Clang – which diverge in some aspects of their conceptual and implementation models. Most notably, the Clang implementation includes macros in the set of entities that can be exported by one module and imported by another, while the MSVC one does not.

At the October 2015 meeting (which, as it happens, was also in Kona), it was decided that, since supporting macros is both contentious and comes with significant implementation complexity, Modules will initially be pursued in the form of a Technical Specification that does not support macros, and macro support can be considered for a second iteration of the feature (which I’ll henceforth refer to as “Modules v2”; the ship vehicle for that is still to be determined – it might be a second Modules TS, or it might be C++20).

Accordingly, the current Modules TS working draft largely reflects Microsoft’s design.

Meanwhile, the Clang implementers have submitted a proposal for design changes that includes adding support for macros among other, smaller changes. EWG looked at parts of this proposal at the February 2016 meeting in Jacksonville, and some of the smaller changes gained the group’s consensus at that time.

However, it appears that there was some confusion as to whether those changes were intended for Modules v1, or v2. Since one of the proposals in the paper was adding macro support, a feature previously earmarked for v2, some people assumed the entire paper was v2 material. Other people had understood that the smaller changes that were approved, were approved for v1.

This misunderstanding came to light when the Modules TS (v1) working draft (which does not contain the mentioned changes) came up for a vote in full committee to be sent out for its PDTS ballot. The vote failed, and the matter was referred back to EWG to clarify whether the changes in question were in fact approved for v1 or not.

EWG ended up effectively re-discussing the proposed changes (since during the first discussion, it wasn’t clear whether people were considering the changes for v1 or v2), with the outcome being that some but not all of them had consensus to go into v1 (I summarize the technical discussion below).

The hope is that Modules v1, as amended with these approved changes, can be sent out for its PDTS ballot at the next meeting (this July, in Toronto).

Parallelism TS v2

Not too much new to report for Parallelism TS v2. Task blocks are already in the working draft, while vector types and algorithms are still working their way through the library groups.

Concurrency TS v2

The Concurrency TS v2 does not have a working draft yet. New proposals being considered for it include an RAII interface for deferred reclamation, hazard pointers, an RCU API, and padding bits for compare-and-exchange. The concurrent queues proposal has been approved by SG1 (the Study Group that deals with concurrency) and forwarded to the library groups for review.

Executors, probably slated for a separate TS at this point, have a unified proposal, an updated version of which was discussed in SG1. One of the reasons executors are taking so long to standardize is that different communities want them for different use cases, and accommodating all of them adds significant complexity to any proposal. It is likely that the initial version to be standardized will be restricted to a smaller feature set.

Future Technical Specifications

Several future Technical Specifications are planned as well.


The Reflection Study Group (SG 7) approved a proposal for static introspection (summary, design, specification) at the last meeting. This week, the Evolution Working Group looked at it and approved it as well; they also agreed with the decision to start by standardizing the proposal as a TS. The next stop for the proposal is the Library Evolution Working Group.

There were some proposals in the latest mailing that aimed to change aspects of the introspection proposal – not core aspects like “what can be reflected”, but more like “how does the reflection facility integrate into the language”, such as the proposal to reflect through values instead of types. These proposals will be given due consideration (and SG 7 started looking at them this meeting; I summarize the discussion below), but in the meantime, the introspection TS will go forward as-is, so that valuable implementer and user feedback can be collected on the core aspects of the proposal as soon as possible. Changes such as reflecting through values can then be made post-TS.


As previously outlined, the Numerics Study Group (SG 6) plans to put out a Numerics TS containing various facilities (summarized here). SG 6 met for two days this week, primarily to work on the TS; the group’s chair hopes that an initial working draft for the TS, to be edited collaboratively on GitHub, will be ready for the next meeting.

SG 6 also approved some library features related to bitwise operations (P0237, P0553, P0556) that are not slated for the Numerics TS, but will go directly into C++20 (or the latest version of the Library Fundamentals TS, at LEWG’s choosing).


The Graphics TS, which proposes to standardize a set of 2D graphics primitives inspired by cairo, has updated wording that addresses issues raised during a previous round of design review. The Library Evolution Working Group looked at the updated proposal this week, and suggested one more round of changes; the hope is to forward the proposal to the Library Working Group (which will then review it at the wording level) at the next meeting.

Evolution Working Group

I sat in the Evolution Working Group (EWG) for the duration of the meeting, so in addition to the general overview I gave above, I can go in more details about the technical discussion that took place in that group.

To recap, EWG spends its time evaluating and reviewing the design of proposed language features. This week, it briefly addressed some remaining C++17 issues, and then spent the rest of the week on post-C++17 material (such as C++20, and Technical Specifications like the Modules TS).

C++17 Topics

  • National body comments on the C++17 CD remaining to be addressed. (Note that the comments can be found in two documents: official comments, and late comments)
    • constexpr static members of enclosing class type (US 24). EWG looked at this at the last meeting, and said this extension would be considered for C++17 if the details were worked out in a paper. As no such paper appeared at this meeting, the extension was now rejected for C++17. It could come back in a later standard.
    • When are get<>() functions called in decomposition declarations? (Late 3). The current wording says that they are called “eagerly”, at the point of decomposition. At the last meeting, EWG was in favour of changing that to “lazily”, that is, being called at the point of use of the decomposed bindings, but this change too required a paper. Again no such paper appeared, and moreover the comment author said that, upon contemplation, it seems like a tricky change to make, and probably not worth it, so the “lazy” idea was abandoned.
    • Should library implementers have the freedom to add constexpr to library functions where the standard doesn’t specify it? (GB 38). This was proposed for previous standards but rejected, mainly because of portability concerns (library functions that the standard didn’t specify to be constexpr still couldn’t be portably used in contexts that require constant expressions, such as array bounds). It was now requested again for C++17; however, the idea still didn’t have EWG’s consensus. There is some hope it may yet in C++20.
  • Terminology: “structured bindings” vs. “decomposition declarations”. The ability to decompose a tuple-like object into its constituent parts, as in auto [a, b, c] = foo(); , was called “structured bindings” during design discussions, but the wording in the standard refers to declarations of this form as “decomposition declarations”. To avoid confusion stemming from having two names for the same thing, “decomposition declarations” were renamed to “structured binding declarations”.
  • Class template argument deduction
    • There was some discussion of whether implicit deduction guides should be kept. The consensus was that most of the time they do the right thing (the exceptions typically being classes whose constructors involve a lot of metaprogramming, like std::tuple) and they should be kept.
    • Copying vs. wrapping behaviour. Suppose a is a variable of type tuple<int, int>, and we write tuple b{a};. Should the type of b be tuple<int, int> (the “copying” behaviour), or tuple<tuple<int, int>> (the “wrapping” behaviour)? This question arises for any wrapper-like type (such as pair, tuple, or optional) which has both a copy constructor and a constructor that takes an object of the type being wrapped. EWG felt copying was the best default. There was some talk of making the behaviour dependent on the syntax of the initialization (e.g. the { } syntax should always wrap) but EWG felt introducing new inconsistencies between the behaviours of different initialization syntaxes would do more harm than good.
    • Whether you should be able to define explicit deduction guides as deleted. EWG agreed that this is useful, but for practical reasons (being so late in the C++17 cycle), the relevant wording will be placed into a Defect Report applicable to C++17 rather than into the C++17 draft itself.
  • std::launder() is a library function with special behaviour that was added to C++17 to allow certain patterns of code (which arise, for example, in typical implementations of std::variant), to have well-defined behaviour without excessively constraining compiler optimizations such as devirtualization. At this meeting, a proposal to achieve the same goal via different means (no library function, just changes to core language wording) was put forward. However, during discussion it came to light that the proposed alternative would not handle all affected scenarios (particularly scenarios where vtable pointers are in play), and it did not gain consensus.

Post-C++17 Proposals

Here are the post-C++17 features that EWG looked at, categorized into the usual “accepted”, “further work encouraged”, and “rejected” categories:

Accepted proposals:

  • Allowing attributes on template instantiations. C++ allows many entities in the program (such as variables, types, and functions) to be annotated with attributes, but explicit template instantiations are not one of them. Annotating explicit instantiations turns out to be useful, particularly for visibility attributes, so this ability was added. Since this is more an oversight in the initial specification of attributes than a new feature, in addition to being in C++20, it will be published as a Defect Report potentially applying as far back as C++11.
  • Simplifying implicit lambda capture. This proposal simplifies the specified procedure for determining whether a variable referenced inside the body of a lambda with a capture-default, is captured by the lambda. The main motivation is that the current procedure trips up in some scenarios involving if constexpr. The user impact should be limited to allowing those previously-problematic cases. (In theory, the change could result in lambdas notionally capturing some variables that they don’t strictly need to and didn’t capture before, but optimizers should be pretty effective in optimizing such captures out.)
  • Consistent comparisons. EWG looked at various proposals for default comparisons at the last meeting, and the consensus that emerged was that (1) we want a three-way comparison operator; (2) two-way comparison operators should be “desugared” to the three-way comparison operator; (3) most classes should only need to define the three-way comparison operator, and the language could provide some help in doing that.

    This paper formalizes that consensus into a proposal. The three-way comparison operator is spelt <=> (the “spaceship operator”), and the compiler can generate a default implementation if one is declared as = default. In addition, the proposal supports the notion of different strengths of ordering by making the return type of the spaceship operator be one of std::strong_ordering, std::weak_ordering, or std::partial_ordering, which are enumeration-like types (though not actual enumerations) with appropriate conversions between them; it also supports the notion of types that can be compared for equality but are not ordered, for which the spaceship operator would return std::strong_equality or std::weak_equality.

    EWG passed the above parts of the paper; they will now go to LEWG for library review, and CWG for core language wording review. The paper also contained an optional extension, where some user-defined types would get the spaceship operator auto-generated for them without even a defaulted declaration (similar to the previous default comparison proposal that almost made it into C++17). This extension did not gain EWG’s consensus at this time, although it may come back in revised form.

    (The paper also contained an extension to support chaining comparisons, that is, to let things like a == b == c have the meaning of (a == b) && (b == c) that was likely intended. This also didn’t gain consensus, though it could also come back after some compatibility analysis is performed to determine how much existing code it would break.)
  • Static reflection. This is the proposal that the Reflection Study Group approved at the last meeting. EWG approved this without asking for any design changes; it concurred with the Study Group’s opinion that the ship vehicle should be a Reflection TS (as opposed to, say, C++20), and that standardization of the proposal as written should proceed in parallel with alternative syntaxes being contemplated. The next stop for this proposal is the Library Evolution Working Group; bikeshedding of the reflection operator’s spelling (the latest draft of the paper contains $reflect, while previous ones contained reflexpr) will take place there.
  • Implicit moving from rvalue references in return statements. This proposal argues that in a function return statement, you practically always want to move from an rvalue reference rather than copying from it, and therefore the language should just implicitly move for you instead of requiring you to write std::move. Code that would break as a result of this change is almost certainly wrong to begin with.
  • Contracts. This is a refinement of the proposal for contract checking that was approved at previous meeting.

    There was one point of controversy, concerning the use of an attribute-like syntax. Some people found this objectionable on the basis that attributes are not supposed to have semantic effects. Others pointed out that, in a correct program, the contract attributes do not, in fact, have a semantic effect. Of particular concern was the “always” checking level specified in the proposal, which allows marking specific contract checks as being always performed, even if the program-wide contract checking level is set to “off”; it was argued that this means a conforming implementation cannot ignore contract attributes the way it can ignore other attributes.

    To resolve this concern, options such as using a different (non-attribute) syntax, or removing the “always” checking level, were considered. However, neither of these had stronger consensus than the proposal as written, so the proposal was sent onwards to the Library Evolution Working Group without changes.
  • short float. This is what it sounds like: a shorter floating-point type, expected to be 16 bits long on most platforms, to complement the typically 32-bit float and 64-bit double.

Proposals for which further work is encouraged:

  • Proposed changes to the Concepts TS. I talk about this below.
  • Abbreviated lambdas for fun and profit. This is really five separable proposals to abbreviate the syntax of lambdas further. All but the first one were rejected.
    • A terser syntax for single-expression lambdas (that is, lambdas whose body consists of a single return statement). This has been desired for a long time; the general idea is to eliminate the braces and the return keyword in this case. The problem is that just putting the expression after the lambda-introducer and optional parameter list leads to ambiguities. This paper proposes writing => expr; EWG encouraged thinking about more options.
    • Allow introducing parameters without a type specifier (that is, not even auto or auto&&). The problem with the naive approach here is that it’s ambiguous with the case where a parameter has a type specifier but no name. The paper proposes several possible solutions; EWG found them inelegant, and more generally felt that this problem space has already been explored without much fruit.
    • A way to request that SFINAE apply to a lambda’s deduced return type. The motivation for this is similar to that of a similar proposal along these lines. This idea had some support, but not consensus.
    • A way to request a deduced noexcept-specification. As with the earlier noexcept(auto) proposal, EWG did not find this sufficiently motivated.
    • A way to request that arguments to the lambda be automatically forwarded (in the std::forward<Type>(value) sense) in the body expression. EWG agreed that having to litter generic code with calls to std::forward is unfortunate, but felt that a solution to this problem should not be specific to lambdas.
  • std::call_cc() stackful context switching. This is an updated version of the earlier stackful coroutines proposal, with the library interface changed from std::execution_context (whose name clashed with a different type of “execution contexts” in the unified executors proposal) to std::continuation and std::call_cc(), names that some might recognize from Scheme and other functional languages. EWG looked at this in joint session with the Concurrency Study Group. The main purpose of the discussion was to determine whether the proposed interface is at the right level of abstraction for standardization. There wasn’t much consensus on this topic, with some liking the interface as written, others arguing for a lower-level interface (think ucontext), and yet others for a higher-level interface (think fibres). Interested parties will continue iterating on this proposal in a smaller group.
  • Attributes for likely and unlikely branches. When this proposal was first looked at (at the last meeting), the feedback was that instead of trying to place the attributes onto the conditions of branches themselves, they should be placed on the statement(s) that are being described as likely/unlikely to be reached. In this revision, the proposal author did that, but attempted to restrict the contexts in which such an attribute could be placed on a statement. EWG suggested allowing it on all statements instead, while also requesting that examples be added to clarify the semantics in various scenarios.
  • An updated version of for loop exit strategies, which proposes to allow adding blocks of code at the end of a for loop that run when the loop exits by breaking (introduced by the keyword on_break), and when the loop runs to completion (introduced by on_complete). The only thing that has changed since the last revision of this proposal is the keywords – they were catch break and catch default in the previous version. EWG didn’t love the new keywords, either, and was generally lukewarm towards the proposal in terms of motivation, so it’s not clear if this is going anywhere.
  • Generalized unpacking and parameter pack slicing. This proposes a prefix slicing syntax: if T is a parameter pack, [N]T would denote the element at the Nth index of T; [N:M]T would denote a new parameter pack containing the elements from the Nth to the Mth indices of T. Additionally, the syntax can be applied to objects of any product type (roughly, a type that can be decomposed using structured bindings), in which case the object is considered to be a pack of its constituent objects (this is the “generalized unpacking” part, because it gives us a way to unpack product types into parameter packs using a simple syntax). EWG liked the goals, but felt that a proposal like this should be part of a global unified vision for metaprogramming. The proposal was sent to the Reflection Study Group, whose responsibilities going forward will include metaprogramming more generally, and which will consider it alongside other metaprogramming proposals.
    • It’s worth noting that there was another proposal on this topic in the pre-meeting mailing, that aimed to solve the problem by introducing a new type of expression resembling a for loop that would generate a parameter pack procedurally. It wasn’t presented because no one was available to champion it.
  • The Concurrency Study Group is working on SIMD support, and they consulted EWG on how that support should integrate with the type system. In particular, if a function has multiple SIMD variants, should they have different function types, potentially opening the door to overloading? This was the approach taken in the Transactional Memory TS (with transaction-safe and non-transaction-safe variants of a function). EWG discouraged further proliferation of this paradigm.
  • The constexpr operator. This is a facility that tells you whether you are executing a function in a constant expression context; it allows you to have different implementations for compile-time and runtime evaluation. EWG liked the idea, although the exact syntax remains to be nailed down. In addition, desire was expressed for a way to enforce that a constexpr function is only invoked at compile time.
  • constexpr_assert and constexpr_trace. These are proposed tools for debugging constexpr code, that allow you to direct the compiler to produce diagnostic/debugging output while performing constexpr evaluation. The idea was generally favourably received, though it was noted that for constexpr_trace, rather than having it produce output unconditionally, it would be useful to have a mechanism where it can accumulate output into a buffer, and then flush the buffer only if something goes wrong.
  • std::constexpr_vector<T>. The scope of what is allowed in constexpr functions has been steadily increasing, and it might seem that the logical next frontier is dynamic memory allocation. However, at least one implementer has performed extensive experimentation with allowing dynamic memory allocation in constexpr evaluation, and found that it could only be done at a drastic performance penalty. The reason is that the compiler is required to diagnose what would be undefined behaviour if it happens at runtime, during constexpr evaluation, and dynamic memory allocation opens up so many avenues to invoking undefined behaviour that the compiler needs extensive instrumentation to diagnose it. This proposal is an alternative to allowing full-blown dynamic allocation, that provides a specific, “magic”, vector-like type for use during constexpr evaluation, that should satisfy the large majority of use cases. EWG encouraged further exploration of this space.
  • Tuple-based for loops. This is an extension of range-based for loops to tuple-like types. Tuple-like types can be thought of as heterogeneous ranges (ranges whose elements are of different types), where the total number of elements and their types are known at compile time. Accordingly, a tuple-based for loop would be unrolled at compile time, and its body instantiated once for each element type. This gives rise to a simple and convenient syntax for processing tuple-like types, which has been desired for a long time (see e.g. the Boost.Fusion library). Concerns with this specific proposal included performance considerations (for example, it makes it very easy to write innocent-looking code that triggers a lot of template instantiations and is expensive to compile), and the fact that it didn’t generalize to some use cases (such as cases where you want the type of a result to depend on logic in your “loop”). Further exploration of the topic was encouraged.
  • Range-based for loop with initializer. This extends the C++17 selection statements with initializer feature to range-based for loops, allowing you to declare a variable “alongside” your iteration variable that is scoped to the loop and will live as long as the loop. In addition to facilitating better scoping hygiene (that is, making the scopes of variables as tight as possible), it provides a way to resolve a long-standing lifetime issue with range-based for loops. EWG requested that the author implement the proposal, and then bring it back, at which point, if no issues arise during implementation, approval is expected.
  • bit_sizeof and bit_offsetof. These are equivalents of the sizeof and offsetof operators (the latter is actually a macro) that return counts of bits, and are usable on bitfield members. EWG referred this proposal to the Reflection Study Group, as it may be implementable as a library on top of forthcoming reflection primitives.
  • Proposed changes to the Modules TS. I talk about this below.

Rejected proposals:

  • Implicit and anonymous return types. This proposed to allow declaring an anonymous structure in the return type of a function, as a way to allow it to return multiple named values (contrast with a tuple, which allows you to return multiple values but they are unnamed), without having to define a named structure in an enclosing scope. To facilitate out-of-line definitions for such functions, it also proposed using decltype(return) to denote “the previously declared return type” (since repeating the anonymous structure declaration in the function’s definition would both be repetitive, and, according to the current language rules, actually declare a second, distinct anonymous structure type). Besides being unconvinced about the motivation (viewing the alternatives of named structures and tuples as good enough), people pointed out various technical difficulties with the proposal. decltype(return), in particular, would further complicate C++’s function redeclaration rules, and introduce potential ambiguities and situations where a definition using decltype(return) could be well-formed or not depending on what other functions have declarations visible.
  • Initialization list symmetry. This tiny proposal would have allowed a trailing comma on the last element of a constructor’s initializer list (properly called a constructor-chain-initializer), to make member reordering and other refactorings easier. It was rejected on the basis that in C++ a comma is a separator, not a terminator, and it would be strange to allow it in a trailing position in this context but not other contexts (like function parameters, function arguments, or template arguments). It’s worth noting that C++ does already allow trailing commas in a few places, like enumerations and array initializers.
  • A qualified replacement for #pragma once. The proposal here is to standardize a variant of the widely supported #pragma once extension, spelt #once, which takes an identifier and an optional version as arguments, and uses them to solve #pragma once’s longstanding problem of having to figure out whether two files are the same (which can sometimes be tricky in situations involving symbolic links and such). EWG felt that Modules would obsolete this feature, and as such it was not worth standardizing at this stage.


This meeting marked the first significant design discussion about Concepts in EWG since a proposal to merge the Concepts TS into C++17 without modifications failed at the February 2016 meeting in Jacksonville.

The main topic of discussion was the Concepts TS revisited paper which proposed several design changes to Concepts:

  • Unifying function-like and variable-like concepts into a single concept definition syntax, as previously proposed. The unified syntax would drop the redundant bool keyword from concept bool, and the entities it would declare would be a kind unto their own, not variables or functions. The question of whether such entities can overload on the number and kinds of their template parameters (the original motivation for having function-like concepts) remains open, though it seems people are leaning towards not allowing overloading, arguing that if you have two concepts taking a different number or kind of template parameters, it’s probably clearer to give them different names (a canonical example is EqualityComparable<T> vs. EqualityComparableTo<T, U>; the latter becomes EqualityComparableTo<U> T in a constrained-parameter declaration, which reads very naturally: T must be of a type that’s equality-comparable to U). EWG encouraged continued exploration of this unification.
  • Changes to the redeclaration rules for constrained function templates. The Concepts TS provides multiple ways to express the same constrained function template. Currently, for purposes of determining whether two declarations declare the same function template, the shorthand forms are “desugared” to the long form (using requires-clauses only), and the usual redeclaration rules (which require token-for-token equivalence) are applied to the result. This means that e.g. a function template declared using the long form can be re-declared using one of the shorthand forms. The paper argued that this is brittle (among other things, it requires specifying the desugaring at the token level), and proposed requiring that redeclarations be token-for-token equivalent before desugaring. After extensive discussion, EWG was supportive of this change.
  • A minor change to the concept subsumption rules (that come into play during partial ordering of constrained function templates) where, for the purpose of determining whether two constraints are equivalent, the compiler does not “look into” atomic constraints and determine equivalence textually; two constraints are only equivalent if they refer to the same concept. This also garnered support.
  • Addressing the issue that subsumption checking can sometimes be exponential-time in the presence of disjunctions. The paper proposes addressing this by removing disjunctions, and instead introducing an explicit syntax for declaring concept refinement relationships, but the authors have since withdrawn that proposal, in part because it was quite controversial. It was pointed out that the previous change (not “looking into” atomic constraints during subsumption checking) will significantly alleviate the problem by reducing how often we run into the exponential case.
  • Modifying the abbreviated function template syntax to make it clear to readers that the function being declared is a template. Currently, the Concepts TS allows an “abbreviated” syntax like void foo(Sortable& s); which declares a function template if Sortable is a concept. Many members of the committee and the larger C++ community expressed concern about having a template look exactly like a non-template, discriminated only by the result of name lookup on the argument types.

    EWG had a vigorous debate on this topic. On the one hand, people argued that templates and non-templates are fundamentally different entities, due to differences like the way name lookup works inside each, requiring typename in some contexts inside templates, and so on. Countering this, others argued that these differences are relatively minor, and that the long-term aim should be to make generic programming (programming with templates) be just like regular programming.

    One argument I found particularly interesting was: instead of having syntax to discriminate between templates and non-templates, why don’t we rely on tooling, such as having concept names syntax-colored differently from type names? I found this fairly convincing: since we are designing modern language features, it seems reasonable to design them with modern tools in mind. Then again, while many editors are capable of performing the sort of semantic highlighting contemplated here, programmers also spend a lot of time looking at code in non-editor contexts, such as in review tools, and I’ve yet to see one of those with semantic highlighting (although such a thing is conceivable).

    EWG did not reach a consensus on this topic. Several concrete proposals for syntax that would alert readers to the template-ness of an abbreviated function template were briefly discussed, but none of them were given serious consideration, because many people weren’t convinced of the need to have such syntax in the first place.

There was another proposal in the mailing concerning Concepts – one that I coauthored – that touched on the semantics of a concept name appearing twice in an abbreviated function template. It was not presented at this meeting for procedural reasons; I expect it will be presented at the next meeting in Toronto.

Definition Checking

There is one other topic related to Concepts that I’ve talked about before, and would like to touch on again: definition checking. To recap, this involves having the compiler check an uninstantiated template definition to make sure that it only uses its template parameters in ways allowed for by their constraints. In principle, definition checking combined with checking constraints at call sites (which the Concepts TS already does) should completely eliminate the possibility of compiler errors with instantiation backtraces: either the error is that the caller is not passing in types that meet the constraints, in which case the error pertains to the call site only; or the error is that the implementation uses the template parameters in ways not provided for by the constraints, in which case the error pertains to the uninstantiated definition site only.

As I described in my Jacksonville report, one of the reasons Concepts failed to make C++17 is that people had concerns about whether concepts in their current form are conducive to definition checking. However, that was not the only reason, and since Jacksonville I’ve increasingly heard the sentiment expressed that definition checking is a hard problem, and we should not hold up Concepts while we figure out how to do it. To quote the “vision for C++20” paper that I talked about above:

At the risk of sounding like a broken record, I will repeat a particular point: we need to throw definition checking under the bus. If we insist on having it, we will probably get nothing in C++20, and chances are we’ll get nothing in C++23. Such a trade-off is unacceptable. The benefits of Concepts as proposed far outweigh the benefits of definition checking, and most users wouldn’t care less about not having definition checking.

While I personally disagree with the sentiment that “most users wouldn’t care less about not having definition checking” – I believe it’s actually a critical part of a complete generic programming design – I do agree that Concepts is still very useful without it, and having to wait for Concepts until C++23 or beyond on its account would be very unfortunate.

A year ago, I was quite hopeful that we could make progress on the technical problems surrounding definition checking in time to make appropriate revisions to the Concepts TS (to, for example, tweak the way concepts are defined to be more amenable to definition checking) and get the result into C++20. Today, based on statements like the above, I am significantly less hopeful. My prediction is that, at this point, we’ll get Concepts in C++20, without any changes geared towards definition checking, and then maybe at some point in the future we’ll get some form of definition checking, that will be constrained by choices made in the Concepts TS as currently written. In this scenario, the desirable property I described above – that all errors are caught either purely at the call site, or purely at the definition site – which, I’ll note, is a property that other languages like Haskell and Rust do have – is unlikely to be achieved.


EWG spent half a day on the Modules TS. (I summarized the procedural developments above.)

The first main topic of discussion was a name lookup issue that came up during wording review of the Modules TS working draft in the Core Working Group. The issue concerns situations like this:

  • A module A defines a template foo, which uses an operation op on its argument.
  • A module B includes a non-modular (“legacy”) header that defines a type S and provides operation op for it.
  • B imports A, and defines a template bar that invokes foo with S as an argument, but still in a dependent context (so as not to trigger immediate instantiation of foo).
  • A translation unit C imports B and instantiates bar.

What happens here is, the instantiation of foo happens in the context of C, where S’s op is not visible. This could lead to:

  • A compiler error, if no other viable definition of op is visible.
  • A different (viable) definition of op than the intended one being silently called.
  • An ODR violation (undefined behaviour) if another translation unit has already instantiated foo, and op resolved to something else there.

Note that if the modules are replaced with headers, and imports with #includes, this setup works just fine.
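
A pseudocode sketch of the setup, with file boundaries shown as comments; the module syntax is written C++20-style for readability, so this is illustrative rather than compilable as-is:

```cpp
// ---- legacy.h (non-modular header) ----
struct S {};
inline void op(S) {}

// ---- module A's interface ----
export module A;
export template <typename T, typename U>
void foo(T, U u) { op(u); }        // op is looked up at instantiation time

// ---- module B's interface ----
export module B;
#include "legacy.h"                // S and op(S) are visible inside B only
import A;
export template <typename T>
void bar(T t) { foo(t, S{}); }     // dependent call: no instantiation yet

// ---- translation unit C ----
import B;
int main() {
    bar(0);  // instantiates foo<int, S> here, where op(S) is not visible
}
```
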

To work around this, the programmer can do one of two things:

  • Have B export S’s op, so it’s visible in all units that import B. There’s syntax for doing this even though S is defined inside a non-modular header.
  • Include the non-modular header that defines S and its op from C.

Neither of these is particularly satisfying. The first, because it involves repeating the declaration of op; the second, because S may be an implementation detail of B that C can’t be expected to know anything about.

The implementers of Modules at Microsoft argued that they haven’t run into this situation very much while deploying their implementation in a large codebase, and suggested shipping Modules v1 as a PDTS without addressing this issue, to gather feedback about how widespread of a problem it is. The Clang implementers stated that they had run into this, and needed to adjust the semantics of their modules implementation to deal with it. In the end, there was no consensus on changing the Modules TS semantics to deal with this problem before releasing the PDTS.

The other main topic of discussion was a paper containing proposed changes to the Modules TS from the Clang implementers – specifically the ones approved in Jacksonville for which there was confusion about whether they were meant to apply to Modules v1:

  • Requiring the module declaration to be the first declaration in a module file. Currently, it doesn’t have to be, and preceding declarations belong to the global module. The proposal would come with a mechanism to “reopen” the global module after the module declaration and place declarations in it. Apart from implementation considerations (from which point of view there are arguments on both sides), the motivation is readability: making it immediately clear to the reader that a file is a module file. There was no consensus for making this change in Modules v1.
  • Introducing syntax to differentiate the module declaration for a module interface file from the module declaration for a module implementation file. EWG expressed support for this for Modules v1, and a preference to add the extra syntax to the interface declaration. No specific syntax was chosen yet.
  • Introducing syntax for module partitions (module interfaces spread across multiple files). EWG supported putting this in Modules v1 as well, though again no specific syntax was yet chosen.

As stated above, the hope is that Modules v1, as amended with the approved changes I described, can ship as a PDTS at the next meeting in Toronto.

Other Working Groups

The Library Working Group spent the week, including most evenings, heroically processing the large quantity of C++17 CD comments that concerned the standard library (a lot of them relating to the recently-merged Filesystem TS, though there were many others too), so that we could stay on schedule and send C++17 out for its DIS ballot at the end of the week (which we did).

The Library Evolution Working Group was working through its large backlog of proposed new library features. As much as I’d love to follow this group in as much detail as I follow EWG, I can’t be in two places at once, so the best I can do is point interested readers to the mailings to check out library proposals.

Regarding the various Study Groups, I’ve largely mentioned as much as I know about their progress in the Technical Specifications section above, but I’ll say a few more words about a couple of them:

SG 7 (Reflection)

SG 7 looked at two proposals for static reflection that can be thought of as alternatives to the proposal approved earlier by the group (and approved this week by EWG).

Both proposals are essentially “isomorphic” to the approved proposal in terms of what can be reflected, they just propose doing the reflection with a different syntax, and in particular with a syntax that represents meta-objects (reflected information about program entities) as values rather than types.

The differences between the two proposals are mostly just syntax and interface, e.g. the first uses a member function interface for operations on the meta-objects, while the second one uses a free function interface.

In addition, the author of the second proposal presented an extension (not in the paper) that would essentially allow generating source code from compile-time strings, which would be a very powerful and general metaprogramming facility. It was pointed out that, while we may want such a general facility for advanced metaprogramming use cases, it seems like overkill for the reflection use cases being considered.

The outcomes of the session were that: there is interest in moving towards reflection through values rather than through types; that the already-approved proposal (which does it through types) should nonetheless go forward as-is (which it did, with EWG approving it the next day); that a free function interface is preferable to a member function interface; and that the group should aim to (also) standardize more general metaprogramming features.

This latter – expanding the purview of SG 7 from just reflection, to metaprogramming more generally – is a deliberate change supported by the committee leadership. The group may be renamed “Reflection and Metaprogramming” or similar in due course.

SG 14 (Game Development & Low-Latency Applications)

SG 14 did not meet at GDC this year like it did last year, because this year’s GDC was the same week as the committee meeting I’m writing about 🙂

They do hold regular teleconferences (see the mailing list for details), and the latest status of their various proposals can be found in this summary paper.

Over the past year, the group has also been courting the interest of the embedded programming community, which is interested in many of the same issues (memory-constrained environments, hard/soft real-time applications, avoidance of dynamic memory allocation, etc.) as the existing constituencies represented in SG 14.

Next Meeting

The next meeting of the Committee will be in Toronto, Canada (which is where I’m based!), the week of July 10th, 2017.


C++17 is effectively out the door, with its Draft International Standard sent out for ballot, and the official release expected later this year.

Development on C++20 is well under way, with numerous core language and library features already earmarked for it, and several Technical Specifications expected to be merged into it. A proposal for an overall vision for the release aims to focus the committee’s efforts on four headliner features – Concepts, Modules, Ranges, and Networking. That is, of course, not a guarantee these features will be in C++20 (nor is it intended to exclude other features!), but I believe with some luck and continued hard work we can make part or all of that plan a reality.

Modules, in particular, are expected to be a game-changer for C++. Whether or not they make C++20, they will be available as a Technical Specification very soon (possibly as soon as later this year), and they are already available for experimentation in Clang and MSVC today.

Stay tuned for continued reporting on C++ standardization on my part!

Planet Mozilla: What would a periodic table of digital employability look like?

Today, in my Twitter stream, I saw this:


It’s from this site. I can’t really comment on its real-world utility, as I don’t know Azure, and I tend to steer clear of Microsoft stuff wherever possible.

However, I did think it might be a useful metaphor for the digital employability stuff I’ve been thinking about recently.

A reminder that the real periodic table of chemical elements looks like this:

Periodic table of chemical elements -  CC BY-SA Sandbh

However, it did make me think that my work around the essential elements of digital literacies could be expanded into a ‘periodic table’ of digital employability. This would have a number of benefits:

  • It’s non-linear (unlike the metro map approach)
  • Different elements can be given various weights
  • Types of elements can be grouped together

At the time of writing, there are 118 chemical elements represented by the periodic table. Interestingly, there have been plenty of suggested ways to represent it differently - such as Theodor Benfey’s spiral approach:

Theodor Benfey's spiral periodic table - CC BY-SA DePiep

I like that this is possible, as it shows that the important thing is the definition of the elements and how they relate to one another, rather than just the way they’re represented.

The way to attempt to do this and fail would be to do what most people and organisations do in this situation:

  1. Get representatives from large companies around the table for a one day planning session and write down everything they say as if they’ve just come down the mountain with tablets of stone.
  2. Do a ‘crosswalk’ (an American term, I think) and compare/contrast existing frameworks.

The reason both of these approaches ultimately fail is because they attempt to bypass the hard work of thinking through it, talking with people who know their stuff, and testing/iterating.

More on this soon! I’m still at the ‘thinking’ stage, finding out where the edges are…

Comments? Questions? I’m @dajbelshaw or you can email me:

Planet Mozilla: curl is C

Every once in a while someone suggests to me that curl and libcurl would do better if rewritten in a “safe language”. Rust is one such alternative language commonly suggested. This happens especially often when we publish new security vulnerabilities. (Update: I think Rust is a fine language! This post and my stance here has nothing to do with what I think about Rust or other languages, safe or not.)

curl is written in C

The curl code guidelines mandate that we stick to using C89 for any code to be accepted into the repository. C89 (sometimes also called C90) – the oldest possible ANSI C standard. Ancient and conservative.

C is everywhere

This fact has made it possible for projects, companies and people to adopt curl into things using basically any known operating system and whatever CPU architecture you can think of (at least if it was 32bit or larger). No other programming language is as widespread and easily available for everything. This has made curl one of the most portable projects out there and is part of the explanation for curl’s success.

The curl project was also started in the 90s, long before most of the alternative languages you’d suggest even existed. Heck, for a truly stable project it wouldn’t be responsible to go with a language that isn’t even old enough to start school yet.

Everyone knows C

Perhaps this is not necessarily true anymore, but knowledge of C is still very widespread, whereas the current alternative languages certainly have narrower audiences and fewer people who master them.

C is not a safe language

Does writing safe code in C require more carefulness and more “tricks” than writing the same code in a more modern language better designed to be “safe”? Yes, it does. But we’ve done most of that job already and maintaining that level isn’t as hard or troublesome.

We keep scanning the curl code regularly with static code analyzers (we maintain a zero Coverity problems policy) and we run the test suite with valgrind and address sanitizers.

C is not the primary reason for our past vulnerabilities

There. The simple fact is that most of our past vulnerabilities happened because of logical mistakes in the code. Logical mistakes that aren’t really language-bound and that would not be fixed simply by changing language.

Of course that leaves a share of problems that could’ve been avoided if we used another language: buffer overflows, double frees and out-of-bounds reads, etc. But the bulk of our security problems have not happened due to curl being written in C.

C is not a new dependency

It is easy for projects to add a dependency on a library that is written in C since that’s what operating systems and system libraries are written in, still today in 2017. That’s the default. Everyone can build and install such libraries; they’re widely used and people know how they work.

A library in another language will add that language (and compiler, and debugger and whatever dependencies a libcurl written in that language would need) as a new dependency to a large amount of projects that are themselves written in C or C++ today. Those projects would in many cases downright ignore and reject projects written in “an alternative language”.

curl sits in the boat

In the curl project we’re deliberately conservative and we stick to old standards, to remain a viable and reliable library for everyone. Right now and for the foreseeable future. Things that worked in curl 15 years ago still work like that today. The same way. Users can rely on curl. We stick around. We don’t knee-jerk react to modern trends. We sit still in the boat. We don’t rock it.

Rewriting means adding heaps of bugs

The plain fact, which isn’t really about languages but about plain old software engineering: translating or rewriting curl into a new language will introduce a lot of bugs. Bugs that we don’t have today.

Not to mention how rewriting would take a huge effort and a lot of time. That energy can instead today be spent on improving curl further.

What if

If I were to start the project today, would I pick another language? Maybe. Maybe not. If memory safety and related issues were my primary concern, then sure. But as I’ve mentioned above there are several other concerns too, so it would really depend on my priorities.


At the end of the day the question that remains is: would we gain more than we would pay, and over which time frame? Who would gain and who would lose?

I’m sure that there will be, or may even already exist, curl and libcurl competitors and potent alternatives written in most of these new alternative languages. Some of them are absolutely really good and will get used and reach fame and glory. Some of them will be crap. Just like software always works. Let a thousand curl competitors bloom!

Will curl be rewritten at some point in the future? I won’t rule it out, but I find it unlikely. I find it even more unlikely that it will happen in the short term or within the next few years.

Discuss this post on Hacker news or Reddit!

Followup-post: Yes, C is unsafe, but…

Planet Mozilla: 45.9.0b1 available

TenFourFox 45.9.0 beta 1 is now available (downloads, hashes). This version continues deploying more of the multiple microoptimizations started with 45.8, including rescheduling xptcall which is the glue used for calling XPCOM functions (use CTR instead of LR for branching, reorder instructions to eliminate register and FXU dependencies), more reduced branches, hoisting call loads earlier in code sequences, optimized arithmetic inline caches for both Baseline and Ion code generation (especially integer operations for division, min/max and absolute value), fixing a stupid bug which used a branchy way of doing logical comparisons on floating point values (this passed tests but was unnecessarily inefficient), and eliminating some irrelevant branches in font runs and graphics. While I was at it I cherrypicked a few other minor perf boosts from 46 and stuck those in as well.

Also, the font blacklist is updated (fixing Apple and Medium) along with new support for blocking ATSUI-incompatible data:font/* URLs, and there is a speculative fix for the long-running issue of making changing the default search engine stick (this is difficult for me to test because none of my systems seem to be affected). The guts for repairing geolocation are in this version too but I'm still dithering over the service; most likely we will use the Mozilla Location Service though I'm open to other suggestions. Remember that the only sensor data Power Macs can provide for geolocation out of the box is the WiFi SSIDs they see (but this is perfectly sufficient for MLS). This should be finalized by the time 45.9 goes to release.

For FPR1, the first feature I'm planning to implement is one that was cancelled for 45 during beta: Brotli compression, which on supported sites can reduce data transfer by 14 to 39 percent with little impact on decompression time. This will involve backporting the current Brotli decompressor from 52ESR and then making necessary changes to Necko to support it, but happily much of the work was already done before it was disabled (for a critical bug that could not be easily worked around at the time) and released in 46. If there is sufficient time, I'd also like to implement the "New Hot NSS" (get it?) and backport the NSS security and encryption library from 52ESR as well, both of which will also reduce the burden of backporting security fixes. That's a bit of a bigger job, though, and might be FPR2 territory. Other major want-to-dos will be some JavaScript ES6 features I predict will be commonly used in the near future like Unicode regexes and changes to function scoping, both of which landed in 46 and should be easy to add to 45.

Only one site has been reported as incompatible with our plan to shut down SHA-1 certificate support with FPR1. As mentioned, it would take a major site failure for me to call this plan off, but I'd still like as much testing as possible beforehand. If you haven't done it already, please go into about:config and switch security.pki.sha1_enforcement_level to 1, and report any sites that fail. This approach of complete decommissioning is essentially the same policy Google Chrome will be taking, and soon no major browser will accept SHA-1 certificates as trusted for TLS, so it's not like we're going out on a limb here. Please note that reversing this change will not be a supported configuration because (the security implications notwithstanding) it may be un-possible to allow doing so after the new NSS library is eventually transplanted in.

Once 45.9 comes out, we will switch to our own Github repository and the source code will be uploaded and maintained from there (no more changeset overlays!). However, pull requests won't be accepted unless they're tied to an issue already accepted on the worklist, and we will still enforce the policy that non-contributor bug reports need to be triaged through Tenderapp first. Watch for the repo's magical population shortly after 45.9's final release on April 18.

Unfortunately, I don't think there will be a Tenfourbird FPR1: it appears that our anonymous colleague in the Land of the Rising Sun has not made any builds since 38.9, and that is a real shame. :(

Planet Mozilla: Unification in Chalk, part 1

So in my first post on chalk, I mentioned that unification and normalization of associated types were interesting topics. I’m going to write a two-part blog post series covering that. This first part begins with an overview of how ordinary type unification works during compilation. The next post will add in associated types and we can see what kinds of mischief they bring with them.

What is unification?

Let’s start with a brief overview of what unification is. When you are doing type-checking or trait-checking, it often happens that you wind up with types that you don’t know yet. For example, the user might write None – you know that this has type Option<T>, but you don’t know what that type T is. To handle this, the compiler will create a type variable. This basically represents an unknown, to-be-determined type. To denote this, I’ll write Option<?T>, where the leading question mark indicates a variable.

The idea then is that as we go about type-checking we will later find out some constraints that tell us what ?T has to be. For example, imagine that we know that Option<?T> must implement Foo, and we have a trait Foo that is implemented only for Option<String>:

trait Foo { }
impl Foo for Option<String> { }

In order for this impl to apply, it must be the case that the self types are equal, i.e., the same type. (Note that trait matching never considers subtyping.) We write this as a constraint:

Option<?T> = Option<String>

Now you can probably see where this is going. Eventually, we’re going to figure out that ?T must be String. But it’s not immediately obvious – all we see right now is that two Option types have to be equal. In particular, we don’t yet have a simple constraint like ?T = String. To arrive at that, we have to do unification.

Basic unification

So, to restate the previous section in mildly more formal terms, the idea with unification is that we have:

  • a bunch of type variables like ?T. We often call these existential type variables because, when you look at things in a logical setting, they arise from asking questions like exists ?T. (Option<String> = Option<?T>) – i.e., does there exist a type ?T that can make Option<String> equal to Option<?T>.1
  • a bunch of unification constraints U1..Un like T1 = T2, where T1 and T2 are types. These are equalities that we know have to be true.

We would like to process these unification constraints and get to one of two outcomes:

  • the unification cannot be solved (e.g., u32 = i32 just can’t be true);
  • we’ve got a substitution (mapping) from type variables to their values (e.g., ?T => String) that makes all of the unification constraints hold.

Let’s start out with a really simple type system where we only have two kinds of types (in particular, we don’t yet have associated types):

T = ?X             // type variables
  | N<T1, ..., Tn> // "applicative" types

The first kind of type is type variables, as we’ve seen. The second kind of type I am calling “applicative” types, which is really not a great name, but that’s what I called it in chalk for whatever reason. Anyway they correspond to types like Option<T>, Vec<T>, and even types like i32. Here the name N is the name of the type (i.e., Option, Vec, i32) and the type parameters T1...Tn represent the type parameters of the type. Note that there may be zero of them (as is the case for i32, which is kind of “shorthand” for i32<>).
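To make the discussion concrete, here is one possible encoding of this two-case grammar, sketched in Python (illustrative only; this is an assumption of mine, not chalk’s actual representation): strings beginning with ? stand for type variables, and (name, args) pairs stand for applicative types.

```python
# A hypothetical encoding of the grammar above (not chalk's representation):
# "?X" strings are type variables; (name, [args]) pairs are applicative types.

def show(ty):
    """Render an encoded type back into the surface syntax used in this post."""
    if isinstance(ty, str):            # a variable such as "?T"
        return ty
    name, args = ty
    if not args:                       # i32 is shorthand for i32<>
        return name
    return "%s<%s>" % (name, ", ".join(show(a) for a in args))
```

For example, show(("Option", ["?T"])) gives "Option<?T>" and show(("i32", [])) gives "i32".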

So the idea for unification then is that we start out with an empty substitution S and we have this list of unification constraints U1..Un. We want to pop off the first constraint (U1) and figure out what to do based on what category it falls into. At each step, we may update our substitution S (i.e., we may figure out the value of a variable). In that case, we’ll replace the variable with its value for all the later steps. Other times, we’ll create new, simpler unification problems.

  • ?X = ?Y – if U equates two variables together, we can replace one variable with the other, so we add ?X => ?Y to our substitution, and then we replace all remaining uses of ?X with ?Y.
  • ?X = N<T1..Tn> – if we see a type variable equated with an applicative type, we can add ?X => N<T1..Tn> to our substitution (and replace all uses of it). But there is catch – we have to do one check first, called the occurs check, which I’ll describe later on.
  • N<X1..Xn> = N<Y1..Yn> – if we see two applicative types with the same name being equated, we can convert that into a bunch of smaller unification problems like X1 = Y1, X2 = Y2, …, Xn = Yn. The idea here is that Option<Foo> = Option<Bar> is true if Foo = Bar is true; so we can convert the bigger problem into the smaller one, and then forget about the bigger one.
  • N<...> = M<...> where N != M – if we see two applicative types being equated, but their names are different, that’s just an error. This would be something like Option<T> = Vec<T>.
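
The four rules above can be sketched as a small recursive unifier. This is an illustrative toy in Python (chalk itself is Rust, and the real implementation differs, as discussed later); types are encoded as "?X" strings for variables and (name, [args]) pairs for applicative types:

```python
# A toy sketch of the four unification rules described above.

def is_var(ty):
    return isinstance(ty, str) and ty.startswith("?")

def apply_subst(subst, ty):
    """Replace already-solved variables in `ty` with their values."""
    while is_var(ty) and ty in subst:
        ty = subst[ty]
    if is_var(ty):
        return ty
    name, args = ty
    return (name, [apply_subst(subst, a) for a in args])

def occurs(var, ty):
    """The occurs check (described in the next section)."""
    if is_var(ty):
        return ty == var
    return any(occurs(var, a) for a in ty[1])

def unify(t1, t2, subst=None):
    """Solve the constraint t1 = t2, returning an extended substitution."""
    subst = dict(subst or {})
    t1, t2 = apply_subst(subst, t1), apply_subst(subst, t2)
    if t1 == t2:
        return subst                     # nothing to do (covers ?X = ?X)
    if is_var(t1):                       # rules ?X = ?Y and ?X = N<T1..Tn>
        if occurs(t1, t2):
            raise TypeError("cyclic type of infinite size")
        subst[t1] = t2
        return subst
    if is_var(t2):                       # N<...> = ?X: flip it around
        return unify(t2, t1, subst)
    (n, xs), (m, ys) = t1, t2
    if n != m or len(xs) != len(ys):     # N<...> = M<...> with N != M: error
        raise TypeError("cannot unify %s with %s" % (n, m))
    for x, y in zip(xs, ys):             # N<X1..Xn> = N<Y1..Yn>: recurse
        subst = unify(x, y, subst)
    return subst
```

Running unify(("Option", ["?T"]), ("Option", [("String", [])])) yields the substitution mapping "?T" to String, while unify(("i32", []), ("u32", [])) raises an error, mirroring the walkthrough that follows.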

OK, let’s try to apply those rules to our example. Remember that we had one variable (?T) and one unification problem (Option<?T> = Option<String>). We start an initial state like this:

S = [] // empty substitution
U = [Option<?T> = Option<String>] // one constraint

The head constraint consists of two applicative types with the same name (Option), so we can convert that into a simpler equation, reaching this state:

S = [] // empty substitution
U = [?T = String] // one constraint

Now the next constraint is of the kind ?T = String, so we can update our substitution. In this case, there are no more constraints, but if there were, we would replace any uses of ?T in those constraints with String:

S = [?T => String] // one mapping
U = [] // zero constraints

Since there are no more constraints left, we’re done! We found a solution.

Let’s do another example. This one is a bit more interesting. Imagine that we had two variables (?T and ?U) and this initial state:

S = []
U = [(?T, u32) = (i32, ?U),
     Option<?T> = Option<?U>]

The first constraint is unifying two tuples – you can think of a tuple as an applicative type, so (?T, u32) is kind of like Tuple2<?T, u32>. Hence, we will simplify the first equation into two smaller ones:

// After unifying (?T, u32) = (i32, ?U)
S = []
U = [?T = i32,
     ?U = u32,
     Option<?T> = Option<?U>]

To process the next equation ?T = i32, we just update the substitution. We also replace ?T in the remaining problems with i32, leaving us with this state:

// After unifying ?T = i32
S = [?T => i32]
U = [?U = u32,
     Option<i32> = Option<?U>]

We can do the same for ?U:

// After unifying ?U = u32
S = [?T => i32, ?U => u32]
U = [Option<i32> = Option<u32>]

Now we, as humans, see that this problem is going to wind up with an error, but the compiler isn’t that smart yet. It has to first break down the remaining unification problem by one more step:

// After unifying Option<i32> = Option<u32>
S = [?T => i32, ?U => u32]
U = [i32 = u32]             // --> Error!

And now we get an error, because we have two applicative types with different names (i32 vs u32).

The occurs check: preventing infinite types

When describing the unification procedure, I left out one little bit, but it is kind of important. When we have a unification constraint like ?X = T for some type T, we can’t just immediately add ?X => T to our substitution. We have to first check and make sure that ?X does not appear in T; if it does, that’s also an error. In other words, we would consider a unification constraint like this to be illegal:

?X = Option<?X>

The problem here is that this results in an infinitely big type. And I don’t mean a type that occupies an infinite amount of RAM on your computer (although that may be true). I mean a type that I can’t even write down. Like if I tried to write down a type that satisfies this equation, it would look like:

Option<Option<Option<Option< /* ad infinitum */ >>>>

We don’t want types like that, they cause all manner of mischief (think non-terminating compilations). We already know that no such type arises from our input program (because it has finite size, and it contains all the types in textual form). But they can arise through inference if we’re not careful. So we prevent them by saying that whenever we unify a variable ?X with some value T, then ?X cannot occur in T (hence the name “occurs check”).
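As a minimal illustration (again in Python, with the same assumed toy encoding of variables as "?X" strings and applicative types as (name, [args]) pairs), the occurs check is just a recursive search of the type for the variable being bound:

```python
def occurs(var, ty):
    """Does the type variable `var` appear anywhere inside the type `ty`?"""
    if isinstance(ty, str):            # a type variable like "?X"
        return ty == var
    _, args = ty                       # an applicative type N<T1, ..., Tn>
    return any(occurs(var, a) for a in args)

# ?X = Option<?X> must be rejected: binding ?X would build an infinite type.
cyclic = occurs("?X", ("Option", ["?X"]))      # True
fine   = occurs("?X", ("Option", [("String", [])]))  # False
```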

Here is an example Rust program where this could arise:

fn main() {
    let mut x;    // x has type ?X
    x = None;     // adds constraint: ?X = Option<?Y>
    x = Some(x);  // adds constraint: ?X = Option<?X>
}

And indeed if you try this example on the playpen, you will get “cyclic type of infinite size” as an error.

How this is implemented

In terms of how this algorithm is typically implemented, it’s quite a bit different than how I presented it here. For example, the “substitution” is usually implemented through a mutable unification table, which uses Tarjan’s Union-Find algorithm (there are a number of implementations available); the set of unification constraints is not necessarily created as an explicit vector, but arises through recursive calls to a unify procedure. The relevant code in chalk, if you are curious, can be found here.
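For flavor, here is a toy Python sketch of the union-find idea behind such a unification table. Real implementations also store the value bound to each equivalence class and support snapshots and rollback; this sketch only merges variables:

```python
# A toy union-find table of the kind described above: each variable points
# at a representative, and path compression keeps later lookups cheap.
class UnificationTable:
    def __init__(self):
        self.parent = {}

    def find(self, var):
        """Return the representative of `var`'s equivalence class."""
        root = var
        while self.parent.get(root, root) != root:
            root = self.parent[root]
        while var != root:             # path compression
            self.parent[var], var = root, self.parent[var]
        return root

    def union(self, a, b):
        """Record the constraint a = b by merging the two classes."""
        self.parent[self.find(a)] = self.find(b)
```

After union("?T", "?U") and union("?U", "?V"), the variables ?T, ?U and ?V all share one representative, which is how a constraint like ?X = ?Y is recorded without rewriting every later constraint.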

The main procedure is unify_ty_ty, which unifies two types. It begins by normalizing them, which corresponds to applying the substitution that we have built up so far. It then analyzes the various cases in roughly the way we’ve described (ignoring the cases we haven’t talked about yet, like higher-ranked types or associated types):

(Note: these links are fixed to the head commit in chalk as of the time of this writing; that code may be quite out of date by the time you read this, of course.)


This post describes how basic unification works. The unification algorithm roughly as I presented it was first introduced by Robinson, I believe, and it forms the heart of Hindley-Milner type inference (used in ML, Haskell, and Rust as well) – as such, I’m sure there are tons of other blog posts covering the same material better, but oh well.

In the next post, I’ll talk about how I chose to extend this basic system to cover associated types. Other interesting topics I would like to cover include:

  • integrating subtyping and lifetimes;
  • how to handle generics (in particular, universal quantification like forall);
  • why it is decidedly non-trivial to integrate where-clauses like where T = i32 into Rust (it breaks some assumptions that we made in this post, in particular).


Post any comments or questions in this internals thread.


  1. Later on, probably not in this post, we’ll see universal type variables (i.e., forall !T); if you’re interested in reading up on how they interact with inference, I recommend “A Proof Procedure for the Logic of Hereditary Harrop Formulas”, by Gopalan Nadathur, which has a very concrete explanation.

Planet Mozilla: Introducing ScrollingCardView for iOS

For Project Prox, we were asked to implement a design1 like this:

Scrolling the card view

Specifically, this is a card view that:

  • Hugs its content, dynamically expanding the height when the content does
  • Will scroll its content if the content is taller than the card

After some searching, we discovered that no such widgets existed! Hoping our efforts could be useful to others, we created the ScrollingCardView library.


ScrollingCardView is used much like any other view.

First, create your view, enable autolayout, and add it to the view hierarchy:

    let cardView = ScrollingCardView()
    cardView.translatesAutoresizingMaskIntoConstraints = false
    parentView.addSubview(cardView) // e.g. the parent could be the ViewController's view

Then constrain the card view as you would any other view:

    NSLayoutConstraint.activate([
        cardView.topAnchor.constraint(
            equalTo: topLayoutGuide.bottomAnchor, constant: 16),
        cardView.leadingAnchor.constraint(
            equalTo: view.leadingAnchor, constant: 16),
        cardView.trailingAnchor.constraint(
            equalTo: view.trailingAnchor, constant: -16),

        // If you don't constrain the height, the card
        // will grow to match its intrinsic content size.

        // Or use lessThanOrEqualTo to allow your card
        // view to grow only until a certain size, e.g.
        // the size of the screen.
        cardView.bottomAnchor.constraint(
            lessThanOrEqualTo: bottomLayoutGuide.topAnchor, constant: -16),

        // Or you can constrain it to a particular height:
        // cardView.bottomAnchor.constraint(
        //     equalTo: bottomLayoutGuide.topAnchor, constant: -16),
        // cardView.heightAnchor.constraint(equalToConstant: 300),
    ])

Finally, specify the card view’s content:

    // 3. Set your card view's content.
    let content = UILabel()
    content.text = "Hello world!"
    content.numberOfLines = 0

    cardView.contentView = content

The card view comes with smart visual defaults (including a shadow), but you can also customize them:

    cardView.backgroundColor = .white

    cardView.cornerRadius = 2

    cardView.layer.shadowOffset = CGSize(width: 0, height: 2)
    cardView.layer.shadowRadius = 2
    cardView.layer.shadowOpacity = 0.4

Want it? ScrollingCardView is available on CocoaPods: you can find installation instructions and the source on GitHub.

Questions? Feature requests? File an issue or find us on #mobile.


1: This particular design was not actually used because we felt we could provide a better user experience if we also moved the card itself, which lets the user fill the screen with the long-form content they were trying to read. Further discussion in prox#372.

Planet Mozilla: Migrating AdBlock for Firefox to WebExtensions

AdBlock for Firefox is a fast and powerful ad blocker with over 40 million users. They are in the process of transitioning to WebExtensions, and have completed the first step of porting their data using Embedded WebExtensions. You can read more about the AdBlock extension here.

For more resources on updating your extension, please check out MDN. You can also contact us via these methods.

  1. Please provide a short background on your add-on. What does it do, when was it created, and why was it created?

    We created our original Firefox extension in 2014. We had seen some early success on Chrome and Safari and believed we could replicate that success on Firefox, which had developed a good community of users that downloaded add-ons for Firefox. It seemed like a natural place for us to be.

  2. What add-on technologies or APIs were used to build your add-on?

    The Firefox Add-On SDK was being promoted at the time, which wasn’t compatible with the Chrome Extension API, so we went through the Chrome code to identify areas where we could leverage work we had done previously. Since the APIs were a little different, we ended up having to modify some modules to use the Firefox Add-on SDK.

  3. Why did you decide to transition your add-on to WebExtensions APIs?

    With the Firefox SDK set to be deprecated, we knew our extension would slowly become unusable, so it made sense to transition to the WebExtension API. The benefit, from our standpoint, was that by using this API our software would be on a similar codebase and have similar features and functionalities to what we do on some of the other browsers we support.

  4. Walk us through the process of how you are making the transition. How was the experience of finding WebExtensions APIs to replace legacy APIs? What are some advantages and limitations?

    Last year we ported our Chrome extension to Edge, so when Firefox announced its plans, we had a good idea of what we wanted to do and how to go about it. Also, we were familiar with the WebExtension API from our years working on Chrome, but we knew we needed to educate ourselves on the differences. Fortunately, the Firefox documentation on the differences was very helpful in that education process, and a few pages in particular stood out.

    We did run into a few challenges. Chrome allows us to create an alert message or a confirm message from the background page (e.g., “Are you sure you want to do this…”), and Firefox doesn’t allow us to do that. We use that type of messaging in our Chrome extension, so we had to find a workaround, which we were able to do. For us, this impacted our ability to message our users when they were manipulating custom filters within AdBlock, but it was not a major issue.

    We hope to land Permission capabilities in Firefox 54, and you can read about its implementation progress in the WebExtensions in Firefox 53 blog post.

  5. What, if anything, will be different about your add-on when it becomes a WebExtension? Will you be able to transition with all the features intact?

    Anecdotally, the extension appears to be faster, specifically around page load times. But the big advantage, from our perspective, is that we will be able to manage the transition with almost all of our features intact. As a result, we aren’t losing any meaningful functionality of AdBlock, which was our main concern before we embarked upon this transition.

    We did notice that a few of the APIs that AdBlock utilizes are not available on Firefox for Android, so we are currently unable to release a new version of AdBlock that supports Firefox for Android. We hope to address this issue in a coming version of AdBlock.

    We have lots of work planned for Android in upcoming releases, with the goal of making ad blockers possible in Firefox 57.

  6. What advice would you give other legacy add-on developers?

    Make sure you have a migration plan that is well-tested on various versions and operating systems before you start migrating user data. One thing we learned the hard way: when we migrated our users’ data, we wrote some migration messages to the console, but those messages were not persisted. It would have been more helpful to us if the messages had been persisted for a period of time, to aid in debugging any user issues.

    Embedded Extensions are available as of Firefox 51 to help you transfer your user data.

  7. Anything else you’d like to add?

    If you are going to upgrade your extension, first do only what’s necessary to get the current functionality working. Don’t try to do too much in the release that you are using to migrate users over.

The post Migrating AdBlock for Firefox to WebExtensions appeared first on Mozilla Add-ons Blog.

Planet Mozilla[worklog] Edition 060. Watching stars through the bamboos

webcompat life

More people on the team, and we seem to have increased the number of meetings, though maybe that’s just a perception. But I liked the non-verbal meeting Sergiu ran last week: he posted plenty of discussion points, and the rest of us went back to it to reply over the next couple of days.

webcompat issues dev

To read

Google says it can’t trust our self-hosted AMP pages enough to pre-render them. But they ask for a lot of trust from us. We’re supposed to trust Google to cache and host copies of our pages. We’re supposed to trust Google to provide some mechanism to users to get at the original canonical URL. I’d like to see trust work both ways.

I would add that the difference in power makes this trust unbalanced. So when a power asks for trust, we need very strong guarantees, and counter-measures for when the trust is breached.


Planet WebKitMichael Catanzaro: A Web Browser for Awesome People (Epiphany 3.24)

Are you using a sad web browser that integrates poorly with GNOME or elementary OS? Was your sad browser’s GNOME integration theme broken for most of the past year? Does that make you feel sad? Do you wish you were using an awesome web browser that feels right at home in your chosen desktop instead? If so, Epiphany 3.24 might be right for you. It will make you awesome. (Ask your doctor before switching to a new web browser. Results not guaranteed. May cause severe Internet addiction. Some content unsuitable for minors.)

Epiphany was already awesome before, but it just keeps getting better. Let’s look at some of the most-noticeable new features in Epiphany 3.24.

You Can Load Webpages!

Yeah that’s a great start, right? But seriously: some people had trouble with this before, because it was not at all clear how to get to Epiphany’s address bar. If you were in the know, you knew all you had to do was click on the title box, then the address bar would appear. But if you weren’t in the know, you could be stuck. I made the executive decision that the title box would have to go unless we could find a way to solve the discoverability problem, and wound up following through on removing it. Now the address bar is always there at the top of the screen, just like in all those sad browsers. This is without a doubt our biggest user interface change:

Screenshot showing address bar visibleDiscover GNOME 3! Discover the address bar!

You Can Set a Homepage!

A very small subset of users have complained that Epiphany did not allow setting a homepage, something we removed several years back since it felt pretty outdated. While I’m confident that not many people want this, there’s not really any good reason not to allow it — it’s not like it’s a huge amount of code to maintain or anything — so you can now set a homepage in the preferences dialog, thanks to some work by Carlos García Campos and myself. Retro! Carlos has even added a home icon to the header bar, which appears when you have a homepage set. I honestly still don’t understand why having a homepage is useful, but I hope this allows a wider audience to enjoy Epiphany.

New Bookmarks Interface

There is now a new star icon in the address bar for bookmarking pages, and another new icon for viewing bookmarks. Iulian Radu gutted our old bookmarks system as part of his Google Summer of Code project last year, replacing our old and seriously-broken bookmarks dialog with something much, much nicer. (He also successfully completed a major refactoring of non-bookmarks code as part of his project. Thanks Iulian!) Take a look:

Manage Tons of Tabs

One of our biggest complaints was that it’s hard to manage a large number of tabs. I spent a few hours throwing together the cheapest-possible solution, and the result is actually pretty decent:

Firefox has an equivalent feature, but Chrome does not. Ours is not perfect, since unfortunately the menu is not scrollable, so it still fails if there is a sufficiently-huge number of tabs. (This is actually surprisingly-difficult to fix while keeping the menu a popover, so I’m considering switching it to a traditional non-popover menu as a workaround. Help welcome.) But it works great up until the point where the popover is too big to fit on your monitor.

Note that the New Tab button has been moved to the right side of the header bar when there is only one tab open, so it has less distance to travel to appear in the tab bar when there are multiple open tabs.

Improved Tracking Protection

I modified our adblocker — which has been enabled by default for years — to subscribe to the EasyPrivacy filters provided by EasyList. You can disable it in preferences if you need to, but I haven’t noticed any problems caused by it, so it’s enabled by default, not just in incognito mode. The goal is to compete with Firefox’s Disconnect feature. How well does it work compared to Disconnect? I have no clue! But EasyPrivacy felt like the natural solution, since we already have an adblocker that supports EasyList filters.

Disclaimer: tracking protection on the Web is probably a losing battle, and you absolutely must use the Tor Browser Bundle if you really need anonymity. (And no, configuring Epiphany to use Tor is not clever, it’s very dumb.) But EasyPrivacy will at least make life harder for trackers.

Insecure Password Form Warning

Recently, Firefox and Chrome have started displaying security warnings on webpages that contain password forms but do not use HTTPS. Now, we do too:

I had a hard time selecting the text to use for the warning. I wanted to convey the near-certainty that the insecure communication is being intercepted, but I wound up using the word “cybercriminal” when it’s probably more likely that your password is being gobbled up by various governments. Feel free to suggest changes for 3.26 in the comments.

New Search Engine Manager

Cedric Le Moigne spent a huge amount of time gutting our smart bookmarks code — which allowed adding custom search engines to the address bar dropdown in a convoluted manner that involved creating a bookmark and manually adding %s into its URL — and replacing it with an actual real search engine manager that’s much nicer than trying to add a search engine via bookmarks. Even better, you no longer have to drop down to the command line in order to change the default search engine to something other than DuckDuckGo, Google, or Bing. Yay!

New Icon

Jakub Steiner and Lapo Calamandrei created a great new high-resolution app icon for Epiphany, which makes its debut in 3.24. Take a look.

WebKitGTK+ 2.16

WebKitGTK+ 2.16 improvements are not really an Epiphany 3.24 feature, since users of older versions of Epiphany can and must upgrade to WebKitGTK+ 2.16 as well, but it contains some big improvements that affect Epiphany. (For example, Žan Doberšek landed an important fix for JavaScript garbage collection that has resulted in massive memory reductions in long-running web processes.) But sometimes WebKit improvements are necessary for implementing new Epiphany features. That was true this cycle more than ever. For example:

  • Carlos García added a new ephemeral mode API to WebKitGTK+, and modified Epiphany to use it in order to make incognito mode much more stable and robust, avoiding corner cases where your browsing data could be leaked on disk.
  • Carlos García also added a new website data API to WebKitGTK+, and modified Epiphany to use it in the clear data dialog and cookies dialog. There are no user-visible changes in the cookies dialog, but the clear data dialog now exposes HTTP disk cache, HTML local storage, WebSQL, IndexedDB, and offline web application cache. In particular, local storage and the two databases can be thought of as “supercookies”: methods of storing arbitrary data on your computer for tracking purposes, which persist even when you clear your cookies. Unfortunately it’s still not possible to protect against this tracking, but at least you can view and delete it all now, which is not possible in Chrome or Firefox.
  • Sergio Villar Senin added new API to WebKitGTK+ to improve form detection, and modified Epiphany to use it so that it can now remember passwords on more websites. There’s still room for improvement here, but it’s a big step forward.
  • I added new API to WebKitGTK+ to improve how we handle giving websites permission to display notifications, and hooked it up in Epiphany. This fixes notification requests appearing inappropriately on websites like the

Notice the pattern? When there’s something we need to do in Epiphany that requires changes in WebKit, we make it happen. This is a lot more work, but it’s better for both Epiphany and WebKit in the long run. Read more about WebKitGTK+ 2.16 on Carlos García’s blog.

Future Features

Unfortunately, a couple of exciting Epiphany features we were working on did not make the cut for Epiphany 3.24. The first is Firefox Sync support. This was developed by Gabriel Ivașcu during his Google Summer of Code project last year, and it’s working fairly well, but there are still a few problems. First, our current Firefox Sync code is only able to sync bookmarks, but we really want it to sync much more before releasing the feature: history and open tabs at the least. Also, although it uses Mozilla’s sync server (please thank Mozilla for their quite liberal terms of service allowing this!), it’s not actually compatible with Firefox. You can sync your Epiphany bookmarks between different Epiphany browser instances using your Firefox account, which is great, but we expect users will be quite confused that these do not sync with their Firefox bookmarks, which are stored separately. Some things, like preferences, will never be possible to sync with Firefox, but we can surely share bookmarks. Gabriel is currently working to address these issues while participating in the Igalia Coding Experience program, and we’re hopeful that sync support will be ready for prime time in Epiphany 3.26.

Also missing is HTTPS Everywhere support. It’s mostly working properly, thanks to lots of hard work from Daniel Brendle (grindhold) who created the libhttpseverywhere library we use, but it breaks a few websites and is not really robust yet, so we need more time to get this properly integrated into Epiphany. The goal is to make sure outdated HTTPS Everywhere rulesets do not break websites by falling back automatically to use of plain, insecure HTTP when a load fails. This will be much less secure than upstream HTTPS Everywhere, but websites that care about security ought to be redirecting users to HTTPS automatically (and also enabling HSTS). Our use of HTTPS Everywhere will just be to gain a quick layer of protection against passive attackers. Otherwise, we would not be able to enable it by default, since the HTTPS Everywhere rulesets are just not reliable enough. Expect HTTPS Everywhere to land for Epiphany 3.26.

Help Out

Are you a computer programmer? Found something less-than-perfect about Epiphany? We’re open for contributions, and would really appreciate it if you would try to fix that bug or add that feature instead of slinking back to using a less-awesome web browser. One frequently-requested feature is support for extensions. This is probably not going to happen anytime soon — we’d like to support WebExtensions, but that would be a huge effort — but if there’s some extension you miss from a sadder browser, ask if we’d allow building it into Epiphany as a regular feature. Replacements for popular extensions like NoScript and Greasemonkey would certainly be welcome.

Not a computer programmer? You can still help by reporting bugs on GNOME Bugzilla. If you have a crash to report, learn how to generate a good-quality stack trace so that we can try to fix it. I’ve credited many programmers for their work on Epiphany 3.24 up above, but programming work only gets us so far if we don’t know about bugs. I want to give a shout-out here to Hussam Al-Tayeb, who regularly built the latest code over the course of the 3.24 development cycle and found lots of problems for us to fix. This release would be much less awesome if not for his testing.

OK, I’m done typing stuff now. Onwards to 3.26!

Planet MozillaPrivacy Features, Tab Tools & Other New WebExtensions

Tabzen is great for tab hoarders.

As of late March, addons.mozilla.org (AMO) has around 2,000 listed add-ons built with WebExtensions APIs, the new cross-browser standard for writing add-ons. Average daily installs for them total more than 5,000,000. That’s nice momentum as we hurtle towards the Firefox 57 release.

Volume aside, I continue to be impressed with the quality of content emerging…

Smart HTTPS (revived) is the WebExtensions version of one of my favorite yet simple security add-ons—it changes HTTP addresses to the secure HTTPS. Disconnect for Facebook (WebExtension) is another fine privacy tool that prevents Facebook from tracking your Web movement by blocking all Facebook related requests sent between third-party sites.

While we’re talking Facebook, you know what annoys me? Facebook’s “suggested posts” (for some reason I get served a lot of “suggested” content that implies I may have a fatty liver). Kick Facebook Suggested Posts puts an end to that nonsense.

History Cleaner conveniently wipes away your browsing history after a set amount of time, while History Zebra features white and black lists to manage specific sites you want to appear (or not) in your browsing history.

Tab Auto Refresh lets you set timed refresh intervals per tab.

Don’t Touch My Tabs! protects against hyperlinks that try to hijack your previous tab. In other words, when you typically click a link, it grants the new page control over the one you clicked from, which is maybe not-so awesome for a number of reasons, like reloading the page with intrusive ads, or hackers throwing up a fake login to phish your info.

Speaking of tabs, I dig Tab Auto Refresh because I follow a couple of breaking news sites, so with this add-on I set certain tabs to refresh at specific time intervals, then I just click over every so often to catch the latest.

Tabzen is another dynamic tab tool that treats tab management in a familiar bookmarking fashion.

Turn any Reddit page into literally the darkest corner of the Web with dark page themer Reddit Slate Night 2.0 (while we’re on the Reddit tip, this comment collapser presents a compelling alternate layout for perusing conversations).

Dark Mode turns the entire internet goth.

Link Cleaner removes extraneous guck from your URLs, including tracking parameters from commerce sites Amazon and AliExpress.

These are awesome add-ons from very creative developers! It’s great to see such diverse, interesting WebExtensions crop up.

If you’re a developer of a legacy add-on or Chrome extension and want more info about the WebExtensions porting process, this should help. Or if you’re interested in writing a WebExtension from scratch, check this out.

The post Privacy Features, Tab Tools & Other New WebExtensions appeared first on Mozilla Add-ons Blog.

Planet MozillaCaspia Projects and Thunderbird – Open Source In Absentia

People of Thunderbird - Chinook Nation

Clallam Bay is located among various Native American tribes where the Thunderbird is an important cultural symbol.

I’m recycling an old trademark that I’ve used, Caspia, to describe my projects to involve Washington State prisoners in open-source projects. After an afternoon of brainstorming, Caspia is a new acronym “Creating Accomplished Software Professionals In Absentia”.
What does this have to do with Thunderbird? I sat in a room a few weeks ago with 10 guys at Clallam Bay, all of whom have been in a full-time, intensive software training program for about a year, and who are really interested in trying to do real-world projects rather than simply hidden internal projects that are classroom assignments, or personal projects with no public outlet. I start in April spending two days per week with these guys. Then there are another 10 or so guys at WSR in Monroe who started last month, though the situation there is more complex. The situation is similar to other groups of students that might be able to work on Thunderbird or Mozilla projects, with these differences:

1) Student or GSOC projects tend to have a duration of a few months, while the expected commitment time for this group is much longer.

2) Communication is extremely difficult. There is no internet access. Any communication of code or comments is accomplished through sneakernet options. It is easier to get things like software artifacts in rather than bring them out. The internal issues of allowing this to proceed at all are tenuous at both facilities, though we are further along at Clallam Bay.

3) Given the men’s situation, they are very sensitive to their ability to accumulate both publicly accessible records of their work, and personal recommendations of their skill. Similarly, they want marketable skills.

4) They have a mentor (me) that is heavily engaged in the Thunderbird/Mozilla world.

Because they are for the most part not hobbyists trying to scratch an itch, but rather people desperate to find a pathway to success in the future, I feel a very large responsibility to steer them toward projects that would demonstrate skills that are likely to be marketable, and provide visibility that would be easily accessible to possible future employers. Fixing obscure regressions in legacy Thunderbird code, with contributions tracked only in and BMO, does not really fit that very well. For those reasons, I have a strong bias in favor of projects that 1) involve skills usable outside the narrow range of the Mozilla platform, and 2) can be tracked on GitHub.

I’ve already mentioned one project that we are looking at, which is the broad category of Contact manager. This is the primary focus of the group at WSR in Monroe. For the group at Clallam Bay, I am leaning toward focusing on the XUL->HTML conversion issue. Again I would look at this more broadly than just the issues in Thunderbird, perhaps developing a library of Web Components that emulate XUL functionality and can be used both to easily migrate existing XUL to HTML and as a separate library for desktop-focused web applications. This is one of the triad of platform conversions that Thunderbird needs to do (the others being C++->JavaScript, and XPCOM->SomethingElse).

I can see that if the technical directions I am looking at turn out to work with Thunderbird, it will mean some big changes. These projects will mostly be done using GitHub repos, so we would need to improve our ability to work with external libraries. (We already do that with JsMime but poorly). The momentum in the JS world these days, unfortunately, is with Node and Chrome V8. That is going to cause a lot of grief as we try to co-exist with Node/V8 and Gecko. I could also see large parts of our existing core functionality (such as the IMAP backend) migrated to a third-party library.

Our progress will be very slow at first as we undergo internal training, but I think these groups could start having a major impact on Thunderbird in about a year.


Planet MozillaLastPass: Security done wrong

Disclaimer: I am the author of Easy Passwords which is also a password manager and could be considered LastPass competitor in the widest sense.

Six months ago I wrote a detailed analysis of LastPass security architecture. In particular, I wrote:

So much for the general architecture, it has its weak spots but all in all it is pretty solid and your passwords are unlikely to be compromised at this level. However, as described in my blog post the browser integration turned out to be a massive weakness. The LastPass extension on your computer works with decrypted data, so it needs to be extra careful – and at the moment it isn’t.

I went on to point out Auto Fill functionality and internal messaging as the main weak spots of the LastPass browser extensions. And what do I read in the news today? Google security researcher Tavis Ormandy found two security vulnerabilities in LastPass. In which areas? Well, Auto Fill and internal messaging, of course.

Now I could congratulate myself on a successful analysis of course, but predicting these reports wasn’t really a big feat. See, I checked out LastPass after reports about two security vulnerabilities have been published last August. I looked into what those vulnerabilities were and how they have been resolved. And I promptly found that the issues haven’t been resolved completely. Six months later Tavis Ormandy appears to have done the same and… well, you can still find ways to exploit the same old issues.

Altogether it looks like LastPass is a lot better at PR than they are at security. Yes, that’s harsh, but this is what I’ve seen so far. In particular, security vulnerabilities have been addressed narrowly: only the exact scenario reported was tested by the developers. This time LastPass took it to an extreme by fixing a critical bug in their Chrome extension and announcing the fix even though the exact same exploit still worked against their Firefox extension. But also with the bugs I reported previously, nobody seemed interested in going through the code base looking for other instances of the same issue, let alone taking obvious measures to harden the code against similar attacks or reconsidering the overall approach.

In addition to that, LastPass is very insistently downplaying the impact of the vulnerabilities. For example, an issue where I couldn’t provide an exploit (hey, I’m not even a user of the product, I don’t know it too well) was deemed not a vulnerability — Tavis Ormandy has now demonstrated that it is exploitable after all. On other occasions LastPass only admitted what the proof of concept exploit was doing, e.g. removing passwords in case of the vulnerability published by Tavis Ormandy in August last year. The LastPass developers should have known however that the messaging interface he hijacked could do far more than that.

This might be the reason why this time Tavis Ormandy shows how you can run arbitrary applications through LastPass, it’s hard to deny that the issue is really, really bad. So this time their announcement says:

  • Our investigation to date has not indicated that any sensitive user data was lost or compromised
  • No site credential passwords need to be changed

Sure it didn’t — because compromising clients this way doesn’t require access to LastPass servers. So even if black hats found this vulnerability years ago and are abusing it on a large scale, LastPass wouldn’t be likely to know. This should really have been:

We messed up and we don’t know whether your passwords are compromised as a result. You should probably change them now, just to be sure.

Planet MozillaReps Weekly Meeting Mar. 23, 2017

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet MozillaCan’t you graph that graph

I’m going to just recreate blame, he said. It’s going to be easy, he said.

We have a project to migrate the localization of Firefox to one repository for all channels, nick-named cross-channel, or x-channel in short. The plan is to create one repository that holds all the en-US strings we need for Firefox and friends on all channels. One repository to rule them all, if you wish. So you need to get the contents of mozilla-central, comm-central, *-aurora, *-beta, *-release, and also some of *-esr?? together in one repository, with, say, one toolkit/chrome/global/customizeToolbar.dtd file that has all the strings that are used by any of the apps on any branch.

We do have some experience with merging the content of localization files as part of l10n-merge which is run at Firefox build time. So this shouldn’t be too hard, right?
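To make the content-merging side concrete, here is a minimal sketch of the union idea. This is not the actual l10n-merge code; the function name and data shapes are mine. Each branch contributes a mapping of entity name to string, branches are ordered newest-first, and the merged file keeps every entity that any branch still uses.

```python
# Hypothetical sketch of the cross-channel union. `branches` is a list
# of {entity name: string} mappings, ordered newest-first (e.g. central,
# aurora, beta, release), so the newest value wins for shared entities.

def merge_entities(branches):
    merged = {}
    for branch in branches:
        for name, value in branch.items():
            # Keep the first (newest) value seen; add anything
            # that only older branches still reference.
            merged.setdefault(name, value)
    return merged
```

The real problem, as the rest of the post explains, is not this union but making the history and blame of the resulting file meaningful.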

Enter version control, and the fact that quite a few of our localizers are actually following the development of Firefox upstream, patch by patch. That they’re trying to find the original bug if there’s an issue or a question. So, it’d be nice to have the history and blame in the resulting repository reflect what’s going on in mozilla-central and its dozen siblings.

Can’t we just hg convert and be done with it? Sadly, that only converts one DAG into another hg DAG, and we have a dozen. We have a dozen heads, and we want a single head in the resulting repository.

Thus, I’m working on creating that repository. One side of the task is to update that target repository as we see updates to our 12 original heads. I’m pretty close to that one.

The other task is to create a good starting point. Or, good enough. Maybe if we could just create a repo that had the same blame as we have right now? Like, not the hex or integer revisions, but annotate to the right commit message etc? That’s easy, right? Well, I thought it was, and now I’m learning.

To understand the challenges here, one needs to understand the data we’re throwing at any algorithm we write, and the mercurial code that creates the actual repository.

As of FIREFOX_AURORA_45_BASE, just the blame for the localized files for Firefox and Firefox for Android includes 2597 hg revisions. And that’s not even getting CVS history, but just what’s in our usual hg repository. Also, not including comm-central in that number. If that history was linear, things would probably be pretty easy. At least, I blame the problems I see in blame on things not being linear.

So, how non-linear is that history? The first attempt is to look at the revision set with hg log -G -r .... That creates a graph where the maximum number of parents of a single changeset is 1465. Yikes. We can’t replay that history in the target repository, as hg commits can only have 2 parents. Also, that’s clearly not real; we’ve never had that many parallel threads of development. Looking at the underlying mercurial code, it’s showing all reachable roots as parents of a changeset if you have a sparse graph. That is, it gives you all possible connections through the underlying full graph to the nodes in your selection. But that’s not what we’re interested in. We’re interested in the graph of just our nodes, going just through our nodes.

In a first step, I wrote code that removes all grandchildren from our parents. That reduces the maximum number of parents to 26. Much better, but still bad. At least it’s at a size where I can start to use graphviz to create actual visuals to inspect and analyze. Yes, I can graph that graph.
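A minimal sketch of that first step, assuming the graph is held as a changeset-to-set-of-parents mapping (the representation and names here are mine, not from the actual code): a parent that is also a grandparent is redundant and can be dropped.

```python
# Sketch: drop any parent that is also reachable in exactly one extra
# hop through another parent. `parents` maps each changeset to the set
# of its parents within the selected revision set.

def drop_grandparents(parents):
    pruned = {}
    for node, direct in parents.items():
        # Collect every grandparent reachable through the direct parents.
        grandparents = set()
        for p in direct:
            grandparents.update(parents.get(p, ()))
        # A direct parent that is also a grandparent is a leap-frog arc.
        pruned[node] = {p for p in direct if p not in grandparents}
    return pruned
```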

The resulting graph has a few features that are actually already real. mozilla-central has multiple roots. One is the initial hg import of the Firefox code. Another is including Firefox for Android in mozilla-central, which used to be an independent repository. Yet another is the merge of services/sync. And then I have two heads, which isn’t much of a problem, it’s just that their merge commit didn’t create anything to blame for, and thus doesn’t show up in my graph. Easy to get to, too.

Looking at a subset of the current graph, it’s clear that there are more arcs to remove:

Anytime you have an arc that just leap-frogs to an ancestor, you can safely remove it. I indicated some in the graph above, and you’ll find more – I was just tired of annotating in Preview. As said before, I already did that for grandchildren. Writing this post, I realize it’s probably easy enough to do it for great-grandchildren, too. But it’s also clear from the full graph that that algorithm probably won’t scale up. It seems I need to find a good spot at which to write an explicit loop detection.
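One possible shape for that explicit pass is a plain reachability-based transitive reduction: assuming the selected graph is acyclic, any parent that is still reachable through a different parent, at any distance, can be dropped. An illustrative sketch, not the actual code:

```python
# Sketch: full transitive reduction of a DAG held as a node -> set of
# parents mapping. Replaces the grandchild/great-grandchild special
# cases with a single reachability check.

def transitive_reduction(parents):
    def ancestors(node, seen):
        # Walk all ancestors of `node`, accumulating into `seen`.
        for p in parents.get(node, ()):
            if p not in seen:
                seen.add(p)
                ancestors(p, seen)
        return seen

    reduced = {}
    for node, direct in parents.items():
        reduced[node] = {
            p for p in direct
            # Keep p only if no sibling parent q already reaches it.
            if not any(p in ancestors(q, set()) for q in direct if q != p)
        }
    return reduced
```

This is quadratic-ish and recomputes ancestor sets wastefully, so for 2597 revisions it would want memoization, but it shows the loop that the per-generation pruning approximates.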

This endeavour sounds a bit academic at first, why would you care? There are two reasons:

Blame in mercurial depends on the diff that’s stored in the backend, and the diff depends on the previous content. So replaying the blame in some way out of band doesn’t actually create the same blame. My current algorithm is to just add the final lines one by one to the files, and commit. Whitespace and recurring lines get all confused by that algorithm, sadly.

Also, this isn’t a one-time effort. The set of files we need to expose in the target depends on the configuration, and often we fix the configuration of Firefox l10n way after the initial landing of the files to localize. So having a sound code-base to catch up on missed history is an important step to make the update algorithm robust. Which is really important to get it run in automation.
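For illustration, the line-by-line replay described above could be planned from annotate output roughly like this. This is a hedged sketch with hypothetical names: it assumes the input is a list of (revision, line) pairs in final file order, as hg annotate would give you, and batches consecutive lines blamed on the same revision into one synthetic commit.

```python
# Sketch: turn annotate output into a plan of synthetic commits. Each
# plan entry is (revision, lines-to-append); replaying the plan top to
# bottom rebuilds the final file, attributing each run of lines to the
# changeset that blame points at (modulo the whitespace and
# recurring-line confusion mentioned in the post).

def plan_commits(annotated):
    plan = []
    for rev, line in annotated:
        if plan and plan[-1][0] == rev:
            # Same blamed revision as the previous line: same commit.
            plan[-1][1].append(line)
        else:
            plan.append((rev, [line]))
    return plan
```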

PS: The tune for this post is “That Smell” by Lynyrd Skynyrd.

Planet MozillaWhy is the git-cinnabar master branch slower to clone?

Apart from the memory considerations, one thing that the data presented in the “When the memory allocator works against you” post showed, and that I haven’t touched on in the followup posts, is that there is a large difference in the time it takes to clone mozilla-central with git-cinnabar 0.4.0 vs. the master branch.

One thing that was mentioned in the first followup is that reducing the amount of realloc and substring copies made the cloning more than 15 minutes faster on master. But the same code exists in 0.4.0, so this isn’t part of the difference.

So what’s going on? Looking at the CPU usage during the clone is enlightening.

On 0.4.0:

On master:

(Note: the data gathering is flawed in some ways, which explains why the git-remote-hg process goes above 100%, which is not possible for this python process. The data is however good enough for the high level analysis that follows, so I didn’t bother to get something more accurate)

On 0.4.0, the git-cinnabar-helper process was saturating one CPU core during the File import phase, and the git-remote-hg process was saturating one CPU core during the Manifest import phase. Overall, the sum of both processes usually used more than one and a half core.

On master, however, the total of both processes barely uses more than one CPU core.

What happened?

This and that happened.

Essentially, before those changes, git-remote-hg would send instructions to git-fast-import (technically, git-cinnabar-helper, but in this case it’s only used as a wrapper for git-fast-import), and use marks to track the git objects that git-fast-import created.

After those changes, git-remote-hg asks git-fast-import the git object SHA1 of objects it just asked to be created. In other words, those changes replaced something asynchronous with something synchronous: while it used to be possible for git-remote-hg to work on the next file/manifest/changeset while git-fast-import was working on the previous one, it now waits.

The changes helped simplify the python code, but made the overall clone process much slower.

If I’m not mistaken, the only real use for that information is for the mapping of mercurial to git SHA1s, which is actually rarely used during the clone, except at the end, when storing it. So what I’m planning to do is to move that mapping to the git-cinnabar-helper process, which, incidentally, will kill not 2, but 3 birds with 1 stone:

  • It will restore the asynchronicity, obviously (at least, that’s the expected main outcome).
  • Storing the mapping in the git-cinnabar-helper process is very likely to take less memory than what it currently takes in the git-remote-hg process. Even if it doesn’t (which I doubt), that should still help stay under the 2GB limit of 32-bit processes.
  • The whole thing that spikes memory usage during the finalization phase, as seen in previous post, will just go away, because the git-cinnabar-helper process will just have prepared the git notes-like tree on its own.

So expect git-cinnabar 0.5 to get moar faster, and to use moar less memory.

Planet MozillaAnalyzing git-cinnabar memory use

In the previous post, I was looking at the allocations git-cinnabar makes. While I had the data, I figured I’d also look at how the memory use correlates with expectations based on repository data, to put things in perspective.

As a reminder, this is what the allocations look like (horizontal axis being the number of allocator function calls):

There are 7 different phases happening during a git clone using git-cinnabar, most of which can easily be identified on the graph above:

  • Negotiation.

    During this phase, git-cinnabar talks to the mercurial server to determine what needs to be pulled. Once that is done, a getbundle request is emitted, whose response is read in the next three phases. This phase is essentially invisible on the graph.

  • Reading changeset data.

    The first thing that a mercurial server sends in the response for a getbundle request is changesets. They are sent in the RevChunk format. Translated to git, they become commit objects. But to create commit objects, we need the entire corresponding trees and files (blobs), which we don’t have yet. So we keep this data in memory.

    In the git clone analyzed here, there are 345643 changesets loaded in memory. Their raw size in RawChunk format is 237MB. I think by the end of this phase, we made 20 million allocator calls, and have about 300MB of live data in about 840k allocations. (No certainty because I don’t actually have definite data that would allow me to correlate between the phases and allocator calls, and the memory usage change between this phase and the next is not as clear-cut as with other phases). This puts us at less than 3 live allocations per changeset, with “only” about 60MB overhead over the raw data.

  • Reading manifest data.

    In the stream we receive, manifests follow changesets. Each changeset points to one manifest; several changesets can point to the same manifest. Manifests describe the content of the entire source code tree in a similar manner to git trees, except they are flat (there’s one manifest for the entire tree, where git trees would reference other git trees for subdirectories). And like git trees, they only map file paths to file SHA1s. The way they are currently stored by git-cinnabar (which is planned to change) requires knowing the corresponding git SHA1s for those files, and we haven’t got those yet, so again, we keep everything in memory.

    In the git clone analyzed here, there are 345398 manifests loaded in memory. Their raw size in RawChunk format is 1.18GB. By the end of this phase, we made 23 million more allocator calls, and have about 1.52GB of live data in about 1.86M allocations. We’re still at less than 3 live allocations for each object (changeset or manifest) we’re keeping in memory, and barely over 100MB of overhead over the raw data, which, on average puts the overhead at 150 bytes per object.

    The three phases so far are relatively fast and account for a small part of the overall process, so they don’t appear clear-cut to each other, and don’t take much space on the graph.
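The per-object overhead figures above can be sanity-checked with some back-of-the-envelope arithmetic (all numbers taken from the post):

```python
changesets = 345643
manifests = 345398
objects = changesets + manifests

# "barely over 100MB" of overhead over the raw data by the end of
# the manifest-reading phase
overhead = 100 * 1024 * 1024
per_object = overhead / objects
assert 140 < per_object < 160  # ~150 bytes of overhead per object, as stated

# 1.86M live allocations for ~691k objects: still under 3 each
assert 1.86e6 / objects < 3
```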

  • Reading and Importing files.

    After the manifests, we finally get files data, grouped by path, such that we get all the file revisions of e.g. .cargo/.gitignore, followed by all the file revisions of .cargo/, .clang-format, and so on. The data here doesn’t depend on anything else, so we can finally directly import the data.

    This means that for each revision, we actually expand the RawChunk into the full file data (RawChunks contain patches against a previous revision), and don’t keep the RawChunk around. We also don’t keep the full data after it was sent to the git-cinnabar-helper process (as far as cloning is concerned, it’s essentially a wrapper for git-fast-import), except for the previous revision of the file, which is likely the patch base for the next revision.

    We however keep in memory one or two things for each file revision: a mapping of its mercurial SHA1 and the corresponding git SHA1 of the imported data, and, when there is one, the file metadata (containing information about file copy/renames) that lives as a header in the file data in mercurial, but can’t be stored in the corresponding git blobs, otherwise we’d have irrelevant data in checkouts.

    On the graph, this is where there is a steady and rather long increase of both live allocations and memory usage, in stairs for the latter.

    In the git clone analyzed here, there are 2.02M file revisions, 78k of which have copy/move metadata for a cumulated size of 8.5MB of metadata. The raw size of the file revisions in RawChunk format is 3.85GB. The expanded data size is 67GB. By the end of this phase, we made 622 million more allocator calls, and peaked at about 2.05GB of live data in about 6.9M allocations. Compared to the beginning of this phase, that added about 530MB in 5 million allocations.

    File metadata is stored in memory as python dicts, with 2 entries each, instead of raw form for convenience and future-proofing, so that would be at least 3 allocations each: one for each value, one for the dict, and maybe one for the dict storage; their keys are all the same and are probably interned by python, so wouldn’t count.

    As mentioned above, we store a mapping of mercurial to git SHA1s, so for each file that makes 2 allocations, 4.04M total. Plus the 230k or 310k from metadata (3 or 4 allocations for each of the 78k metadata dicts). Let’s say 4.45M total. We’re short 550k allocations, but considering the numbers involved, it would take less than one allocation per file on average to go over this count.

    As for memory size, per this answer on stackoverflow, python strings have an overhead of 37 bytes, so each SHA1 (kept in hex form) will take 77 bytes (Note, that’s partly why I didn’t particularly care about storing them in binary form, that would only save 25%, not 50%). That’s 311MB just for the SHA1s, to which the size of the mapping dict needs to be added. If it were a plain array of pointers to keys and values, it would take 2 * 8 bytes per file, or about 32MB. But that would be a hash table with no room for more items (By the way, I suspect the stairs that can be seen on the requested and in-use bytes are the hash table being realloc()ed). Plus at least 290 bytes per dict for each of the 78k metadata, which is an additional 22MB. All in all, 530MB doesn’t seem too much of a stretch.
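Those size estimates can be reproduced with a few lines of arithmetic (the 37-byte string overhead is the CPython 2 figure from the cited answer; the other numbers come from the post):

```python
STR_OVERHEAD = 37        # CPython 2 str overhead, per the cited answer
SHA1_HEX_LEN = 40        # SHA1s are kept in hex form
sha1_str = STR_OVERHEAD + SHA1_HEX_LEN  # 77 bytes per SHA1 string

file_revisions = 2_020_000
# One mercurial SHA1 and one git SHA1 kept per file revision.
sha1_bytes = 2 * file_revisions * sha1_str
assert round(sha1_bytes / 1e6) == 311    # ~311MB just for the SHA1s

# A plain array of key/value pointers for the mapping dict.
pointer_bytes = 2 * 8 * file_revisions
assert round(pointer_bytes / 1e6) == 32  # ~32MB

# At least 290 bytes per dict for each of the 78k metadata entries.
metadata_bytes = 78_000 * 290
assert 22 < metadata_bytes / 1e6 < 23    # the "additional 22MB"
```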

  • Importing manifests.

    At this point, we’re done receiving data from the server, so we begin by dropping objects related to the bundle we got from the server. On the graph, I assume this is the big dip that can be observed after the initial increase in memory use, bringing us down to 5.6 million allocations and 1.92GB.

    Now begins the most time consuming process, as far as mozilla-central is concerned: transforming the manifests into git trees, while also storing enough data to be able to reconstruct manifests later (which is required to be able to pull from the mercurial server after the clone).

    So for each manifest, we expand the RawChunk into the full manifest data, and generate new git trees from that. The latter is mostly performed by the git-cinnabar-helper process. Once we’re done pushing data about a manifest to that process, we drop the corresponding data, except when we know it will be required later as the delta base for a subsequent RevChunk (which can happen in bundle2).

    As with file revisions, for each manifest, we keep track of the mapping of SHA1s between mercurial and git. We also keep a DAG of the manifests history (contrary to git trees, mercurial manifests track their ancestry; files do too, but git-cinnabar doesn’t actually keep track of that separately; it just relies on the manifests data to infer file ancestry).

    On the graph, this is where the number of live allocations increases while both requested and in-use bytes decrease, noisily.

    By the end of this phase, we made about 1 billion more allocator calls. Requested allocations went down to 1.02GB, for close to 7 million live allocations. Compared to the end of the dip at the beginning of this phase, that added 1.4 million allocations, and released 900MB. By now, we expect everything from the “Reading manifests” phase to have been released, which means we allocated around 620MB (1.52GB – 900MB), for a total of 3.26M additional allocations (1.4M + 1.86M).

    We have a dict for the SHA1s mapping (345k * 77 * 2 for strings, plus the hash table with 345k items, so at least 60MB), and the DAG, which, now that I’m looking at memory usage, I figure has possibly one of the worst structures, using 2 sets for each node (at least 232 bytes per set, that’s at least 160MB, plus 2 hash tables with 345k items). I think 250MB for those data structures would be largely underestimated. It’s not hard to imagine them taking 620MB, because really, that DAG implementation is awful. The number of allocations expected from them would be around 1.4M (4 * 345k), but I might be missing something. That’s way less than the actual number, so it would be interesting to take a closer look, but not before doing something about the DAG itself.

    Fun fact: the amount of data we’re dealing with in this phase (the expanded size of all the manifests) is close to 2.9TB (yes, terabytes). With about 4700 seconds spent on this phase on a real clone (less with the release branch), we’re still handling more than 615MB per second.

  • Importing changesets.

    This is where we finally create the git commits corresponding to the mercurial changesets. For each changeset, we expand its RawChunk, find the git tree we created in the previous phase that corresponds to the associated manifest, and create a git commit for that tree, with the right date, author, and commit message. For data that appears in the mercurial changeset that can’t be stored or doesn’t make sense to store in the git commit (e.g. the manifest SHA1, the list of changed files[*], or some extra metadata like the source of rebases), we keep some metadata we’ll store in git notes later on.

    [*] Fun fact: the list of changed files stored in mercurial changesets does not necessarily match the list of files in a `git diff` between the corresponding git commit and its parents, for essentially two reasons:

    • Old buggy versions of mercurial have generated erroneous lists that are now there forever (they are part of what makes the changeset SHA1).
    • Mercurial may create new revisions for files even when the file content is not modified, most notably during merges (but that also happened on non-merges due to, presumably, bugs).
    … so we keep it verbatim.

    On the graph, this is where both requested and in-use bytes are only slightly increasing.

    By the end of this phase, we made about half a billion more allocator calls. Requested allocations went up to 1.06GB, for close to 7.7 million live allocations. Compared to the end of the previous phase, that added 700k allocations, and 400MB. By now, we expect everything from the “Reading changesets” phase to have been released (at least the raw data we kept there), which means we may have allocated at most around 700MB (400MB + 300MB), for a total of 1.5M additional allocations (700k + 840k).

    All these are extra data we keep for the next and final phase. It’s hard to evaluate the exact size we’d expect here in memory, but if we divide by the number of changesets (345k), that’s less than 5 allocations per changeset and less than 2KB per changeset, which is low enough not to raise eyebrows, at least for now.

  • Finalizing the clone.

    The final phase is where we actually go ahead storing the mappings between mercurial and git SHA1s (all 2.7M of them), the git notes where we store the data necessary to recreate mercurial changesets from git commits, and a cache for mercurial tags.

    On the graph, this is where the requested and in-use bytes, as well as the number of live allocations peak like crazy (up to 21M allocations for 2.27GB requested).

    This is very much unwanted, but easily explained with the current state of the code. The way the mappings between mercurial and git SHA1s are stored is via a tree similar to how git notes are stored. So for each mercurial SHA1, we have a file that points to the corresponding git SHA1 through git links for commits or directly for blobs (look at the output of git ls-tree -r refs/cinnabar/metadata^3 if you’re curious about the details). If I remember correctly, it’s faster if the tree is created with an ordered list of paths, so the code created a list of paths, and then sorted it to send commands to create the tree. The former creates a new str of length 42 and a tuple of 3 elements for each and every one of the 2.7M mappings. With the 37 bytes overhead by str instance and the 56 + 3 * 8 bytes per tuple, we have at least 429MB wasted. Creating the tree itself keeps the corresponding fast-import commands in a buffer, where each command is going to be a tuple of 2 elements: a pointer to a method, and a str of length between 90 and 93. That’s at least another 440MB wasted.

    I already fixed the first half, but the second half still needs addressing.

Overall, except for the stupid spike during the final phase, the manifest DAG and the glibc allocator runaway memory use described in previous posts, there is nothing terribly bad with the git-cinnabar memory usage, all things considered. Mozilla-central is just big.

The spike is already half addressed, and work is under way for the glibc allocator runaway memory use. The manifest DAG, interestingly, is actually mostly useless. It’s only used to track the heads of the DAG, and it’s very much possible to track heads of a DAG without actually storing the entire DAG. In fact, that’s what git-cinnabar already does for changeset heads… so we would only need to do the same for manifest heads.
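Tracking the heads of a DAG without storing the DAG can be as simple as removing a node's parents from the head set as each node arrives, assuming nodes come in topological order (parents before children, as in a mercurial bundle). A minimal sketch:

```python
def track_heads(nodes):
    """Maintain DAG heads incrementally without storing the DAG.

    `nodes` is an iterable of (node, parents) pairs in topological
    order (parents seen before children). Only the current head set
    is kept in memory.
    """
    heads = set()
    for node, parents in nodes:
        heads.difference_update(parents)  # a parent with a child is no head
        heads.add(node)
    return heads
```

For a root "a" with two children "b" and "c", the surviving heads are "b" and "c".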

One could argue that the 1.4GB of raw RevChunk data we’re keeping in memory for later use could be kept on disk instead. I haven’t done this so far because I didn’t want to have to handle temporary files (and answer questions like “where to put them?”, “what if there isn’t enough disk space there?”, “what if disk access is slow?”, etc.). But the majority of this data is from manifests. I’m already planning changes in how git-cinnabar stores manifests data that will actually allow importing them directly, instead of keeping them in memory until files are imported. This would instantly remove 1.18GB of memory usage. The downside, however, is that this would be more CPU intensive: importing changesets will require creating the corresponding git trees, and getting the stored manifest data. I think it’s worth it, though.

Finally, one thing that isn’t obvious here, but that was found while analyzing why RSS would be going up despite memory usage going down, is that git-cinnabar is doing way too many reallocations and substring allocations.

So let’s look at two metrics that hopefully will highlight the problem:

  • The cumulated amount of requested memory. That is, the sum of all sizes ever given to malloc, realloc, calloc, etc.
  • The compensated cumulated amount of requested memory (naming is hard). That is, the sum of all sizes ever given to malloc, calloc, etc. except realloc. For realloc, we only count the delta in size between what the size was before and after the realloc.

Assuming all the requested memory is filled at some point, the former gives us an upper bound to the amount of memory that is ever filled or copied (the amount that would be filled if no realloc was ever in-place), while the latter gives us a lower bound (the amount that would be filled or copied if all reallocs were in-place).
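Both metrics can be computed in a single pass over an allocation log; the only difference is how realloc is counted. A minimal sketch, with a simplified event format:

```python
def requested_totals(events):
    """Compute the cumulated and compensated cumulated requested memory.

    `events` are (op, old_size, new_size) tuples, where op is "malloc",
    "calloc" or "realloc"; old_size is only meaningful for realloc.
    """
    cumulated = 0    # upper bound: every allocation counted in full
    compensated = 0  # lower bound: realloc only counts the size delta
    for op, old_size, new_size in events:
        cumulated += new_size
        if op == "realloc":
            compensated += new_size - old_size
        else:
            compensated += new_size
    return cumulated, compensated
```

For a 100-byte malloc, a realloc of that block to 150 bytes, and a 50-byte malloc, the upper bound is 300 bytes and the lower bound 200.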

Ideally, we’d want the upper and lower bounds to be close to each other (indicating few realloc calls), and the total amount at the end of the process to be as close as possible to the amount of data we’re handling (which we’ve seen is around 3TB).

… and this is clearly bad. Like, really bad. But we already knew that from the previous post, although it’s nice to put numbers on it. The lower bound is about twice the amount of data we’re handling, and the upper bound is more than 10 times that amount. Clearly, we can do better.

We’ll see how things evolve after the necessary code changes happen. Stay tuned.

Planet MozillaMarch Privacy Lab: Cryptographic Engineering for Everyone

March Privacy Lab: Cryptographic Engineering for Everyone Our March speaker is Justin Troutman, creator of PocketBlock - a visual, gamified curriculum that makes cryptographic engineering fun. It's suitable for everyone from an...

Planet MozillaMarch Privacy Lab: Cryptographic Engineering for Everyone 3.22.17

March Privacy Lab: Cryptographic Engineering for Everyone 3.22.17 Our March speaker is Justin Troutman, creator of PocketBlock - a visual, gamified curriculum that makes cryptographic engineering fun. It's suitable for everyone from an...

Planet MozillaLinking to GitHub issues from

I wanted to draw your attention to a lovely, new feature in BMO which went out with this week's push: auto-linking to GitHub issues.

Now, in a Bugzilla bug's comments, if you reference a GitHub issue, such as mozilla-bteam/bmo#26, Bugzilla converts that to a link to the issue on GitHub.

References to GitHub Issues now become links

This will save you some typing in the future, and if you used this format in earlier comments, they'll be linkified as well.
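The matching itself boils down to a small regular expression; a sketch of what such a linkifier could look like (the actual BMO implementation is Perl and may match slightly differently):

```python
import re

# Matches org/repo#number references, e.g. mozilla-bteam/bmo#26
GITHUB_ISSUE_RE = re.compile(r'\b([\w.-]+)/([\w.-]+)#(\d+)\b')

def linkify(text):
    """Replace GitHub issue references with HTML links (illustrative)."""
    return GITHUB_ISSUE_RE.sub(
        r'<a href="https://github.com/\1/\2/issues/\3">\1/\2#\3</a>', text)
```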

Thanks to Sebastin Santy for his patch, and Xidorn Quan for the suggestion and code review.

If you come across a false positive, please file a bug against

The original bug: 1309112 - Detect and linkify GitHub issue in comment

Planet MozillaLoad Testing at Mozilla

After a stabilization phase, I am happy to announce that Molotov 1.0 has been released!

(Logo by Juan Pablo Bravo)

This release is an excellent opportunity to explain a little bit how we do load testing at Mozilla, and what we're planning to do in 2017 to improve the process.

I am talking here specifically about load testing our HTTP services, and when this blog post mentions what Mozilla is doing there, it refers mainly to the Mozilla QA team, helped by the Services developer teams that work on some of our web services.

What's Molotov?

Molotov is a simple load testing tool

Molotov is a minimalist load testing tool you can use to load test an HTTP API using Python. Molotov leverages Python 3.5+ asyncio and uses aiohttp to send some HTTP requests.

Writing load tests with Molotov is done by decorating asynchronous Python functions with the @scenario function:

from molotov import scenario

@scenario(weight=100)
async def my_test(session):
    async with session.get('http://localhost:8080') as resp:
        assert resp.status == 200

When this script is executed with the molotov command, the my_test function is going to be repeatedly called to perform the load test.

Molotov tries to be as transparent as possible and just hands over session objects from the aiohttp.client module.

The full documentation is here:

Using Molotov is the first step to load test our services. From our laptops, we can run that script and hammer a service to make sure it can sustain some minimal load.

What Molotov is not

Molotov is not a fully-featured load testing solution

Load testing applications usually come with high-level features to understand how the tested app is performing. Things like performance metrics are displayed when you run a test, like what Apache Bench does by displaying how many requests it was able to perform and their average response time.

But when you are testing web services stacks, the metrics you are going to collect from each client attacking your service will include a lot of variation because of the network and the clients’ CPU overhead. In other words, you cannot guarantee reproducibility from one test to the next to track precisely how your app evolves over time.

Adding metrics directly in the tested application itself is much more reliable, and that's what we're doing these days at Mozilla.

That's also why I have not included any client-side metrics in Molotov, besides a very simple StatsD integration. When we run Molotov at Mozilla, we mostly watch our centralized metrics dashboards and see how the tested app behaves regarding CPU, RAM, Requests-Per-Second, etc.

Of course, running a load test from a laptop is less than ideal. We want to avoid the hassle of asking people to install Molotov & all the dependencies a test requires every time they want to load test a deployment -- and run something from their desktop. Doing load tests occasionally from your laptop is fine, but it's not a sustainable process.

And even though a single laptop can generate a lot of load (in one project, we're generating around 30k requests per second from one laptop, and happily killing the service), we also want to do some distributed load testing.

We want to run Molotov from the cloud. And that's what we do, thanks to Docker and Loads.

Molotov & Docker

Since running the Molotov command mostly consists of using the right command-line options and passing a test script, we've added in Molotov a second command-line utility called moloslave.

Moloslave takes the URL of a git repository and will clone it and run the molotov test that's in it by reading a configuration file. The configuration file is a simple JSON file that needs to be at the root of the repo, like how you would do with Travis-CI or other tools.
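The post doesn't show that file, but to make the idea concrete, such a configuration could look something like the following (the field names here are purely illustrative, not Molotov's actual schema; check the Molotov documentation for the real format):

```json
{
  "molotov": {
    "requirements": ["aiohttp"],
    "tests": {
      "test": "molotov loadtest.py --workers 10 --duration 60"
    }
  }
}
```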


From there, running in a Docker can be done with a generic image that has Molotov preinstalled and picks the test by cloning a repo.


Having Molotov running in Docker solves all the dependency issues you can have when you are running a Python app. We can specify all the requirements in the configuration file and have moloslave install them. The generic Docker image I have pushed to the Docker Hub is a standard Python 3 environment that works in most cases, but it's easy to create another Docker image when a very specific environment is required.

But the bottom line is that anyone from any OS can "docker run" a load test by simply passing the load test Git URL into an environment variable.

Molotov & Loads

Once you can run load tests using Docker images, you can use specialized Linux distributions like CoreOS to run them.

Thanks to boto, you can script the Amazon Cloud and deploy hundreds of CoreOS boxes and run Docker images in them.

That's what the Loads project is -- an orchestrator that will run hundreds of CoreOS EC2 instances to perform a massively distributed load test.

Someone who wants to run such a test passes a configuration to a Loads Broker running in the Amazon Cloud; the configuration says where to find the Docker image that runs the Molotov test, and how long the test needs to run.

That allows us to run hours-long tests without having to depend on a laptop to orchestrate it.

But the Loads orchestrator has been suffering from reliability issues. Sometimes, EC2 instances on AWS become unresponsive, and Loads no longer knows what's happening in a load test. We've suffered from that and had to create specific code to clean up boxes and avoid keeping hundreds of zombie instances sticking around.

But even with these issues, we're able to perform massive load tests distributed across hundreds of boxes.

Next Steps

At Mozilla, we are in the process of gradually switching all our load testing scripts to Molotov. Using a single tool everywhere will allow us to simplify the whole process that takes that script and performs a distributed load test.

I am also investigating improving metrics. One idea is to automatically collect all the metrics that are generated during a load test and push them to a specialized performance trend dashboard.

We're also looking at switching from Loads to Ardere. Ardere is a new project that aims at leveraging Amazon ECS. ECS is an orchestrator we can use to create and manage EC2 instances. We've tried ECS in the past, but it was not suited to run hundreds of boxes rapidly for a load test. But ECS has improved a lot, and we started a prototype that leverages it and it looks promising.

For everything related to our Load testing effort at Mozilla, you can look at

And of course, everything is open source and open to contributions.

Planet MozillaBugzilla Project Meeting, 22 Mar 2017

Bugzilla Project Meeting The Bugzilla Project developers meeting.

Planet WebKitRelease Notes for Safari Technology Preview 26

Safari Technology Preview Release 26 is now available for download for macOS Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 213542-213822.



  • Added support for history.scrollRestoration (r213590)
  • Aligned Document.elementFromPoint() with the CSSOM specification (r213646)
  • Changed the parameter to input.setCustomValidity() to not be nullable (r213606)
  • Fixed transitions and animations of background-position with right-relative and bottom-relative values (r213603)
  • Fixed an issue where WebSQL directories were not removed when removing website data (r213547)
  • Made the XMLHttpRequest method setRequestHeader() use “,” (including a space) as the separator (r213766)
  • Prevented displaying the label of an <option> element in quirks mode (r213542)
  • Prevented extra downloads of preloaded CSS (r213672)
  • Dropped support for non-standard document.all.tags() (r213619)


  • Implemented stroke-width CSS property (r213634)


  • Enabled asynchronous image decoding for large images (r213764, r213563)
  • Fixed memory estimate for layers supporting subpixel-antialiased text (r213767)
  • Fixed columns getting clipped horizontally in CSS Multicolumn (r213593)

Web Inspector

  • Added DOM breakpoints for pausing on node and subtree modifications (r213626)
  • Added XHR breakpoints for pausing on requests by URL (r213691)
  • Added a “Create Breakpoint” context menu item for linked source locations (r213617)
  • Added settings for controlling Styles sidebar intelligence (r213635)
  • Added cache source information (Memory Cache or Disk Cache) in the Network tab (r213621)
  • Added protocol, remote address, priority, and connection ID in the Network tab (r213682)
  • Added individual messages to the content pane for a WebSocket (r213666)
  • Fixed an issue where the DOM tree is broken if an element has a debounce attribute (r213565)
  • Fixed an issue in the Resources tab navigation bar allowing the same file from a contextual menu item to be saved more than once (r213738)
  • Improved the layout of the compositing reasons in the Layers sidebar popover (r213739)


  • Fixed an issue where automation commands hang making it impossible to navigate back or forward (r213790)


  • Implemented ECDH ImportKey and ExportKey operations (r213560)

Planet MozillaWhen the memory allocator works against you, part 2

This is a followup to the “When the memory allocator works against you” post from a few days ago. You may want to read that one first if you haven’t, and come back. In case you don’t or didn’t read it, it was all about memory consumption during a git clone of the mozilla-central mercurial repository using git-cinnabar, and how the glibc memory allocator is using more than one would expect. This post is going to explore how/why it’s happening.

I happen to have written a basic memory allocation logger for Firefox, so I used it to log all the allocations happening during a git clone exhibiting the runaway memory increase behavior (using a python that doesn’t use its own allocator for small allocations).

The result was a 6.5 GB log file (compressed with zstd; 125 GB uncompressed!) with 2.7 billion calls to malloc, calloc, free, and realloc, recorded across (mostly) 2 processes (the python git-remote-hg process and the native git-cinnabar-helper process; there are other short-lived processes involved, but they do less than 5000 calls in total).

The vast majority of those 2.7 billion calls is done by the python git-remote-hg process: 2.34 billion calls. We’ll only focus on this process.

Replaying those 2.34 billion calls with a program that reads the log allowed me to reproduce the runaway memory increase behavior to some extent. I went the extra mile and modified glibc’s realloc code in memory so it doesn’t call memcpy, to make things faster. I also ran under setarch x86_64 -R to disable ASLR for reproducible results (two consecutive runs return the exact same numbers, which doesn’t happen with ASLR enabled).

I also modified the program to report the number of live allocations (allocations that haven’t been freed yet), and the cumulated size of the actually requested allocations (that is, the sum of all the sizes given to malloc, calloc, and realloc calls for live allocations, as opposed to what the memory allocator really allocated, which can be more, per malloc_usable_size).
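The bookkeeping for those two numbers is straightforward; a minimal Python sketch (the actual replayer was presumably native code, given the 2.34 billion calls):

```python
def replay(events):
    """Track live allocations and their cumulated requested size.

    `events` are (op, ptr, old_ptr, size) tuples for "malloc"/"calloc",
    "realloc" and "free" calls, using the logged pointer values as keys.
    """
    live = {}  # ptr -> requested size
    for op, ptr, old_ptr, size in events:
        if op == "free":
            live.pop(ptr, None)
        elif op == "realloc":
            live.pop(old_ptr, None)  # the old block is gone either way
            live[ptr] = size
        else:  # malloc, calloc
            live[ptr] = size
    return len(live), sum(live.values())
```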

RSS was not tracked because, to make things faster, the allocations are never filled, such that pages for large allocations are never dirtied, and RSS doesn’t grow as much because of that.

Full disclosure: it turns out the “system bytes” and “in-use bytes” numbers I had been collecting in the previous post were smaller than what they should have been, and were excluding memory that the glibc memory allocator would have mmap()ed. That however doesn’t affect the trends that had been witnessed. The data below is corrected.

(Note that in the graph above and the graphs that follow, the horizontal axis represents the number of allocator function calls performed)

While I was at it, I figured I’d check how mozjemalloc performs, and it has a better behavior (although it has more overhead).

What doesn’t appear on this graph, though, is that mozjemalloc also tells the OS to drop some pages even if it keeps them mapped (madvise(MADV_DONTNEED)), so in practice, it is possible the actual RSS decreases too.

And jemalloc 4.5:

(It looks like it has better memory usage than mozjemalloc for this use case, but its stats are being thrown off at some point; I’ll have to investigate.)

Going back to the first graph, let’s get a closer look at what the allocations look like when the “system bytes” number is increasing a lot. The highlights in the following graphs indicate the range the next graph will be showing.

So what we have here is a bunch of small allocations (small enough that they don’t seem to move the “requested” line; most are under 512 bytes, so under normal circumstances they would be allocated by python, and a few are between 512 and 2048 bytes), and a few large allocations, one of which triggers a bump in memory use.

What can appear weird at first glance is that we have a large allocation not requiring more system memory, later followed by a smaller one that does. What the allocations actually look like is the following:

void *ptr0 = malloc(4850928); // #1391340418
void *ptr1 = realloc(some_old_ptr, 8000835); // #1391340419
free(ptr0); // #1391340420
ptr1 = realloc(ptr1, 8000925); // #1391340421
/* ... */
void *ptrn = malloc(879931); // #1391340465
ptr1 = realloc(ptr1, 8880819); // #1391340466
free(ptrn); // #1391340467

As it turns out, inspecting all the live allocations at that point, while there was a hole large enough to do the first two reallocs (the second actually happens in place), at the point of the third one, there wasn’t a large enough hole to fit 8.8MB.

What inspecting the live allocations also reveals, is that there is a large number of large holes between all the allocated memory ranges, presumably coming from previous similar patterns. There are, in fact, 91 holes larger than 1MB, 24 of which are larger than 8MB. It’s the accumulation of those holes that can’t be used to fulfil larger allocations that makes the memory use explode. And there aren’t enough small allocations happening to fill those holes. In fact, the global trend is for less and less memory to be allocated, so, smaller holes are also being poked all the time.

Really, it’s all a straightforward case of memory fragmentation. The reason it tends not to happen with jemalloc is that jemalloc groups allocations by sizes, which the glibc allocator doesn’t seem to be doing. The following is how we got a hole that couldn’t fit the 8.8MB allocation in the first place:

ptr1 = realloc(ptr1, 8880467); // #1391324989; ptr1 is 0x5555de145600
/* ... */
void *ptrx = malloc(232); // #1391325001; ptrx is 0x5555de9bd760; that is 13 bytes after the end of ptr1.
/* ... */
free(ptr1); // #1391325728; this leaves a hole of 8880480 bytes at 0x5555de145600.

All would go well if ptrx were free()d, but it looks like it’s long-lived. At least, it’s still allocated by the time we reach allocator call #1391340466. And since the hole is 339 bytes too short for the requested allocation, the allocator has no choice but to request more memory from the system.

What’s bothersome, though, is that the allocator chose to place ptrx in the space right after ptr1, even though it had placed similarly sized buffers (allocated after ptr1 and before ptrx) in completely different places, and even though there were plenty of holes in the allocated memory where ptrx could have fit.

Interestingly enough, ptrx is a 232-byte allocation, which means that under normal circumstances, python itself would be allocating it. In all likelihood, when the python allocator is enabled, it’s allocations larger than 512 bytes that become obstacles to the larger allocations. Another possibility is that the 256KB fragments that the python allocator itself allocates to hold its own allocations become the obstacles (my original hypothesis). I think the former is more likely, though, putting the blame back entirely on glibc’s shoulders.
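The mechanics can be reproduced with a toy first-fit allocator. This is a deliberate oversimplification of what glibc actually does (no coalescing, no alignment, no size classes), with the sizes borrowed from the trace above:

```python
# Toy first-fit allocator reproducing the pattern above: a small,
# long-lived allocation pins a hole that later turns out to be 339 bytes
# too short, forcing the heap to grow instead.
class ToyAllocator:
    def __init__(self):
        self.holes = []  # list of (start, size) free ranges
        self.top = 0     # end of the heap ("system bytes")

    def alloc(self, size):
        for i, (start, hole_size) in enumerate(self.holes):
            if hole_size >= size:  # first fit
                if hole_size == size:
                    self.holes.pop(i)
                else:
                    self.holes[i] = (start + size, hole_size - size)
                return start
        start = self.top  # no hole fits: grow the heap
        self.top += size
        return start

    def free(self, start, size):
        self.holes.append((start, size))  # no coalescing, for brevity

a = ToyAllocator()
ptr1 = a.alloc(8880480)  # large buffer
ptrx = a.alloc(232)      # small, long-lived; lands right after ptr1
a.free(ptr1, 8880480)    # leaves an 8880480-byte hole
top_before = a.top
ptr2 = a.alloc(8880819)  # 339 bytes more than the hole can hold
assert ptr2 == top_before               # the hole was skipped...
assert a.top == top_before + 8880819    # ...and the heap grew instead
```

The hole stays behind, unusable for any allocation larger than itself, which is exactly how those 91 holes accumulate.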

Now, it looks like the allocation pattern we have here is suboptimal, so I re-ran a git clone under a debugger to catch when a realloc() for 8880819 bytes happens (the size is peculiar enough that it only happened once in the allocation log). But doing that with a conditional breakpoint is just too slow, so I injected a realloc wrapper with LD_PRELOAD that sends a SIGTRAP signal to the process, so that an attached debugger can catch it.

Thanks to the support for python in gdb, it was then possible to pinpoint the exact python instructions that made the realloc() call (it didn’t come as a surprise; in fact, that was one of the places I had in mind, but I wanted definite proof):

new = ''
end = 0
# ...
for diff in RevDiff(rev_patch):
    new += data[end:diff.start]
    new += diff.text_data
    end = diff.end
    # ...
new += data[end:]

What happens here is that we’re recreating a mercurial manifest that we got from the server in patch form, applying the patch against a previous manifest. So data contains the original manifest, and rev_patch the patch. The patch essentially contains instructions of the form “replace n bytes at offset o with the content c”.

The code here just does that in the most straightforward way, implementation-wise, but also, it turns out, possibly the worst way.

So let’s unroll this loop over a couple iterations:

new = ''

This allocates an empty str object. In fact, this doesn’t actually allocate anything, and only creates a pointer to an interned representation of an empty string.

new += data[0:diff.start]

This is where things start to get awful. data is a str, so data[0:diff.start] creates a new, separate, str for the substring. One allocation, one copy.

Then it appends that substring to new. Fortunately, CPython is smart enough to just assign data[0:diff.start] to new. This can easily be verified with the CPython REPL:

>>> foo = ''
>>> bar = 'bar'
>>> foo += bar
>>> foo is bar

(and this is not happening because the example string is small here; it also happens with larger strings, like 'bar' * 42000)

Back to our loop:

new += diff.text_data

Now, new is realloc()ated to have the right size to fit the appended text, and the contents of diff.text_data are copied. One realloc, one copy.

Let’s go for a second iteration:

new += data[diff.end:new_diff.start]

Here again, we’re doing an allocation for the substring, and one copy. Then new is realloc()ated again to append the substring, which is an additional copy.

new += new_diff.text_data

new is realloc()ated yet again to append the contents of new_diff.text_data.

We now finish with:

new += data[new_diff.end:]

which, again, creates a substring from the data, and then proceeds to realloc()ate new one more freaking time.

That’s a lot of malloc()s and realloc()s to be doing…

  • It is possible to limit the number of realloc()s by using new = bytearray() instead of new = ''. I haven’t looked at what the growth strategy is in the CPython code, but, for example, appending a 4KB string to a 500KB bytearray makes it grow to 600KB instead of 504KB, as happens when using str.
  • It is possible to avoid realloc()s completely by preallocating the right size for the bytearray (with bytearray(size)), but that requires looping over the patch once first to know the new size, or using an estimate (the new manifest can’t be larger than the size of the previous manifest + the size of the patch) and truncating later (although I’m not sure it’s possible to truncate a bytearray without a realloc()). As a downside, this initializes the buffer with null bytes, which is a waste of time.
  • Another possibility is to reuse bytearrays previously allocated for previous manifests.
  • Yet another possibility is to accumulate the strings to append and use ''.join(). CPython is smart enough to create a single allocation for the total size in that case. That would be the most convenient solution, but see below.
  • It is possible to avoid the intermediate allocations and copies for substrings from the original manifest by using memoryview.
  • Unfortunately, you can’t use ''.join() on a list of memoryviews before Python 3.4.
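For illustration, combining the bytearray and memoryview ideas might look roughly like the following sketch. It is written in Python 3 terms with bytes, and assumes each diff exposes start/end/text_data attributes as in the loop shown earlier:

```python
from collections import namedtuple

def apply_patch(data, diffs):
    view = memoryview(data)  # slicing a memoryview copies nothing
    new = bytearray()        # grows geometrically: far fewer realloc()s
    end = 0
    for diff in diffs:
        new += view[end:diff.start]  # unchanged bytes from the original
        new += diff.text_data        # replacement bytes from the patch
        end = diff.end
    new += view[end:]
    return bytes(new)

# Tiny demo: replace bytes 8..12 ("THIS") of the original buffer.
Diff = namedtuple("Diff", "start end text_data")
result = apply_patch(b"replace THIS please", [Diff(8, 12, b"THAT")])
```

This keeps one growing buffer per manifest instead of allocating a fresh string for every substring and every append.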

After modifying the code to implement the first and fifth items, memory usage during a git clone of mozilla-central looks like the following (with the python allocator enabled):

(Note this hasn’t actually landed on the master branch yet)

Compared to what it looked like before, this is largely better. But that’s not the only difference: the clone was also about 1000 seconds faster. That’s more than 15 minutes! But that’s not so surprising when you know the volumes of data handled here. More insight about this is coming in an upcoming post.

But while the changes outlined above make the glibc allocator behavior less likely to happen, they don’t totally obviate it. In fact, it seems it is still happening by the end of the manifest import phase. We’re still allocating increasingly large temporary buffers because the size of the imported manifests grows larger and larger, and every one of them is the result of patching a previous one.

The only way to avoid those large allocations creating holes would be to avoid doing them in the first place. My first attempt at doing that, keeping manifests as lists of lines instead of raw buffers, worked, but was terribly slow. So slow, in fact, that I had to stop a clone early and estimated the process would likely have taken a couple days. Iterating over multiple generators at the same time, a lot, kills performance, apparently. I’ll have to try with significantly less of that.

Dev.Opera: What’s new in Chromium 57 and Opera 44

Opera 44 (based on Chromium 57) for Mac, Windows, Linux is out! To find out what’s new for users, see our Desktop blog. Here’s what it means for web developers.

CSS Grid

CSS Grid is now available. It provides a two-dimensional grid-based layout system, optimized for responsive user interface design. Elements within the grid can span multiple columns or rows. Elements positioned in a CSS grid can also be named, making layout code easier to understand. There are lots of excellent resources available to learn more:


WebAssembly

The WebAssembly API has been enabled by default, allowing developers to run near-native code in the browser without a plugin. WebAssembly is essentially a better replacement for asm.js. We believe it has the potential to bring browser games to the next level.

Credential Management

Opera 44 also adds support for the Credential Management API. This gives users a simpler sign-in process across devices and provides websites with more control over the usage of credentials. The website can use password-based sign-ins via this API. Once logged in, users will be automatically signed back into a site, even if their session has expired.

Other CSS features

  • The caret-color property enables developers to specify the color of the text input cursor.
  • text-decoration-skip: ink can be used to make underlines skip descenders, the portion of letters that extend below the text’s baseline.
  • New text-decoration properties are now available, allowing developers to specify visual effects such as line color and style.


Other JS and DOM features

  • The Fetch API Response class now supports the .redirected attribute to help web developers avoid untrustworthy responses and reduce the risk of open redirectors.
  • The new String.prototype.padStart and padEnd methods enable text padding, facilitating tasks like aligning console output or printing numbers with a fixed number of digits.
  • To preserve consistency with other on<event> attributes, ongotpointercapture and onlostpointercapture are now part of the GlobalEventHandlers mixin.

Improved interoperability

Deprecated and removed features

  • Support for the <keygen> element has been removed, so it no longer displays any controls or submits form element data, aligning with other browsers.
  • Locally-trusted SHA-1 certificates will now result in a certificate error page.
  • The <cursor> element has been removed, but cursor icons can still be set via the cursor CSS property.
  • A legacy caller has been removed from HTMLEmbedElement and HTMLObjectElement, so the interfaces will now throw exceptions rather than having their instances called as functions.
  • All webkit-prefixed IndexedDB global aliases have been removed, after their deprecation in M38 (Opera 25).
  • Support for webkitClearResourceTimings(), webkitSetResourceTimingBufferSize(), and onwebkitresourcetimingbufferfull has been removed from the Performance interface, in favor of clearResourceTimings(), setResourceTimingBufferSize(), and onresourcetimingbufferfull.
  • The -internal-media-controls-text-track-list* and related pseudo-elements are deprecated and will be removed in M59 (Opera 46). Use custom controls for custom text track selection.
  • Support for the obsolete API webkitCancelRequestAnimationFrame has been removed in favor of cancelAnimationFrame.
  • The webkit prefix has been removed from AudioContext and OfflineAudioContext.

What’s next?

If you’re interested in experimenting with features that are in the pipeline for future versions of Opera, we recommend following our Opera Developer stream.

Planet Mozilla: Conduit's Commit Index

As with MozReview, Conduit is being designed to operate on changesets. Since the end result of work on a codebase is a changeset, it makes sense to start the process with one, so all the necessary metadata (author, message, repository, etc.) are provided from the beginning. You can always get a plain diff from a changeset, but you can’t get a changeset from a plain diff.

Similarly, we’re keeping the concept of a logical series of changesets. This encourages splitting up a unit of work into incremental changes, which are easier to review and to test than large patches that do many things at the same time. For more on the benefits of working with small changesets, a few random articles are Ship Small Diffs, Micro Commits, and Large Diffs Are Hurting Your Ability To Ship.

In MozReview, we used the term commit series to refer to a set of one or more changesets that build up to a solution. This term is a bit confusing, since the series itself can have multiple revisions, so you end up with a series of revisions of a series of changesets. For Conduit, we decided to use the term topic instead of commit series, since the commits in a single series are generally related in some way. We’re using the term iteration to refer to each update of a topic. Hence, a solution ends up being one or more iterations on a particular topic. Note that the number of changesets can vary from iteration to iteration in a single topic, if the author decides to either further split up work or to coalesce changesets that are tightly related. Also note that naming is hard, and we’re not completely satisfied with “topic” and “iteration”, so we may change the terminology if we come up with anything better.

As I noted in my last post, we’re working on the push-to-review part of Conduit, the entrance to what we sometimes call the commit pipeline. However, technically “push-to-review” isn’t accurate, as the first process after pushing might be sending changesets to Try for testing, or static analysis to get quick automated feedback on formatting, syntax, or other problems that don’t require a human to look at the code. So instead of review repository, which we’ve used in MozReview, we’re calling it a staging repository in the Conduit world.

Along with the staging repository is the first service we’re building, the commit index. This service holds the metadata that binds changesets in the staging repo to iterations of topics. Eventually, it will also hold information about how changesets moved through the pipeline: where and when they were landed, if and when they were backed out, and when they were uplifted into release branches.

Unfortunately a simple “push” command, whether from Mercurial or from Git, does not provide enough information to update the commit index. The main problem is that not all of the changesets the author specifies for pushing will actually be sent. For example, say I have three changesets, A, B, and C, and I pushed them up previously. I then update C to make C′ and push again. Despite all three being in the “draft” phase (which is how we differentiate work in progress from changes that have landed in the mainline repository), only C′ will actually be sent to the staging repo, since A and B already exist there.

Thus, we need a Mercurial or Git client extension, or a separate command-line tool, to tell the commit index exactly what changesets are part of the iteration we’re pushing up—in this example, A, B, and C′. When it receives this information, the commit index creates a new topic, if necessary, and a new iteration in that topic, and records the data in a data store. This data will then be used by the review service, to post review requests and provide information on reviews, and by the autoland service, to determine which changesets to transplant.
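As a toy illustration of that distinction (all names and data structures here are invented; Conduit’s real protocol is not shown), the client-side logic amounts to something like:

```python
# What a push transfers vs. what the commit index must record.
local_drafts = ["A", "B", "C2"]  # current iteration; "C2" stands in for C′
on_staging = {"A", "B", "C"}     # changesets already on the staging repo

# The push itself only transfers what the staging repo is missing...
transferred = [c for c in local_drafts if c not in on_staging]

# ...but the commit index must be told about the complete iteration.
iteration = list(local_drafts)
```

The gap between `transferred` and `iteration` is exactly why a plain push is insufficient and a client extension has to report the full list.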

The biggest open question is how to associate a push with an existing topic. For example, locally I might be working on two bugs at the same time, using two different heads, which map to two different topics. When I make some local changes and push one head up, how does the commit index know which topic to update? Mercurial bookmarks, which are roughly equivalent to Git branch names, are a possibility, but as they are arbitrarily named by the author, collisions are too great a possibility. We need to be sure that each topic is unique.

Another straightforward solution is to use the bug ID, since the vast majority of commits to mozilla-central are associated with a bug in BMO. However, that would restrict Conduit to one topic per bug, requiring new bugs for all follow-up work or work in parallel by multiple developers. In MozReview, we partially worked around this by using an “ircnick” parameter and including that in the commit-series identifiers, and by allowing arbitrary identifiers via the --reviewid option to “hg push”. However this is unintuitive, and it still requires each topic to be associated with a single bug, whereas we would like the flexibility to associate multiple bugs with a single topic. Although we’re still weighing options, likely an intuitive and flexible solution will involve some combination of commit-message annotations and/or inferences, command-line options, and interactive prompts.

Planet Mozilla: Rust Libs Meeting 2017-03-21

Rust Libs Meeting 2017-03-21

Planet Mozilla: Guest post: “That Bug about Mobile Bookmarks”

Hi, SUMO Nation!

Time for a guest blog post by Seburo – one of our “regulars”, who wanted to share a very personal story about Firefox with all of you. He originally posted it on Mozilla’s Discourse, but the more people it reaches, the better. Thank you for sharing, Seburo! (As always, if you want to post something to our blog about your Mozilla and/or SUMO adventures and experiences, let us know.)

Here we go…


As a Mozillian I like to set myself goals and targets. It helps me to plan what I would like to do and to ensure that I am constantly focusing on activities that help Mozilla as well as maintain a level of contribution. But under these “public” goals are a number of things that are more long term, that are possible and have been done by many Mozillians, but for me just seem a little out of reach. If you were to see the list, it may seem a little odd and possibly a little egotistical, even laughable, but however impossible some of them are, they serve as a reminder of what I may be able to achieve.

This blog entry is about me achieving one of them…

In the time leading up to the London All-Hands, I had been invited by a fellow SUMO contributor to attend a breakfast meeting to learn more about the plans around Nightly. This clashed with another breakfast meeting between SUMO and Sync to continue to work to improve our support for this great and useful feature of Firefox. Not wanting to upset anyone, I went with the first invite, but hoped to catch up with members of the Sync team during the week.

Having spent the morning better understanding how SUMO fits into the larger corporate structure, I made use of the open time in the schedule to visit the Firefox Homeroom which was based in a basement meeting room, home for the week to all the alchemists and magicians that bring Mozilla software to life. It was on the way back up the stairs that I bumped into Mark from the Firefox Desktop team. Expecting to arrange some time for later in the week, Mark was free to have a chat there and then.

Sync is straightforward when used to connect desktop and mobile versions of Firefox, but I wanted to better understand how it would work if a third device was included. It was at the end of the conversation that one of us mentioned how the bookmarks coming to desktop Firefox should appear in a Mobile Bookmarks folder in the bookmark drop-down menus. But it is not there, which can make it look like your bookmarks have disappeared. Sure, you can open the bookmark library, but that takes extra mouse clicks to open a separate tool. Mark suggested that this could be easy to fix and that I should file a bug, a task that duly went on the list of things to do on returning from the week.

A key goal for contributors at an All-Hands is to come back with a number of ways to build upon your ability to contribute in the future and I came back with a long list that took time to work through. The bug was also delayed in filing due to natural pessimism about its chances of success. But I realised…what if we all thought like that? All things that we have done started with someone having an idea that was put forward knowing that other ideas had failed, but they still went ahead regardless.

So I wrote a bug and submitted it and nothing much happened. But after a while there was a spark of activity. Thom from the Sync team had decided to resolve it and seemed to fully understand how this could work. The bug was assigned various flags and it soon became clear to me that work was being done on it. Not having any coding ability, I was not able to provide any real help to Thom aside from positive feedback to an early mock up of how the user experience would look. But to be honest, I was too nervous to say much more. A number of projects I had come back from MozLondon with had fallen through and I did not say anything much that could “jinx it” and it not proceed.

A few months passed after which I started getting copied in on bugmail about code needing review with links to systems I barely knew existed. And there, partway down a page were two words:

Ship It.

I know that these words are not unusual for many people at Mozilla, indeed their very existence is one of the reasons that many staff turn on their computers (the other is probably cat gifs), but for me it was the culmination of something that I never thought would happen. The sobriety of this moment increased with the release of Nightly 54 – I could actually see and use what Thom and Mark had spent time and effort crafting. If you use version 54 (which is currently Firefox Developer Edition) and use Firefox Sync, you should now see a “Mobile Bookmarks” folder in the drop down from the menu bar and from the toolbar. This folder is an easier way for you to access the bookmarks that you have saved on the bus, in the pub, on the train or during that really boring meeting you thought would never end.

I never thought that I would be able to influence the Firefox end product, and I had in a very small way. Whilst full credit should go to Thom and Mark and the Sync team for building this and those who herded and QA’d the bug (never forget these people, their work is vital), credit should also go to the SUMO team for enabling me to be a position to understand the user perspective to help make Sync work for more users. Sync is a great feature of Firefox and one that I hope can be improved and enhanced further.

I sincerely hope that you have enjoyed reading this little story, but I hope that you have learned from it and that those learnings will help you as a contributor. In particular:

  • Have goals, however impossible.
  • Contribute your ideas. Nobody else in the world has the same idea as you and imagines it in the same way.
  • Work outside of your own team, build bridges to other areas.
  • Use Nightly and (if you also use a mobile version of Firefox) use it with Firefox Sync.
  • Be respectful of Mozilla staff as they are at work and they are busy people, but also be prepared to be in awe of their awesomeness.

Whilst this was (I have been told) a simple piece of code, the result for me was to see a feature in Firefox that I helped make happen. Along the way, I have broadened my understanding of the effort that goes into Firefox but I can also see that some of the bigger goals I have are achievable.

There is still so much I want to do.

Planet Mozilla: Firefox 53 Beta 3 Testday Results

Hello Mozillians!

As you may already know, last Friday – March 17th – we held a new Testday event, for Firefox 53 Beta 3.

Thank you all for helping us make Mozilla a better place – Iryna Thompsn, Surentharan and Suren, Jeremy Lam and jaustinlam.

From Bangladesh team: Nazir Ahmed Sabbir | NaSb, Rezaul Huque Nayeem, Md.Majedul islam, Rezwana Islam Ria, Maruf Rahman, Aminul Islam Alvi | AiAlvi, Sayed Mahmud, Mohammad Mosfiqur Rahman, Ridwan, Tanvir Rahman, Anmona Mamun Monisha, Jaber Rahman, Amir Hossain Rhidoy, Ahmed Safa, Humayra Khanum, Sajal Ahmed, Roman Syed, Md Rakibul Islam, Kazi Nuzhat Tasnem, Md. Almas Hossain, Md. Asif Mahmud Apon, Syeda Tanjina Hasan, Saima Sharleen, Nusrat jahan, Sajedul Islam, আল-যুনায়েদ ইসলাম ব্রোহী, Forhad Hossain and Toki Yasir.

From India team: Guna / Skrillex, Subhrajyoti Sen / subhrajyotisen, Pavithra R, Nagaraj.V, karthimdav7, AbiramiSD/@Teens27075637, subash M, Monesh B, Kavipriya.A, Vibhanshu Chaudhary | vibhanshuchaudhary, R.KRITHIKA SOWBARNIKA, HARITHA KAMARAJ and VIGNESH B S.


Results:

– several test cases executed for the WebM Alpha, Compact Themes and Estimated Reading Time features.

– 2 bugs verified: 1324171, 1321472.

– 2 new bugs filed: 1348347, 1348483.

Again thanks for another successful testday! 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Planet Mozilla: Deterministic Hardware Performance Counters And Information Leaks

Summary: Deterministic hardware performance counters cannot leak information between tasks, and more importantly, virtualized guests.

rr relies on hardware performance counters to measure application progress and determine when to inject asynchronous events such as signal delivery and context switches. rr can only use counters that are deterministic, i.e., executing a particular sequence of application instructions always increases the counter value by the same amount. For example, rr uses the "retired conditional branches" (RCB) counter, which always returns exactly the number of conditional branches actually retired.

rr currently doesn't work in environments such as Amazon's cloud, where hardware performance counters are not available to virtualized guests. Virtualizing hardware counters is technically possible (e.g. rr works well in Digital Ocean's KVM guests), but for some counters there is a risk of leaking information about other guests, and that's probably one reason other providers haven't enabled them.

However, if a counter's value can be influenced by the behavior of other guests, then by definition it is not deterministic in the sense above, and therefore it is useless to rr! In particular, because the RCB counter is deterministic ("proven" by a lot of testing), we know it does not leak information between guests.

I wish Intel would identify a set of counters that are deterministic, or at least free of cross-guest information leaks, and Amazon and other cloud providers would enable virtualization of them.

Planet Mozilla: This Week in Rust 174

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

We don't have a Crate of this Week for lack of suggestions. Sorry.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

117 pull requests were merged in the last week.

New Contributors

  • David Roundy
  • Dawid Ciężarkiewicz
  • Petr Zemek
  • portal
  • projektir
  • Russell Mackenzie
  • ScottAbbey
  • z1mvader

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

Issues in final comment period:

Other significant issues:

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

#rustlang is a very strange place sans null deref nor data race it has its own styles but once it compiles it will not blow up in your face

llogiq on Twitter. Check out his Twitter feed for more #rustlang limericks!

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Planet Mozilla: How Do We Connect First-Time Internet Users to a Healthy Web?

Fresh research from Mozilla, supported by the Bill & Melinda Gates Foundation, explores how low-income, first-time smartphone users in Kenya experience the web — and what digital skills can make a difference


Three billion of us now share the Internet. But our online experiences differ greatly, depending on geography, gender and income.

For a software engineer in San Francisco, the Internet can be open and secure. But for a low-income, first-time smartphone user in Nairobi, the Internet is most often a small collection of apps in an unfamiliar language, limited further by high data costs.

This undercuts the Internet’s potential as a global public resource — a resource everyone should be able to use to improve their lives and societies.

Twelve months ago, Mozilla set out to study this divide. We wanted to understand the barriers that low-income, first-time smartphone users in Kenya face when adapting to online life. And we wanted to identify the skills and education methods necessary to overcome them.

To do this, Mozilla created the Digital Skills Observatory: a participatory research project exploring the complex relationship between devices, digital skills, social life, economic life and digital life. The work — funded by the Bill & Melinda Gates Foundation — was developed and led by Mozilla alongside Digital Divide Data and A Bit of Data Inc.

Today, we’re sharing our findings.


For one year, Mozilla researchers and local Mozilla community members worked with about 200 participants across seven Kenyan regions. All participants identified as low income and were coming online for the first time through smartphones. To hone our focus, we paid special attention to the impact of digital skills on digital financial services (DFS) adoption. Why? A strong grasp of digital financial services can open doors for people to access the formal financial environment and unlock economic opportunity.

In conducting the study, one group of participants was interviewed regularly and shared smartphone browsing and app usage data. A second group did the same, but also received digital skills training on topics like app stores and cybersecurity.

Our findings were significant. Among them:

  • Without proper digital skills training, smartphone adoption can worsen — not improve — existing financial and social problems.
    • Without media literacy and knowledge of online scams, users fall prey to fraudulent apps and news. The impact of these scams can be devastating on people who are already financially precarious
    • Users employ risky methods to circumvent the high price of data, like sharing apps via Bluetooth. As a result, out-of-date apps with security vulnerabilities proliferate
  • A set of 53 teachable skills can reduce barriers and unlock opportunity.
    • These skills — identified by both participants and researchers — range from managing data usage and recognizing scams to resetting passwords, managing browser settings and understanding business models behind app stores
    • Our treatment group learned these skills, and the end-of-study evaluation showed increased agency and understanding of what is possible online
    • Without these fundamental skills, users are blocked in their discoveries and adoption of digital products
  • Gender and internet usage are deeply entwined.
    • Men often have an effect on the way women use apps and services — for example, telling them to stop, or controlling their usage
    • Women were almost three times as likely to be influenced by their partner when purchasing a smartphone, usually in the form of financial support
  • Language and Internet usage are deeply entwined.
    • The web is largely in English — a challenge for participants who primarily speak Swahili or Sheng (a Swahili-English hybrid)
    • Colloquial language (like Sheng) increases comfort with technology and accommodates learning
  • Like most of us, first-time users found an Internet that is highly centralized.
    • Participants encountered an Internet dominated by just a few entities. Companies like Google, Facebook and Safaricom control access to apps, communication channels and more. This leads to little understanding of what is possible online and little agency to leverage the web
  • Digital skills are best imparted through in-person group workshops or social media channels.
    • Community-based learning was the most impactful — workshops provide wider exposure to what’s possible online and build confidence
    • Mobile apps geared toward teaching digital skills are less effective. Many phones cannot support them, and they are unlikely to “stick”
    • Social networks can be highly effective for teaching digital skills. Our chatbot experiment on WhatsApp showed positive results
  • Local talent is important when teaching digital skills.
    • Without a community of local practitioners and teachers, teaching digital skills becomes far more difficult
    • Research and teaching capacity can be grown and developed within a community
  • Digital skills are critical, but not a panacea.
    • Web literacy is one part of a larger equation. To become empowered digital citizens, individuals also must have access (like hardware and affordable data) and need (a perceived use and value for technology).

Mozilla’s commitment to digital literacy doesn’t end with this research. We’re holding roundtables and events in Kenya — and beyond — to share findings with allies like NGOs and technologists. We’re asking others to contribute to the conversation.

We’re also rolling our learnings into our ongoing Internet Health work, and building on the concept that access alone isn’t enough — we need solutions that account for the nuances of social and economic life, too.

Read the full report here.

The post How Do We Connect First-Time Internet Users to a Healthy Web? appeared first on The Mozilla Blog.

Planet Mozilla: On technical leadership

I have been leading a team of data engineers for over a year and I feel like I have a much better idea of what leadership entails since the beginning of my journey. Here is a list of things I learned so far:

Have a vision

As a leader, you are supposed to have an idea of where you are leading your project or product to. That doesn’t mean you have to be the person that comes up with a plan and has all the answers! You work with smart people that have opinions and ideas, use them to shape the vision but make sure everyone on your team is aligned.

Be your team’s champion

During my first internship in 2008, I worked on a real-time monitoring application used to assess the quality of the data coming in from the ATLAS detector. My supervisor at the time had me present my work at CERN in front of a dozen scientists and engineers. I recall being pretty anxious, especially because my spoken English wasn’t that great. Even so, he championed my work and pushed me beyond my comfort zone and ultimately ensured I was recognized for what I built. Having me fly over from Pisa to present my work in person might not have been a huge deal for him but it made all the difference to me.

I had the luck to work with amazing technical leads over the years and they all had one thing in common: they championed my work and made sure people knew about it. You can build amazing things, but if nobody knows about them, it’s like they never happened.

Split your time between maker & manager mode

Leading a team means you are going to be involved in lots of non-coding activities that don’t necessarily feel immediately productive, like meetings. While it’s all too easy to just jump back to coding, one shouldn’t neglect the managerial activities that a leadership role necessarily entails. One of your goals should be to improve the productivity of your colleagues, and coding isn’t the best way to do that. That doesn’t mean you can’t be a maker, though!

When I am in manager mode, interruptions and meetings dominate all my time. On the other hand, when I am in maker mode I absolutely don’t want to be interrupted. One simple thing I do is to schedule some time on my calendar to fully focus on my maker mode. It really helps to know that I have a certain part of the day that others can’t schedule over. I also tend to stay away from IRC/Slack during those hours. So far this has been working great; I stopped feeling unproductive and not in control of my time as soon as I adopted this simple hack.

Pick strategic tasks

After being around long enough in a company you will have probably picked up a considerable baggage of domain-specific technical expertise. That expertise allows you to not only easily identify the pain points of your software architecture, but also know what solutions are appropriate to get rid of them. When those solutions are cross-functional in nature and involve changes to various components, they provide a great way for you to add value as they are less likely to be tackled spontaneously by more junior peers.

Be a sidekick

The best way I found to mentor a colleague is to be their sidekick on a project of which they are the technical lead, while I act more as a consultant intervening when blockers arise. Ultimately you want to grow leaders and experts and the best way to do that is to give them responsibilities even if they only have 80% of the skills required. If you give your junior peers the chance to prove themselves, they will work super hard to learn the missing 20% and deliver something amazing. When they reach a milestone, let them take the credit they deserve by having them present it to the stakeholders and the rest of the team.

Have regular one-on-ones with people you work with

Having a strong relationship based on mutual respect is fundamental to lead people. There isn’t a magic bullet to get there but you can bet it requires time and dedication. I found recurring one-on-ones to be very helpful in that regard as your colleagues know there is a time during the week they can count on having your full attention. This is even more important if you work remotely.

Talk to other leads

Projects don’t live in isolation and sooner or later your team will be blocked on someone else’s team. It doesn’t matter how fast your team can build new features or fix bugs if there is a bottleneck in the rest of the pipeline. Similarly, your team might become the blocker for someone else.

To avoid cross-functional issues, you should spend time aligning your team’s vision with the goals of the teams you work with. A good way to do that is to schedule recurring one-on-ones with technical leads of teams you work with.

Give honest feedback

If something isn’t right with a colleague then just be honest about it. Don’t be afraid to tell the truth even when it’s uncomfortable. Even though it might hurt to give, and receive, negative feedback, ultimately people appreciate that they always know where they stand.

Be open about your failures

The best teams I worked in were the ones in which everyone felt safe admitting mistakes. We all make mistakes, and if you don’t hear about them it simply means they are getting covered up.

The only way to become an expert is to make all the mistakes one can possibly make. When you are open about your missteps, you not only encourage others to do the same but also share a learning opportunity with the rest of the team.

Planet Mozilla: WebVR and AFrame Bringing VR to Web at the Virtuleap Hackathon

Imagine an online application that lets city planners walk through three-dimensional virtual versions of proposed projects, or a math program that helps students understand complex concepts by visualizing them in three dimensions. Both CityViewR and MathworldVR are amazing application experiences that bring to life the possibilities of virtual reality (VR).
Both are concept virtual reality applications for the web that were created for the Virtuleap WebVR Hackathon. Amazingly, nine out of ten of the winning projects used AFrame, an open source project sponsored by Mozilla which makes it much easier to create VR experiences. CityViewR really illustrates the capability of WebVR to deliver real-life benefits that improve the quality of people’s daily lives beyond the browser.

A top-notch batch of leading VR companies, including Mozilla, funded and supported this global event with the goal of building the grassroots community for WebVR. For non-techies, WebVR is the experimental JavaScript API that allows anyone with a web browser to experience immersive virtual reality on almost any device. WebVR is designed to be completely platform and device agnostic and so it is a scalable and democratic path to stoking a mainstream VR industry that can take advantage of the most valuable thing the web has to offer: built-in traffic and hundreds of millions of users.

Over the three-month-long contest, teams from a dozen countries submitted 34 VR concepts. Seventeen judges and audience panels voted on the entries. Below is a list of the top 10 projects. I wanted to congratulate @ThePascalRascal and @Geczy for their work, which won the €30,000 prize and spots in VR accelerator programs in Amsterdam, respectively.

Here’s the really excellent part. With luck and solid code, virtual reality should start appearing in standard general availability web browsers in 2017. That’s a big deal. To date, VR has been accessible primarily on proprietary platforms. To put that in real world terms, the world of VR has been like a maze with many doors opening into rooms. Each room held something cool. But there was no way to walk easily and search through the rooms, browse the rooms, or link one room to another. This ability to link, browse, collaborate and share is what makes the web powerful and it’s what will help WebVR take off.

To get an idea of how we envision this might work, consider the APainter app built by Mozilla’s team. It is designed to let artists create virtual art installations online. Each APainter work has a unique URL and other artists can come in and add to or build on top of the creation of the first artist, because the system is open source. At the same time, anyone with a browser can walk through an APainter work. And artists using APainter can link to other works within their virtual works, be it a button on a wall, a traditional text block, or any other format.

Mozilla participated in this hackathon, and is supporting WebVR, because we believe keeping the web open and ensuring it is built on open standards that work across all devices and browsers is a key to keeping the internet vibrant and healthy. To that same end, we are sponsoring the AFrame project. The goal of AFrame is to make coding VR apps for the web even easier than coding web apps with standard HTML and JavaScript. Our vision at Mozilla is that, in the very near future, any web developer who wants to build VR apps can learn to do so, quickly and easily. We want to give them the power of creative self-expression.

It’s gratifying to see something we have worked so hard on enjoy such strong community adoption. And we’re also super grateful to Amir and the folks that put in the time and effort to organize and staff the Virtuleap Global Hackathon. If you are interested in learning more about AFrame, you can do so here.

The post WebVR and AFrame Bringing VR to Web at the Virtuleap Hackathon appeared first on The Mozilla Blog.

Planet WebKit: Carlos García Campos: WebKitGTK+ 2.16

The Igalia WebKit team is happy to announce WebKitGTK+ 2.16. This new release drastically improves the memory consumption, adds new API as required by applications, includes new debugging tools, and of course fixes a lot of bugs.

Memory consumption

After WebKitGTK+ 2.14 was released, several Epiphany users started to complain about high memory usage of WebKitGTK+ when Epiphany had a lot of tabs open. As we already explained in a previous post, this was because of the switch to the threaded compositor, which made hardware acceleration always enabled. To fix this, we decided to make hardware acceleration optional again, enabled only when websites require it, but still using the threaded compositor. This is by far the major improvement in memory consumption, but not the only one. Even when in accelerated compositing mode, we managed to reduce the memory required by GL contexts when using GLX, by using OpenGL version 3.2 (core profile) if available. On Mesa-based drivers that means the software rasterizer fallback is never required, so the context doesn’t need to create the software rasterization part. And finally, an important bug was fixed in the JavaScript garbage collector timers that prevented garbage collection from happening in some cases.

CSS Grid Layout

Yes, the future is here, and now available by default in all WebKitGTK+ based browsers and web applications. This is the result of several years of great work by the Igalia web platform team in collaboration with Bloomberg. If you are interested, you have all the details in Manuel’s blog.


New API

The WebKitGTK+ API is quite complete now, but there are always new things required by our users.

Hardware acceleration policy

Hardware acceleration is now enabled on demand again: when a website requires accelerated compositing, hardware acceleration is enabled automatically. WebKitGTK+ has environment variables to change this behavior, WEBKIT_DISABLE_COMPOSITING_MODE to never enable hardware acceleration and WEBKIT_FORCE_COMPOSITING_MODE to always enable it. However, those variables were never meant to be used by applications, but only by developers to test the different code paths. The main problem with those variables is that they apply to all web views of the application. Not all WebKitGTK+ applications are web browsers, so it can happen that an application knows it will never need hardware acceleration for a particular web view, like for example the evolution composer, while other applications, especially in the embedded world, always want hardware acceleration enabled and don’t want to waste time and resources on the switch between modes. For those cases a new WebKitSetting, hardware-acceleration-policy, has been added. We encourage everybody to use this setting instead of the environment variables when upgrading to WebKitGTK+ 2.16.
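A minimal sketch of how an application might use the new setting (assuming the WebKitHardwareAccelerationPolicy enum that ships with 2.16; the helper function name is just illustrative):

```c
#include <webkit2/webkit2.h>

/* Illustrative helper: force hardware acceleration off for a web view
 * that is known to never need accelerated compositing (for example a
 * mail composer). The default policy in 2.16 is ON_DEMAND. */
static void
disable_hardware_acceleration (WebKitWebView *web_view)
{
    WebKitSettings *settings = webkit_web_view_get_settings (web_view);

    webkit_settings_set_hardware_acceleration_policy (settings,
        WEBKIT_HARDWARE_ACCELERATION_POLICY_NEVER);
}
```

An embedded application that always wants compositing would pass WEBKIT_HARDWARE_ACCELERATION_POLICY_ALWAYS instead.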

Network proxy settings

Since the switch to WebKit2, where the SoupSession is no longer available from the API, it hasn’t been possible to change the network proxy settings from the API. WebKitGTK+ has always used the default proxy resolver when creating the soup context, and that just works for most of our users. But there are some corner cases in which applications that don’t run under a GNOME environment want to provide their own proxy settings instead of using the proxy environment variables. For those cases WebKitGTK+ 2.16 includes a new UI process API to configure all proxy settings available in GProxyResolver API.
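A sketch of what configuring a custom proxy could look like with the new UI process API (the proxy URI and ignore list here are made up for illustration):

```c
#include <webkit2/webkit2.h>

/* Illustrative: route all traffic for this web context through a
 * custom proxy, bypassing it for localhost. */
static void
set_custom_proxy (WebKitWebContext *context)
{
    static const char * const ignore_hosts[] = { "localhost", NULL };
    WebKitNetworkProxySettings *proxy_settings =
        webkit_network_proxy_settings_new ("http://proxy.example.com:8080",
                                           (const char * const *) ignore_hosts);

    webkit_web_context_set_network_proxy_settings (context,
        WEBKIT_NETWORK_PROXY_MODE_CUSTOM, proxy_settings);
    webkit_network_proxy_settings_free (proxy_settings);
}
```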

Private browsing

WebKitGTK+ has always had a WebKitSetting to enable or disable private browsing mode, but it has never worked really well. For that reason, applications like Epiphany have always implemented their own private browsing mode by using a different profile directory in tmp to write all persistent data. This approach has several issues; for example, if the UI process crashes, the profile directory is leaked in tmp with all the personal data in it. WebKitGTK+ 2.16 adds a new API that allows creating ephemeral web views which never write any persistent data to disk. It’s possible to create ephemeral web views individually, or to create ephemeral web contexts where all web views associated with them are automatically ephemeral.
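In practice, a private-browsing view could be created along these lines (a sketch against the 2.16 API; the function name is illustrative):

```c
#include <webkit2/webkit2.h>

/* Illustrative: create a web view that never writes persistent data
 * to disk. Every web view created with an ephemeral context is
 * ephemeral automatically. */
static GtkWidget *
create_private_web_view (void)
{
    WebKitWebContext *context = webkit_web_context_new_ephemeral ();
    GtkWidget *view = webkit_web_view_new_with_context (context);

    g_object_unref (context);
    return view;
}
```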

Website data

WebKitWebsiteDataManager was added in 2.10 to configure the default paths on which website data should be stored for a web context. In WebKitGTK+ 2.16 the API has been expanded to include methods to retrieve and remove the website data stored on the client side. Not only persistent data like HTTP disk cache, cookies or databases, but also non-persistent data like the memory cache and session cookies. This API is already used by Epiphany to implement the new personal data dialog.
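As a sketch, clearing the HTTP disk cache with the expanded API might look like this (asynchronous, like the rest of the WebKit2 API; callback names are illustrative):

```c
#include <webkit2/webkit2.h>

static void
fetch_ready_cb (GObject *source, GAsyncResult *result, gpointer user_data)
{
    WebKitWebsiteDataManager *manager = WEBKIT_WEBSITE_DATA_MANAGER (source);
    GList *items = webkit_website_data_manager_fetch_finish (manager, result, NULL);

    /* Remove every disk cache entry we just fetched. */
    webkit_website_data_manager_remove (manager, WEBKIT_WEBSITE_DATA_DISK_CACHE,
                                        items, NULL, NULL, NULL);
    g_list_free_full (items, (GDestroyNotify) webkit_website_data_unref);
}

/* Illustrative: fetch the list of disk cache entries, then remove them
 * in the callback above. */
static void
clear_disk_cache (WebKitWebContext *context)
{
    WebKitWebsiteDataManager *manager =
        webkit_web_context_get_website_data_manager (context);

    webkit_website_data_manager_fetch (manager, WEBKIT_WEBSITE_DATA_DISK_CACHE,
                                       NULL, fetch_ready_cb, NULL);
}
```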

Dynamically added forms

Web browsers normally implement the remember-passwords functionality by searching the DOM tree for authentication form fields when the document-loaded signal is emitted. However, some websites add the authentication form fields dynamically after the document has been loaded. In those cases web browsers couldn’t find any form fields to autocomplete. In WebKitGTK+ 2.16 the web extensions API includes a new signal to notify when new forms are added to the DOM. Applications can connect to it, instead of document-loaded, to start searching for authentication form fields.
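On the web extension side, connecting to the new signal could look roughly like this (a sketch assuming the "form-controls-associated" signal added to WebKitWebPage in 2.16; the callback names are illustrative):

```c
#include <webkit2/webkit-web-extension.h>

/* Illustrative: called whenever form controls are added to the DOM,
 * including ones added dynamically after document load. */
static void
form_controls_associated_cb (WebKitWebPage *web_page,
                             GPtrArray     *elements,
                             gpointer       user_data)
{
    g_message ("%u form control(s) associated", elements->len);
    /* Scan 'elements' for authentication fields here. */
}

static void
page_created_cb (WebKitWebExtension *extension,
                 WebKitWebPage      *web_page,
                 gpointer            user_data)
{
    g_signal_connect (web_page, "form-controls-associated",
                      G_CALLBACK (form_controls_associated_cb), NULL);
}
```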

Custom print settings

The GTK+ print dialog allows the user to add a new tab embedding a custom widget, so that applications can include their own print settings UI. Evolution used to do this, but the functionality was lost with the switch to WebKit2. In WebKitGTK+ 2.16 a similar API to the GTK+ one has been added to recover that functionality in evolution.

Notification improvements

Applications can now set the initial notification permissions on the web context to avoid having to ask the user every time. It’s also possible to get the tag identifier of a WebKitNotification.
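A sketch of pre-seeding notification permissions (assuming the webkit_web_context_initialize_notification_permissions call and the WebKitSecurityOrigin API added in 2.16; the origin is made up for illustration):

```c
#include <webkit2/webkit2.h>

/* Illustrative: allow notifications for one known origin up front so
 * the user is never prompted for it. */
static void
init_notification_permissions (WebKitWebContext *context)
{
    GList *allowed = g_list_prepend (NULL,
        webkit_security_origin_new_for_uri ("https://example.com"));

    webkit_web_context_initialize_notification_permissions (context,
                                                            allowed, NULL);
    g_list_free_full (allowed, (GDestroyNotify) webkit_security_origin_unref);
}
```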

Debugging tools

Two new debugging tools are now available in WebKitGTK+ 2.16: the memory sampler and the resource usage overlay.

Memory sampler

This tool monitors the memory consumption of the WebKit processes. It can be enabled by defining the environment variable WEBKIT_SAMPLE_MEMORY. When enabled, the UI process and all web processes automatically take samples of memory usage every second. For every sample a detailed report of the memory used by the process is generated and written to a file in the temp directory.

Started memory sampler for process MiniBrowser 32499; Sampler log file stored at: /tmp/MiniBrowser7ff2246e-406e-4798-bc83-6e525987aace
Started memory sampler for process WebKitWebProces 32512; Sampler log file stored at: /tmp/WebKitWebProces93a10a0f-84bb-4e3c-b257-44528eb8f036

The files contain a list of sample reports like this one:

Timestamp                          1490004807
Total Program Bytes                1960214528
Resident Set Bytes                 84127744
Resident Shared Bytes              68661248
Text Bytes                         4096
Library Bytes                      0
Data + Stack Bytes                 87068672
Dirty Bytes                        0
Fast Malloc In Use                 86466560
Fast Malloc Committed Memory       86466560
JavaScript Heap In Use             0
JavaScript Heap Committed Memory   49152
JavaScript Stack Bytes             2472
JavaScript JIT Bytes               8192
Total Memory In Use                86477224
Total Committed Memory             86526376
System Total Bytes                 16729788416
Available Bytes                    5788946432
Shared Bytes                       1037447168
Buffer Bytes                       844214272
Total Swap Bytes                   1996484608
Available Swap Bytes               1991532544

Resource usage overlay

The resource usage overlay is only available on Linux systems when WebKitGTK+ is built with ENABLE_DEVELOPER_MODE. It shows an overlay with information about the resources currently in use by the web process, like CPU usage, total memory consumption, JavaScript memory and JavaScript garbage collector timers information. The overlay can be shown/hidden by pressing CTRL+Shift+G.

We plan to add more information to the overlay in the future like memory cache status.

Planet Mozilla: curlup 2017: curl now

At curlup 2017 in Nuremberg, I did a keynote and talked a little about the road to what we are and where we are right now in the curl project. There will hopefully be a recording of this presentation made available soon, but I wanted to entertain you all by also presenting some of the graphs from that presentation in a blog format for easy access and to share the information.

Some stats and numbers from the curl project early 2017. Unless otherwise mentioned, this is based on the availability of data that we have. The git repository has data from December 1999 and we have detailed release information since version 6.0 (September 13, 1999).

Web traffic

First out, web site traffic over the last seven full years that I have stats for. The switch to an HTTPS-only site happened in February 2016. The main explanation for the decrease in spent bandwidth in 2016 is us removing the HTML and PDF versions of all documentation from the release tarballs (October 2016).

My log analyzer software also tries to identify "human" traffic, so this graph should not include the very large amount of bots and automation that hits our site. In total we serve almost twice the amount of data to "bots" as to humans. A large share of those download the cacert.pem file we host.

Since our switch to HTTPS we have a 301 redirect from the HTTP site, and we still suffer from a large number of user-agents hitting us over and over without seemingly following said redirect…

Number of lines in git

Since we also have documentation and related things, this isn’t only lines of code. Plain and simple: lines added to files that we have in git, and how that number has increased over time.

There’s one notable dip and one climb and I think they both are related to how we have rearranged documentation and documentation formatting.

Top-4 author’s share

This could also be read as a measure of how seriously we suffer from "the bus factor" in this project. Look at how large a share of all commits the top-4 committers have authored. Not committed; authored. Of course we didn’t have proper separation between authors and committers before git (March 2010).

Interesting to note here is also that the author listed second, Yang Tse, hasn’t authored anything since August 2013. I personally seem to have plateaued at around 57% of all commits during the recent year or two, and the top-4 share is slowly decreasing but is still over 80% of the commits.

I hope we can get the top-4 share well below 80% if I rerun this script next year!

Number of authors over time

In comparison to the above graph, I made one that simply counts the total number of unique authors that have contributed a change to git, and looks at how that number changes over time.

The time before git is, again, somewhat of a lie since we didn’t keep track of authors vs committers properly then so we shouldn’t put too much value into that significant knee we can see on the graph.

To me, the main take away is that in spite of the top-4 graph above, this authors-over-time line is interestingly linear and shows that the vast majority of people who contribute patches only send in one or maybe a couple of changes and then never appear again in the project.

My hope is that this line will continue to climb over the coming years.

Commits per release

We started doing proper git tags for releases with curl 6.5. So how many commits have we done between releases ever since? It has gone up and down over time, and I added an average line to this graph, which sits at about 150 commits per release (and remember that for a few years now we have attempted to do a release every 8 weeks).

Towards the right we can see the last 20 releases or so showing a pattern of high bar, low bar, and I’ll get to that more in a coming graph.

Of course, counting commits is a rough measurement as they can be big or small, easy or hard, good or bad and this only counts them.

Commits per day

As the release frequency has varied a bit over time I figured I should just check and see how many commits we do in the project per day and see how that has changed (or not) over time. Remember, we are increasing the number of unique authors fairly fast but the top-4 share of “authorship” is fairly stable.

Turns out the number of commits per day has gone up and down a little bit through the git history, but I can’t spot any obvious trend here. In recent years we seem to keep up more than 2 commits per day, and during intense periods up to 8.

Days per release

Our general plan, since a bunch of years back, is to do releases every 8 weeks like clockwork. 8 weeks is 56 days.

When we run into serious problems, like bugs that are really annoying or tedious to users or if we get a really serious security problem reported, we sometimes decide to go outside of the regular release schedule and ship something before the end of the 8-week cycle.

This graph clearly shows that over the last, say 20, releases we clearly have felt ourselves “forced” to do follow-up releases outside of the regular schedule. The right end of the graph shows a very clear saw-tooth look that proves this.

We’ve also discussed this slightly on the mailing list recently, and I’m certainly willing to go back and listen to people as to what we can do to improve this situation.

Bugfixes per release

We keep close track of all bugfixes done in git and mark them up and mention them in the RELEASE-NOTES document that we ship in every new release.

This makes it possible for us to go back and see how many bug fixes we’ve recorded for each release since curl 6.5. This shows a clear growth over time. It’s interesting since we don’t see this when we count commits, so it may just be attributed to having gotten better at recording the bugs in the files. Or that we now spend fewer commits per bug fix. Hard to tell exactly, but I do enjoy that we fix a lot of bugs…

Days spent per bugfix

Another way to see the data above is to count the number of bug fixes we do over time and just see how many days we need on average to fix bugs.

The last few years we do more bug fixes than there are days so if we keep up the trend this shows for 2017 we might be able to reach down to 0.5 days per bug fix on average. That’d be cool!

Coverity scans

We run Coverity scans on the curl code regularly, and this service keeps a little graph for us showing the number of found defects over time. These days we have a policy of never allowing a defect detected by Coverity to linger around: we fix them all, and we should have zero detected defects at all times.

The second graph here shows a comparison line with “other projects of comparable size”, indicating that we’re at least not doing badly here.

Vulnerability reports

So in spite of our grand intentions and the track record shown above, people keep finding security problems in curl at a higher frequency than ever before.

Out of the 24 vulnerabilities reported to the curl project in 2016, 7 were the result of the special security audit that we explicitly asked for, but even if we hadn’t asked for that and they had remained unknown, the remaining 17 would still have stood out in this graph.

I do however think that finding – and reporting – security problems is generally more good than bad. The problems these reports have found have generally been around for many years already, so this is not a sign of us getting more sloppy in recent years. I take it as a sign that people look for these problems more effectively, and report them more often, than before. The industry as a whole looks at security problems, and their importance, differently now than it did years ago.

Planet WebKit: Enrique Ocaña: Media Source Extensions upstreaming, from WPE to WebKitGTK+

A lot of good things have happened to the Media Source Extensions support since my last post, almost a year ago.

The most important piece of news is that the code upstreaming has kept going forward at a slow, but steady pace. The amount of code Igalia had to port was pretty big. Calvaris (my favourite reviewer) and I considered that the regular review tools in WebKit bugzilla were not going to be enough for a good exhaustive review. Instead, we did a pre-review in GitHub using a pull request on my own repository. It was an interesting experience, because the change set was so large that it had to be (artificially) divided in smaller commits just to avoid reaching GitHub diff display limits.

394 GitHub comments later, the patches were mature enough to be submitted to bugzilla as child bugs of Bug 157314 – [GStreamer][MSE] Complete backend rework. After some comments more in bugzilla, they were finally committed during Web Engines Hackfest 2016:

Some unforeseen regressions in the layout tests appeared, but after a couple more commits, all the mediasource WebKit tests were passing. There are also some other tests imported from W3C, but I kept them skipped because webm support is needed for many of them. I’ll focus again on that set of tests in due time.

Igalia is proud of having brought the MSE support in WebKitGTK+ up to date. Eventually, this will improve the browser video experience for many users of Epiphany and other web browsers based on that library. Here’s how it enables the usage of YouTube TV at 1080p@30fps on desktop Linux:

Our future roadmap includes bugfixing and WebM/VP9+Opus support. This support is important for users in countries enforcing patents on H.264; the current implementation can’t be included in distros such as Fedora for that reason.

As mentioned before, part of this upstreaming work happened during Web Engines Hackfest 2016. I’d like to thank our sponsors for having made this hackfest possible, as well as Metrological for giving upstreaming the importance it deserves.

Thank you for reading.


Planet Mozilla: curl up 2017, the venue

The first ever physical curl meeting took place this past weekend, just before curl’s 19th birthday. Today curl turns nineteen years old.

After much work behind the scenes to set this up and arrange everything (and thanks to our awesome sponsors who contributed to this), over twenty eager curl hackers and friends from a handful of countries gathered in a somewhat rough-looking building at curl://up 2017 in Nuremberg, March 18-19 2017.

The venue was in this old factory-like facility but we put up some fancy signs so that people would find it:

Yes, continue around the corner and you’ll find the entrance door for us:

I know, who’d have guessed that we would’ve splashed out on this fancy conference center, right? This is the entrance door. Enter and look for the next sign.

Yes, move in here through this door to the right.

And now, up these stairs…

When you’ve come that far, this is basically the view you could experience (before anyone entered the room):

And when Igor Chubin presents about wttr.in and using curl to do console based applications, it looked like this:
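(If you haven’t tried wttr.in: it serves weather reports as plain text, so curl alone is the whole “client” – which is exactly the console-application style Igor talked about. A quick way to try it, assuming curl is installed and you have network access; the city name and `format=3` one-line output are just examples:)

```shell
# Ask wttr.in for a compact one-line weather report for a city.
# format=3 gives "Location: condition temperature"; omit it for the
# full ASCII-art forecast.
curl -s --max-time 10 'https://wttr.in/Nuremberg?format=3'
```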

It may sound a bit lame to you, but I doubt this would’ve happened at all, and it certainly would’ve been less good, without our great sponsors, who chipped in what we didn’t want to charge our visitors.

Thank you very much Kippdata, Ergon, Sevenval and Haxx for backing us!

Planet Mozilla: 19 years ago

19 years ago on this day I released the first ever version of a software project I decided to name curl. Just a little hobby you know. Nothing fancy.

19 years ago that was a few hundred lines of code. Today we’re at around 150,000 lines.

19 years ago that was mostly my thing and I sent it out hoping that *someone* would like it and find good use for it. Today virtually every modern internet-connected device in the world runs my code. Every car, every TV, every mobile phone.

19 years ago was a different age, not only for me – I had neither kids nor a house back then – but the entire Internet and world have changed significantly since.

19 years ago we had a handful of people sending back bug reports and a few patches. Today over 1500 people have helped out, and we’re adding people to that list at a rapid pace.

19 years ago I would not have imagined that someone could actually stick around in a project like this for this long and still find it so amazingly fun and interesting.

19 years ago I hadn’t quite established my “daily routine” of spare time development yet, but I was close, and for the larger part of this period I have spent a few hours on it every day. All days, really. Working on curl and related stuff. 19 years of a few hours every day equals a whole lot of time.
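As a back-of-the-envelope check (assuming roughly three hours a day, my illustrative guess for “a few”, and ignoring leap days):

```shell
# 19 years x 365 days x ~3 hours/day -- the "3" is an assumed average,
# not a figure from the post.
echo $((19 * 365 * 3))
```

That works out to over 20,000 hours.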

It took us 19 years minus two days to have our first ever physical curl meeting, or conference if you will.

Planet Mozilla: This Week In Servo 95

In the last week, we landed 110 PRs in the Servo organization’s repositories.

Planning and Status

Our overall roadmap is available online, including the overall plans for 2017 and Q1. Please check it out and provide feedback!

This week’s status updates are here.

Congratulations to our new reviewers, avadacatavra and canaltinova. Diane joined the Servo team last year and has been upgrading our networking and security stack, while Nazım has been an important part of the Stylo effort so far. We’re excited to see them both use their new powers for good!

Notable Additions

  • SimonSapin reduced the overhead of locking associated with CSSOM objects.
  • nox corrected a case that did not properly merge adjacent text nodes.
  • glennw improved the rendering quality of transforms in WebRender.
  • Manishearth added support for CSS system colors in Stylo.
  • mukilan and canaltinova implemented HTML parser support for form owners.
  • n0max fixed a panic when resizing canvases.
  • ajeffrey implemented support for setting document.domain.
  • mchv removed assumptions that browsing contexts could be safely unwrapped in many circumstances.
  • ajeffrey made the constellation store more information about the original request for a document, rather than just the URL.
  • montrivo implemented missing constructors for the ImageData API.
  • ajeffrey made the top and parent APIs work for cross-thread origins.
  • paulrouget added support for vetoing navigation in embeddings.
  • ajeffrey implemented cross-thread postMessage support.
  • Manishearth converted a macro into a higher-order macro for cleaner, more idiomatic code.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

