Planet Mozilla: Zero coverage report

One of the nice things we can do with code coverage data is to look at which files are not covered by any test at all.

This article was initially published on Marco Castelluccio's blog.

These files might be interesting for two reasons. Either they are:

  1. dead code;
  2. code that is not tested at all.

Dead code can obviously be removed, bringing a lot of advantages for developers and users alike:

  • Improve maintainability (dead code doesn't need to be updated during refactorings);
  • Reduce build time for developers and CI;
  • Reduce the attack surface;
  • Decrease the size of the resulting binary which can have effects on performance, installation duration, etc.

Untested code, instead, can be really problematic. Changes to this code can take more time to be verified, require more QA resources, and so on. In summary, we can't trust it the way we trust code that is properly tested.

A study from Google Test Automation Conference 2016 showed that an uncovered line (or method) is twice as likely to have a bug fix as a covered line (or method). On top of that, testing a feature prevents unexpected behavior changes.

Using these reports, we have managed to remove a good amount of code from mozilla-central, so far around 60 files with thousands of lines of code. We are confident that there’s even more code that we could remove or conditionally compile only if needed.

Like any modern software, Firefox relies heavily on third-party libraries. Currently, most (all?) of the content of these libraries is built by default. For example, ~400 files are untested in the gfx/skia/ directory.

Reports (updated weekly) can be seen online. They allow filtering by language (C/C++, JavaScript), filtering out third-party code or header files, and showing either only completely uncovered files or all files which have uncovered functions (sorted by number of uncovered functions).

uncovered code

Currently there are 2,730 uncovered files (2,627 C++, 103 JavaScript), or 557 if third-party files are ignored. As with our regular code coverage reports, these reports are restricted to the Windows and Linux platforms.

Planet Mozilla: Shipping a security update of Firefox in less than a day

One of Mozilla’s top priorities is to keep our users safe; this commitment is written into our mission. As soon as we discover a critical issue in Firefox, we plan a rapid mitigation. This post will describe how we fixed a Pwn2Own exploit discovery in less than 22 hours, through the collaborative and well-coordinated efforts of a global cross-functional team of release and QA engineers, security experts, and other stakeholders.

Pwn2Own is an annual computer hacking contest. The goal of this event is to find security vulnerabilities in major software such as browsers. Last week, this event took place in Vancouver. Without getting into technical details of the exploit here, this blog post will describe how Mozilla responded quickly to ship updated builds of Firefox once an exploit was found during Pwn2Own.

We will share some of the processes that enable us to update and release a new version of the Firefox browser to hundreds of millions of users on a regular recurring basis.

This browser is a huge piece of software: 18 million+ lines of code, 6 platforms (Windows 32 & 64bit, GNU/Linux 32 & 64bit, Mac OS X and Android), 90 languages, plus installers, updaters, etc. Releasing such a beast involves coordination among many people from several cross-functional teams spanning locations such as San Francisco, Philadelphia, Paris, Cluj in Romania, and Rangiora in New Zealand.

The timing of the Pwn2Own event is known weeks beforehand, and so Mozilla is prepared! The Firefox train release calendar takes into consideration the timing of Pwn2Own. We try not to ship a new version of Firefox to end users on the release channel on the same day as Pwn2Own.

A Firefox Chemspill

A chemspill is a “security-driven dot release of our product.”  It’s an internal name for the Mozilla machinery that produces updated builds of Firefox on all channels (Nightly, Beta, Release, ESR) in response to an event that negatively impacts browser stability or user security.

Our rapid response model is similar to the way emergency personnel organize and mobilize to deal with a chemical spill and its hazards. All key people stop working on their current tasks and focus only on the cleanup itself. Because our focus is our end users, we need to ensure that they are using the safest and fastest version of Firefox!

This year, we created a private Slack channel prior to Pwn2Own to coordinate all the activity related to the event. The initial Slack group consisted only of security experts, directors of engineering, senior engineers, release managers and release engineers – essential staff.

We prepared a release checklist in advance with added items and a specific focus on the potential for a chemspill triggered by Pwn2Own. This document tracked the cross-functional tasks, their owners, status, and due dates, and supported the necessary coordination. It also helped stakeholders view and report chemspill status down to the minute.

Screenshot of the release checklist

One of the members of our security team was attending the Pwn2Own event. After it was announced that one of the participants, Richard Zhu, found the security issue in Firefox, this Mozilla representative received the exploit directly from Richard Zhu as part of the regular Pwn2Own disclosure process for affected vendors. The bug was added to our bug tracking system at 10:59AM PDT on March 15th with the necessary privacy settings. Soon after, the chemspill team reviewed the issue and made a decision to ship updated builds ASAP.

In parallel, there was a discussion happening on the private Slack channel. When we saw the tweet from cybersecurity reporter @howelloneill that made the news public, we knew it was time to identify the developer who’d be getting to work on fixing the bug…

And so, quickly, the developer got to work.

The fix: planning, risk analysis, go-live timelines

While engineers were investigating the exploit and coming up with a fix, the cross-functional coordination needed to ship updated builds had already begun. The chemspill team met within 2 hours of the event. We discussed the next steps in terms of fix readiness, test plans, go-to-build, QA sign-offs, and determined the sequence of steps along with rough timelines. We needed to ensure a smooth hand-off from folks in North America to folks in Europe (France, Romania, UK) and then back to California by morning.

From the moment we had information about the exploit, two discussions began in parallel: a technical discussion on the bug tracking system; and a release-oriented discussion, driven by the release and security managers, on the Slack channel.

12 minutes later, at 11:11AM, a relevant developer is contacted.

11:17AM: The bug is updated to confirm that our long-term support release (ESR) has also been impacted by the issue.
12:32PM: Less than 3 hours after the disclosure, the developer provides a first patch addressing the issue.
2:21PM: An improved version of the fix is pushed.
3:23PM: This patch is pushed to the development branch. Then, in the next 70 minutes, we go through the process of getting the patch landed into the other release and pre-release repositories.

5:16PM: Little more than 6 hours after the publication of the exploit, the Beta and Release builds (desktop and Android) are in progress.

During the build phase

Let’s take a step back to describe the regular workflow that happens every time a new build of Firefox is released. Building the Firefox browser with our complete test suite for all platforms takes about 5 hours. While the builds are in progress, many teams are working in parallel.

Test plan

The QA team designs a test plan with the help of engineering. When fixing security issues, we always have two goals in mind:

  1. Verify that the fix addresses the security issue,
  2. Catch any other potential regressions due to the fix.

With these two goals, the QA team aims to cover a wide range of cases using different inputs.

For example, the following test case (#3) was run on the various impacted versions and platforms:

Test Case 3 (ogg enabled false – Real .ogg File)

  • Select a channel
  • Navigate to about:config
  • Set pref “media.ogg.enabled” to false
  • Download an .ogg file
  • Drag the .ogg file into the Mozilla build
  • Observe an error message/prompt “You have chosen to open [name of file].ogg”
  • Try and open the file with Firefox as the application
  • Observe that Firefox does not play the selected .ogg file (or any sound)
  • Repeat step 1 for all builds (ESR, RC, Beta/DevEdition, Fennec)

Exploit analysis

In parallel, our security experts jumped on the exploit to analyze it.

They look closely at several things:

  • How the exploit works technically
  • How we could have detected the issue ourselves
  • The in-progress efforts: how to mitigate this kind of attack
  • The stalled efforts: what we started but didn’t finish
  • The future efforts: scoping the long-term work to eliminate or mitigate this category of attacks


The vulnerability was found to be in a library that did not originate with the Mozilla project, and is used by other software. Because we didn’t want to 0-day the vulnerable software library and make the vulnerability more widely known, we reached out to the maintainer of the library directly. Then we investigated which other applications use this code and tried to make them aware of the issue.

In parallel, we worked with the library maintainers to prepare a new version of the standalone library code.

Last but not least, as GNU/Linux distributions provide packages of this library, we also informed these distributions about the issue.

Once the builds are ready

After roughly 5 hours, the builds were ready. This is when the QA team started executing the test plans.

They verified all the scenarios on a range of different platforms and operating systems.

A screenshot of the chart showing the readiness of all builds

Within 22 hours, less than a day from when the exploit was found, Mozilla was ready to push updated builds of Firefox for Desktop and Android on our Nightly, Beta, ESR and release update channels.

For the release go-live, the security team wrote the security advisories and created an entry for the CVE (Common Vulnerabilities and Exposures), a public reference that lists publicly known cybersecurity vulnerabilities.

And then, at the last moment, we discovered a second variant of the affected code and had to rebuild the Android version. This also impacted Firefox ESR on ARM devices. We shipped this fix as well, at 11:10PM.

Nobody likes to see their product get pwned, but as with so much in software development, preparation and coordination can make the difference between a chemspill where no damage is done and a situation that puts users at risk.

Through the combined work of several distributed teams, and good planning and communication, Mozilla was able to test and release a fix for the vulnerability as fast as possible, ensuring the security of users around the world. That’s a story we think is worth sharing.

Related Resources

If you’re interested in learning more about Mozilla’s security initiatives or Firefox security, here are some resources to help you get started:

Mozilla Security
Mozilla Security Blog
Bug Bounty Program
Mozilla Security playlist on YouTube

Planet WebKit: Clipboard API Improvements

The Clipboard API provides a mechanism for websites to support accessing the system pasteboard (pasteboard is the macOS and iOS counterpart to clipboard on Windows and Linux). Copy and paste is one of the most basic interactions in modern operating systems. We use it for all sorts of purposes, from copying a hyperlink on one website to another, to copying a blog post typed in a native word processing application to a blog platform on the web. For this reason, creating a compelling productivity application such as a word processor and a presentation application on the Web requires interacting with the system pasteboard just as much as other native applications.

Over the last couple of months, we have added support for new APIs for better interoperability with other browsers, and refined our implementations to allow more use cases in the macOS and iOS ports of WebKit. These changes are available for you to review in the Safari 11.1 and iOS 11.3 beta programs.

First, we modernized our DataTransfer API. We added support for DataTransfer.items, and fixed many bugs on macOS and iOS. Because most websites don’t support uploading TIFF files, WebKit now automatically converts TIFF images to PNG images and exposes PNG images as files when there are images in the system pasteboard.

Directory Upload

In r221177, we added support for uploading directories via DataTransfer.webkitGetAsEntry() and input.webkitdirectory to be interoperable with other browsers such as Chrome, Firefox, and Edge which had already implemented this WebKit-prefixed feature. This new API allows users to upload a whole directory onto Cloud storage and file sharing services such as iCloud and Dropbox. On iOS, directory upload is supported when dragging folders from the Files app and dropping into web pages.
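
For illustration, here is a rough sketch of how a page might consume dropped directories with this API. The #drop-zone element and the logging are our own assumptions, not part of the WebKit post:

// Sketch only: handle a drop and walk any directories via webkitGetAsEntry().
const dropZone = document.querySelector('#drop-zone'); // assumed element
dropZone.addEventListener('dragover', event => event.preventDefault());
dropZone.addEventListener('drop', event => {
  event.preventDefault();
  for (const item of event.dataTransfer.items) {
    const entry = item.webkitGetAsEntry();
    if (entry && entry.isDirectory) {
      // readEntries() returns children in batches; large folders may need repeated calls.
      entry.createReader().readEntries(children => {
        children.forEach(child => console.log(child.fullPath));
      });
    } else if (entry && entry.isFile) {
      entry.file(file => console.log(file.name, file.size));
    }
  }
});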

Custom MIME Types

Because the system pasteboard is used by other native applications, there are serious security and privacy implications when exposing data to web content through the clipboard API. If a website could insert arbitrary content into the system pasteboard, the website can exploit security bugs in any native application which reads the pasteboard content — for instance, a utility application which shows the content put into the pasteboard. Similarly, if a website could read the system pasteboard at any given point in time, it can potentially steal sensitive information such as user’s real full name and mailing addresses that the user was copying.

For this reason, we previously didn’t allow reading of anything but plain text and URL in DataTransfer objects. We relaxed this restriction in r222595 by allowing reading and writing of arbitrary MIME types between web pages of the same origin. This change allows web applications from a single origin to seamlessly share information using their own MIME types and MIME types we don’t support, while still hiding privacy and security sensitive information other native applications may put into the system pasteboard. Because custom MIME types used by websites are bundled together under a special MIME type that WebKit controls, web pages can’t place malicious payloads of arbitrary MIME types in the system pasteboard to exploit bugs in native applications.

Getting and Setting Data

Apart from custom MIME types, web applications may now write text/html, text/plain and text/uri-list to the system pasteboard using DataTransfer.setData or DataTransfer.items.add during a copy or dragstart event. This content is written with the appropriate UTI for macOS and iOS, so pasting into native applications that are already capable of pasting HTML markup, plain text strings, or URLs will work as expected.

On the reading side, web applications may now also use DataTransfer.getData and DataTransfer.items during a paste and drop event to read text/html, text/plain and text/uri-list data from the system pasteboard. If any files were written to the pasteboard — for example, when copying a PDF file in Finder — this information will be accessible through DataTransfer.files and DataTransfer.items; for backwards compatibility, the “Files” type will also be added to the list of types in DataTransfer.types to indicate that file data may be requested by the page.
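
As a concrete illustration, a same-origin page might round-trip both standard and custom data roughly as sketched below; the "application/x-example" type and its payload are hypothetical, not part of the WebKit post:

// Writing during a copy (or dragstart) event:
document.addEventListener('copy', event => {
  event.preventDefault(); // take over the default copy behavior
  event.clipboardData.setData('text/plain', 'Hello');
  event.clipboardData.setData('text/html', '<b>Hello</b>');
  event.clipboardData.setData('application/x-example', JSON.stringify({ id: 42 }));
});

// Reading during a paste (or drop) event:
document.addEventListener('paste', event => {
  event.preventDefault();
  const html = event.clipboardData.getData('text/html');
  const custom = event.clipboardData.getData('application/x-example'); // round-trips within the same origin
  console.log(html, custom);
});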

An important caveat is that native applications may write file paths to the pasteboard as URLs or plain text while copying files. This may cause users to unknowingly expose file paths to the home directory and private containers of native applications. Thus, WebKit implements heuristics to suppress access to this data via DataTransfer API in such cases. If the pasteboard contains at least one file and text/uri-list is requested, the scheme of the URL must be http, https, data, or blob in order for WebKit to expose it to the page. Other schemes, such as file or ftp, will result in an empty string. Likewise, requests for text/plain will return the empty string when there are files on the pasteboard.

Reading and Writing HTML Content

Among other MIME types, HTML content is most pervasive on the web. Unfortunately, letting arbitrary websites write HTML content into the system pasteboard is problematic because HTML can contain script tags and event handlers which can end up executing malicious scripts in the application reading the content. Letting websites read arbitrary HTML content in the system pasteboard is also problematic because some word processor and spreadsheet applications put privacy sensitive information such as local file paths and user information into the HTML placed in the system pasteboard. For example, if a user typed 12345 into an unsaved spreadsheet, and copied and pasted it into a random website, the website might be able to learn the user’s local home directory path if we were to expose the raw HTML content placed in the pasteboard by other native applications. For this reason, we previously didn’t allow reading or writing of HTML content via DataTransfer objects. Instead, websites had to wait for WebKit’s native editing code to paste the content and process it afterwards.

In r223440, we introduced a mechanism to sanitize HTML read from and written to the system pasteboard, allowing us to lift this restriction. When the website tries to write HTML to the pasteboard, we paste the HTML into a dummy document, re-serialize it to HTML, and then write the re-serialized HTML into the system pasteboard. This process ensures any script elements, event handlers, and other potentially dangerous content will be stripped away. We also package all the necessary sub-resources in the HTML such as images into WebArchive so that native applications which read the pasteboard content don’t have to re-fetch those resources upon paste. Similarly, when a website tries to read the HTML content placed by other native applications, we run through the same steps of pasting the content into a dummy document and re-serializing HTML, stripping away any private information the user didn’t intend to include in the pasted content. Sanitization also happens when HTML content is copied and pasted across different origins but not within web pages of the same origin. As a result, websites can write arbitrary HTML content via the clipboard API and read the exact same content back later within a single origin.

Pasting HTML Content with Images

We also made a major change in the way we handle local files included in the pasted HTML content. Previously, sub-resources (such as image files in pasted content) used URLs of the form webkit-fake-url://<filename> where <filename> is the filename of the sub-resource. Because this is not a standard protocol the website can access, the pasted images’ data were inaccessible to websites. Even though WebKit is capable of loading these images, there was no way for websites to save the images either to their service or into the browser’s storage APIs. r223440 replaces these fake URLs with blob URLs so that the website can save the images. We also use blob URLs instead of fake URLs when pasting RTFD content since r222839.

This change provides a mechanism for Web applications to save images included in pasted content using the Blob API. For example, an online e-mail editor now has the capability to save images that a user copied and pasted from TextEdit or Microsoft Word on iOS and macOS. We’re pleased to be the first browser to provide this powerful platform integration capability to Web developers.
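
For example, an editor could extract those images roughly as sketched below; the uploadImage() helper is hypothetical and stands in for whatever storage or service the page uses:

// Sketch only: pull blob: images out of pasted HTML and save them as Blobs.
document.addEventListener('paste', async event => {
  const html = event.clipboardData.getData('text/html');
  const doc = new DOMParser().parseFromString(html, 'text/html');
  for (const img of doc.querySelectorAll('img[src^="blob:"]')) {
    const blob = await fetch(img.src).then(response => response.blob());
    uploadImage(blob); // hypothetical helper: persist via a service or storage API
  }
});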


We’re excited to empower productivity apps on the Web to more seamlessly integrate with native applications on macOS and iOS via the updated clipboard API. We’d also like to give special thanks to the developers of TinyMCE who have tirelessly worked with us to resolve many bugs involving copy and paste from Microsoft Word to high profile websites which use TinyMCE.

Planet Mozilla: Reps Weekly Meeting, 22 Mar 2018

Reps Weekly Meeting This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet Mozilla: Firefox Performance Update #4

Another week, another slew of Firefox performance updates for you fine folks!

Firefox contributors have been hacking away to make Firefox faster in a bunch of different ways. Have you seen a bug land recently that improved Firefox performance? Let me know so I can include it in an upcoming post and give these contributors the recognition they deserve!

So let’s get to it – here’s what’s been happening lately:

Planet Mozilla: The Essential Elements of Digital Literacies (Startklar?! March 2018)

The Essential Elements of Digital Literacies (Goethe Institut, March 2017)
Livestream recording: Periscope


I presented today in Berlin at the Goethe Institute’s Startklar?! event. I went after a keynote (in German) by Cathleen Berger, Mozilla’s Global Engagement Lead. My time at Mozilla didn’t overlap with hers, but the subjects covered in our presentations certainly did!

It was good to see Cathleen reference the Web Literacy Map, work that I led from 2012 to 2015 at Mozilla. She also referenced the recent Cambridge Analytica revelations and the DQ Institute.

My presentation, which was only around 20 minutes long, focused on:

  • Part 1: Recent news and research
  • Part 2: The problem(s) with frameworks
  • Part 3: The Essential Elements of Digital Literacies

Although I like to keep things fresh by referring to recent news, and by diving into people’s specific context in the Q&A, much of this is what I’ve been presenting on for the last few years. I think choosing an ‘off the shelf’ digital literacy framework is both lazy and dangerous.

Given the time constraints, I didn’t have time to do a deep dive into my work around the Essential Elements of Digital Literacies. However, I’m hoping that the audience follow the links in the slides to both my doctoral thesis and ebook.

Comments? Questions? Get in touch! Email:

Planet Mozilla: Mozilla Presses Pause on Facebook Advertising

Mozilla is pressing pause on our Facebook advertising. Facebook knows a great deal about their two billion users — perhaps more intimate information than any other company does. They know everything we click and like on their site, and know who our closest friends and relationships are. Because of its scale, Facebook has become one of the most convenient platforms to reach an audience for all companies and developers, whether a multibillion-dollar corporation or a not-for-profit.

We understand that Facebook took steps to limit developer access to friends’ data beginning in 2014. This was after Facebook started its relationship with Cambridge University Professor Aleksandr Kogan, whose decision to share data he collected from Facebook with Cambridge Analytica is currently in the news. This news caused us to take a closer look at Facebook’s current default privacy settings given that we support the platform with our advertising dollars. While we believe there is still more to learn, we found that its current default settings leave access open to a lot of data – particularly with respect to settings for third party apps.

We are encouraged that Mark Zuckerberg has promised to improve the privacy settings and make them more protective. When Facebook takes stronger action in how it shares customer data, specifically strengthening its default privacy settings for third party apps, we’ll consider returning.

We look forward to Facebook instituting some of the things that Zuckerberg promised today.

The post Mozilla Presses Pause on Facebook Advertising appeared first on The Mozilla Blog.

Dev.Opera: What’s new in Chromium 65 and Opera 52

Opera 52 (based on Chromium 65) for Mac, Windows, Linux is out! To find out what’s new for users, see our Desktop blog post. Here’s what it means for web developers.

Paint Worklet, Paintlet, Paint API

With paintlets it is possible to write a script that does the background rendering of an element. This can be more flexible than generating images on the server side or using client side generated data urls for backgrounds.

The image below is a rather silly example of what can be done with paintlets.

This effect was accomplished by adding a paint module to the paint worklet, and then, in its paint() method, drawing on the supplied ctx as if it had been a canvas. It is also possible to supply configuration from CSS to the script, something that can make paintlets easier to use than other ways to generate background images.

<!-- demo.html -->
<style>
  textarea {
    background-image: paint(circlearcpainter);
    --circle-arc-pixel-size: 24;
  }
</style>
<script>CSS.paintWorklet.addModule('circle_arc.js');</script>
<!-- Textarea is a good demo since it can be resized. -->

The module itself is defined with the JavaScript below.

// circle_arc.js
class CircleArcPainter {
    circle_arc(ctx, x, y, radius, angle) {
        let angleInRad = (angle * 0.9 / 2 + 10) * Math.PI / 180;
        ctx.fillStyle = 'yellow';
        ctx.beginPath();
        ctx.arc(x, y, radius,
                angleInRad, Math.PI * 2 - angleInRad, false);
        ctx.lineTo(x, y); // close the wedge toward the center
        ctx.fill();
    }
    static get inputProperties() { return ['--circle-arc-pixel-size']; }
    paint(ctx, geom, properties) {
        const css_prop = properties.get("--circle-arc-pixel-size");
        const size = css_prop ? parseInt(css_prop.toString()) : 100;
        ctx.fillStyle = "black";
        ctx.fillRect(0, 0, geom.width, geom.height);
        for (let x = 0; x < geom.width/size; x++) {
            for (let y = 0; y < geom.height/size; y++) {
                const circle_size = Math.abs(
                    Math.sin((x + y) / 6)) * size / 4 + size / 12;
                const opening = Math.random() * 90;
                this.circle_arc(ctx, (x + 0.5) * size, (y + 0.5) * size,
                                circle_size, opening);
            }
        }
    }
}

// Register our class under a specific name
registerPaint('circlearcpainter', CircleArcPainter);

Server Timing API

With the new Server Timing API there is a well defined way for servers to pass performance information to the browser through HTTP headers. With this information web pages can make even better informed decisions.
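
As a rough sketch (not from the original post), a page could read these metrics from resource timing entries; the example header values are made up:

// Assumed response header: Server-Timing: db;dur=53, cache;desc="Cache Read";dur=23.2
for (const entry of performance.getEntriesByType('resource')) {
  for (const metric of entry.serverTiming || []) {
    console.log(entry.name, metric.name, metric.description, metric.duration);
  }
}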

In Developer Tools this information is displayed in the Network view by selecting the resource and activating the Timing tab.


  • The :any-link pseudo selector can be used to style both visited and non-visited links at the same time.
  • For colors, the rgb and hsl functions now take an optional fourth alpha value, making them identical to rgba and hsla.
  • display: contents allows an element to wrap other elements without creating a box for itself.


Feature Policy

  • A new feature policy sync-xhr will allow a site to block synchronous XMLHttpRequests. Synchronous XMLHttpRequests block scripts until a server has responded which can make them harmful for the user experience. If a site embeds untrusted, possibly malicious, content (think ads), this adds a tool to lock that content down a bit.


  • The HTTPS code is updated to match the draft-23 version of the TLS protocol.
  • Request.destination is added to give service workers a better understanding of why a resource is fetched.

Performance APIs

  • A toJSON() method has been added to PerformanceResourceTiming, PerformanceLongTaskTiming and TaskAttributionTiming.
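
A minimal sketch of what this enables, assuming the page has loaded at least one subresource so a "resource" entry exists:

const [entry] = performance.getEntriesByType('resource');
if (entry) {
  console.log(entry.toJSON());        // plain-object snapshot of the timing entry
  console.log(JSON.stringify(entry)); // JSON.stringify() picks up toJSON() automatically
}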


  • The download attribute will be ignored on anchor elements with cross-origin attributes.
  • Certificates from Symantec’s Legacy PKI issued after 2017-12-01 will no longer be trusted. This will only affect sites that chose not to transition to DigiCert’s new PKI. More information can be found in the official announcement.

What’s next?

If you’re interested in experimenting with features that are in the pipeline for future versions of Opera, we recommend following our Opera Developer stream.

Planet Mozilla: Results of the MDN “Duplicate Pages” SEO experiment

Following in the footsteps of MDN’s “Thin Pages” SEO experiment done in the autumn of 2017, we completed a study to test the effectiveness and process behind making changes to correct cases in which pages are perceived as “duplicates” by search engines. In SEO parlance, “duplicate” is a fuzzy thing. It doesn’t mean the pages are identical—this is actually pretty rare on MDN in particular—but that the pages are similar enough that they are not easily differentiated by the search engine’s crawling technology.

This can happen when two pages are relatively short but are about a similar topic, or on reference pages which are structurally and content-wise quite similar to one another. From a search engine’s standpoint, if you create articles about the background-position and background-origin CSS properties, you have two pages based on the same template (a CSS reference page) whose contents are easily prone to being identical. This is especially true given how common it is to start a new page by copying content from another, similar, page and making the needed changes, and sometimes even only the barest minimum changes needed.

All that means that from an SEO perspective, we actually have tons of so-called “duplicate” pages on MDN. As before, we selected a number of pages that were identified as duplicates by our SEO contractor and made appropriate changes to them per their recommendations, which we’ll get into briefly in a moment.

Once the changes were made, we waited a while, then compared the before and after analytics to see what the effects were.

The content updates

We wound up choosing nine pages to update for this experiment. We actually started with ten, but one of them wound up being removed from the set after the fact when I realized it wasn’t a usable candidate. Unsurprisingly, essentially all duplicate pages were found in reference documentation. Guides and tutorials were, with almost no exceptions, immune to the phenomenon.

The pages were scattered through various areas of the open web reference documentation under

Things to fix or improve

Each page was altered as much as possible in an attempt to differentiate it from the pages which were found to be similar. Tactics to accomplish this included:

  • Make sure the page is complete. If it’s a stub, write the page’s contents up completely, including all relevant information, sections, etc.
  • Ensure all content on the page is accurate. Look out for places where copy-and-pasted information wasn’t corrected as well as any other errors.
  • Make sure the page has appropriate in-content (as opposed to sidebar or “See also”) links to other pages on MDN. Feel free to also include links in the “See also” section, but in-content links are a must. At least the first mention of any term, API, element, property, attribute, technology, or idea should have a link to relevant documentation (or to an entry in MDN’s Glossary). Sidebar links are not generally counted when indexing web sites, so relying on them entirely for navigation will wreak utter havoc on SEO value.
  • If there are places in which the content can be expanded upon in a sensible way, do so. Are there details not mentioned or not covered in as much depth as they could be? Think about the content from the perspective of the first-time reader, or a newcomer to the technology in question. Will they be able to answer all possible questions they might have by reading this page or pages that the page links to through direct, in-content links?
  • What does the article assume the reader knows that they might not yet? Add the missing information. Anything that’s skimmed over, leaving out details and assuming the reader can figure it out from context needs to be fleshed out with more information.
  • Ensure that all of the API’s features are covered by examples. Ideally, every attribute on an element, keyword for a CSS property’s value, parameter for a method, and so forth should be used in at least one example.
  • Ensure that each example is accompanied by descriptive text. There should be an introduction explaining what the example does and what feature or features of the technology or API are demonstrated as well as any text that explains how the example works. For example, on an HTML element reference page, simply listing the properties then providing an example that only uses a subset of those properties isn’t enough.  Add more examples that cover the other properties, or at least the ones that are likely to be used by anyone who isn’t a deeply advanced user.
  • Do not simply add repetitive or unhelpful text. Beefing up the word count just to try to differentiate the page from other content is actually going to do more harm than good.
  • It’s frequently helpful to also change the pages which are considered to be duplicate pages. By making different changes to each of the similar pages, you can create more variations than by changing one page alone.

Pages to be updated

The pages we selected to update:


In most if not all of these cases, the pages which are similar are obvious.

The results

After making appropriate changes to the pages listed above as well as certain other pages which were similar to them, we allowed 60 days to pass. This is less than the ideal 90 days, or, better, six months, but time was short. We will check the data again in a few months to see how things change given more time.

The changes made were not always as extensive as would normally be preferred, again due to time constraints created by the one-person experimental model. When we do this work on a larger scale, it will be done by a larger community of contributors. In addition, much of this work will be done from the beginning of the writing process as we will be revising our contributor guide to incorporate the recommendations as appropriate after these experiments are complete.

Pre-experiment unvisited pages

As was the case with the “thin pages” experiment, pages which were getting literally zero—or even close to zero—traffic before the changes continued to not get much traffic. Those pages were:


For what it’s worth, we did eventually learn from this: experiments begun after the “thin pages” results were in no longer included pages which got no traffic during the period leading up to the start of the experiment. This experiment, however, had already begun running by then.

Post-experiment unvisited pages

There were also two pages which had a small amount of traffic before the changes were made but no traffic afterward. This is likely a statistical anomaly or fluctuation, so we’re discounting these pages for now:


The outcome

The remaining three pages had useful data. This is a small data set, but it is what we have, so let’s take a look.

Page URL | Clicks (Sept. 16 – Oct. 16) | Impressions (Sept. 16 – Oct. 16) | Clicks (Nov. 18 – Dec. 18) | Impressions (Nov. 18 – Dec. 18) | Clicks chg. % | Impressions chg. %
Page 1   | 678   | 8,751 | 862   | 13,435 | 27.14% | 34.86%
Page 2   | 1,193 | 3,724 | 1,395 | 7,700  | 14.48% | 51.64%
Page 3   | 447   | 2,961 | 706   | 7,906  | 36.69% | 62.55%

For each of these three pages, the results are promising. Both click and impression counts are up for each of them, with click counts increasing by anywhere from 14% to 36% and impression counts increasing between 34% and 62% (yes, I rounded down for each of these values, since this is pretty rough data anyway). We’ll check the results again soon and see if the results changed further.


Because of certain implementation specifics of this experiment, there are obviously some uncertainties in the results:

  • We didn’t allow as much time as is recommended for the changes to fully take effect in search data before measuring the results. This was due to time constraints for the experiment being performed, but, as previously mentioned, we’ll look at the data again later to double-check our results.
  • The set of pages with useful results was very small, and even the original set of pages was fairly small.
  • There was substantial overall site growth during this period, so it’s likely the results are affected by this. However, the size of the improvements seen here suggests that even with that in mind, the outcome was significant.


After a team review of these results, we came to some conclusions. We’ll revisit them later, of course, if we decide that a review of the data later suggests changes be made.

  1. The amount of improvement seen strongly suggests we should prioritize fixing duplicate pages, at least in cases where one of the pages considered duplicates of one another is getting at least a low-to-moderate amount of traffic.
  2. The MDN meta-documentation, in particular the writer’s guide and the guides to writing each of the types of reference content, will be updated to incorporate the recommendations into the general guidelines for contributing to MDN. Additionally, the article on MDN about writing SEO-friendly content will be updated to include this information.
  3. It turns out that many of the changes needed to fix “thin” pages also apply when fixing “duplicate” pages.
  4. We’ll re-evaluate prioritization after reviewing the latest data after more time has passed.

The most interesting thing I’m learning about SEO, I think, is that it’s really about making great content. If the content is really top-notch, SEO practically attends to itself. It’s all about having thorough, accurate content with solid internal and external links.


If you have comments or questions about this experiment or the changes we’ll be making, please feel free to follow up or comment on this thread on Mozilla’s Discourse site.

Planet WebKit: Release Notes for Safari Technology Preview 52

Safari Technology Preview Release 52 is now available for download for macOS Sierra and macOS High Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 228856-229535.

Legacy NPAPI Plug-ins

  • Removed support for running legacy NPAPI plug-ins other than Adobe Flash

Service Worker

  • Changed Fetch event release assert to take into account the fetch mode (r228930)
  • Changed to not use a Service Worker in the case of document redirection if it will already be served by AppCache (r229086)
  • Fixed loads for a Document controlled by a Service Worker to not use AppCache (r229181)
  • Updated Service Worker to respect IndexedDB and DOM cache partitioning (r229483)


  • Added support for preconnect link headers (r229308)
  • Fixed converting a load to a download to work with async policy delegates (r229177)
  • Prevented DNS prefetching from being re-enabled (r229061)


  • Fixed handling undefined global variables with the same name as an element ID (r229451)
  • Made Number.isInteger an intrinsic (r228968)


  • Added new CSS env() constants for use with fullscreen (r229475)
  • Fixed ::selection CSS stroke-color and stroke-width to be applied to selected text in text fields (r229147)


  • Fixed HTML pattern attribute to set u flag for regular expressions (r229363)
  • Fixed replaceState causing back and forward navigation problems on a page with <base href="/"> (r229375)
  • Fixed to cancel navigation policy check in addition to cancelling the existing provisional load (r228922)


  • Added more accessibility events support (r229310)
  • Dispatched accessiblesetvalue event (r229112)
  • Fixed keyboard focus to follow the VoiceOver cursor into web content or within web content (r228857)
  • Fixed WebKit running spell checker even on non-editable content text (r229500)

Web Driver

  • Fixed clicking on a disabled option element to not produce an error (r229212)
  • Fixed stale elements not getting detected when removed from the DOM (r229210)
  • Fixed failed provisional loads causing “Navigate To” command to hang (r228887)
  • Fixed script evaluations via WebDriver to have a user gesture indicator (r229206)

Web Inspector

  • Changed Canvas Tab to scroll into view and inspect an element if Canvas has a DOM node (r229044)


  • Added cache for memory address and size on an instance (r228966)


  • Fixed the webkitfullscreenchange event to fire at the same time as :-webkit-full-screen pseudo selector changes (r229466, r229487)

Bug Fix

  • Fixed copying a table from the Numbers app and pasting into iCloud Numbers (r229503)

Planet Mozilla: The Joy of Coding - Episode 133

The Joy of Coding - Episode 133 mconley livehacks on real Firefox bugs while thinking aloud.

Planet Mozilla: Send, getting better


Send continues to improve incrementally. Since our last post we’ve added a few requested features and fixed a bunch of bugs. You can now choose to allow multiple downloads and change the password on a file if you need to.

Send is also more stable and should work more reliably across a wider set of browsers. We’ve brought back support for Microsoft Edge and some older versions of Safari.

We’ve also done some work to improve our code documentation and quality to make it easier for folks to review and contribute. If you’re at all interested in learning how Send works, now is a good time to have a look.

I also want to express my personal appreciation to all the folks who have contributed. Send is a better project because of your help.

Send, getting better was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Planet Mozilla: Seeking Fellows for a Better Internet: Apply

Mozilla has opened applications for its 2018-2019 Fellowships. Mozilla Fellows are technologists, activists, and policy experts building a more humane digital world.


The internet is vast: It’s layered into billions of lives, influencing everything from economies and governments to education and romance. This pervasive internet can promote opportunity, empowerment, and free expression — but also misinformation, mass surveillance, harassment, and abuse.

More than ever, we need a movement to ensure the internet remains a force for good. We need people who stop the spread of misinformation, who put individuals in control of their data, and who keep artificial intelligence accountable. We need people who ensure smart cities and next-generation voice technology are diverse and equitable, and who conduct open research.

Mozilla Fellows do just this. And today, we’re opening applications for our 2018-2019 cohort of Mozilla Fellows, with $1.2 million in support.

Mozilla Fellowships provide resources, tools, community and amplification to those building a more humane digital world. During their tenure, Fellows use their skill sets — in technology, in activism, in science, in policy — to design products, run campaigns, influence policy and ultimately lay the groundwork for a more open and inclusive internet.

Mozilla Fellows hail from a range of disciplines and geographies: they are policymakers in Kenya, journalists in Brazil, engineers in Germany, privacy activists in the United States, and data scientists in the Netherlands. During a 10-month tenure, fellows work on individual projects, but also collaborate on cross-disciplinary solutions to the internet’s biggest challenges. The Fellowships run from September 2018 through June 2019.

Mozilla Fellowships are a transformative experience for emerging leaders concerned with making the internet a safer, more accessible resource for everyone. Fellows expand their network and sphere of influence; design impactful projects with the potential to reach millions; and learn from and collaborate with a global community of thousands of Mozillians. Mozilla Fellows are also awarded competitive funding and benefits.

We’re currently seeking Mozilla Fellows that fit three particular profiles:

Open web activists: Fellows who work in the realm of public interest technology, addressing issues like privacy, security, and inclusion online. These open web activists will embed at leading human rights and civil society organizations from around the world, lending their technical expertise. Check out the list of this year’s featured host organizations, and apply to work with them.

Scientists and researchers: Fellows who infuse open-source practices and principles into scientific research. “Science” is defined broadly; Fellows may work in the natural sciences, formal and applied sciences, or humanities, social sciences and library and information sciences. Fellows are based in the research institution with which they are currently affiliated.

Tech policy professionals: Fellows who examine the interplay of technology and public policy, and craft legal, academic, and governmental solutions. These tech policy professionals are independent researchers and are not necessarily matched with a host organization or an institution.

Learn more about Mozilla Fellowships, then apply. Applications close on April 20, 2018. Below, meet a handful of current Mozilla Fellows:


Amba Kak | New Delhi, India | @ambaonadventure

Amba is a Mozilla Fellow focused on tech policy, examining how India’s experience with protecting the open internet can inform the global debate on issues like net neutrality and online privacy. Previously, Amba was a legal consultant at India’s National Institute of Public Finance & Policy. Read Amba’s recent op-ed in the Indian Express.


Orlando Del Aguila | Guadalajara, México | @eatcodetravel

Orlando is a Mozilla Fellow focused on the open web and making it a safer place for marginalized communities. He is working with the Bahraini nonprofit Majal to expand Ahwaa, a secure and anonymous discussion platform for the LGBT community in the Middle East. Learn more about Ahwaa, Majal, and Orlando.


Amel Ghouila | Tunis, Tunisia | @AmelGhouila

Amel is a Mozilla Fellow focused on science and a bioinformatician at Institut Pasteur de Tunis. She is developing open-source guidelines and resources for improving biomedical research across the African continent. Learn more about Amel’s recent work.  

The post Seeking Fellows for a Better Internet: Apply appeared first on The Mozilla Blog.

Planet Mozilla: Bay Area Rust Meetup March 2018 (Algorithms and Pathfinder)

Bay Area Rust Meetup March 2018 (Algorithms and Pathfinder) Rust Meet up. Bay Area Rust Meetup. This month we have: - Tristan Hume on Algorithms in Rust. - Patrick Walton giving an update on...

Planet Mozilla: compare-locales 3.0 – GSOC

There’s something magic about compare-locales 3.0. It comes with Python 3 support.

It took me quite a while to get to it, but the writing is on the wall that I had to add support for Python 3. That’s just been out for 10 years, too. Well, more like 9ish.

We’re testing against Python 2.7, 3.5, and 3.6 now.

Thanks to Emin Mastizada for the reviews of this patch.

Slightly related, we’re having two l10n-tooling related proposals out for this year’s Google Summer of Code. Check out Google’s student guide for how to pick a project. Adrian is mentoring a project to improve the experience of first-time users of Pontoon. I’m mentoring a project to support Android’s localization platform as a first-class citizen. You’d write translation quality checks for compare-locales and add support for the XML dialect to Pontoon.

Planet Mozilla: Multilingual Gecko Status Update 2018.1

As promised in my previous post, I’d like to do a better job at delivering status updates on Internationalization and Localization technologies at Gecko at shorter intervals than once per year.

In the previous post we covered recent history up to Firefox 58 which got released in January 2018. Since then we finished and shipped Firefox 59 and also finished all major work on Firefox 60, so this post will cover the two.

Firefox 59 (March)

Firefox 58 shipped with a single string localized using Fluent. In 59 we took the next step and migrated 5 strings from an old localization system to use Fluent. This allowed us to test all of our migration code to ensure that as we port Firefox to the new API we preserve all the hard work of hundreds of localizers.

In 59 we also made several improvements to performance of how Fluent operates, all of that while waiting for the stylo-chrome to land in Firefox 60.

In LocaleService, we made another important switch. We replaced old general.useragent.locale and intl.locale.matchOS prefs with a single new pref intl.locale.requested.

This change accomplished several goals:

  • The name represents the function better. Previously it was pretty confusing for people as to why Gecko doesn’t react immediately when they set the pref. Now it is more clear that this is just a requested locale, and there’s some language negotiation that, depending on available locales, will switch to it or not.
  • The new pref is optional. Since by default it matches the defaultLocale, we can now skip it and just treat its non-presence as the default mode in which we follow the default locale. That allowed us to remove some code.
  • The new pref allows us to store locale lists. It is meant to store a comma-separated list of requested locales, like "fr-CA, fr-FR, en-US", in line with our model of handling locale lists rather than single locales.
  • If the pref is defined and the value is empty, we’ll look into OS for the locale to use, making it a replacement for the matchOS pref.

This is particularly important because it took us a very long time to unify all uses of the pref, remove it from all around the code, and finally switch to the new one, which should serve us much better.
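
For illustration only (not from the original post), the new pref could be set in a profile's user.js like this; the locale values are just examples:

// Request an ordered list of locales:
user_pref("intl.locale.requested", "fr-CA, fr-FR, en-US");

// Or set an empty value to follow the OS locale (replacing intl.locale.matchOS):
// user_pref("intl.locale.requested", "");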

Next comes the usual set of updates, including an update to ICU 60 by Andre, and cleanups by Makoto Kato – we’re riding the wave of removing old APIs and unifying code around ICU and encoding_rs.

Lastly, as we start looking more at aligning our language resources with CLDR, Francesco started sorting out our plural rules differences and language and region names. This is the first step on the path to upstream our work to CLDR and reuse it in place of our custom data.

Notable changes [my work] [intl]:

Firefox 60 (May)

Although Firefox 60 has not yet been released as of today, the major work cycle on it has finished, and it is currently in the beta channel for stabilization.

In it, we’ve completed another milestone for Fluent migrating not just a couple, but over 100 strings in Firefox Preferences from the old API. This marks the first release where Fluent is actually used to localize a visible portion of Firefox UI!

As part of that work, we pushed our first significant update of Fluent in Gecko, and landed a special chrome-only API to get Fluent’s performance on par with the old system.

With an increase in the use of Fluent, we also covered it with Mozilla eslint rules, improved missing strings reporting, and wrote an Introduction to Fluent for Firefox Developers.

On the Locale Management side, we separated out mozilla::intl::Locale class and further aligned it with BCP47.

But the big change here is the switch of the source of available locales from the old ChromeRegistry to L10nRegistry.

This is the last symbolic hand-over from the old model to the new, meaning that from that moment the locales registered in L10nRegistry will be used to negotiate language selection for Firefox, and ChromeRegistry becomes a regular customer rather than a provider of the language selection.

We’re very close to finalize the LocaleService model after over a year of refactoring Gecko!

The regular healthy number of cleanups happened as well. Henri switched more code to use encoding_rs and updated encoding_rs to 0.7.2, Jeff Walden performed a major cleanup of our SpiderMonkey Intl source code, Andre added caches for many Intl APIs to speed them up, and Axel updated compare-locales to 2.7.

We also encountered two interesting bugs – Andre dove deep into ICU to fix `Intl.NumberFormat` breaking on roundings in Persian, and I had to disable some of our bidirectionality features in Fluent due to a bug in a Windows API.

Notable changes [my work] [intl]:


With all that work in, we’re slowly settling down the new APIs and finalizing the cleanups and the bulk of work now is going directly into switching away from DTD and .properties to Fluent.

As Firefox 60 is getting ready for its stable release, we’re accelerating the migration of Preferences to Fluent hoping to accomplish it in 61 or 62 release. Once that is completed, we’ll evaluate the experience and make recommendations for the rest of Firefox.

Stay tuned!

Planet Mozilla: Enough is enough. Let’s tell Facebook what we want fixed.

I had one big loud thought pounding in my head as I read the Cambridge Analytica headlines this past weekend: it’s time for Facebook users to say ‘enough is enough‘.

Many people have said we need to regulate Facebook and other platforms. Maybe. What’s clear is we need the platforms to work differently.

A faster route to this outcome — or at least a first big step forward — could be for millions of us who use Facebook to tell the company what we want ‘differently’ to look like. And to ask them to make it happen. Now.

There is a long history of this sort of direct consumer-to-company conversation outside the tech world.

People who care about fair work pushed Nike to raise wages and improve factory conditions. People who care about our forests got Kimberly Clark to stop cutting down old growth. People concerned with human health convinced McDonalds to stop buying antibiotic ridden chicken.

The surprising thing: we have yet to see internet users start a conversation with a company en masse to say: hey, we want things to work differently. Until now.

The concerns people have raised about Facebook and other platforms are wide ranging — and most often tie back to the fact that the ‘big five‘ are near monopolies in key aspects of the tech business.

Yet, many of the problems and harms that people have been pointing to in recent weeks are quite specific. App permissions that allow third-party businesses to access the private information of our friends. Third-party data profiling that shows where each of us stands on issues. And advertising services that allow companies, politicians and trolls to micro-target ads at each of us individually based on these profiles. These are all very specific features or services that the companies involved can change — or stop offering altogether.

As a citizen of the internet and a long time Facebook user, I feel like it’s on me to start talking to the company about the specific changes I’d like to see — and to find others who want to do the same.

With this goal in mind, Mozilla launched a campaign today to get users to band together to ask Facebook to change its app permissions and make sure our privacy is protected by default. This one small, specific thing that could make a difference.

Of course, there is also bigger ambition for this campaign: to spark a conversation between the people who make Facebook and the people who use it about how we can make a digital world that is safer and saner and that we all want to live in. I hope that is a conversation they will welcome.

The post Enough is enough. Let’s tell Facebook what we want fixed. appeared first on Mark Surman.

Planet Mozilla: New features in Notes v3

Today we are updating TestPilot Notes to v3.1! We have several new user-facing features and behind-the-scenes changes in this v3 release. The focus of this release was discoverability, speed and a bit of codebase cleanup.

We heard your feedback about “Exporting notes…” and with this release we have added the first export-related feature. You can now export the notepad as HTML using the menu. We are still playing around with Markdown and other exporting features.

Export your Notes as HTML today!

A lot of users also had trouble finding and opening Notes via the sidebar. This is why we added new ways to open the notepad. The first way is by using the new “Send to Notes” button in the context menu. This new button will open the notepad and copy the text from the webpage into it.

<figure><figcaption>Use “Send to Notes” to open Notes and insert text into the notepad</figcaption></figure>
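As an aside, the WebExtension APIs involved in a feature like this are contextMenus and sidebarAction. Here is a minimal sketch of how such a context-menu entry could be wired up; the id, the storage key and the overall wiring are invented for illustration and are not the actual Notes source:

// background.js — sketch only; "send-to-notes" and "pendingText" are made up.
browser.contextMenus.create({
  id: "send-to-notes",
  title: "Send to Notes",
  contexts: ["selection"]   // only offer the entry when text is selected
});
browser.contextMenus.onClicked.addListener((info) => {
  if (info.menuItemId !== "send-to-notes") {
    return;
  }
  // Opening the sidebar is permitted here because we are inside a user action.
  browser.sidebarAction.open();
  // Hand the selected text over to the sidebar page, e.g. via local storage.
  browser.storage.local.set({ pendingText: info.selectionText });
});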

The second path to discoverability is by using the new toolbar extension button. This will quickly open the Notes sidebar for you.

<figure><figcaption>The Notes toolbar button helps you open the notepad quickly</figcaption></figure>

The Notes team would like to thank long-time contributor Cedric Amaya for helping out with these new features.

We have also started migrating the codebase to React and Redux. Thanks to our new developer Sébastien we have landed the first pieces of the React refactor. The React changes make the Notes extension faster and make it easier to maintain the codebase. Besides the code changes there are also new UI design changes that make Notes look more like other parts of Firefox. For example the new menu looks a lot more like the Firefox browser menu:


There are other upcoming design changes to make Notes follow the Photon Design System. In future releases we are also planning to pick up the latest updates from CKEditor and introduce multi-note support.

Stay tuned!

New features in Notes v3 was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Planet MozillaBringing interactive examples to MDN

“This is scoped to be a pretty small change.” – me, January 2017.

Over the last year and a bit, the MDN Web Docs team has been designing, building, and implementing interactive examples for our reference pages. The motivation for this was the idea that MDN should do more to help “action-oriented” users: people who like to learn by seeing and playing around with example code, rather than by reading about it.

We’ve just finished adding interactive examples for the JavaScript and CSS reference pages. This post looks back at the project to see how we got here and what we learned on the way.

First prototypes

The project was first outlined in the MDN product strategy, published at the end of 2016. We discussed some ideas on the MDN mailing list, and developed some prototypes.

The JS editor looked like this:

Early prototype of JavaScript editor

The CSS editor looked like this:

Screenshot of CSS editor after first user testing

We wanted the examples – especially the CSS examples – to show users the different kinds of syntax that an item could accept. In the early prototypes, we did this using autocomplete. When the user deleted the value assigned to a CSS property, we showed an autocomplete popup listing different syntax variations:

First round of user testing

In March 2017 Kadir Topal and I attended the first round of user testing, which was run by Mark Hurst. We learned a great deal about user testing, about our prototypes, and about what users wanted to see. We learned that users wanted examples and appreciated them being quick to find. Users liked interactive examples, too.

But autocomplete was not successful as a way to show different syntax forms. It just wasn’t discoverable, and even people who did accidentally trigger it didn’t seem to understand what it was for.

Especially for CSS, though, we still wanted a way to show readers the different kinds of syntax that an item could accept. For the CSS pages, we already had a code block in the pages that lists syntax options, like this:

transform: matrix(1.0, 2.0, 3.0, 4.0, 5.0, 6.0);
transform: translate(12px, 50%);
transform: translateX(2em);
transform: translateY(3in);
transform: scale(2, 0.5);
transform: scaleX(2);
transform: scaleY(0.5);
transform: rotate(0.5turn);
transform: skew(30deg, 20deg);

One user interaction we saw, that we really liked, was when readers would copy lines from this code block into the editor, to see the effect. So we thought of combining this block with the editor.

In this next version, you can select a line from the block underneath, and the style is applied to the element above:

Looking back at this prototype now, two things stand out: first, the basic interaction model that we would eventually ship was already in place. Second, although the changes we would make after this point were essentially about styling, they had a dramatic effect on the editor’s usability.
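To make that interaction model concrete, here is a minimal, hypothetical sketch of the idea (not the actual MDN editor code; the selectors are invented): clicking a line of CSS in the chooser applies that declaration to a demo element.

// Sketch only: ".example-choice" and "#demo-element" are made-up selectors.
const demo = document.querySelector("#demo-element");
document.querySelectorAll(".example-choice").forEach((choice) => {
  choice.addEventListener("click", () => {
    // Each choice holds one declaration, e.g. "transform: rotate(0.5turn);"
    demo.style.cssText = choice.textContent;
    // Mark the active choice so readers can see which line is applied.
    document.querySelectorAll(".example-choice.selected")
      .forEach((el) => el.classList.remove("selected"));
    choice.classList.add("selected");
  });
});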

Building a foundation

After that not much happened for a while, because our front-end developers were busy on other projects. Stephanie Hobson helped improve the editor design, but she was also engaged in a full-scale redesign of MDN’s article pages. In June Schalk Neethling joined the team, dedicated to this project. He built a solid foundation for the editors and a whole new contribution workflow. This would be the basis of the final implementation.

In this implementation, interactive examples are maintained in the interactive-examples GitHub repository. Once an interactive example is merged to the repo, it is built automatically as a standalone web page which is then served from the “” domain. To include the example in an MDN page, we then embed the interactive example’s document using an iframe.

UX work and more user testing

At the end of June, we showed the editors to Jen Simmons and Dan Callahan, who provided us some very useful feedback. The JavaScript editor seemed pretty good, but we were still having problems with the CSS editor. At this point it looked like this:

Early prototype of CSS editor in June 2017

People didn’t understand that they could edit the CSS, or even that the left-hand side consisted of a list of separate choices rather than a single block.

Stephanie and Schalk did a full UX review of both editors. We also had an independent UX review from Julia Lopez-Mobilia from The Brigade. After all this work, the editors looked like this in static screenshots:

JS editor for the final user test

CSS editor for the final user test

Then we had another round of user testing. This time we ran remote user tests over video, with participants recruited through MDN itself. This gave us a tight feedback loop for the editors: we could quickly make and test adjustments based on user feedback.

This time user testing was very positive, and we decided we were ready for beta.

Beta testing

The beta test started at the end of August and lasted for two weeks. We embedded editors on three JavaScript and three CSS pages, added a survey, and asked for feedback. Danielle Vincent mentioned it in the Mozilla Developer Newsletter, which drove thousands of people to our Discourse announcement post.

Feedback was overwhelmingly positive: 156/159 people who took the survey voted to see the editor on more pages, and the free-form text feedback was very encouraging. We were confident that we had a good UX.

JavaScript examples and page load optimization

Now we had an editor but very few actual examples. We asked Mark Boas to write examples for the JavaScript reference pages, and in a couple of months he had written about 400 beautiful concise examples.

See the example for Array.slice().

We had another problem, though: the editors regressed page load time too much. Schalk and Stephanie worked to wring every last millisecond of performance optimization out of the architecture, and finally, in December 2017, we decided to ship.

We have some extra tricks we plan to implement this year to continue improving page load performance; the fact is, we’re still not happy with the current performance of interactive pages.

CSS examples

In the first three weeks of 2018, Schalk and I updated 400 JavaScript pages to include Mark’s examples, and then we turned to getting examples written for the CSS pages.

We asked for help, Jen Simmons tweeted about it, and three weeks later our community had contributed more than 150 examples, with over a hundred coming from a single volunteer, mfluehr.

See the example for rotate3d().

After that Rachel Andrew and Daniel Beck started working with us, and they took care of the rest.

See the example for clip-path.

What’s next?

Right now we’re working on implementing interactive examples for the HTML reference. We have just finished a round of user testing, with encouraging results, and hope to start writing examples soon.

As I hope this post makes clear, this project has been shaped by many people contributing a wide range of different skills. If you’d like to help out with the project, please check out the interactive-examples repo and the MDN Discourse forum, where we regularly announce updates.

Planet MozillaMarch Add(on)ness: Ghostery (2) Vs Decentraleyes (3)

It’s the last battle of the first round of March Add(on)ness. Closing out the privacy bracket we have… Ghostery Privacy Ghostery is a powerful privacy extension. Block ads, stop trackers … Read more

The post March Add(on)ness: Ghostery (2) Vs Decentraleyes (3) appeared first on The Firefox Frontier.

Planet MozillaMozilla Statement, Petition: Facebook and Cambridge Analytica

Update, Thursday, March 22:

Facebook reached out to us to discuss how we characterized their settings and to tell us that our original blog post overstated the scope of data sharing with app developers. What we described is an accurate characterization of what appears in Facebook’s settings.

What Facebook told us is that what we have written below was generally true only for third-party apps prior to 2015. Again, this isn’t clear in the user-facing tools and we think this needs to be fixed.

If what Facebook told us is accurate, Facebook users are sharing less data with app developers today. We understand that Facebook is trying to clarify these settings for the user. We appreciate that Facebook is engaging with stakeholders, and we want to see further commitment on bringing greater transparency to this issue.

The headlines speak for themselves: Up to 50 million Facebook users had their information used by Cambridge Analytica, a private company, without their knowledge or consent. That’s not okay.

Facebook is facing a lot of questions right now, but one thing is clear: Facebook needs to act to make sure this doesn’t happen again.

Mozilla is asking Facebook to change its app permissions and ensure users’ privacy is protected by default. And we’re asking users to stand with us by signing our petition.

Facebook’s current app permissions leave billions of its users vulnerable without knowing it. If you play games, read news or take quizzes on Facebook, chances are you are doing those activities through third-party apps and not through Facebook itself. The default permissions that Facebook gives to those third parties currently include data from your education and work, current city and posts on your timeline.

We’re asking Facebook to change its policies to ensure third parties can’t access the information of the friends of people who use an app.

At Mozilla, our approach to data is simple: no surprises, and user choice is critical. We believe in that not just because it makes for good products, but because trust is a key factor in keeping the internet healthy.

The internet is transformative because it’s a place to explore, transact, connect, and create. Trust is key to that. We’re pushing Facebook to improve its privacy practices not just because of its 2 billion users, but also for the health of the internet broadly.

Ashley Boyd is Mozilla’s VP, Advocacy

The post Mozilla Statement, Petition: Facebook and Cambridge Analytica appeared first on The Mozilla Blog.

Planet MozillaCan a GSoC project beat Cambridge Analytica at their own game?

A few weeks ago, I proposed a GSoC project on the topic of Firefox and Thunderbird plugins for Free Software Habits.

At first glance, this topic may seem innocent and mundane. After all, we all know what habits are, don't we? There are already plugins that help people avoid visiting Facebook too many times in one day; what difference will another one make?

Yet the success of companies like Facebook and those that prey on their users, like Cambridge Analytica (who are facing the prospect of a search warrant today), is down to habits: in other words, the things that users do over and over again without consciously thinking about it. That is exactly why this plugin is relevant.

Many students have expressed interest and I'm keen to find out if any other people may want to act as co-mentors (more information or email me).

One Facebook whistleblower recently spoke about his abhorrence of the dopamine-driven feedback loops that keep users under a spell.

The game changer

Can we use the transparency of free software to help users re-wire those feedback loops for the benefit of themselves and society at large? In other words, instead of letting their minds be hacked by Facebook and Cambridge Analytica, can we give users the power to hack themselves?

In his book The Power of Habit, Charles Duhigg lays bare the psychology and neuroscience behind habits. While reading the book, I frequently came across concepts that appeared immediately relevant to the habits of software engineers and also the field of computer security, even though neither of these topics is discussed in the book.

where is my cookie?

Most significantly, Duhigg finishes with an appendix on how to identify and re-wire your habits and he has made it available online. In other words, a quickstart guide to hack yourself: could Duhigg's formula help the proposed plugin succeed where others have failed?

If you could change one habit, you could change your life

The book starts with examples of people who changed a single habit and completely reinvented themselves. For example, an overweight alcoholic and smoker who became a super-fit marathon runner. In each case, they show how the person changed a single keystone habit and everything else fell into place. Wouldn't you like to have that power in your own life?

Wouldn't it be even better to share that opportunity with your friends and family?

One of the challenges we face in developing and promoting free software is that every day, with every new cloud service, the average person in the street, including our friends, families and co-workers, is ingesting habits carefully engineered for the benefit of somebody else. Do you feel that asking your friends and co-workers not to engage you in these services has become a game of whack-a-mole?

Providing a simple and concise solution, such as a plugin, can help people to find their keystone habits and then help them change them without stress or criticism. Many people want to do the right thing: if it can be made easier for them, with the right messages, at the right time, delivered in a positive manner, people feel good about taking back control. For example, if somebody has spent 15 minutes creating a Doodle poll and sending the link to 50 people, is there any easy way to communicate your concerns about Doodle? If a plugin could highlight an alternative before they invest their time in Doodle, won't they feel better?

If you would like to provide feedback or even help this project go ahead, you can subscribe here and post feedback to the thread or just email me.

cat plays whack-a-mole

Planet MozillaCrash-Stop, an extension to help handle crashes on Bugzilla

Crash-stop is a webextension I wrote for Bugzilla to display crash stats by builds and patch information.

The goal is to have enough information to be able to decide if a patch helped (hence its name) and, if needed, uplift it to the Beta/ESR/Release trains as appropriate.

This project was initially meant to assist release-managers but it’s been useful for developers who fix/monitor crashes or for folks doing bug triage.

A screen snapshot of crash-stop from bug 1432409 (in the “Details” section):

Crash stop table

How to read the data in the table above?

  • The patches landed in beta on 2018-02-20 at 23:40.
  • The buildid of b12 is 20180222170353 and of b11 is 20180219114835.
  • The first beta build containing the patches is b12.
  • The builds which don’t contain the patches are shown in pink.
  • The builds that contain the patch are shown in green.

As you can see from the example above, the patches had a very positive effect for the first 2 signatures.

For release channel, the builds are shown in light yellow because no patches were found for that channel (the addon reads all the comments to try to find the push urls). As is obvious in this example, the reassuring data from Beta channel makes for a strong case to request an uplift to release channel.

Recently, I added stuff to show startup crashes, for example in bug 1435779:

Crash stop table

Recent updates:

  • The cells are colored in red when more than 50% of the crashes have the flag startup_crash set to true (each number in the Crashes rows has a tooltip with the percentage of crashes where startup_crash == true); a small sketch of this rule follows the list.
  • I added icons for impacted platforms.
  • Click on signatures or versions to get more information from Socorro.
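That coloring rule amounts to a simple threshold check. The following is a sketch with a made-up data shape and class name, not the extension's real code:

// crashes: total count for the cell; startupCrashes: count with
// startup_crash == true. Both names are invented for this sketch.
function markStartupCrashCell(cell, crashes, startupCrashes) {
  const ratio = crashes > 0 ? startupCrashes / crashes : 0;
  cell.title = `${Math.round(ratio * 100)}% startup crashes`;  // tooltip text
  if (ratio > 0.5) {
    cell.classList.add("startup-crash");  // styled with a red background
  }
}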

All feedback is welcome and appreciated! If you want to request features or more data, or report an error, please feel free to file a bug on GitHub.

Source Code and extension download

The extension can be installed from AMO and the development is done on GitHub; pull requests are also welcome!

Planet MozillaThis Week in Rust 226

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is noisy_float, a crate with surprisingly useful floating point types that would rather panic than be Not a Number. Thanks to Ayose Cazorla for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

145 pull requests were merged in the last week

New Contributors

  • Alan Du
  • Alexandre Martin
  • Alex Butler
  • Boris-Chengbiao Zhou
  • Dileep Bapat
  • dragan.mladjenovic
  • Eric Huss
  • snf
  • Yukio Siraichi

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

The community team is trying to improve outreach to meetup organisers. Please fill out their call for contact info if you are running or used to run a meetup.

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Imagine going back in time and telling the reporter “this bug will get fixed 16 years from now, and the code will be written in a systems programming language that doesn’t exist yet”.

Nicholas Nethercote.

Thanks to jleedev!

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Planet MozillaTwenty years, 1998 – 2018

Do you remember this exact day, twenty years ago? March 20, 1998. What exactly happened that day? I’ll tell you what I did then.

First a quick reminder of the state of popular culture at the time: three days later, on the 23rd, the movie Titanic would tie the record and win eleven Academy Awards. Its theme song “My heart will go on” was at the top of the music charts around this time.

I was 27 years old and I worked full-time as a software engineer, mostly with embedded systems. I had already been developing software as a profession for several years then. At this moment in time I was involved as a consultant in a (rather boring) project for Ericsson Telecom ETX, in Nacka Strand in the south eastern general Stockholm area.

At some point during that Friday (I don’t remember the details, but presumably it happened during the late evening), I packaged up the source code of the URL transfer tool we were working on and uploaded it to my personal web site to share it with the world. It was the first release ever of the project under the new name: curl. The tool was already supporting HTTP, FTP and GOPHER – including uploads for the two first protocols.

It would take more than a year after this day until we started hosting the curl project on its own dedicated web site. went live in August 1999, and it was changed again to in June the following year, a URL and name we’ve kept since.

(this is the first curl logo we used, made in 1998 by Henrik Hellerstedt)

In my flat in Solna (just north of Stockholm, Sweden) I already then spent a lot of spare time, mostly late nights, in front of my computer. Back then, it was an Intel Pentium 120 MHz based desktop PC with a huge 19″ Nokia CRT monitor, on which I dialed up to my work’s modem pool to access the Internet and to log in to the Unix machines there, on which I did a lot of the early curl development. On SunOS, Solaris and Linux.

In Stockholm, that Friday started out with sub-zero degrees Celsius but the temperature climbed up to a few positive degrees during the day and there was no snow on the ground. Pretty standard March weather in Stockholm. This is usually a period when the light is slowly coming back (winters are really dark here) but the temperatures remind us that spring still isn’t quite here.

curl 4.0 was just a little more than 2000 lines of C code. It featured 23 command line options. curl 4.0 introduced support for the FTP PORT command, and now it could do FTP uploads that append to the remote file. The version number was bumped up from 3.12, which was the last version number used by the tool under its old name, urlget.

<figure class="wp-caption alignnone" id="attachment_10981"><figcaption class="wp-caption-text">This is what the web site looked like in December 1998, the oldest capture I could find. Extracted from so unfortunately two graphical elements are missing!</figcaption></figure>

It was far from an immediate success. An old note mentions how curl 4.8 (released the summer of 1998) was downloaded more than 300 times from the site. In August 1999, we counted 1300 weekly visits on the web site. It took time to get people to discover curl and make it into the tool users wanted. By September 1999 curl had already grown to 15K lines of code.

In August 2000 we shipped the first version of libcurl: all the networking transfer powers of curl in a library, ready to be used by your applications. PHP was one of the absolutely first users of libcurl and that certainly helped to drive the early use.

A year later, in August 2001, when Apple started shipping curl by default in Mac OS X 10.1 curl was already widely available in Linux and BSD package collections.

By June 2002, we counted 13000 weekly visits on the site and we had grown to 35K lines of code. And it would not stop there…

Twenty years is both nothing at all and at the same time what feels like an eternity. Just three weeks before curl 4.0 shipped, Mozilla was founded. Google wasn’t founded until six months after. This was long before Facebook or Twitter had even been considered. Certainly a different era. Even the term open source was coined just a month prior to this curl release.

Growth factors over 20 years in the project:

Supported protocols: 7.67x
Command line options: 9x
Lines of code: 75x
Contributors: 100x
Weekly web site visitors: 1,400x
End users using (something that runs) the code: 4,000,000x
Stickers with the curl logo: infinity

Twenty years since the first ever curl release. Of course, it took time to make that first release too so the work is older. curl is the third name or incarnation of the project that I first got involved with already in late 1996…


Planet MozillaWhat we learned about gender identity in Open Source

In research the Open Innovation team ran in 2017, we learned that ‘Women’ was often being used as a catch-all for non-male and non-binary people, and that this often results in people feeling excluded or invisible inside open source communities.

“This goes into the gender thing — a lot of the time I see non-binary people get lumped in with “women” in diversity things — which is very dysphoria-inducing, as someone who was assigned female but is definitely *not*.” — community interview

To learn more, we launched a Diversity & Inclusion in Open Source survey earlier this year, which sought to better understand how people identify, including gender-identity.

Our gender spectrum question was purposely long, to experiment with the value people found in seeing their identity represented in a question. People from over 200 open projects participated. Amazingly, each of the 17 choices was selected by at least one survey participant.

7.9%** of all respondents selected something other than male or female; for those under the age of 40, that number was higher, at 9.1%.


In some regions, many of the gender choices felt unfamiliar or confusing — but the idea that there should be more than two options was not. For example, we know that India already recognizes a ‘third gender’.

Through this experience, and other feedback, we settled on a 1.0 standard for gender questions and gender pronouns for surveys and systems.

One way your community can act on these findings is to ensure that people can express their pronouns on profile pages and communication channels. After our given names, pronouns are the most frequently used way of referring to each other, and when we get people’s pronouns wrong, it’s no different than calling someone by the wrong name.

It’s also super-important for binary folks to take this step: by creating norms of sharing pronouns, we make it easier and safer for others.

One other way to act on this research is to ensure that if you create identity groups for women, but you mean women and non-binary people, you say so; invite people in through their expressed identity.


** Responses that were deemed not to be sincere were filtered out

Join our next Diversity & Inclusion in Open Source Call — April 4th. Details in our wiki.



Planet MozillaWhy we participate in support

Why do you participate in user support?
Have you ever wondered why any of the people who answer support questions and write documentation take the time to do it?

This is a followup to a post I wrote about dealing with disgruntled users.

Firefox is a tool Mozilla uses to influence an industry toward open standards, and against software silos. By having enough market share in the browser world, web developers are forced to support open standards.
Users will not use Firefox if they don’t know how to use it, or if it is not working as expected. Support exists to retain users. If their experience of using Firefox is bad, we’re here to make it good, so they continue to use Firefox.

That experience includes user support. The goal is not only to help users with their problems, but remove any negative feeling they may have had. That should be the priority of every person participating in support.

Dealing with disgruntled users is an inherent part of user support. In those cases, it’s important to remind ourselves what the user wants to achieve, and what it takes to make their experience a pleasant one.

In the end, users will be more willing to forgive individual issues out of fondness for the company. That passion for helping users will attract others, and the community will grow.

Planet MozillaWebRender newsletter #16

Greetings! 16th newsletter inbound, with some exciting fixes. Oh yeah, fixes again and again, and spoiler alert: this will remain the main topic for a little while.
So what’s exciting about it this time around? For one, Heftig, Martin Stránský and Snorp figured out what was causing rendering to be so broken with nvidia GPUs on Linux (and fixed it). The problem was that when creating a GLX visual, Gdk by default tries to select one that does not have a depth buffer. However, WebRender relies on the depth buffer for rendering opaque primitives.
The other fix that I am particularly excited about is brought to you by Kvark, who finally ended the content flickering saga on Windows after a series of fixes and workarounds in our own code and upstream in ANGLE.

Notable WebRender changes

  • Simon added support for rendering with ANGLE in wrench on Windows. This will let us run tests in a configuration that better matches what users run.
  • Kvark fixed a division by zero in the brush_blend shader.
  • Glenn fixed some box-shadow artifacts.
  • Lee fixed the way we clear font data when shutting down.
  • Glenn avoided attempting to render a scene if the requested window dimensions are unreasonably large.
  • Nical avoided re-building the scene when updating dynamic properties (it had already been done by Glenn, but accidentally backed out).
  • Glenn refactored the way pictures and brushes are stored to allow more flexibility.
  • Kats and Nical updated the tidy CI script, and fixed an avalanche of followup issues (2), (3), (4).
  • Martin simplified the clipping API.
  • Glenn fixed text-shadow primitives during batch merging.
  • Glenn ported intermediate blits to the brush_image shader.
  • Nical decoupled the tiled image decomposition from the scene building code (in preparation for moving it to the frame building phase).
  • Kvark refactored the shader management code.
  • Glenn ported blurs to use brush_image instead of the composite shader.
  • Simon implemented an example that uses DirectComposition.
  • Martin fixed a clipping issue in blurred and shadowed text.
  • Kvark worked around an ANGLE bug after backing out another attempt at working around the same dreaded ANGLE flickering bug.
  • Jeff avoided performing divisons and modulos on unsigned integers in the shaders.
  • Glenn optimized the brush_image shader.
  • Glenn changed box-shadow to be a clip source instead of a picture, providing some simplifications and better batching.
  • Glenn reduced the number of clip store allocations.
  • Martin removed inversed matrix computations from the shaders.
  • Kvark fixed a bug with zero-sized render tasks.

Notable Gecko changes

  • Snorp, Heftig and Martin Stránský fixed broken rendering with nvidia graphics cards on Linux. WebRender is now usable with nvidia GPUs on Linux.
  • Kvark fixed flickering issues with ANGLE on Windows.
  • Sotaro fixed a crash, and another one.
  • Andrew made background SVGs use blob images instead of the basic layer manager fallback, yielding nice perf improvements.
  • Nical made gradients rely on WebRender’s pixel snapping instead of snapping incorrectly during display list building.
  • Sotaro fixed a bug when moving a tab containing a canvas to a different window.
  • Sotaro fixed a jiggling issue on Windows when resizing the browser window.
  • Nical fixed a race condition in the frame throttling logic causing windows to not paint intermittently.
  • Andrew avoided using the fallback logic for images during decoding.
  • Sotaro fixed an ffi bug causing images to not render on 32bit Windows.
  • Jeff simplified the memory management of WebRenderUserData.

Enabling WebRender in Firefox Nightly

In about:config, just set “gfx.webrender.all” to true and restart the browser. No need to toggle any other prefs.

Note that WebRender can only be enabled in Firefox Nightly. We will make it possible to enable it on other release channels as soon as we consider it stable enough to reach a broader audience.

Planet MozillaMarch Add(on)ness: Tab Centre Redux (2) vs Tabby Cat (3)

Do you like your tabs on the side, or with a side of cats? Tell us in today’s March Add(on)ness… Tab Center Redux Customization Move your tabs to the side … Read more

The post March Add(on)ness: Tab Centre Redux (2) vs Tabby Cat (3) appeared first on The Firefox Frontier.

Planet MozillaA good question, from Twitter

Good question on Twitter, but one that might take more than, what is it now, 280 characters? to answer.

Why do I pay attention to Internet advertising? Why not just block it and forget about it? By now, web ad revenue per user is so small that it only makes sense if you're running a platform with billions of users, so sites are busy figuring out other ways to get paid anyway.

To the generation that never had a print magazine subscription, advertising is just a subset of "creepy shit on the Internet." Who wants to do that for a living? According to Charlotte Rogers at Marketing Week, the lack of information out there explaining the diverse opportunities of a career in marketing puts the industry at a distinct disadvantage in the minds of young people. Marketing also has to contend with a perception problem among the younger generation that it is intrinsically linked with advertising, which Generation Z notoriously either distrust or dislike.

Like the man says, Where Did It All Go Wrong?

The answer is that I'm interested in Internet advertising for two reasons.

  • First, because I'm a Kurt Vonnegut fan and have worked for a magazine. Some kinds of advertising can have positive externalities. Vonnegut was able to quit his job at a car dealership, and write full time, because advertising paid for original fiction in Collier's magazine. How did advertising lose its ability to pay for news and cultural works? Can advertising reclaim that ability?

  • Second, because most of the economic role of advertising is in an area that Internet advertising hasn't been able to get a piece of. While Internet advertising plays a game of haha, look what I tricked you into clicking on for chump change, the real money is in signal-carrying advertising that helps build brand reputation. Is it possible to make Internet advertising into a medium that can get a piece of the action?

Maybe make that three reasons. As long as Internet advertising fails to pull its weight in either supporting news and cultural works or helping to send a credible economic signal for brands, the scams, malware and mental manipulation will only continue. More: World's last web advertising optimist tells all!

Planet MozillaThis Week In Servo 108

In the last week, we merged 89 PRs in the Servo organization’s repositories.

We have been working on adding automated performance tests for the Alexa top pages, and thanks to contributions from the Servo community we are now regularly tracking the performance of the top 10 websites.

Planning and Status

Our roadmap is available online, including the overall plans for 2018.

This week’s status updates are here.

Notable Additions

  • UK992 embedded the Servo icon in Windows nightly builds.
  • kwonoj added a hotkey to perform a WebRender capture.
  • Xanewok removed all traces of an unsafe API from the JS bindings.
  • jdm tracked down an intermittent build problem that was interfering with CI.
  • nox fixed a panic that could occur when navigating away from pages that use promises.
  • lsalzman fixed a font-related memory leak in WebRender.
  • Xanewok implemented APIs for storing typed arrays on the heap.
  • alexrs extracted parts of homu’s command parsing routine to add automated tests.
  • Xanewok implemented support for generating bindings for WebIDL APIs that use typed arrays.
  • kvark simplified the management of shaders in WebRender.
  • oOIgnitionOo added Windows support for running nightlies through the mach tool.
  • paul added more typed units to APIs related to the compositor.
  • mrobinson made binary capture recording work again.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Planet MozillaMarch Add(on)ness: Reverse Image Search (2) Vs Unpaywall (3)

Two strong competitors in today’s March Add(on)ness that will help us learn if finding out where an image is from is better than being able to access millions of open-access … Read more

The post March Add(on)ness: Reverse Image Search (2) Vs Unpaywall (3) appeared first on The Firefox Frontier.

Planet WebKitPhilippe Normand: Web Engines Hackfest 2014

Last week I attended the Web Engines Hackfest. The event was sponsored by Igalia (also hosting the event), Adobe and Collabora.

As usual I spent most of the time working on the WebKitGTK+ GStreamer backend and Sebastian Dröge kindly joined and helped out quite a bit, make sure to read …

Planet MozillaMarch Add(on)ness: Momentum (2) vs Grammarly (3)

The pen is mightier than the sword, but is personal organization more powerful than having to worry about grammar? You tell us in today’s March Add(on)ness. Momentum Optimization With Momentum, … Read more

The post March Add(on)ness: Momentum (2) vs Grammarly (3) appeared first on The Firefox Frontier.

Planet WebKitProtecting Against HSTS Abuse

HTTP Strict Transport Security (HSTS) is a security standard that provides a mechanism for web sites to declare themselves accessible only via secure connections, and to tell web browsers where to go to get that secure version. Web browsers that honor the HSTS standard also prevent users from ignoring server certificate errors.

Apple uses HSTS on, for example, so that any time a visitor attempts to navigate to the insecure address “” either by typing that web address, or clicking on a link, they will be automatically redirected to “”. HSTS will also cause the web browser to go to the secure site in the future, even if the insecure address is used. This is a great feature that prevents a simple error from placing users in a dangerous state, such as performing financial transactions over an unauthenticated connection.

What could be wrong with that?

Well, the HSTS standard describes that web browsers should remember when redirected to a secure location, and to automatically make that conversion on behalf of the user if they attempt an insecure connection in the future. This creates information that can be stored on the user’s device and referenced later. And this can be used to create a “super cookie” that can be read by cross-site trackers.

HSTS as a Persistent Cross-Site Identifier (aka “Super Cookie”)

An attacker seeking to track site visitors can take advantage of the user’s HSTS cache to store one bit of information on that user’s device. For example, “load this domain with HTTPS” could represent a 1, while no entry in the HSTS cache would represent a 0. By registering some large number of domains (e.g., 32 or more), and forcing resource loads from a controlled subset of those domains, they can create a large enough vector of bits to uniquely represent each site visitor.

The HSTS authors recognized this possibility in Section 14.9 of their spec:

…it is possible for those who control one or more HSTS Hosts to encode information into domain names they control and cause such UAs to cache this information as a matter of course in the process of noting the HSTS Host. This information can be retrieved by other hosts through cleverly constructed and loaded web resources, causing the UA to send queries to (variations of) the encoded domain names.

On the initial website visit:

  • A random number is assigned to the visitor, for example 8396804.
  • This can be represented as a binary value (e.g., 100000000010000000000100)
  • The tracker script then makes subresource requests to a tracker-controlled domain over https, one request per active bit in the tracking identifier.
    • … and so on.
  • The server responds to each HTTPS request with an HSTS response header, which caches the tracking value in the web browser.
  • Now we are guaranteed to load the HTTPS version of,, and, even if the load is attempted over HTTP.

On subsequent website visits:

  • The tracker script loads 32 invisible pixels over HTTP that represent the bits in the binary number.
  • Since some of those bits (,, and in our example) were loaded with HSTS, they will automatically be redirected to HTTPS.
  • The tracking server transmits one image when they are requested over HTTP, and a different image when requested over HTTPS.
  • The tracking script recognizes the different images, turns those into zero (HTTP) and one (HTTPS) bits in the number, and voila — your unique binary value is recreated and you are tracked!
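To make the encoding in the two walkthroughs above concrete, here is a toy sketch of how a 32-bit identifier maps onto a set of tracker-controlled domains. The domain names and function names are invented; this is illustrative only, not code from any real tracker or browser.

// Turn an identifier such as 8396804 into the list of "active bit" domains.
function domainsForId(id, bits = 32) {
  const domains = [];
  for (let i = 0; i < bits; i++) {
    if ((id >>> i) & 1) {
      domains.push(`bit${i}.tracker.example`);  // one domain per set bit
    }
  }
  return domains;
}
// Setting phase: each active-bit domain is loaded once over HTTPS so the
// server can answer with an HSTS header and pin that bit in the browser.
// Reading phase: every bit domain is loaded over plain HTTP; the ones that
// get auto-upgraded to HTTPS read back as 1, the rest as 0.
function idFromUpgradedBits(upgradedBitIndexes) {
  let id = 0;
  for (const i of upgradedBitIndexes) {
    id |= (1 << i);
  }
  return id >>> 0;  // keep the result as an unsigned 32-bit value
}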

Attempts to mitigate this attack are challenging because of the difficulty in balancing security and privacy goals. Improperly mitigating the attack also runs the risk of weakening important security protections.


Periodically, the privacy risks of HSTS are discussed in the media as a theoretical tracking vector (e.g., [1], [2], and [3]). Absent evidence of actual malicious abuse of the HSTS protocol, browser implementors erred on the side of caution and honored all HSTS instructions provided by sites.

Recently we became aware that this theoretical attack was beginning to be deployed against Safari users. We therefore developed a balanced solution that protects secure web traffic while mitigating tracking.

Apple’s Solution

The HSTS exploit consists of two phases: the initial tracking identifier creation phase, and the subsequent read operation. We decided to apply mitigations to both sides of the attack.

Mitigation 1: Limit HSTS State to the Hostname, or the Top Level Domain + 1

We observed tracking sites constructing long URLs encoding the digits in various levels of the domain name.

For example:

We also observed tracking sites using a large number of sibling domain names, for example:

Telemetry showed that attackers would set HSTS across a wide range of sub-domains at once. Because using HSTS in this way does not benefit legitimate use cases, but does facilitate tracking, we revised our network stack to only permit HSTS state to be set for the loaded hostname (e.g., “”), or the Top Level Domain + 1 (TLD+1) (e.g., “”).

This prevents trackers from efficiently setting HSTS across large numbers of different bits; instead, they must individually visit each domain representing an active bit in the tracking identifier. While content providers and advertisers may judge that the latency introduced by a single redirect through one origin to set many bits is imperceptible to a user, requiring redirects to 32 or more domains to set the bits of the identifier would be perceptible to the user and thus unacceptable to them and content providers. WebKit also caps the number of redirects that can be chained together, which places an upper bound on the number of bits that can be set, even if the latency was judged to be acceptable.

This resolves the setting side of the super cookie equation.
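As a rough illustration of that first mitigation (a sketch under assumptions, not WebKit's actual implementation), the check boils down to comparing the host for which HSTS state would be recorded against the loaded hostname and its registrable domain:

// Naive eTLD+1 for illustration only; a real implementation consults the
// Public Suffix List (multi-label suffixes like "co.uk" need more labels).
function registrableDomain(host) {
  return host.split(".").slice(-2).join(".");
}
// Record dynamic HSTS state only for the exact hostname that was loaded,
// or for its registrable domain (eTLD+1).
function maySetHSTS(loadedHost, candidateHost) {
  return candidateHost === loadedHost ||
         candidateHost === registrableDomain(loadedHost);
}
// With a made-up tracker host:
//   maySetHSTS("bit12.bit9.tracker.example", "tracker.example")       -> true
//   maySetHSTS("bit12.bit9.tracker.example", "bit9.tracker.example")  -> false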

Mitigation 2: Ignore HSTS State for Subresource Requests to Blocked Domains

We modified WebKit so that when an insecure third-party subresource load from a domain for which we block cookies (such as an invisible tracking pixel) had been upgraded to an authenticated connection because of dynamic HSTS, we ignore the HSTS upgrade request and just use the original URL. This causes HSTS super cookies to become a bit string consisting only of zeroes.
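A simplified sketch of that decision (again illustrative, not the shipping code) might look like this; the two flags stand in for checks the engine performs elsewhere:

// originalURL is the insecure URL as requested; hstsUpgradedURL is the https
// variant the HSTS cache would normally substitute. Sketch only.
function urlForSubresource(originalURL, hstsUpgradedURL, isThirdParty, cookiesBlocked) {
  if (isThirdParty && cookiesBlocked) {
    return originalURL;      // ignore the upgrade: tracker bits read back as 0
  }
  return hstsUpgradedURL;    // honor HSTS everywhere else
}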


Telemetry gathered during internal regression testing, our public seeds, and the final public software release indicates that the two mitigations described above successfully prevented the creation and reading of HSTS super cookies while not regressing the security goals of first party content. We believe them to be consistent with best practices, and to maintain the important security protections provided by HSTS. We have shared the details of Mitigation 1 with the authors of RFC 6797, and are working to incorporate the behavior as part of the standard.

However, the internet is a wide space full of unique and amazing uses of Web Technology. If you feel that you have a legitimate case where these new rules are not working as intended, we would like to know about it. Please send feedback and questions to or @webkit on Twitter, and file any bugs that you run into on WebKit’s bug tracker.



Planet MozillaMarch Add(on)ness: uBlock (1) vs Kimetrack (4)

Decide who will be the ultimate privacy extension in today’s Add-on Madness… uBlock Origin Privacy, tracking uBlock Origin is an efficient blocker. Easy on CPU and memory. Nobody likes to … Read more

The post March Add(on)ness: uBlock (1) vs Kimetrack (4) appeared first on The Firefox Frontier.

Planet MozillaOSCAL'18, call for speakers, radio hams, hackers & sponsors reminder

The OSCAL organizers have given a reminder about their call for papers, booths and sponsors (ask questions here). The deadline is imminent but you may not be too late.

OSCAL is the Open Source Conference of Albania. OSCAL attracts visitors from far beyond Albania (OpenStreetmap). As the biggest Free Software conference in the Balkans, it draws people from many neighboring countries including Kosovo, Montenegro, Macedonia, Greece and Italy. OSCAL has a unique character unlike any other event I've visited in Europe and many international guests keep returning every year.

A bigger ham radio presence in 2018?

My ham radio / SDR demo worked there in 2017 and was very popular. This year I submitted a fresh proposal for a ham radio / SDR booth and sought out local radio hams in the region with an aim of producing an even more elaborate demo for OSCAL'18.

If you are a ham and would like to participate please get in touch using this forum topic or email me personally.

Why go?

There are many reasons to go to OSCAL:

  • We can all learn from their success with diversity. One of the finalists for Red Hat's Women in Open Source Award, Jona Azizaj, is a key part of their team: if she is announced the winner at Red Hat Summit the week before OSCAL, wouldn't you want to be in Tirana when she arrives back home for the party?
  • Warm weather to help people from northern Europe to thaw out.
  • For many young people in the region, their only opportunity to learn from people in the free software community is when we visit them. Many people from the region can't travel to major events like FOSDEM due to the ongoing outbreak of immigration bureaucracy and the travel costs. Many Balkan countries are not EU members and incomes are comparatively low.
  • Due to the low living costs in the region and the proximity to larger European countries, many companies are finding compelling opportunities to work with local developers there and OSCAL is a great place to make contacts informally.

Sponsors sought

Like many free software communities, Open Labs is a registered non-profit organization.

Anybody interested in helping can contact the team and ask them for whatever details you need. The Open Labs Manifesto expresses a strong commitment to transparency which hopefully makes it easy for other organizations to contribute and understand their impact.

Due to the low costs in Albania, even a small sponsorship or donation makes a big impact there.

If you can't make a direct payment to Open Labs, you could also potentially help them with benefits in kind or by contributing money to one of the larger organizations supporting OSCAL.

Getting there without direct service from Ryanair or Easyjet

These notes about budget airline routes might help you plan your journey. It is particularly easy to get there from major airports in Italy. If you will also have a vacation at another location in the region it may be easier and cheaper to fly to that location and then use a bus to Tirana.

Making it a vacation

For people who like to combine conferences with their vacations, the Balkans (WikiTravel) offer many opportunities, including beaches, mountains, cities and even a pyramid (in Tirana itself).

It is very easy to reach neighboring countries like Montenegro and Kosovo by coach in just 3-4 hours. For example, there is the historic city of Prizren in Kosovo and many beach resorts in Montenegro.

If you go to Kosovo, don't miss the Prishtina hackerspace.

Tirana Pyramid: a future hackerspace?

Planet MozillaTenFourFox FPR6 SPR1 coming

Stand by for FPR6 Security Parity Release 1 due to the usual turmoil following Pwn2Own, in which the mighty typically fall and this year Firefox did. We track these advisories and always plan to have a patched build of TenFourFox ready and parallel with Mozilla's official chemspill release; I have already backported the patch and tested it internally.

The bug in question would require a TenFourFox-specific exploit to be useful, but is definitely exploitable, and fortunately was easily repaired. The G5 will chug overnight and have builds tomorrow and heat the rear of the house all at the same time.

Planet MozillaAddressing GitHub Problems: "What PRs are open for this issue?"

When looking at a GitHub issue, I often need to know, “What PRs are open for this issue?” I wrote the GitHub Issue Hoister add-on to address my problem.

It hoists those “mcomella added a commit that references this issue” links to the top of an issue page to make them easier to access and see at a glance:

An example of the Issue Hoister in use
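The core of an add-on like this is a content script that finds those timeline entries and moves copies of them to the top of the page. A minimal sketch might look like the following; the selectors are made-up stand-ins for GitHub's DOM, not the real add-on source:

// Content script sketch with hypothetical selectors.
const timeline = document.querySelector(".js-discussion");
if (timeline) {
  const commitRefs = document.querySelectorAll(".js-timeline-item .commit-ref");
  const box = document.createElement("div");
  box.className = "hoisted-commit-refs";
  commitRefs.forEach((ref) => {
    box.appendChild(ref.cloneNode(true));  // copy each referenced-commit link
  });
  if (box.children.length > 0) {
    timeline.prepend(box);                 // show them at the top of the issue
  }
}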

Check out the brief tutorial for caveats and more details, or just download it off AMO. For bugs/issues, file them on GitHub.

Planet MozillaPrepare to be Creeped Out

Mozilla Fellow Hang Do Thi Duc joins us to share her Data Selfie art project. It collects the same basic info you provide to Facebook. Sharing this kind of data about yourself isn’t something we’d normally recommend. But, if you want to know what’s happening behind the scenes when you scroll through your Facebook feed, installing Data Selfie is worth considering. Use at your own risk. If you do, you might be surprised by what you see.

Hi everyone, I’m Hang,

Ever wonder what Facebook knows about you? Why did that ad for motorcycle insurance pop up when you don’t own a motorcycle? Why did that ad for foot cream pop up right after you talked about your foot itching?

I wondered. So I created something to help me find out. I call it Data Selfie. It’s an add-on–a little piece of software you download to use with your web browser–that works in both Firefox and Chrome.

How does it work? Every time you like, click, read, or post something on Facebook, Facebook knows. Even if you don’t comment or share much, Facebook learns about you as you scroll through your feed.

My add-on does something similar. It’s here to help you understand how your actions online can be tracked. It does this by collecting the same information you provide to Facebook, while still respecting your privacy.

NOTE: The add-on is available in Firefox too.

Want to see what your Data Selfie looks like? Here’s how:

  1. Go here:
  2. Download the Firefox or Chrome add-on
  3. Check out my privacy policy if you want to know more about how this works.
  4. You’ll see an eye icon in the upper right corner of your browser. Click on it.
  5. From the list, click “Your Data Selfie.”

You’ll see there’s not much to your Data Selfie yet. Just browse Facebook as you normally do. It takes about a week of regular Facebook use for your Data Selfie to gather enough information to give you a good idea of what Facebook might know about you.

Thanks! I hope you enjoy your Data Selfie.

Hang Do Thi Duc
Mozilla Fellow

PS. My Data Selfie says I’m a laid-back, liberal man who isn’t likely to have a gym membership and prefers style when buying clothes. Pretty accurate, actually.

The post Prepare to be Creeped Out appeared first on The Mozilla Blog.

Planet MozillaReps Weekly Meeting, 15 Mar 2018

Reps Weekly Meeting This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet MozillaBuilding Mixed Reality spaces for the web

Building Mixed Reality spaces for the web

One of the primary goals of our Social Mixed Reality team is to enable and accelerate access to Mixed Reality-based communication. As mentioned in our announcement blog post, we feel meeting with others around the world in Mixed Reality should be as easy as sharing a link, and creating a virtual space to spend time in should be as easy as building your first website. In this post, we wanted to share an early look at some work we are doing to help achieve the second goal, making it easy for newcomers to create compelling 3D spaces suited for meeting in Mixed Reality.

Anyone who has gone through the A-Frame tutorials and learned the basics of creating boxes, spheres, and other entities soon finds themselves wanting to build out a full 3D environment. Components such as the A-Frame environment component can be a good start to adding life to the initial black void of an empty virtual space, but that mostly takes care of ‘background’ aspects of the space such as the sky, ground surface, and far-off objects like trees and clouds.
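For reference, that kind of starting point only takes a few lines. This is a minimal sketch assuming A-Frame and the public aframe-environment-component are already loaded on the page; the same scene is more often written directly as markup:

// Build a basic scene programmatically. "forest" is one of the environment
// component's built-in presets.
const scene = document.createElement("a-scene");
const env = document.createElement("a-entity");
env.setAttribute("environment", "preset: forest");
scene.appendChild(env);
const box = document.createElement("a-box");
box.setAttribute("position", "0 1.5 -3");
box.setAttribute("color", "tomato");
scene.appendChild(box);
document.body.appendChild(scene);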

Beyond that, people quickly find themselves facing a roadblock: the kind of space they want to make is often more ambitious than what can be done with a few simple shapes, and needs to be more architectural and grounded in reality. To build such a space today requires a wide variety of knowledge and skills, from the obvious ones like modelling and texturing, to those more specific to Mixed Reality such as optimizing rendering performance and properly designing the architecture for scale and comfort in a headset.

If we want building your first space to be as easy as building your first website, there is clearly a lot of work to be done! So, what can we do to make it easier?

Modular by Design

What do Lego and IKEA have in common? Well, aside from originating from Scandinavian countries, they both make products whose designs embrace modularity to great effect. Through this modularity, just about anyone can put together a desk from IKEA or a spaceship from Lego, and a wide variety of products can be made due to the versatility of the parts. Why not apply these same ideas to building virtual spaces?

We are working on a system, all of which will be open sourced and freely available, which will allow anyone to create virtual spaces using a set of premade architectural elements that can be combined in countless ways. We’re not the first to come up with such a system; the approach has been growing in popularity and sophistication within game studios as a way to build large, continuous worlds. In our case, the pieces in our system all follow a strict set of metrics that make the construction process as simple as possible and remove the guesswork involved in assembling a scene. The result is that a person with basic knowledge can quickly put together a virtual space that feels more like a real place and less like a world made up of simple shapes. For more experienced creators, the system can be used for rapid prototyping, allowing them to realize their ideas more quickly.
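
As a purely hypothetical sketch of the idea (the module names, grid size, and asset ids below are illustrative assumptions, not the actual kit), assembling a space from premade pieces could look like snapping entities to a fixed grid:

const GRID = 1; // assume modules are authored to a 1-metre grid

function placeModule(scene, model, x, z, rotY = 0) {
  const piece = document.createElement('a-entity');
  piece.setAttribute('gltf-model', model);                      // premade architectural piece
  piece.setAttribute('position', `${x * GRID} 0 ${z * GRID}`);  // snap to a grid cell
  piece.setAttribute('rotation', `0 ${rotY} 0`);                // rotate in 90-degree steps
  scene.appendChild(piece);
  return piece;
}

const scene = document.querySelector('a-scene');
// A tiny room: one floor module surrounded by four wall modules (hypothetical asset ids).
placeModule(scene, '#floor-2x2', 0, 0);
placeModule(scene, '#wall-plain', 0, -1);
placeModule(scene, '#wall-doorway', 0, 1, 180);
placeModule(scene, '#wall-plain', -1, 0, 90);
placeModule(scene, '#wall-window', 1, 0, -90);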

The most exciting part is that, combined with our other efforts, you’ll soon be able to visit the spaces you build with this system with anyone around the world, all from within Mixed Reality, by simply sharing a link.

Building Mixed Reality spaces for the web

Optimized for Mixed Reality

Creating experiences for Mixed Reality poses a unique set of challenges, such as the need to deliver high frame rates and a comfortable, immersive experience. Things can quickly fall apart when using assets that are too demanding for mobile devices or even lower-end PC hardware. Unfortunately, many assets you might obtain from various asset stores are often not optimized or designed for Mixed Reality experiences.

Our architectural modules are being built for Mixed Reality from the start. Vertex counts, texel density, and draw calls are just a few of the metrics we use to validate performance and to ensure that these assets can be used to build virtual spaces that run well on a wide range of devices. We have designed a grid system and an approach to composability that will ensure your space not only runs well, but also has proper scale and looks good within a Mixed Reality headset.
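
As a rough sketch of how such checks might be automated (the numeric budgets here are assumptions, not published targets), an A-Frame component attached to the scene can read the underlying three.js renderer statistics and warn when a space goes over budget:

const BUDGET = { drawCalls: 100, triangles: 100000 }; // assumed mobile-VR budgets

AFRAME.registerComponent('perf-budget', {
  init: function () { this.frames = 0; },
  tick: function () {
    if (++this.frames % 300 !== 0) { return; }          // check roughly every 5 s at 60 fps
    const info = this.el.sceneEl.renderer.info.render;  // three.js per-frame render stats
    if (info.calls > BUDGET.drawCalls || info.triangles > BUDGET.triangles) {
      console.warn(`Over budget: ${info.calls} draw calls, ${info.triangles} triangles`);
    }
  }
});
// Usage: attach to the scene, e.g. <a-scene perf-budget> ... </a-scene>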

Our team’s mission is to enable access to Mixed Reality for communication, and this means we are committed to cross-platform compatibility, including lower-end devices. Our performance targets are chosen so that spaces designed with our system should have minimal rendering costs for even the lowest end mobile VR devices. This is hard work, but is necessary if we expect to allow everyone in the world to connect within this new medium, not just those who have access to high-end hardware.

Ultimately, the benefit of these efforts is that you, the creator, will spend less time worrying about graphics performance and basic design needs such as proper scale and proportion. Instead, you’ll be able to focus on what’s important: creating an amazing virtual space that people will want to spend many hours in together!

Building Mixed Reality spaces for the web

This interior and the one above were made in just a few hours.


Next steps

We hope this work will help end the feeling of being overwhelmed by the ‘blank canvas’ when starting your first virtual space and instead empower you to create, iterate, and share your creations quickly while reaching as many people as possible. You can expect to see more announcements from us soon on how we’ll be releasing this and other work in trying to deliver on this promise. You can follow our progress at @mozillareality or join the conversation in the #social channel on the WebXR slack. We’ll see you there!

Planet MozillaEnter the Firefox Quantum Extensions Challenge

Firefox users love using extensions to personalize their browsing experience. Now, it’s easier than ever for developers with working knowledge of JavaScript, HTML, and CSS to create extensions for Firefox using the WebExtensions API. New and improved WebExtensions APIs land with each new Firefox release, giving developers the freedom to create new features and fine-tune their extensions.

You’re invited to use your skill, savvy, and creativity to create great new extensions for the Firefox Quantum Extensions Challenge. Between March 15 and April 15, 2018, use Firefox Developer Edition to create extensions that make full use of available WebExtensions APIs for one of the prize categories. (Legacy extensions that have been updated to WebExtensions APIs, or Chrome extensions that have been ported to Firefox on or after January 1, 2018, are also eligible for this challenge.)

A panel of judges will select three to four finalists in each category, and the community will be invited to vote for the winners. We’ll announce the winners with the release of Firefox 60 in May 2018. Winners in each category will receive an iPad Pro and promotion of their extensions to Firefox users. Runners-up will receive a $250 USD Amazon gift card.

Ready to get started? Visit the challenge site for more information (including the official rules) and download Firefox Developer Edition.

Winners will be notified by the end of April 2018 and will be announced with the release of Firefox 60 in May 2018.

Good luck!

The post Enter the Firefox Quantum Extensions Challenge appeared first on Mozilla Add-ons Blog.

Planet MozillaFirefox Quantum Extensions Challenge

Firefox users love using extensions to personalize their browsing experience. Now, it’s easier than ever for developers with working knowledge of JavaScript, HTML, and CSS to create extensions for Firefox using the WebExtensions API. New and improved WebExtensions APIs land with each new Firefox release, giving developers the freedom to create new features and fine-tune their extensions.

You’re invited to use your skill, savvy, and creativity to create great new extensions for the Firefox Quantum Extensions Challenge. Between March 15 and April 15, 2018, use Firefox Developer Edition to create extensions that make full use of available WebExtensions APIs for one of the prize categories. (Legacy extensions that have been updated to WebExtensions APIs, or Chrome extensions that have been ported to Firefox on or after January 1, 2018, are also eligible for this challenge.)

A panel of judges will select three to four finalists in each category, and the community will be invited to vote for the winners. We’ll announce the winners with the release of Firefox 60 in May 2018. Winners in each category will receive an iPad Pro and promotion of their extensions to Firefox users. Runners-up will receive a $250 USD Amazon gift card.


Best in Tab Management & Organization

Firefox users love customizing their browser tabs. Create the next generation of user-friendly extensions to style, organize, and manage tabs.

Best Dynamic Themes

With the new theme API, developers can create beautiful and responsive dynamic themes to customize Firefox’s appearance and make them interactive. We’re looking for a dynamite combination of aesthetics and utility.

Best in Games & Entertainment

Extensions aren’t just for improving productivity — they’re also great for adding whimsy and fun to your day. We’re looking for high-performing, original ideas that will bring delight to Firefox users.

New & Improved APIs

So many new WebExtensions APIs have landed in the last few Firefox releases, and Firefox 60 will add even more. Let’s start with themes.

The current Theme API supports nearly 20 different visual elements that developers can customize. In Firefox 60, the list will grow to include the following items now in development:

But remember, your goal isn’t just to come up with a nice looking set of UI elements. Wow us with an extension that uses the Theme API to dynamically modify UI elements in order to create something that is visually stunning and equally useful.
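
As a minimal sketch of that kind of dynamism (assuming an extension that declares the "theme" and "alarms" permissions in its manifest; the colors are illustrative), a background script can re-theme the browser as the day goes by:

function applyThemeForHour(hour) {
  const night = hour < 7 || hour >= 19;
  browser.theme.update({
    colors: {
      accentcolor: night ? '#0b1a33' : '#e3eefc',  // frame background
      textcolor:   night ? '#f9f9fa' : '#0c0c0d',  // frame text
      toolbar:     night ? '#1d2b45' : '#f9f9fa'
    }
  });
}

applyThemeForHour(new Date().getHours());
// Re-evaluate once an hour so the theme follows the time of day.
browser.alarms.create('retheme', { periodInMinutes: 60 });
browser.alarms.onAlarm.addListener(() => applyThemeForHour(new Date().getHours()));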

For tabs, several new APIs have been added, including:

The contextualIdentities API is not new, but it is unique to Firefox and may provide developers with some interesting tools for separating online identities. The same goes for the sidebar API, another unique feature of Firefox that allows developers to get creative with alternate user interface models.
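
For example, a minimal sketch that puts contextualIdentities and tabs together (assuming an extension with the "contextualIdentities", "cookies", and "tabs" permissions; the container name is illustrative) could open a page in a specific container:

async function openInContainer(url, containerName) {
  // Find the container ("contextual identity") with the given name, if any.
  const identities = await browser.contextualIdentities.query({ name: containerName });
  if (identities.length === 0) {
    console.warn(`No container named "${containerName}"`);
    return;
  }
  // Open the page in a tab bound to that container's cookie store.
  await browser.tabs.create({ url, cookieStoreId: identities[0].cookieStoreId });
}

openInContainer('https://example.com', 'Work');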

Get Started

Winners will be notified by the end of April 2018 and will be announced with the release of Firefox 60 in May 2018.

Good luck!

Planet MozillaMarch Add(on)ness: Tree Style Tab (1) Vs Don’t Touch My Tabs (4)

It’s a head-to-head match-up of tab customization for March Add(on)ness… Tree Style Tab opens new tabs as organized “children” of the current tab. Such “branches” … Read more

The post March Add(on)ness: Tree Style Tab (1) Vs Don’t Touch My Tabs (4) appeared first on The Firefox Frontier.

Planet MozillaPoetic License

I found this when going through old documents. It looks like I wrote it and never posted it. Perhaps I didn’t consider it finished at the time. But looking at it now, I think it’s good enough to share. It’s a redrafting of the BSD licence, in poetic form. Maybe I had plans to do other licences one day; I can’t remember.

I’ve interleaved it with the original license text so you can see how true, or otherwise, I’ve been to it. Enjoy :-)

Copyright (c) <YEAR>, <OWNER>
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions 
are met:

You may redistribute and use –
as source or binary, as you choose,
and with some changes or without –
this software; let there be no doubt.
But you must meet conditions three,
if in compliance you wish to be.

1. Redistributions of source code must retain the above copyright 
   notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright 
  notice, this list of conditions and the following disclaimer in the 
  documentation and/or other materials provided with the distribution.
3. Neither the name of the <ORGANIZATION> nor the names of its 
   contributors may be used to endorse or promote products derived 
   from this software without specific prior written permission.

The first is obvious, of course –
To keep this text within the source.
The second is for binaries
Place in the docs a copy, please.
A moral lesson from this ode –
Don’t strip the copyright on code.

The third applies when you promote:
You must not take, from us who wrote,
our names and make it seem as true
we like or love your version too.
(Unless, of course, you contact us
And get our written assensus.)


One final point to be laid out
(You must forgive my need to shout):


When all is told, we sum up thus –
Do what you like, just don’t sue us.

Planet WebKitIntelligent Tracking Prevention 1.1

In June of last year, we announced Intelligent Tracking Prevention, or ITP. ITP is a privacy feature which detects domains that have the ability to track the user cross-site and either partitions or purges the associated website data.

The biggest update to ITP so far is the introduction of the Storage Access API which provides a mechanism for embedded third-party content to get out of cookie partitioning through user interaction. In addition to the Storage Access API, ITP 1.1 includes two behavior changes described below.

Partitioned Cookies No Longer Persisted to Disk

With ITP 1.1, all partitioned cookies are treated as session cookies and are not persisted to disk.

Domains that have their cookies partitioned by ITP have a way to get access to their non-partitioned cookies through the Storage Access API. Therefore there is no longer a need to persist partitioned cookies across browsing sessions.
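
In practice, embedded third-party content calls the Storage Access API in response to a user gesture; a minimal sketch (the loginButton element is an assumed placeholder, and access may still be denied) looks like this:

loginButton.addEventListener('click', async () => {
  // Check whether this frame already has access to its unpartitioned cookies.
  const hasAccess = await document.hasStorageAccess();
  if (!hasAccess) {
    try {
      await document.requestStorageAccess(); // must be called from a user gesture
    } catch (e) {
      console.log('Storage access was denied');
      return;
    }
  }
  // From here on, requests from this frame include the non-partitioned cookies.
});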

Cookies Blocked If They Will Be Purged Anyway

ITP’s purging of cookies and other website data happens once an hour for performance reasons. In between purges, ITP 1.0 would partition cookies for domains with a pending purge to make sure there were no gaps where cross-site tracking could happen. This caused a situation where cookies were purged shortly after being created, potentially confusing servers.

With ITP 1.1, domains with a pending purge will not be able to set new cookies and their existing cookies are not sent in requests. This makes the transition from partitioned cookies to purged cookies distinct and easier to handle for developers.
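
One way a page might notice this state is to write a cookie and immediately read it back; this is an illustrative check rather than anything ITP exposes directly, and the cookie name is arbitrary:

function canSetCookies() {
  document.cookie = 'itp_probe=1; path=/; max-age=60';
  const accepted = document.cookie.split('; ').some(c => c.startsWith('itp_probe='));
  if (accepted) {
    document.cookie = 'itp_probe=; path=/; max-age=0'; // clean up the probe
  }
  return accepted;
}

if (!canSetCookies()) {
  // Fall back to in-memory state, or request storage access where appropriate.
  console.log('Cookie writes appear to be blocked for this domain.');
}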

Intelligent Tracking Prevention 1.1 Timeline


These updates to Intelligent Tracking Prevention are available in Safari 11.1 on iOS 11.3 beta and macOS High Sierra 10.13.4 beta, as well as in Safari Technology Preview. Please report bugs through, or send feedback on Twitter to the team @webkit, or our evangelist @jonathandavis. If you have technical questions about these changes, you can find me on Twitter @johnwilander.

Planet MozillaThese Weeks in Firefox: Issue 34


Friends of the Firefox team

(Give a shoutout/thanks to people for helping fix and test bugs. Introductions)

Project Updates


Activity Stream

Browser Architecture


Policy Engine

  • Marketing push for Firefox Quantum for ESR (aka Firefox 60) starting soon, which will be talking about this feature
  • YUKI “Piro” (from Tree Style Tabs) contributing to Policy Engine, which is great! Thank you!



Search and Navigation

Address Bar & Search

Sync / Firefox Accounts

Test Pilot

Web Payments

Planet MozillaThe Joy of Coding - Episode 132

The Joy of Coding - Episode 132 mconley livehacks on real Firefox bugs while thinking aloud.
