Planet Mozilla: We See You! Reaching Diverse Audiences in FOSS

<section class="section section--body">
This is the third in a series of posts reporting findings from research into the state of D&I in Mozilla’s communities. The current state of our communities is mixed when it comes to inclusivity: we can do better, and as with the others, this blog post is an effort to be transparent about what we’ve learned in working toward that goal.

This post shares findings focused on inclusive outreach, communication and engagement.

<figure class="graf graf--figure">
Photo Credit: “Eye”
</figure>


When joining Mozilla or any other open source community, we look to see our identities reflected in the values, language(s), methods of connection and behaviors of the community and project leadership. We uncovered a number of challenges and opportunities in making project communication more accessible and inclusive.

Say my name, say my name

Research showed that in communication, in project initiatives and even in advocacy messaging, we appear to tiptoe around naming diversity dynamics such as gender identity, sexual orientation, age, physical ability and neurodiversity. Some of those we interviewed shared an impression that this might be intended to avoid overstepping cultural lines or upsetting religious beliefs. It was also suggested that we lean on ‘umbrella’ identities like ‘Women’ as a catch-all for non-male, non-binary people.

“This goes into the gender thing — a lot of the time I see non-binary people get lumped in with “women” in diversity things — which is very dysphoria-inducing, as someone who was assigned female but is definitely *not*.” — community interview

Only by using inclusive language, and by identifying people the way they self-identify in project communication and invitations, are we truly welcoming diverse collaboration, connection and engagement.

The Community Participation Guidelines are a great reference for how to work with inclusive language and the full range of diversities we seek to include.

Breaking the Language Barrier

Survey research showed that only 21% of our respondents spoke English as a first language. Although this snapshot was taken with limited data sets, our interviews also generated a narrative of struggles for non-native English speakers in an English-first organization*.

Most striking was how the intersection of language and other diversity dynamics raises barriers for those already facing them: for parents or non-binary contributors who also struggle with English, participation can become an almost impossible challenge.

“My friend was a contributor until she had her baby, and since most time she had would be taken trying to translate information from English, she stopped” — community interview

* Primary project communication channels, media, print/copy resources are in English. Mozilla has a very active L10N community.


“Non-English speakers struggle to find opportunity before it expires — People want to feel part, but they feel they are late to everything that Mozilla throws. They need enough contact with those responsibilities or someone who speaks their language.” — community interview

Overall, our interviews left us with a strong sense of how difficult, frustrating and even heartbreaking the experience of speaking and listening within an English-first project can be, and that the result is reduced opportunity for innovation and for the mission overall.

As a result, it’s clear that creating strategies for speaking and listening to people in their own language is critical to obtaining global perspectives that would otherwise remain hidden.

We found early evidence of this value in this research by conducting a series of first-language interviews, which proved especially valuable for people already marginalized within their culture. This approach could align with our other recommendations for identity groups.

Exclusion by Technical Jargon

‘Technical jargon/lingo’ and overly complicated technical language were cited as a primary challenge to getting involved, and not always for the reasons you might think: the data shows that a kind of ‘technical confidence’ might be influencing that choice.

In one community survey, men reported much greater confidence in their technical ability than women, which might explain low technical participation from women; this is backed up by other research showing that women tend to apply only for jobs they feel 100% qualified for.


“I feel uncomfortable when they talk about a new technology [at community events].” “I am excited about the new technology and want to jump in, but level of talk and people can be exclusive. I end up leaving.” — community interview

“Technology focus only feels exclusive — we need to say why that technology helps people, not just that it is cool” — community interview

By curbing unnecessary technical language in project descriptions, invitations, issues and outreach, we make it possible for anyone to step into a project. This, combined with opportunities for mentorship, may have a huge impact on the diversity of technical projects. The Rust project intends to do just that, starting with this fantastic initiative.

Making Communication Accessible

<figure class="graf graf--figure">
Photo Credit: Tim Mossholder via Visualhunt
</figure>


As part of the interview process we offered text-based focus groups, and interviews in Telegram — and approximately 25% of people selected this option over video.

While we initially offered text-based interviews to overcome issues of bandwidth and connectivity, it was noticeable that many chose Telegram for other reasons. This method of communication, we found, allowed people to translate questions at their own pace, and gave those leaning towards introversion space to think and time to respond.

“More than half of the world is still without Internet, and even people who do have access may be limited by factors like high cost, unreliable connections or censorship.” — “Digital Inclusion”, The Internet Health Report

A repeated theme was that the project standard of using Vidyo was a struggle, or an outright blocker, for many trying to engage, for reasons of bandwidth and technology compatibility. Additionally, media produced from All Hands, such as plenaries and important keynotes, lacks captioning, which would benefit non-English speakers and those with hearing impairments.

Overall, connecting people to Mozilla’s mission, and to the opportunity to participate, depends largely on our ability and determination to make communication accessible and inclusive. Recommendations will be formed based on these findings, and we welcome your feedback and ideas for standards & best practices that can help us get there.

</section> <section class="section section--body">

Our next post in this series ‘Frameworks for Incentive & Consequence in FOSS’, will be published on July 28th. Until then, check out the Internet Health Report on digital inclusion.

Cross-posted to Medium.



Planet WebKit: Update on Web Cryptography

Cryptography is the cornerstone of information security, including various aspects such as data confidentiality, data integrity, authentication, and non-repudiation. These provide support for the fundamental technologies of today’s Internet like HTTPS, DNSSEC, and VPN. The WebCrypto API was created to bring these important high-level cryptography capabilities to the web. This API provides a set of JavaScript functions for manipulating low-level cryptographic operations, such as hashing, signature generation and verification, encryption and decryption, and shared secret derivation. In addition, it supports generation and management of corresponding key materials. Combining the complete support of various cryptographic operations with a wide range of algorithms, the WebCrypto API is able to assist web authors in tackling diverse security requirements.

This blog post first discusses the advantages of implementing web cryptography through native APIs, then gives an overview of the WebCrypto API itself. Next, it presents some differences between the updated SubtleCrypto interface and the older webkit-prefixed interface. Some newly-added algorithms are discussed, and finally we demonstrate how to smoothly transition from the webkit-prefixed API to the new, standards-compliant API.

Native or Not Native?

Long before the WebCrypto API was standardized, several JavaScript cryptography libraries were created and have successfully served the open web since then. So why bother implementing a web-facing cryptography library built on native APIs? There are several reasons, one of the more important being performance. Numbers tell the truth. We conducted several performance tests to compare our updated WebCrypto API with some famous pure JavaScript implementations.

The latest SJCL (1.0.7), asmcrypto.js, and CryptoJS (3.1) were selected for the comparison. The test suite contains:

  1. AES-GCM: Test encryption/decryption of a 4MB file, repeated a number of times, recording the average speed. It uses a 256-bit AES key.
  2. SHA-2: Hash a 512KB file with SHA-512, repeated a number of times, recording the average speed.
  3. RSA: Test RSA-PSS signing and verification of a 512KB file, repeated a number of times, recording the average speed. It uses a 2048-bit key pair and SHA-512 for hashing.

The content under test was carefully selected to reflect the most frequently used day-to-day cryptography operations and paired with appropriate algorithms. The test platform was a MacBook Pro (MacBookPro11,5) with a 2.8 GHz Intel Core i7 running MacOS 10.13 Beta (17A306f) and Safari Technology Preview 35. Some of the pure JavaScript implementations do not support all of the test content, so the corresponding results were omitted.

Here are the test results.

[Charts: AES-GCM encryption/decryption, SHA-2, and RSA performance results]

As you can see, the difference in performance is staggering. This was a surprising result, since most modern JavaScript engines are very efficient. Working with our JavaScriptCore team, we learned that the cause is that most of these pure JavaScript implementations are not actively maintained. Few of them take full advantage of our fast JavaScriptCore engine or modern JavaScript coding practices. Otherwise, the gaps might not be so large.

Besides superior performance, the WebCrypto API also benefits from a better security model. For example, when developing with pure JavaScript crypto libraries, secret or private keys are often stored in the global JavaScript execution context. This is extremely vulnerable, as the keys are exposed to any JavaScript resources being loaded, allowing XSS attackers to steal them. The WebCrypto API instead protects secret or private keys by storing them completely outside of the JavaScript execution context. This limits the risk of a private key being exfiltrated and reduces the window of compromise if an attacker manages to execute JavaScript in the victim’s browser. What’s more, our WebCrypto implementation on macOS/iOS is based on the CommonCrypto routines, which are highly tuned for our hardware platforms, and are regularly audited and reviewed for security and correctness. The WebCrypto API is therefore the best way to ensure users enjoy the highest security protection.

Overview of WebCrypto API

The WebCrypto API starts with the crypto global object:

    crypto: {
        subtle: SubtleCrypto,
        ArrayBufferView getRandomValues(ArrayBufferView array)
    }

Inside, it owns a subtle object that is a singleton of the SubtleCrypto interface. The interface is named subtle because it warns developers that many of the crypto algorithms have sophisticated usage requirements that must be strictly followed to get the expected algorithmic security guarantees. The subtle object is the main entry point for interacting with underlying crypto primitives. The crypto global object also has the function getRandomValues, which provides a cryptographically strong random number generator (RNG). The WebKit RNG (macOS/iOS) is based on AES-CTR.

The subtle object is composed of multiple methods to serve the needs of low-level cryptographic operations:

    Promise<ArrayBuffer> encrypt(AlgorithmIdentifier algorithm, CryptoKey key, BufferSource data);
    Promise<ArrayBuffer> decrypt(AlgorithmIdentifier algorithm, CryptoKey key, BufferSource data);
    Promise<ArrayBuffer> sign(AlgorithmIdentifier algorithm, CryptoKey key, BufferSource data);
    Promise<boolean> verify(AlgorithmIdentifier algorithm, CryptoKey key, BufferSource signature, BufferSource data);
    Promise<ArrayBuffer> digest(AlgorithmIdentifier algorithm, BufferSource data);
    Promise<CryptoKey or CryptoKeyPair> generateKey(AlgorithmIdentifier algorithm, boolean extractable, sequence<KeyUsage> keyUsages );
    Promise<CryptoKey> deriveKey(AlgorithmIdentifier algorithm, CryptoKey baseKey, AlgorithmIdentifier derivedKeyType, boolean extractable, sequence<KeyUsage> keyUsages );
    Promise<ArrayBuffer> deriveBits(AlgorithmIdentifier algorithm, CryptoKey baseKey, unsigned long length);
    Promise<CryptoKey> importKey(KeyFormat format, (BufferSource or JsonWebKey) keyData, AlgorithmIdentifier algorithm, boolean extractable, sequence<KeyUsage> keyUsages );
    Promise<ArrayBuffer> exportKey(KeyFormat format, CryptoKey key);
    Promise<ArrayBuffer> wrapKey(KeyFormat format, CryptoKey key, CryptoKey wrappingKey, AlgorithmIdentifier wrapAlgorithm);
    Promise<CryptoKey> unwrapKey(KeyFormat format, BufferSource wrappedKey, CryptoKey unwrappingKey, AlgorithmIdentifier unwrapAlgorithm, AlgorithmIdentifier unwrappedKeyAlgorithm, boolean extractable, sequence<KeyUsage> keyUsages );

As the names of these methods imply, the WebCrypto API supports hashing, signature generation and verification, encryption and decryption, shared secret derivation, and corresponding key materials management. Let’s look closer at one of those methods:

Promise<ArrayBuffer> encrypt(AlgorithmIdentifier algorithm,
                             CryptoKey key,
                             BufferSource data)

All of the functions return a Promise, and most of them accept an AlgorithmIdentifier parameter. AlgorithmIdentifier can be either a string that specifies an algorithm, or a dictionary that contains all the inputs to a specific operation. For example, in order to do an AES-CBC encryption, one has to supply the above encrypt method with:

var aesCbcParams = {name: "aes-cbc", iv: asciiToUint8Array("jnOw99oOZFLIEPMr")}

CryptoKey is an abstraction of keying materials in WebCrypto API. Here is an illustration:

    type: "secret",
    extractable: true,
    algorithm: { name: "AES-CBC", length: 128 },
    usages: ["decrypt", "encrypt"]

This code tells us that this key is an extractable (to JavaScript execution context) AES-CBC “secret” (symmetric) key with a length of 128 bits that can be used for both encryption and decryption. The algorithm object is a dictionary that characterizes different keying materials, while all the other slots are generic. Bear in mind that CryptoKey does not expose the underlying key data directly to web pages. This design of WebCrypto keeps the secret and private key data safely within the browser agent, while allowing web authors to still enjoy the flexibility of working with concrete keys.

Changes to WebKitSubtleCrypto

Those of you that have never heard of WebKitSubtleCrypto may skip this section and use SubtleCrypto exclusively. This section is aimed at providing compelling reasons for current WebKitSubtleCrypto users to switch to our new standards-compliant SubtleCrypto.

1. Standards-compliant implementation

SubtleCrypto is a standards-compliant implementation of the current specification, and is completely independent from WebKitSubtleCrypto. Here is an example code snippet that demonstrates the differences between the two APIs for importing a JsonWebKey (JWK) format key:

var jwkKey = {
    "kty": "oct",
    "alg": "A128CBC",
    "use": "enc",
    "ext": true,
    "k": "YWJjZGVmZ2gxMjM0NTY3OA"
};

// WebKitSubtleCrypto:
// asciiToUint8Array() takes a string and converts it to an Uint8Array object.
var jwkKeyAsArrayBuffer = asciiToUint8Array(JSON.stringify(jwkKey));
crypto.webkitSubtle.importKey("jwk", jwkKeyAsArrayBuffer, null, false, ["encrypt"]).then(function(key) {
    console.log("An AES-CBC key is imported via JWK format.");
});

// SubtleCrypto:
crypto.subtle.importKey("jwk", jwkKey, "aes-cbc", false, ["encrypt"]).then(function(key) {
    console.log("An AES-CBC key is imported via JWK format.");
});

With the new interface, one no longer has to convert the JSON key to a Uint8Array. The SubtleCrypto interface is indeed significantly more standards-compliant than our old WebKitSubtleCrypto implementation. Here are the results of running the W3C WebCrypto API tests:

W3C WebCrypto TestSuite Result Chart
This test suite is an improved one based on the most updated web-platform-tests GitHub repository. Pull requests are made for all improvements: #6100, #6101, and #6102.

The new implementation’s coverage is around 95%, which is 48X higher than our webkit-prefixed one! The concrete numbers for all selected parties are: 999 for prefixed WebKit, 46653 for Safari 11, 45709 for Chrome 59, and 18636 for Firefox 54.

2. DER encoding support for importing and exporting asymmetric keys

The WebCrypto API specification supports DER encoding of public keys as SPKI, and of private keys as PKCS8. Prior to this, WebKitSubtleCrypto only supported the JSON-based JWK format for RSA keys. That format is convenient when keys are used on the web because of its structure and human readability. However, when public keys are exchanged between servers and web browsers, they are usually embedded in certificates in a binary format. Even though some JavaScript frameworks have been written to read the binary format of a certificate and to extract its public key, few of them convert a binary public key into its JWK equivalent. This is why support for SPKI and PKCS8 is useful. Here are code snippets that demonstrate what can be done with the SubtleCrypto API:

// Import:
// Generated from OpenSSL
// Base64URL.parse() takes a Base64 encoded string and converts it to an Uint8Array object.
var spkiKey = Base64URL.parse("MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwCjRCtFwvSNYMZ07u5SxARxglJl75T7bUZXFsDVxHkMhpNC2RaN4jWE5bwYUDMeD2fVmxhpaUQn/6AbFLh6gHxtwrCfc7rIo/SfDdGd3GkRlXK5xXwGuM6MvP9nuZHaarIyArRFh2U2UZxFlVsKI0pSHo6n58W1fPZ1syOoVEZ/WYE6gLhMMwfpeAm97mro7mekRdMULOV/mR5Ul3CHm9Zt93Dc8GpnPA8bhLiB0VNyGTEMa06nJul4gj1sjxLDoUvZY2EWq7oUUnfLBUYMfiqK0kQcW94wvBrIq2DQUApLyTTbaAOY46TLwX6c8LtubJriYKTC5a9Bb0/7ovTWB0wIDAQAB");
crypto.subtle.importKey("spki", spkiKey, {name: "RSA-OAEP", hash: "sha-256"}, true, ["encrypt"]).then(function(key) {
    console.log("A RSA-OAEP key is imported via SPKI format.");
});

// Export:
var rsaKeyGenParams = {
    name: "RSA-OAEP",
    modulusLength: 2048,
    publicExponent: new Uint8Array([0x01, 0x00, 0x01]),  // Equivalent to 65537
    hash: "sha-256"
};
crypto.subtle.generateKey(rsaKeyGenParams, true, ["decrypt", "encrypt"]).then(function(keyPair) {
    crypto.subtle.exportKey("spki", keyPair.publicKey).then(function(binary) {
        console.log("A RSA-OAEP key is exported via SPKI format.");
    });
});

A live example from a third party to generate public key certificates can be found here.

3. Asynchronously execute time-consuming SubtleCrypto methods

In the previous WebKitSubtleCrypto implementation, only generateKey for RSA executes asynchronously, while all the other operations are synchronous. Even though synchronous operation works well for methods that finish quickly, most crypto methods are time-consuming. Consequently, all time-consuming methods in the new SubtleCrypto implementation execute asynchronously:

[Table: synchronous vs. asynchronous execution for each SubtleCrypto method: encrypt, decrypt, sign, verify, digest, generateKey*, deriveKey, deriveBits, importKey, exportKey, wrapKey*, unwrapKey*]

Note that only RSA key pair generation is asynchronous, while EC key pair and symmetric key generation are synchronous. Also notice that AES-KW is the only exception, where wrapKey/unwrapKey operations are still synchronous. Normally a key is only a few hundred bytes, so it is less time-consuming to encrypt/decrypt such a small amount of data. AES-KW is the only algorithm that directly supports wrapKey/unwrapKey operations, while the others are bridged to encrypt/decrypt operations; hence, it is the only algorithm that executes wrapKey/unwrapKey synchronously. Web developers may treat every SubtleCrypto function the same as any other function that returns a promise.

4. Web worker support

Besides making most of the APIs asynchronous, we also support web workers to allow another model of asynchronous execution. Developers can choose whichever best suits their needs. Combining these two models, developers can now integrate cryptographic primitives into their websites without blocking any UI activities. The SubtleCrypto object in web workers uses the same semantics as the one in the Window object. Here is some example code that uses a web worker to encrypt text:

// In Window.
var rawKey = asciiToUint8Array("16 bytes of key!");
crypto.subtle.importKey("raw", rawKey, {name: "aes-cbc", length: 128}, true, ["encrypt", "decrypt"]).then(function(localKey) {
    var worker = new Worker("crypto-worker.js");
    worker.onmessage = function(evt) {
        console.log("Received encrypted data.");
    };
    // Hand the imported key over to the worker.
    worker.postMessage(localKey);
});

// In crypto-worker.js.
var plainText = asciiToUint8Array("Hello, World!");
var aesCbcParams = {
    name: "aes-cbc",
    iv: asciiToUint8Array("jnOw99oOZFLIEPMr"),
};
onmessage = function(evt)
{
    // evt.data is the CryptoKey sent from the Window.
    crypto.subtle.encrypt(aesCbcParams, evt.data, plainText).then(function(cipherText) {
        postMessage(cipherText);
    });
};

A live example is here to demonstrate how asynchronous execution could help to make a more responsive website.

In addition to the four major areas of improvement above, some minor changes that are worth mentioning include:

  • CryptoKey interface enhancement includes renaming from Key to CryptoKey, making algorithm and usages slots cacheable, and exposing it to web workers.
  • HmacKeyParams.length is now bits instead of bytes.
  • RSA-OAEP can now import and export keys with SHA-256.
  • CryptoKeyPair is now a dictionary type.

Newly added cryptographic algorithms

Together with the new SubtleCrypto interface, this update also adds support for a number of cryptographic algorithms:

  1. AES-CFB: CFB stands for cipher feedback. Unlike CBC, CFB does not require the plain text to be padded to the block size of the cipher.
  2. AES-CTR: CTR stands for counter mode. CTR is best known for its parallelizability in both encryption and decryption.
  3. AES-GCM: GCM stands for Galois/Counter Mode. GCM is an authenticated encryption algorithm designed to provide both data authenticity (integrity) and confidentiality.
  4. ECDH: ECDH stands for Elliptic Curve Diffie–Hellman. Elliptic curve cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. ECC requires smaller keys than RSA to provide equivalent security. ECDH is one among many ECC schemes. It allows two parties, each owning an ECC key pair, to establish a shared secret over an insecure channel.
  5. ECDSA: ECDSA stands for Elliptic Curve Digital Signature Algorithm. It is another ECC scheme.
  6. HKDF: HKDF stands for HMAC-based Key Derivation Function. It transforms a secret into keying material, allowing additional non-secret inputs to be combined when needed.
  7. PBKDF2: PBKDF2 stands for Password-Based Key Derivation Function 2. It takes a password or a passphrase along with a salt value to derive a cryptographic symmetric key.
  8. RSA-PSS: PSS stands for Probabilistic Signature Scheme. It is an improved digital signature algorithm for RSA.

This set of new algorithms not only adds new functionality, e.g. key derivation functions, but also offers developers higher efficiency and better security as replacements for existing algorithms with the same functionality. To demonstrate the benefits, sample code snippets written with selected new algorithms are presented in the following sections. These examples are not written with best practices in mind and are for demonstration only.

Example 1: AES-GCM

Previously, AES-CBC was the only available block cipher for encryption/decryption. Even though it does a great job of protecting data confidentiality, it doesn’t protect the authenticity (integrity) of the produced cipher text. Hence, it is often bundled with HMAC-SHA256 to detect silent corruption of the cipher text. Here is the corresponding code snippet:

// Assume aesKey and hmacKey are imported beforehand with the same raw key.
var plainText = asciiToUint8Array("Hello, World!");
var aesCbcParams = {
    name: "aes-cbc",
    iv: asciiToUint8Array("jnOw99oOZFLIEPMr"),
};

// Encryption:
// First encrypt the plain text with AES-CBC.
crypto.subtle.encrypt(aesCbcParams, aesKey, plainText).then(function(result) {
    console.log("Plain text is encrypted.");
    cipherText = result;

    // Then sign the cipher text with HMAC.
    return crypto.subtle.sign("hmac", hmacKey, cipherText);
}).then(function(result) {
    console.log("Cipher text is signed.");
    signature = result;

    // Finally produce the final result by concatenating cipher text and signature.
    finalResult = new Uint8Array(cipherText.byteLength + signature.byteLength);
    finalResult.set(new Uint8Array(cipherText));
    finalResult.set(new Uint8Array(signature), cipherText.byteLength);
    console.log("Final result is produced.");
});

// Decryption:
// First decode the final result from the encryption step.
var position = finalResult.length - 32; // SHA-256 length
signature = finalResult.slice(position);
cipherText = finalResult.slice(0, position);

// Then verify the cipher text.
crypto.subtle.verify("hmac", hmacKey, signature, cipherText).then(function(result) {
    if (result) {
        console.log("Cipher text is verified.");

        // Finally decrypt the cipher text.
        return crypto.subtle.decrypt(aesCbcParams, aesKey, cipherText);
    } else
        return Promise.reject();
}).then(function(result) {
    console.log("Cipher text is decrypted.");
    decryptedText = result;
}, function() {
    // Error handling codes ...
});

So far, the code is a bit complex with AES-CBC because of the extra overhead of HMAC. However, it is much simpler to achieve the same authenticated encryption effect with AES-GCM, as it bundles authentication and encryption together in one single step. Here is the corresponding code snippet:

// Assume aesKey is imported/generated beforehand, and the same plain text is used.
var aesGcmParams = {
    name: "aes-gcm",
    iv: asciiToUint8Array("jnOw99oOZFLIEPMr"),
};

// Encryption:
crypto.subtle.encrypt(aesGcmParams, aesKey, plainText).then(function(result) {
    console.log("Plain text is encrypted.");
    cipherText = result; // It contains both the cipher text and the authentication data.
});

// Decryption:
crypto.subtle.decrypt(aesGcmParams, aesKey, cipherText).then(function(result) {
    console.log("Cipher text is decrypted.");
    decryptedText = result;
}, function(error) {
    // If any violation of the cipher text is detected, the operation will be rejected.
    // Error handling codes ...
});

It is just that simple to use AES-GCM. This simplicity will definitely improve developers’ efficiency. A live example can also be found here to demonstrate how AES-GCM can detect silent corruption when decrypting corrupted ciphers.

Example 2: ECDH(E)

Block ciphers alone are not sufficient to protect data confidentiality, because the secret (symmetric) keys need to be shared securely as well. Before this change, only RSA encryption was available for this task: encrypting the shared secret keys and exchanging the ciphers to prevent MITM attacks. This method is not entirely secure, as perfect forward secrecy (PFS) is difficult to guarantee. PFS requires session keys, the RSA key pair in this case, to be destroyed once a session is completed, i.e. after a secret key is successfully shared, so that the shared secret key can never be recovered even if MITM attackers record the exchanged cipher and gain access to the recipient in the future. RSA key pairs are expensive to generate, and therefore maintaining PFS is a real challenge for RSA secret key exchange.

However, maintaining PFS is a piece of cake for ECDH, simply because EC key pairs are easy to generate. On average, it takes about 170 ms to generate an RSA-2048 key pair in the test environment described in the first section. By contrast, it takes only about 2 ms to generate a P-256 EC key pair, which provides security comparable to an RSA-3072 alternative. ECDH works as follows: the two parties exchange their public keys, and each computes a point multiplication using the acquired public key and its own private key; the result is the shared secret. ECDH with PFS is referred to as Ephemeral ECDH (ECDHE); ephemeral merely means that the session keys are transient in this protocol. Since the EC key pairs involved in ECDH are transient, they cannot be used to confirm the identities of the two parties. Hence, other permanent asymmetric key pairs are needed for authentication. In general, RSA is used, as it is widely supported by common public key infrastructures (PKI). The following code snippet demonstrates how ECDHE works:

// Assuming Bob and Alice are the two parties. Here we only show codes for Bob's.
// Alice's should be similar.
// Also assumes that permanent RSA keys are obtained before, i.e. bobRsaPrivateKey and aliceRsaPublicKey.
// Prepare to send the hello message which includes Bob's public EC key and its signature to Alice:
// Step 1: Generate a transient EC key pair.
crypto.subtle.generateKey({ name: "ECDH", namedCurve: "P-256" }, extractable, ["deriveKey"]).then(function(result) {
    console.log("EC key pair is generated.");
    bobEcKeyPair = result;

    // Step 2: Sign the EC public key for authentication.
    return crypto.subtle.exportKey("raw", bobEcKeyPair.publicKey);
}).then(function(result) {
    console.log("EC public key is exported.");
    rawEcPublicKey = result;

    return crypto.subtle.sign({ name: "RSA-PSS", saltLength: 16 }, bobRsaPrivateKey, rawEcPublicKey);
}).then(function(result) {
    console.log("Raw EC public key is signed.");
    signature = result;

    // Step 3: Exchange the EC public key together with the signature. We simplify the final result as
    // a concatenation of the raw format EC public key and its signature.
    finalResult = new Uint8Array(rawEcPublicKey.byteLength + signature.byteLength);
    finalResult.set(new Uint8Array(rawEcPublicKey));
    finalResult.set(new Uint8Array(signature), rawEcPublicKey.byteLength);
    console.log("Final result is produced.");

    // Send the message to Alice.
    // ...

// After receiving Alice's hello message:
// Step 1: Decode the counterpart from Alice.
var position = finalResult.length - 256; // RSA-2048
signature = finalResult.slice(position);
rawEcPublicKey = finalResult.slice(0, position);

// Step 2: Verify Alice's signature and her EC public key.
crypto.subtle.verify({ name: "RSA-PSS", saltLength: 16 }, aliceRsaPublicKey, signature, rawEcPublicKey).then(function(result) {
    if (result) {
        console.log("Alice's public key is verified.");

        return crypto.subtle.importKey("raw", rawEcPublicKey, { name: "ECDH", namedCurve: "P-256" }, extractable, [ ]);
    } else
        return Promise.reject();
}).then(function(result) {
    console.log("Alice's public key is imported.");
    aliceEcPublicKey = result;

    // Step 3: Compute the shared AES-GCM secret key.
    return crypto.subtle.deriveKey({ name: "ECDH", public: aliceEcPublicKey }, bobEcKeyPair.privateKey, { name: "aes-gcm", length: 128 }, extractable, ['decrypt', 'encrypt']);
}).then(function(result) {
    console.log("Shared AES secret key is computed.");
    aesKey = result;


    // Step 4: Delete the transient EC key pair.
    bobEcKeyPair = null;
    console.log("EC key pair is deleted.");
});

In the above example, we omit how information such as public keys and their corresponding parameters is exchanged, in order to focus on the parts where the WebCrypto API is involved. The ease of implementing ECDHE should improve the security of secret key exchanges in practice. A live example illustrating the differences between RSA secret key exchange and ECDH is included here.

Example 3: PBKDF2

The ability to derive a cryptographically strong secret key from an existing secret such as a password is new. PBKDF2 is one of the newly added algorithms that can serve this purpose. The secret key derived from PBKDF2 can not only be used in subsequent cryptographic operations; it is also itself a strong password hash, given that it is salted. The following code snippet demonstrates how to derive a strong password hash from a simple password:

var password = asciiToUint8Array("123456789");
var salt = asciiToUint8Array("jnOw99oOZFLIEPMr");

crypto.subtle.importKey("raw", password, "PBKDF2", false, ["deriveBits"]).then(function(baseKey) {
    return crypto.subtle.deriveBits({name: "PBKDF2", salt: salt, iterations: 100000, hash: "SHA-256"}, baseKey, 128);
}).then(function(result) {
    console.log("Hash is derived!");
    derivedHash = result;
});

A live example can be found here.
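The result of deriveBits() is an ArrayBuffer. For storing or comparing the derived hash, it can be encoded as a hex string with a small helper (a sketch; bufferToHex is our own name, not part of the WebCrypto API):

```javascript
// Encode an ArrayBuffer as a lowercase hex string, e.g. for
// storing the derived password hash.
function bufferToHex(buffer) {
    return Array.prototype.map.call(new Uint8Array(buffer), function(byte) {
        return ("0" + byte.toString(16)).slice(-2);
    }).join("");
}

// A known 4-byte buffer encodes as expected:
var sample = new Uint8Array([0xde, 0xad, 0xbe, 0xef]);
console.log(bufferToHex(sample.buffer)); // "deadbeef"
```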

The above examples show just a fraction of the WebCrypto API's capabilities. Here is a table listing all algorithms that WebKit currently supports, and the corresponding permitted operations of each algorithm.

[Table: supported algorithms and their permitted operations — encrypt, decrypt, sign, verify, digest, generateKey, deriveKey, deriveBits, importKey**, exportKey**, wrapKey, unwrapKey]
* WebKit doesn’t support P-521 yet, see bug 169231.
** WebKit doesn’t check or produce any hash information from or to DER key data, see bug 165436, and bug 165437.
*** RSAES-PKCS1-v1_5 and SHA-1 should be avoided for security reasons.

Transition to the New SubtleCrypto Interface

This section covers some common mistakes that web developers have made when trying to maintain compatibility with both WebKitSubtleCrypto and SubtleCrypto, and presents recommended fixes for those mistakes. Finally, we summarize those fixes into a de facto rule for maintaining compatibility.

Example 1:

// Bad code:
var subtleObject = null;
if ("subtle" in self.crypto)
    subtleObject = self.crypto.subtle;
if ("webkitSubtle" in self.crypto)
    subtleObject = self.crypto.webkitSubtle;

This example wrongly prioritizes window.crypto.webkitSubtle over window.crypto.subtle. Because the webkitSubtle check comes last, it overwrites subtleObject even when window.crypto.subtle actually exists. A quick fix is to reverse the order of the checks so that window.crypto.subtle is prioritized over window.crypto.webkitSubtle.

// Fix:
var subtleObject = null;
if ("webkitSubtle" in self.crypto)
    subtleObject = self.crypto.webkitSubtle;
if ("subtle" in self.crypto)
    subtleObject = self.crypto.subtle;

Example 2:

// Bad code:
(window.agcrypto = window.crypto) && !window.crypto.subtle && window.crypto.webkitSubtle && (console.log("Using crypto.webkitSubtle"), window.agcrypto.subtle = window.crypto.webkitSubtle);
var h = window.crypto.webkitSubtle ? a.utils.json2ab(c.jwkKey) : c.jwkKey;
agcrypto.subtle.importKey("jwk", h, g, !0, ["encrypt"]).then(function(a) {

This example incorrectly pairs window.agcrypto with the wrong form of jwkKey. The first line prioritizes window.crypto.subtle over window.crypto.webkitSubtle, which is correct. However, the second line chooses the key format based on the presence of window.crypto.webkitSubtle, which prioritizes it over window.crypto.subtle again.

// Fix:
(window.agcrypto = window.crypto) && !window.crypto.subtle && window.crypto.webkitSubtle && (console.log("Using crypto.webkitSubtle"), window.agcrypto.subtle = window.crypto.webkitSubtle);
var h = window.crypto.subtle ? c.jwkKey : a.utils.json2ab(c.jwkKey);
agcrypto.subtle.importKey("jwk", h, g, !0, ["encrypt"]).then(function(a) {

A deeper analysis of these examples reveals they both assume window.crypto.subtle and window.crypto.webkitSubtle cannot coexist and therefore wrongly prioritize one over the other. In summary, developers should be aware of the coexistence of these two interfaces and should always prioritize window.crypto.subtle over window.crypto.webkitSubtle.
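Assuming the rule above, the selection logic can be reduced to a tiny helper that always prefers the standard interface (a sketch; pickSubtle is a name of our choosing):

```javascript
// Always prefer the standard SubtleCrypto interface; fall back to
// the prefixed legacy object only when the standard one is absent.
function pickSubtle(cryptoObject) {
    return cryptoObject.subtle || cryptoObject.webkitSubtle || null;
}

// In page code:
// var subtleObject = pickSubtle(self.crypto);
```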


In this blog post, we reviewed WebKit’s update to the WebCrypto API implementation which is available on macOS, iOS, and GTK+. We hope you enjoy it. You can try out all of these improvements in the latest Safari Technology Preview. Let us know how they work for you by sending feedback on Twitter (@webkit, @alanwaketan, @jonathandavis) or by filing a bug.

Planet MozillaFirefox’s Accessibility Preferences

If you use Firefox Nightly, you may notice that there is no longer an Accessibility section in the preferences screen; this change will arrive in Firefox 56 as part of a preferences reorganization. This is good news!

Screenshot of the new "Browsing" section, which includes scrolling options as well as search while you type and cursor keys navigation.

Cursor browsing and search while you type are still available under the Browsing section, as these options offer convenience for everybody, regardless of disability. Users should now be able to find an option under an appropriate feature section, or search for it in the far upper corner. This is a positive trend that I hope will continue as we imagine our users more broadly, with a diverse set of use-cases that include, but are not exclusive to, disability.

Thanks to everyone who made this happen!

Planet MozillaWebdev Beer and Tell: July 2017

Webdev Beer and Tell: July 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Planet MozillaDMD is usable again on all Tier 1 platforms

DMD is a heap profiler built into Firefox, best known for being the tool used to diagnose the sources of “heap-unclassified” memory in about:memory.

It’s been unusable on Win32 for a long time due to incredibly slow start-up times. And recently it became very slow on Mac due to a performance regression in libunwind.

Fortunately I have been able to fix this in both cases (Win32, Mac) by using FramePointerStackWalk() instead of MozStackWalk() to do the stack tracing within DMD. (The Gecko Profiler likewise uses FramePointerStackWalk() on those two platforms, and it was my recent work on the profiler that taught me that there was an alternative stack walker available.)

So DMD should be usable and effective on all Tier 1 platforms. I have tested it on Win32, Win64, Linux64 and Mac. I haven’t tested it on Linux32 or Android. Please let me know if you encounter any problems.

Planet MozillaQuantum Flow Engineering Newsletter #16

It has been almost a month and a half since the last time that I talked about our progress in fighting sync IPC issues.  So I figured it’s time to prepare another Sync IPC Analysis report.  Again, unfortunately only the latest data is available in the spreadsheet.  But here are screenshots of the C++ and JS IPC message pie charts:

As you can see, as we have made even more progress in fixing more sync IPC issues, now the document.cookie issue is even a larger relative share of the pie, at 60%.  That is followed by some JS IPC, PAPZCTreeManager::Msg_ReceiveMouseInputEvent (which is a fast sync IPC message used by the async pan zoom component which would be hard to replace), followed by more JS IPC, followed by PContent::Msg_GetBlocklistState which is recently fixed, followed by PBrowser::Msg_NotifyIMEFocus, followed by more JS IPC and CPOW overhead before we get to the longer tail.  If you look at the JS sync IPC chart, you will see that almost all the overhead there is due to add-ons.  Hopefully none of this will be an issue after Firefox 57 with the new out of process WebExtensions for Windows users.  The only message in this chart stemming from our code that shows up in the data is contextmenu.

The rate of progress here has been really great to see, and this is thanks to the hard work of many people across many different teams.  Some of these issues have required heroic efforts to fix, and it’s really great to see this much progress made in so little time.

The development of Firefox 56 is coming to a close rapidly.  Firefox 57 branches off on Aug 2, and we have about 9 weeks from now until Firefox 57 rides the trains to beta.  So far, according to our burn-down chart, we have closed around 224 [qf:p1] bugs and have 110 more yet to fix.  Fortunately Quantum Flow is not one of those projects that needs all of those bugs to be fixed, because we may not end up having enough time to fix these bugs for the final release, especially since we usually keep adding new bugs to the list in our weekly triage sessions.  Soon we will probably need to reassess the priority of some of these bugs as the eventual deadline approaches.

It is now time for me to acknowledge the great work of everyone who helped by contributing performance improvements over the past two weeks.  As usual, I hope I’m not forgetting any names!

Planet MozillaWorking Across Personality Types: The Introvert-Extrovert Survival Guide, with Jennifer Selby-Long

Working Across Personality Types: The Introvert-Extrovert Survival Guide, with Jennifer Selby-Long On July 20, Jennifer Selby Long, an expert in the ethical use of the Myers-Briggs Type Indicator® (MBTI®), will lead us in an interactive session...

Planet MozillaReps Weekly Meeting Jul. 20, 2017

Reps Weekly Meeting Jul. 20, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet MozillaThe Next Generation of Web Gaming

Over the last few years, Mozilla has worked closely with other browsers and the industry to advance the state of games on the Web. Together, we have enabled developers to deploy native code on the web, first via asm.js, and then with its successor WebAssembly. Now available in Firefox and Chrome, and also soon in Edge and WebKit, WebAssembly enables near-native performance of code in the browser, which is great for game development, and has also shown benefits for WebVR applications. WebAssembly code is able to deliver more predictable performance due to JIT compilation and garbage collection being avoided. Its wide support across all major browser engines opens up paths to near-native speed, making it possible to build high-performing plugin-free games on the web.

“In 2017 Kongregate saw a shift away from Flash with nearly 60% of new titles using HTML5,” said Emily Greer, co-founder and CEO of Kongregate.  “Developers were able to take advantage of improvements in HTML5 technologies and tools while consumers were able to enjoy games without the need for 3rd-party plugins.  As HTML5 continues to evolve it will enable developers to create even more advanced games that will benefit the millions of gamers on and the greater, still thriving, web gaming industry.”

Kongregate’s data shows that on average, about 55% of uploaded games are HTML5 games.

And we can also see that these are high-quality games, with over 60% of HTML5 titles receiving a “great” score (better than a 4.0 out of 5 rating).

In spite of this positive trend, opportunities for improvement exist. The web is an ever-evolving platform, and developers are always looking for better performance. One major request we have often heard is for multithreading support on the web. SharedArrayBuffer is a required building block for multithreading, which enables concurrently sharing memory between multiple web workers. The specification is finished, and Firefox intends to ship SharedArrayBuffer support in Firefox 55.
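To sketch what that building block looks like: a SharedArrayBuffer allocated on the main thread can be posted to a worker with worker.postMessage(sab), after which both agents view the same bytes, and the Atomics object keeps concurrent access safe (an illustrative sketch we added, not code from the original post):

```javascript
// Allocate shared memory; posting the buffer to a worker shares it
// rather than copying it, so both sides see every write.
var sab = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT * 4);
var shared = new Int32Array(sab);

// Atomics operations are safe against concurrent access from other
// threads, and their effects become visible to every sharing agent.
Atomics.add(shared, 0, 1);
console.log(Atomics.load(shared, 0)); // 1
```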

Another common request is for SIMD support. SIMD is short for Single Instruction, Multiple Data. It’s a way for a CPU to parallelize math instructions, offering significant performance improvements for math-heavy workloads such as 3D rendering and physics.
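To illustrate the idea (a plain-JavaScript sketch, not the SIMD API itself): the scalar loop below performs four additions one at a time, whereas a single 128-bit SIMD instruction could compute all four lanes in one step.

```javascript
// Four independent additions: one lane per iteration in scalar
// code, but a single instruction on 128-bit SIMD hardware.
var a = new Float32Array([1, 2, 3, 4]);
var b = new Float32Array([10, 20, 30, 40]);
var out = new Float32Array(4);
for (var i = 0; i < 4; i++) {
    out[i] = a[i] + b[i];
}
console.log(Array.from(out)); // [11, 22, 33, 44]
```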

The WebAssembly Community Group is now focused on enabling hardware parallelism with SIMD and multithreading as the next major evolutionary steps for WebAssembly. Building on the momentum of shipping the first version of WebAssembly and continued collaboration, both of these new features should be stable and ready to ship in Firefox in early 2018.

Much work has gone into optimizing runtime performance over the last few years, and with that we learned many lessons. We have collected many of these learnings in a practical blog post about porting games from native to web, and look forward to your input on other areas for improvement. As multithreading support lands in 2018, expect to see opportunities to further invest in improving memory usage.

We again wish to extend our gratitude to the game developers, publishers, engine providers, and other browsers’ engine teams who have collaborated with us over the years. We could not have done it without your help — thank you!

Planet MozillaWebAssembly for Native Games on the Web

The biggest improvement this year to web performance has been the introduction of WebAssembly. Now available in Firefox and Chrome, and coming soon in Edge and WebKit, WebAssembly enables the execution of code at a low assembly-like level in the browser.

Mozilla has worked closely with the games industry for several years to reach this stage: including milestones like the release of games built with Emscripten in 2013, the preview of Unreal Engine 4 running in Firefox (2014), bringing the Unity game engine to WebGL also in 2014, exporting an indie Unity game to WebVR in 2016, and most recently, the March release of Firefox 52 with WebAssembly.

WebAssembly builds on Mozilla’s original asm.js specification, which was created to serve as a plugin-free compilation target approach for applications and games on the web. This work has accumulated a great deal of knowledge at Mozilla specific to the process of porting games and graphics technologies. If you are an engineer working on games and this sounds interesting, read on to learn more about developing games in WebAssembly.

Where Does WebAssembly Fit In?

By now web developers have probably heard about WebAssembly’s promise of performance, but for developers who have not actually used it, let’s set some context for how it works with existing technologies and what is feasible. Lin Clark has written an excellent introduction to WebAssembly. The main point is that unlike JavaScript, which is generally written by hand, WebAssembly is a compilation target, just like native assembly. Except perhaps for small snippets of code, WebAssembly is not designed to be written by humans. Typically, you’d develop the application in a source language (e.g. C/C++) and then use a compiler (e.g. Emscripten), which transforms the source code to WebAssembly in a compilation step.

This means that existing JavaScript code is not the subject of this model. If your application is written in JavaScript, then it already runs natively in a web browser, and it is not possible to somehow transform it to WebAssembly verbatim. What can be possible in these types of applications however, is to replace certain computationally intensive parts of your JavaScript with WebAssembly modules. For example, a web application might replace its JavaScript-implemented file decompression routine or a string regex routine by a WebAssembly module that does the same job, but with better performance. As another example, web pages written in JavaScript can use the Bullet physics engine compiled to WebAssembly to provide physics simulation.

Another important property: Individual WebAssembly instructions do not interleave seamlessly in between existing lines of JavaScript code; WebAssembly applications come in modules. These modules deal with low-level memory, whereas JavaScript operates on high-level object representations. This difference in structure means that data needs to undergo a transformation step—sometimes called marshalling—to convert between the two language representations. For primitive types, such as integers and floats, this step is very fast, but for more complex data types such as dictionaries or images, this can be time consuming. Therefore, replacing parts of a JavaScript application works best when applied to subroutines with large enough granularity to warrant replacement by a full WebAssembly module, so that frequent transitions between the language barriers are avoided.
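To make the primitive case concrete, here is a tiny hand-assembled WebAssembly module exporting an add function (an illustrative sketch written for this point, not code from the article): integer arguments and results cross the JS/WebAssembly boundary directly, with no marshalling step.

```javascript
// Binary for a minimal module exporting add(a, b) -> a + b.
var bytes = new Uint8Array([
    0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
    0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
    0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
    0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
    0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
    0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0/1, i32.add, end
]);

WebAssembly.instantiate(bytes).then(function(result) {
    // Primitive i32 values need no conversion at the boundary.
    console.log(result.instance.exports.add(2, 3)); // 5
});
```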

As an example, in a 3D game written in three.js, one would not want to implement a small Matrix*Matrix multiplication algorithm alone in WebAssembly. The cost of marshalling a matrix data type into a WebAssembly module and then back would negate the speed performance that is gained in doing the operation in WebAssembly. Instead, to reach performance gains, one should look at implementing larger collections of computation in WebAssembly, such as image or file decompression.

On the other end of the spectrum are applications that are implemented as fully in WebAssembly as possible. This minimizes the need to marshal large amounts of data across the language barrier, and most of the application is able to run inside the WebAssembly module. Native 3D game engines such as Unity and Unreal Engine implement this approach, where one can deploy a whole game to run in WebAssembly in the browser. This will yield the best possible performance gain. However, WebAssembly is not a full replacement for JavaScript. Even if as much of the application as possible is implemented in WebAssembly, there are still parts that are implemented in JavaScript. WebAssembly code does not interact directly with existing browser APIs that are familiar to web developers; your program will call out from WebAssembly to JavaScript to interact with the browser. It is possible that this behavior will change in the future as WebAssembly evolves.

Producing WebAssembly

The largest audience currently served by WebAssembly are native C/C++ developers, who are often positioned to write performance sensitive code. An open source community project supported by Mozilla, Emscripten is a GCC/Clang-compatible compiler toolchain that allows building WebAssembly applications on the web. The main scope of Emscripten is support for the C/C++ language family, but because Emscripten is powered by LLVM, it has potential to allow other languages to compile as well. If your game is developed in C/C++ and it targets OpenGL ES 2 or 3, an Emscripten-based port to the web can be a viable approach.

Mozilla has benefited from games industry feedback – this has been a driving force shaping the development of asm.js and WebAssembly. As a result of this collaboration, Unity3D, Unreal Engine 4 and other game engines are already able to deploy content to WebAssembly. This support takes place largely under the hood in the engine, and the aim has been to make this as transparent as possible to the application.

Considerations For Porting Your Native Game

For the game developer audience, WebAssembly represents an addition to an already long list of supported target platforms (Windows, Mac, Android, Xbox, Playstation, …), rather than being a new original platform to which projects are developed from scratch. Because of this, we’ve placed a great deal of focus on development and feature parity with respect to other existing platforms in the development of Emscripten, asm.js, and WebAssembly. This parity continues to improve, although on some occasions the offered features differ noticeably, most often due to web security concerns.

The remainder of this article focuses on the most important items that developers should be aware of when getting started with WebAssembly. Some of these are successfully hidden under an abstraction if you’re using an existing game engine, but native developers using Emscripten should most certainly be aware of the following topics.

Execution Model Considerations

Most fundamental are the differences where code execution and memory model are concerned.

  • Asm.js and WebAssembly use the concept of a typed array (a contiguous linear memory buffer) that represents the low level memory address space for the application. Developers specify an initial size for this heap, and the size of the heap can grow as the application needs more memory.
  • Virtually all web APIs operate using events and an event queue mechanism to provide notifications, e.g. for keyboard and mouse input, file IO and network events. These events are all asynchronous and delivered to event handler functions. There are no polling type APIs for synchronously asking the “browser OS” for events, such as those that native platforms often provide.
  • Web browsers execute web pages on the main thread of the browser. This property carries over to WebAssembly modules, which are also executed on the main thread, unless one explicitly creates a Web Worker and runs the code there. On the main thread it is not allowed to block execution for long periods of time, since that would also block the processing of the browser itself. For C/C++ code, this means that the main thread cannot synchronously run its own loop, but must tick simulation and animation forward based on an event callback, so that execution periodically yields control back to the browser. User-launched pthreads will not have this restriction, and they are allowed to run their own blocking main loops.
  • At the time of writing, WebAssembly does not yet have multithreading support – this capability is currently in development.
  • The web security model can be a bit more strict compared to other platforms. In particular, browser APIs constrain applications from gaining direct access to low-level information about the system hardware, to mitigate being able to generate strong fingerprints to identify users. For example, it is not possible to query information such as the CPU model, the local IP address, amount of RAM or amount of available hard disk space. Additionally, many web features operate on web domain boundaries, and information traveling across domains is configured by cross-origin access control rules.
  • A special programming technique that web security also prevents is the dynamic generation and mutation of code on the fly. It is possible to generate WebAssembly modules in the browser, but after loading, WebAssembly modules are immutable and functions can no longer be added to it or changed.
  • When porting C/C++ code, standards-compliant code should compile easily, but native compilers are lenient about certain behaviors on x86, such as unaligned memory accesses, overflowing float->int casts, and invoking function pointers via signatures that mismatch the actual type of the function. The ubiquity of x86 has made these kinds of nonstandard code patterns somewhat common in native code, but when compiling to asm.js or WebAssembly, these types of constructs can cause issues at runtime. Refer to the Emscripten documentation for more information about what kind of code is portable.
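The main-loop restriction above can be sketched in JavaScript: instead of a blocking while (true) loop, the simulation is ticked from a callback so control returns to the browser between frames (requestAnimationFrame does the scheduling in a page; setTimeout stands in for it below so the sketch also runs outside a browser):

```javascript
// Schedule with requestAnimationFrame when available (in a page),
// otherwise fall back to a timer so the sketch runs anywhere.
var schedule = typeof requestAnimationFrame === "function"
    ? requestAnimationFrame
    : function (callback) { setTimeout(callback, 16); };

var frames = 0;
function tick() {
    frames++;            // advance simulation and render one frame here
    if (frames < 3) {    // a real game loops until quit, not to a fixed count
        schedule(tick);  // yield to the browser, then run the next frame
    }
}
schedule(tick);
```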

Another source of differences comes from the fact that code on a web page cannot directly access a native filesystem on the host computer, and so the filesystem solution that is provided looks a bit different than native. Emscripten defines a virtual filesystem space inside the web page, which backs onto the IndexedDB API for persistence across page visits. Browsers also store downloaded data in navigation caches, which sometimes is desirable but other times less so.

Developers should be mindful in particular about content delivery. In native application stores the model of upfront downloading and installing a large application is an expected standard, but on the web, this type of monolithic deployment model can be an off-putting user experience. Applications can download and cache a large asset package at first run, but that can cause a sizable first-time download impact. Therefore, launching with minimal amount of downloading, and streaming additional asset data as needed can be critical for building a web-friendly user experience.

Toolchain Considerations

The first technical challenge for developers comes from adapting the existing build systems to target the Emscripten compiler. To make this easier, the compiler (emcc & em++) is designed to operate closely as a drop-in replacement for GCC or Clang. This eases migration of existing build systems that are already aware of GCC-like toolchains. Emscripten supports the popular CMake build system configuration generator, and emulates support for GNU Autotools configure scripts.

A fact that is sometimes confused is that Emscripten is not an x86/ARM -> WebAssembly code transformation toolchain, but a cross-compiler. That is, Emscripten does not take existing native x86/ARM compiled code and transform it to run on the web; instead it compiles C/C++ source code to WebAssembly. This means that you must have all the source available (or use libraries bundled with Emscripten or ported to it). Any code that depends on platform-specific (often closed source) native components, such as Win32 and Cocoa APIs, cannot be compiled, but will need to be ported to utilize other solutions.

Performance Considerations

One of the most frequently asked questions about asm.js/WebAssembly is whether it is fast enough for a particular purpose. Curiously, developers who have not yet tried out WebAssembly are the ones who most often doubt its performance. Developers who have tried it rarely mention performance as a major issue. There are some performance caveats, however, which developers should be aware of.

  • As mentioned earlier, multithreading is not available just yet, so applications that heavily depend on threads will not have the same performance available.
  • Another feature that is not yet available in WebAssembly, but planned, is SIMD instruction set support.
  • Certain instructions can be relatively slower in WebAssembly compared to native. For example, calling virtual functions or function pointers has a higher performance footprint due to sandboxing compared to native code. Likewise, exception handling is observed to cause a bigger performance impact compared to native platforms. The performance landscape can look a bit different, so paying attention to this when profiling can be helpful.
  • Web security validation is known to impact WebGL noticeably. It is recommended that applications using WebGL are careful to optimize their WebGL API calls, especially by avoiding redundant API calls, which still pay the cost for driver security validation.
  • Last, application memory usage is a particularly critical aspect to measure, especially if targeting mobile support as well. Preloading big asset packages on first run and uncompressing large amounts of audio assets are two known sources of memory bloat that are easy to do by accident. Applications will likely need to optimize specifically for this when porting, and this is an active area of optimization in WebAssembly and Emscripten runtime as well.


WebAssembly provides support for executing low-level code on the web at high performance, similar to how web plugins used to, except that web security is enforced. For developers using some of the super-popular game engines, leveraging WebAssembly will be as easy as choosing a new export target in the project build menu, and this support is available today. For native C/C++ developers, the open source Emscripten toolchain offers a drop-in compatible way to target WebAssembly. There exists a lively community of developers around Emscripten who contribute to its development, and a mailing list for discussion that can help you getting started. Games that run on the web are accessible to everyone independent of which computation platform they are on, without compromising portability, performance, or security, or requiring up front installation steps.

WebAssembly is only one part of a larger collection of APIs that power web-based games, so navigate on to the MDN games section to see the big picture. Hop right on in, and happy Emscriptening!

Planet MozillaFirefox Focus for Android Hits One Million Downloads! Today We’re Launching Three New User-Requested Features

Since the launch of Firefox Focus for Android less than a month ago, one million users have downloaded our fast, simple privacy browser app. Thank you for all your tremendous support for our Firefox Focus for Android app. This milestone marks a huge demand from users who want to be in the driver’s seat when it comes to their personal information and web browsing habits.

When we initially launched Firefox Focus for iOS last year, we did so based on our belief that everyone has a right to protect their privacy.  We created the Firefox Focus for Android app to support all our mobile users and give them the control to manage their online browsing habits across platforms.

Within a week of the Firefox Focus for Android launch, we’ve had more than 8,000 comments, and the app is rated 4.5 stars. We’re floored by the response!

Feedback from Firefox Focus Users

“Awesome, the iconic privacy focused Firefox browser now is even more privacy and security focused.” 

“Excellent! It is indeed extremely lightweight and fast.” 

“This is the best browser to set as your “default”, hands down. Super fast and lightweight.”

 “Great for exactly what it’s built for, fast, secure, private and lightweight browsing. “

New Features

We’re always looking for ways to improve and your comments help shape our products. We huddled together to decide what features we can quickly add and we’re happy to announce the following new features less than a month since the initial launch:

  • Full Screen Videos: Your comments let us know that this was a top priority. We understand that if you’re going to watch videos on your phone, it’s only worth it if you can expand to the full size of your cellphone screen. We added support for most video sites with YouTube being the notable exception. YouTube support is dependent on a bug fix from Google and we will roll it out as soon as this is fixed.
  • Supports Downloads: We use our mobile phones for entertainment – whether it’s listening to music, playing games, reading an ebook, or doing work.  And for some, it requires downloading a file. We updated the Firefox Focus app to support files of all kind.
  • Updated Notification Actions: No longer solely for reminders to erase your history, Notifications now features a shortcut to open Firefox Focus. Finally, a quick and easy way to access private browsing.  

We’re on a mission to make sure our products meet your needs. Responding to your feedback with quick, noticeable improvements is our way of saying thanks and letting you know, “Hey, we’re listening.”

You can download the latest version of Firefox Focus on Google Play and in the App Store. Stay tuned for additional feature updates over the coming months!


The post Firefox Focus for Android Hits One Million Downloads! Today We’re Launching Three New User-Requested Features appeared first on The Mozilla Blog.

Planet MozillaFirefox for iOS Offers New and Improved Browsing Experience with Tabs, Night Mode and QR Code Reader

Here at Firefox, we’re always looking for ways for users to get the most out of their web experience. Today, we’re rolling out some improvements that will set the stage for what’s to come in the Fall with Project Quantum. Together these new features help to enhance your mobile browsing experience and make a difference in how you use Firefox for iOS.

What’s new in Firefox for iOS:

New Tab Experience

We polished our new tab experience and will be gradually rolling it out so you’ll see recently visited sites as well as highlights from previous web visits.

Night Mode

For the times when you’re in a dark room and the last thing you want to do is turn on your cellphone to check the time – we added Night Mode which dims the brightness of the screen and eases the strain on your eyes. Now, it’ll be easier to read and you won’t get caught checking your email.



QR Code Reader

Trying to limit the number of apps on your phone? We’ve eliminated the need to download a separate app for QR codes: a built-in QR code reader now lets you scan codes quickly.

Feature Recommendations

Everyone loves shortcuts, and our Feature Recommendations will offer hints and timesavers to improve your overall Firefox experience. To start, this will be available in the US and Germany.

To experience the newest features and use the latest version of Firefox for iOS, download the update and let us know what you think.

We hope you enjoy it!


The post Firefox for iOS Offers New and Improved Browsing Experience with Tabs, Night Mode and QR Code Reader appeared first on The Mozilla Blog.

Planet Mozilla: “*Utils” classes can be a code smell: an example

You might have heard that “*Utils” classes are a code smell.

Lots of people have written about that before, but I tend to find the reasoning a bit vague, and some of us work better with examples.

So here’s one I found recently while working on this bug: you can’t know what part of the Utils class is used when you require it, unless you do further investigation.

Case in point: if you place a method in VariousUtils.js and then import it later…

var { SomeFunction } = require('VariousUtils');

it’ll be very difficult to actually pinpoint when VariousUtils.SomeFunction was used in the code base. Because you could also do this:

var VariousUtils = require('VariousUtils');
var SomeFunction = VariousUtils.SomeFunction;

or this:

var SomeFunction = require('VariousUtils').SomeFunction;

or even something like…

var SomeFunction;
lazyRequire('VariousUtils').then((res) => {
  SomeFunction = res.SomeFunction;
});

Good luck trying to write a regular expression to search for all possible variations of non-evident ways to include SomeFunction in your codebase.

You want to be able to search for things easily because you might want to refactor later. Obvious requires make this (and other code manipulation tasks) easier.

My suggestion is: if you are importing just that one function, place it on its own file.

It makes things very evident:

var SomeFunction = require('SomeFunction');

And searching in files becomes very easy as well:

grep -lr "require('SomeFunction');" *

But I have many functions and it doesn’t make sense to have one function per file! I don’t want to load all of them individually when I need them!!!!111

Then find a common pattern and create a module which doesn’t have Utils in its name. Put the individual functions on a directory, and make a module that imports and exposes them.

For example, imagine an `equations` module with one file per equation type, plus an index file that imports and re-exposes them.

You would still have to require('equations').linear or some other way of just requiring `linear` if that’s what you want (so the search is “complicated” again). But at least the module is cohesive, and it’s obvious what’s on it: equations. It would not be obvious if it had been called “MathUtils” — what kind of utilities is that? formulas? functions to normalise stuff? matrix kernels? constants? Who knows!

So: steer away from “assorted bag of tricks” modules because they’ll make you (or your colleagues) waste time (“what was in that module again?”), and you’ll eventually find yourself splitting them at some point, once they grow enough to not make any sense, with lots of mental context switching required to work on them: “ah, here’s this function for formatting text… now a function to generate UUIDs… and this one for making this low level system call… and… *brainsplosion*” 😬

An example that takes this decomposition in files to the “extreme” is lodash. Then it can generate a number of different builds thanks to its extreme modularity.

Update: Another take: write code that is easy to delete. I love it!


Planet Mozilla: Announcing Rust 1.19

The Rust team is happy to announce the latest version of Rust, 1.19.0. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed, getting Rust 1.19 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.19.0 on GitHub.

What’s in 1.19.0 stable

Rust 1.19.0 has some long-awaited features, but first, a note for our Windows users. On Windows, Rust relies on link.exe for linking, which you can get via the “Microsoft Visual C++ Build Tools.” With the recent release of Visual Studio 2017, the directory structure for these tools has changed. As such, to use Rust, you had to stick with the 2015 tools or use a workaround (such as running vcvars.bat). In 1.19.0, rustc now knows how to find the 2017 tools, and so they work without a workaround.

On to new features! Rust 1.19.0 is the first release that supports unions:

union MyUnion {
    f1: u32,
    f2: f32,
}

Unions are kind of like enums, but they are “untagged”. Enums have a “tag” that stores which variant is the correct one at runtime; unions elide this tag.

Since we can interpret the data held in the union using the wrong variant and Rust can’t check this for us, that means reading or writing a union’s field is unsafe:

let mut u = MyUnion { f1: 1 };

unsafe { u.f1 = 5 };

let value = unsafe { u.f1 };

Pattern matching works too:

fn f(u: MyUnion) {
    unsafe {
        match u {
            MyUnion { f1: 10 } => { println!("ten"); }
            MyUnion { f2 } => { println!("{}", f2); }
        }
    }
}

When are unions useful? One major use-case is interoperability with C. C APIs can (and depending on the area, often do) expose unions, and so this makes writing API wrappers for those libraries significantly easier. Additionally, from its RFC:

A native union mechanism would also simplify Rust implementations of space-efficient or cache-efficient structures relying on value representation, such as machine-word-sized unions using the least-significant bits of aligned pointers to distinguish cases.

This feature has been long awaited, and there are still more improvements to come. For now, unions can only include Copy types and may not implement Drop. We expect to lift these restrictions in the future.

As a side note, have you ever wondered how new features get added to Rust? This feature was suggested by Josh Triplett, and he gave a talk at RustConf 2016 about the process of getting unions into Rust. You should check it out!

In other news, loops can now break with a value:

// old code
let x;

loop {
    x = 7;
    break;
}

// new code
let x = loop { break 7; };

Rust has traditionally positioned itself as an “expression oriented language”, that is, most things are expressions that evaluate to a value, rather than statements. loop stuck out as strange in this way, as it was previously a statement.

What about other forms of loops? It’s not yet clear. See its RFC for some discussion around the open questions here.

A smaller feature, closures that do not capture an environment can now be coerced to a function pointer:

let f: fn(i32) -> i32 = |x| x + 1;

We now produce xz compressed tarballs and prefer them by default, making the data transfer smaller and faster. gzip‘d tarballs are still produced in case you can’t use xz for some reason.

The compiler can now bootstrap on Android. We’ve long supported Android in various ways, and this continues to improve our support.

Finally, a compatibility note. Way back when we were running up to Rust 1.0, we did a huge push to verify everything that was being marked as stable and as unstable. We overlooked one thing, however: -Z flags. The -Z flag to the compiler enables unstable flags. Unlike the rest of our stability story, you could still use -Z on stable Rust. Back in April of 2016, in Rust 1.8, we made the use of -Z on stable or beta produce a warning. Over a year later, we’re fixing this hole in our stability story by disallowing -Z on stable and beta.

See the detailed release notes for more.

Library stabilizations

The largest new library feature is the eprint! and eprintln! macros. These work exactly the same as print! and println! but instead write to standard error, as opposed to standard output.

Other new features:

And some freshly-stabilized APIs:

See the detailed release notes for more.

Cargo features

Cargo mostly received small but valuable improvements in this release. The largest is possibly that Cargo no longer checks out a local working directory for the index. This should provide smaller file size for the registry and improve cloning times, especially on Windows machines.

Other improvements:

See the detailed release notes for more.

Contributors to 1.19.0

Many people came together to create Rust 1.19. We couldn’t have done it without all of you. Thanks!

Planet Mozilla: The Joy of Coding - Episode 106

The Joy of Coding - Episode 106 mconley livehacks on real Firefox bugs while thinking aloud.

Planet Mozilla: Creating a WebAssembly module instance with JavaScript

This is the 1st article in a 3-part series:

  1. Creating a WebAssembly module instance with JavaScript
  2. Memory in WebAssembly (and why it’s safer than you think)
  3. WebAssembly table imports… what are they?

WebAssembly is a new way of running code on the web. With it, you can write modules in languages like C or C++ and run them in the browser.

Currently modules can’t run on their own, though. This is expected to change as ES module support comes to browsers. Once that’s in place, WebAssembly modules will likely be loaded in the same way as other ES modules, e.g. using <script type="module">.

But for now, you need to use JavaScript to boot the WebAssembly module. This creates an instance of the module. Then your JavaScript code can call functions on that WebAssembly module instance.

For example, let’s look at how React would instantiate a WebAssembly module. (You can learn more in this video about how React could use WebAssembly.)

When the user loads the page, it would start in the same way.

The browser would download the JS file. In addition, a .wasm file would be fetched. That contains the WebAssembly code, which is binary.

Browser downloading a .js file and a .wasm file

We’ll need to load the code in these files in order to run it. First comes the .js file, which loads the JavaScript part of React. That JavaScript will then create an instance of a WebAssembly module… the reconciler.

To do that, it will call WebAssembly.instantiate.

React.js robot calling WebAssembly.instantiate

Let’s take a closer look at this.

The first thing we pass into WebAssembly.instantiate is going to be the binary code that we got in that .wasm file. That’s the module code.

So we extract the binary into a buffer, and then pass it in.

Binary code being passed in as the source parameter to WebAssembly.instantiate
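As a concrete sketch, here is roughly what that call looks like. (In a real app you would fetch the .wasm file and extract its bytes with `arrayBuffer()`; to keep this sketch self-contained and runnable, the buffer instead holds the bytes of a tiny hand-assembled module that exports an `add` function.)

```javascript
// In a real app: const bytes = await (await fetch('module.wasm')).arrayBuffer();
// Here, the binary of a minimal module exporting add(a, b) is inlined.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic number + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b // body: i32.add
]);

WebAssembly.instantiate(bytes).then(({ module, instance }) => {
  // `module` is the compiled code; `instance` is ready to call.
  console.log(instance.exports.add(2, 3)); // 5
});
```

The promise resolves to both the instance and the compiled module, which matches the flow the rest of this article walks through.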

The engine will start compiling the module code down to something that is specific to the machine that it’s running on.

But we don’t want to do this on the main thread. I’ve talked before about how the main thread is like a full stack developer because it handles JavaScript, the DOM, and layout. We don’t want to block the main thread while we compile the module. So what WebAssembly.instantiate returns is a promise.

Promise being returned as module compiles

This lets the main thread get back to its other work. The main thread knows that once the compiler is finished compiling this module, it will be notified by the promise. That promise will give it the instance.

But the compiled module is not the only thing needed to create the instance. I think of the module as kind of like an instruction book.

The instance is like a person who’s trying to make something with the instruction book. In order to make that thing, they also need raw materials. They need things that they can work with.

Instruction book next to WebAssembly robot

This is where the second parameter to WebAssembly.instantiate comes in. That is the imports object.

Arrow pointing to importObject param of WebAssembly.instantiate

I think of the imports object as a box of those raw materials, like you would get from IKEA. The instance uses these raw materials—these imports—to build a thing, as directed by the instructions. Just as an instruction manual expects a certain set of raw materials, each module expects a specific set of imports.

Imports box next to WebAssembly robot

So when you are instantiating a module, you pass it an imports object that has those imports attached to it. Each import can be one of these four kinds of imports:

  • values
  • function closures
  • memory
  • tables


Values

It can have values, which are basically global variables. The only types that WebAssembly supports right now are integers and floats, so values have to be one of those two types. That will change as more types are added in the WebAssembly spec.

Function closures

It can also have function closures. This means you can pass in JavaScript functions, which WebAssembly can then call.

This is particularly useful because in the current version of WebAssembly, you can’t call DOM methods directly. Direct DOM access is on the WebAssembly roadmap, but not part of the spec yet.

What you can do in the meantime is pass in a JavaScript function that can interact with the DOM in the way you need. Then WebAssembly can just call that JS function.
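As a sketch of the shape this takes (the `env.inc` import name and the hand-assembled module bytes are invented for this example): a tiny module that imports one JS closure and exports a function that calls it.

```javascript
// Hand-assembled module: imports a function as env.inc and exports `run`,
// which forwards its i32 argument to the imported JS function.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic number + version
  0x01, 0x06, 0x01, 0x60, 0x01, 0x7f, 0x01, 0x7f,       // type: (i32) -> i32
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76, 0x03,       // import "env" "inc"
  0x69, 0x6e, 0x63, 0x00, 0x00,
  0x03, 0x02, 0x01, 0x00,                               // one local function
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01, // export it as "run"
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x20, 0x00, 0x10, 0x00, 0x0b // body: call the import
]);

const importObject = {
  env: {
    // Any JS closure works here; it could touch the DOM on the module's behalf.
    inc: (x) => x + 1,
  },
};

WebAssembly.instantiate(bytes, importObject).then(({ instance }) => {
  console.log(instance.exports.run(41)); // 42 — the wasm code called our JS closure
});
```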


Memory

Another kind of import is the memory object. This object makes it possible for WebAssembly code to emulate manual memory management. The concept of the memory object confuses people, so I’ve gone into a little bit more depth in the next post in this series.


Tables

The final type of import is related to security as well. It’s called a table. It makes it possible for you to use something called function pointers. Again, this is kind of complicated, so I explain it in the third part of this series.

Those are the different kinds of imports that you can equip your instance with.

Different kinds of imports going into the imports box

To return the instance, the promise returned from WebAssembly.instantiate is resolved. It contains two things: the instance and, separately, the compiled module.

The nice thing about having the compiled module is that you can spin up other instances of the same module quickly. All you do is pass the module in as the source parameter. The module itself doesn’t have any state (that’s all attached to the instance). That means that instances can share the compiled module code.

Your instance is now fully equipped and ready to go. It has its instruction manual, which is the compiled code, and all of its imports. You can now call its methods.

WebAssembly robot is booted

In the next two articles, we’ll dig deeper into the memory import and the table import.

Planet Mozilla: Memory in WebAssembly (and why it’s safer than you think)

This is the 2nd article in a 3-part series:

  1. Creating a WebAssembly module instance with JavaScript
  2. Memory in WebAssembly (and why it’s safer than you think)
  3. WebAssembly table imports… what are they?

Memory in WebAssembly works a little differently than it does in JavaScript. With WebAssembly, you have direct access to the raw bytes… and that worries some people. But it’s actually safer than you might think.

What is the memory object?

When a WebAssembly module is instantiated, it needs a memory object. You can create a new WebAssembly.Memory and pass that object in. If you don’t, a memory object will be created and attached to the instance automatically.

All the JS engine will do internally is create an ArrayBuffer (which I explain in another article). The ArrayBuffer is a JavaScript object that JS has a reference to. JS allocates the memory for you. You tell it how much memory you are going to need, and it will create an ArrayBuffer of that size.

React.js requesting a new memory object and JS engine creating one

The indexes to the array can be treated as though they were memory addresses. And if you need more memory later, you can do something called growing to make the array larger.
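A minimal sketch of that, using the standard WebAssembly.Memory API (the page counts here are arbitrary):

```javascript
// Memory starts at `initial` pages; a WebAssembly page is 64 KiB.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 10 });

console.log(memory.buffer.byteLength); // 65536 (1 page)

// grow() takes a number of additional pages and returns the previous
// size in pages.
const previousPages = memory.grow(1);
console.log(previousPages);            // 1
console.log(memory.buffer.byteLength); // 131072 (2 pages)
// Note: growing detaches the old ArrayBuffer, so re-read memory.buffer
// after a grow rather than holding on to the old reference.
```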

Handling WebAssembly’s memory as an ArrayBuffer — as an object in JavaScript — does two things:

  1. makes it easy to pass values between JS and WebAssembly
  2. helps make the memory management safe

Passing values between JS and WebAssembly

Because this is just a JavaScript object, that means that JavaScript can also dig around in the bytes of this memory. So in this way, WebAssembly and JavaScript can share memory and pass values back and forth.

Instead of using a memory address, they use an array index to access each box.

For example, the WebAssembly code could put a string in memory. It would encode it into bytes…

WebAssembly robot putting string "Hello" through decoder ring

…and then put those bytes in the array.

WebAssembly robot putting bytes into memory

Then it would return the first index, which is an integer, to JavaScript. So JavaScript can pull the bytes out and use them.

WebAssembly robot returning index of first byte in string

Now, most JavaScript doesn’t know how to work directly with bytes. So you’ll need something on the JavaScript side, like you do on the WebAssembly side, that can convert from bytes into more useful values like strings.

In some browsers, you can use the TextDecoder and TextEncoder APIs. Or you can add helper functions into your .js file. For example, a tool like Emscripten can add encoding and decoding helpers.

JS engine pulling out bytes, and React.js decoding them
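A minimal sketch of this round trip, with JavaScript standing in for both sides (in a real app the WebAssembly code would do the writing and hand back the starting index and length):

```javascript
// Shared memory: one ArrayBuffer, viewed as an array of bytes.
const memory = new WebAssembly.Memory({ initial: 1 });
const view = new Uint8Array(memory.buffer);

// "WebAssembly side": encode the string into bytes starting at index 0.
const written = new TextEncoder().encode('Hello');
view.set(written, 0);

// "JS side": given the index and length, decode the bytes back out.
const text = new TextDecoder().decode(view.subarray(0, written.length));
console.log(text); // "Hello"
```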

So that’s the first benefit of WebAssembly memory just being a JS object. WebAssembly and JavaScript can pass values back and forth directly through memory.

Making memory access safer

There’s another benefit that comes from this WebAssembly memory just being a JavaScript object: safety. It makes things safer by helping to prevent browser-level memory leaks and providing memory isolation.

Memory leaks

As I mentioned in the article on memory management, when you manage your own memory you may forget to clear it out. This can cause the system to run out of memory.

If a WebAssembly module instance had direct access to memory, and if it forgot to clear out that memory before it went out of scope, then the browser could leak memory.

But because the memory object is just a JavaScript object, it itself is tracked by the garbage collector (even though its contents are not).

That means that when the WebAssembly instance that the memory object is attached to goes out of scope, this whole memory array can just be garbage collected.

Garbage collector cleaning up memory object

Memory isolation

When people hear that WebAssembly gives you direct access to memory, it can make them a little nervous. They think that a malicious WebAssembly module could go in and dig around in memory it shouldn’t be able to. But that isn’t the case.

The bounds of the ArrayBuffer provide a boundary. It’s a limit to what memory the WebAssembly module can touch directly.

Red arrows pointing to the boundaries of the memory object

It can directly touch the bytes that are inside of this array but it can’t see anything that’s outside the bounds of this array.

For example, any other JS objects that are in memory, like the window global, aren’t accessible to WebAssembly. That’s really important for security.

Whenever there’s a load or a store in WebAssembly, the engine does an array bounds check to make sure that the address is inside the WebAssembly instance’s memory.

If the code tries to access an out-of-bounds address, the engine will throw an exception. This protects the rest of the memory.

WebAssembly trying to store out of bounds and being rejected

So that’s the memory import. In the next article, we’ll look at another kind of import that makes things safer… the table import.

Planet Mozilla: WebAssembly table imports… what are they?

This is the 3rd article in a 3-part series:

  1. Creating a WebAssembly module instance with JavaScript
  2. Memory in WebAssembly (and why it’s safer than you think)
  3. WebAssembly table imports… what are they?

In the first article, I introduced the four different kinds of imports that a WebAssembly module instance can have:

  • values
  • function imports
  • memory
  • tables

That last one is probably a little unfamiliar. What is a table import and what is it used for?

Sometimes in a program you want to be able to have a variable that points to a function, like a callback. Then you can do things like pass it into another function.

Defining a callback and passing it into a function

In C, these are called function pointers. The function lives in memory. The variable, the function pointer, just points to that memory address.

Function pointer at memory address 4 points to the callback at memory address 1

And if you need to, later you could point the variable to a different function. This should be a familiar concept.

Function pointer at memory address 4 changes to point to callback2 at memory address 4

In web pages, all functions are just JavaScript objects. And because they’re JavaScript objects, they live in memory addresses that are outside of WebAssembly’s memory.

JS function living in JS managed memory

If we want to have a variable that points to one of these functions, we need to take its address and put it into our memory.

Function pointer in WebAssembly memory pointing to function

But part of keeping web pages secure is keeping those memory addresses hidden. You don’t want code on the page to be able to see or manipulate that memory address. If there’s malicious code on the page, it can use that knowledge of where things are laid out in memory to create an exploit.

For example, it could change the memory address that you have in there, to point to a different memory location.

Then when you try and call the function, instead you would load whatever is in the memory address the attacker gave you.

Malicious actor changing the address in WebAssembly memory to point to malicious code

That could be malicious code that was inserted into memory somehow, maybe embedded inside of a string.

Tables make it possible to have function pointers, but in a way that isn’t vulnerable to these kinds of attacks.

A table is an array that lives outside of WebAssembly’s memory. The values are references to functions.

Another region of memory is added, distinct from WebAssembly memory, which contains the function pointer

Internally, these references contain memory addresses, but because it’s not inside WebAssembly’s memory, WebAssembly can’t see those addresses.

It does have access to the array indexes, though.

All memory outside of the WebAssembly memory object is obfuscated
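From the JavaScript side, the table itself is a plain API object; a minimal sketch using the standard WebAssembly.Table constructor:

```javascript
// A table with two slots for function references.
const table = new WebAssembly.Table({ element: 'anyfunc', initial: 2 });

console.log(table.length);  // 2
console.log(table.get(0));  // null (slots start out empty)

// Slots can only hold WebAssembly functions (e.g. a module's exports),
// not arbitrary JS functions, so the addresses behind the references
// stay hidden from the code using the table.
```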

If the WebAssembly module wants to call one of these functions, it passes the index to an operation called call_indirect. That will call the function.

call_indirect points to the first element of the obfuscated array, which in turn points to the function

Right now the use case for tables is pretty limited. They were added to the spec specifically to support these function pointers, because C and C++ rely pretty heavily on these function pointers.

Because of this, the only kinds of references that you can currently put in a table are references to functions. But as the capabilities of WebAssembly expand—for example, when direct access to the DOM is added—you’ll likely see other kinds of references being stored in tables and other operations on tables in addition to call_indirect.

Planet Mozilla: Firefox marketshare revisited

Why building a better browser doesn’t translate to a better marketshare

I posted a couple weeks ago about Chrome effectively having won the browser wars. The market share observations in the blog post were based on data provided by StatCounter. Several commenters criticized the StatCounter data as inaccurate so I decided to take a look at raw installation data Mozilla publishes to see whether it aligns with the StatCounter data.

Active Firefox Installs

Mozilla’s public data shows that the number of active Firefox Desktop installs running the most recent version of Firefox has been declining for several years. Based on this data, 22% fewer Firefox Desktop installations are active today than a year ago. This is a loss of 16 million Firefox installs in a year. The year-over-year decline used to be below 10% but accelerated to 14% in 2016. It returned to a more modest 10% loss late in 2016, which could be the result of a successful marketing campaign (Mozilla’s biggest marketing campaigns often run in the fall). That effect was temporary, as the accelerating decline this year shows. (Philipp suggests that the two recent drops could be the result of support for older machines and Windows versions being removed, with those users continuing to use previous versions of Firefox; see comments.)

Year over Year Firefox Active Daily Installs (Desktop). The Y axis is not zero-based. Click on the graph to enlarge.

Obtaining the data

Mozilla publishes aggregated Firefox usage data in the form of Active Daily Installs (ADIs) here. (Update: the site now requires a login. It had been publicly available for years, until just a few days ago.) The site is a bit clumsy and only lets you look at individual days, so I wrote some code to fetch the data for the last 3 years to make it easier to analyze (link). The raw ADI data is pretty noisy, as you can see here:

Desktop Firefox Daily Active Installs. The Y axis is not zero-based. Click on the graph to enlarge.

During any given week the ADI number can vary substantially. For the last week the peak was around 80 million users and the low was around 53 million users. To understand why the data is so variable it’s necessary to understand how Active Daily Installs are calculated.

Firefox tries to contact Mozilla once a day to check for security updates. This is called the “updater ping”. The ADI number is the aggregate number of these pings that were seen on a given day and can be understood as the number of running Firefox installs on that day.

The main reason that ADI data is so noisy is that work machines are switched off on the weekend. Those Firefox installs don’t check in over the weekend, so the ADI number drops significantly. This also explains why ADIs don’t map 1:1 to Active Daily Users (ADUs). A user may be briefly active on a given day but switch off the machine before Firefox has had a chance to phone home; the ADI count can miss this user. Conversely, Firefox may be counted as active on a day when the user actually wasn’t. Mozilla has a disclaimer on the site that publishes ADI data to point out that it is imprecise, and from data I have seen, actual Active Daily Users are about 10% higher than ADIs, but this is just a ballpark estimate.

The graphs above also only look at the most recent version of Firefox. A subset of users are often stranded on older versions of Firefox. This subset tends to be relatively small, since Mozilla is doing a good job these days of converting as many users as possible to the most recent, most secure, and most performant version of Firefox.

The first graph in this post was obtained by sliding a 90-day window over the data and comparing, for each window, the total number of active daily installs to the same 90-day window a year prior. This helps eliminate some of the variability in the data and shows a clearer trend. In addition to weekly swings there is also strong seasonality. College students being on break and people spending time with family over Christmas are some of the effects visible in the raw data that the sliding-window mechanism filters out to a degree.
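A sketch of that computation (the function names are invented and the data here is synthetic; the real script linked below runs the equivalent over the fetched ADI data):

```javascript
// `adi` is an array of daily install counts, index 0 = oldest day.
function windowSum(adi, end, size) {
  // Total installs over the `size` days ending at index `end`.
  let sum = 0;
  for (let i = end - size + 1; i <= end; i++) sum += adi[i];
  return sum;
}

function yoyChange(adi, day, windowSize = 90) {
  // Compare this 90-day window against the same window 365 days earlier.
  const current = windowSum(adi, day, windowSize);
  const lastYear = windowSum(adi, day - 365, windowSize);
  return (current - lastYear) / lastYear;
}
```

A flat series yields 0; a series that doubled year over year yields 1, i.e. +100%.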

If you want to explore the data yourself and see what effect shorter window parameters have you can use this link. If you see any mistakes or have ideas how to improve the visualization please send a pull request.

What about Mobile?

Mozilla doesn’t publish ADI data for Firefox for iOS and Firefox Focus, but since none of these appear in any browser statistics it means their market share is probably very small. ADI data is available for Firefox for Android and that graph looks quite a bit different from Desktop:

Firefox for Android Active Daily Installs. The Y axis is not zero-based. Click on the graph to enlarge.

Firefox for Android has been growing pretty consistently over the last few years. There is a big drop in early 2015, which is likely when Mozilla stopped supporting very old versions of Android. The otherwise consistent, albeit slow, growth seems to have stopped this year, but it’s still too early to tell whether this trend will hold.

As you can see, ADI data for mobile is not as noisy as desktop. This makes sense because people are much less likely to switch off their phones than their PCs.


A lot of commenters asked why Firefox marketshare is falling off a cliff. I think that question can best be answered with a few screenshots Mozilla engineer Chris Lord posted of Google services advertising Chrome.

Google is aggressively using its monopoly position in Internet services such as Google Mail, Google Calendar and YouTube to advertise Chrome. Browsers are a mature product, and it’s hard to compete in a mature market if your main competitor has access to billions of dollars’ worth of free marketing.

Google’s incentives here are pretty clear. The Desktop market is not growing much any more, so Google can’t acquire new users easily which threatens Google’s revenue growth. Instead, Google is going after Firefox and other browsers to grow. Chrome allows Google to lock in a user and make sure that that user heads to Google services first. No wonder Google is so aggressively converting everyone to Chrome, especially if the marketing for that is essentially free to them.

This explains why the market share decline of Firefox has accelerated so dramatically the last 12 months despite Firefox getting much better during the same time window. The Firefox engineering team at Mozilla has made amazing improvements to Firefox and is rebuilding many parts of Firefox with state of the art technology based on Mozilla’s futuristic rendering engine Servo. Firefox is today as good as Chrome in most ways, and better in some (memory use for example). However, this simply doesn’t matter in this market.

Firefox’s decline is not an engineering problem. It’s a market disruption (the desktop-to-mobile shift) and a monopoly problem. There are no engineering solutions to these market problems. The only way to escape is to pivot to a different market (Firefox OS tried to pivot Mozilla into mobile) or to create a new market. The latter is what Brendan’s Brave is all about: be the browser for a better, less ad-infested Web instead of a traditional desktop browser.

What makes today very different from the founding days of Mozilla is that Google isn’t neglecting Chrome and the Web the way Microsoft did during the Internet Explorer 6 days. Google continues to advance Chrome and the Web at breakneck pace. Despite this silver lining it is still very concerning that we are headed towards a Web monoculture dominated by Chrome.

What about Mozilla?

Mozilla helped the Web win, but Firefox is now losing an unwinnable marketing fight against Google. This does not mean Firefox is not a great browser. Firefox is losing despite being a great browser, and getting better all the time. Firefox is simply the victim of Google’s need to increase profit in a relatively stagnant market. And it’s also important to note that while Firefox Desktop is probably headed for extinction over the next couple of years, today it is still a product used by some 90 million people, and it will keep generating significant revenue for Mozilla for some time.

While I no longer work for Mozilla and no longer have insight into their future plans, I firmly believe that the decline of Firefox won’t necessarily mean the decline of Mozilla. There is a lot of important work beyond Firefox that Mozilla can do and is doing for the Web. Mozilla’s Rust programming language has crossed into the mainstream and is growing steadily and Rust might become Mozilla’s second most lasting contribution to the world.

Mozilla’s engineering team is also building a futuristic rendering engine, Servo, which is a fascinating piece of technology. If you are interested in the internals of a modern rendering engine, you should definitely take a look. Finding a relevant product to use Servo in will be a challenge, but that doesn’t diminish Servo’s role in pushing the envelope of how fast the Web can be.

And, last but not least, Mozilla is also still actively engaged in Web standards (WebAssembly and WebVR for example), even though it has to rely more on good will than market might these days. The battle for the open web is far from over.

Filed under: Mozilla

Planet MozillaThese Weeks in Firefox: Issue 20


Friends of the Firefox team

(Give a shoutout/thanks to people for helping fix and test bugs. Introductions)

Project Updates


Activity Stream

Electrolysis (e10s)

Firefox Core Engineering

  • We have a sample of top crashers (by signature) from FF53 release crash pings (not reports), for 5/19-5/25, broken down by process type. Some interesting things there, sent to the stability@ list for further investigation.
  • Updates to 64-bit begin in FF56 (stub installer introduced this in FF55).
  • About to land: LZMA compression and SHA384 support for update downloads for FF56, reducing the size of the download and improving its security.

Form Autofill


  • We’re working hard to ship Android Activity Stream in Firefox for Android 57!
  • We’ve got a working draft of an open-source Android library that will allow you to log into your Firefox Sync account and download your bookmarks, history, and passwords. Check it out here.
  • Firefox Focus for Android v1.0 shipped one week before the all hands and v1.1 will be coming shortly, featuring full screen videos! Here’s the full v1.1 bug list
  • We are working on a UI refresh in Firefox for Android 57 to align with Firefox Desktop! Follow along in this bug.
  • We are also planning to phase out the support of Android 4.0 (API 15). Hoping to do this in Fennec 56. Here’s the tracking bug.


  • Built the prototype for adding the ability for the user to pin frequently-used items from the Page Action menu into the URL bar. This work adds a context menu to items in the action menu to control this. The prototype also added Page Action menu entries for Pocket and Screenshots (and as a next step, their existing buttons in the navbar will be removed). Eventually there will be a WebExtensions API so that add-ons can extend this menu (but that work may not make 57).

    The context menu for the page action menu to pin actions to the URL bar

    Coming soon!

  • The bookmark star has moved into the URL bar. This (as with Pocket and Screenshots, mentioned above) is part of our work to consolidate actions you perform with the page into the Page Action menu.
  • The sidebar button is now in the toolbar by default. This gives easy one-click access to toggle the sidebar.
  • Customize Mode got a few updates. Its general style has been refreshed for Photon, and we’ve removed the “grid” style around the edges and shrinking-animation when opened. Also, the info panel that’s shown the first time a user enters customization mode (which helps explain that you can drag’n’drop items to move them around) has been replaced with a Photon critter – the Dragondrop. I hope you can appreciate this delightfully terrible pun. 😉

    The visual pun shown in the overflow menu when it is empty.

    Dragondrop! Get it?! Ba-dum-tish!

  • The Library panel will now show Bookmarks and Downloads. (Bookmarks are already in Nightly, Downloads was built during the week but needs more work before landing).
  • We also fixed a number of random polish bugs here and there. “Polish” bugs are changes that are not implementing new features, but are just fixing smaller issues with new or existing features. We’ll be seeing an increasing amount of these as we get closer to shipping, and focus on improving polish and quality overall.

Search and Navigation

Sync / Firefox Accounts

Test Pilot (written only)

  • Page Shot, Activity Stream, Tab center, Pulse all graduated from Test Pilot
  • New experiments coming next week
  • We started a new blog to publish experiment results. Watch for new posts soon.

Web Payments

Here are the raw meeting notes that were used to derive this list.

Want to help us build Firefox? Get started here!

Here’s a tool to find some mentored, good first bugs to hack on.

Planet MozillaTalos take II

First, ob-TenFourFox stuff. As the wonderful Dutch progressive rock band Focus plays "Sylvia" in the CD player, I'm typing this in a partially patched up build of FPR2, which has a number of further optimizations including an AltiVec-accelerated memchr() implementation (this improves JavaScript regex matching by about 15 percent, but also beefs up some other parts of the browser which call the same library function) and some additional performance backports ripped off from Mozilla's Quantum project. This version also has a re-tuned G5 build with some tweaked compiler settings to better match the 970 cache line size, picking up some small but measurable improvements on Acid3 and other tests. Even the G3 gets some love: while it obviously can't use the AltiVec memchr(), it now uses a better unrolled character matcher instead and picks up a few percentage points that way. I hope to finish the security patch work by this weekend, though I am still annoyed to note I cannot figure out what's up with issue 72.

Many of you will remember the Raptor Talos, an attempt to bring a big beefy POWER8 to the desktop that sadly did not meet its crowdsource funding goal. Well, I'm gratified to note that Raptor is trying again with a smaller scale system but a bigger CPU: the POWER9-based Talos II. You want a Power-based, free and open non-x86 alternative that can kick Intel's @$$? Then you can get one of these and not have to give up performance or processing (eheheh) power. The systems will use the "scale-out" dual socket POWER9 with DDR4 RAM and while the number of maximum supported cores on Talos II has not yet been announced, I'll just say that POWER9 systems can go up to 24 cores and we'll leave it at that. With five PCIe slots, you can stick a couple cool video cards in there too and rock out. It runs ppc64le Linux, just like the original Talos.

I'm not (just) interested in a thoroughly modern RISC workstation, though: I said before I wanted Talos to be the best way to move forward from the Power Mac, and I mean it. I'm working on tuning up Firefox for POWER8 with optimizations that should carry to POWER9, and once that builds, beefing the browser up further with a new 64-bit Power ISA JavaScript JIT with what we've learned from TenFourFox's 32-bit implementation. I'd also like to optimize QEMU for the purpose of being able to still run instances of OS 9 and PowerPC OS X in emulation at as high performance on the Talos II as possible so you can bring along your legacy applications and software. When pre-orders open up in August -- yes, next month! -- I'm going to be there with my hard-earned pennies and you'll hear about my experiences with it here first.

But don't worry: the G5 is still going to be under my desk for awhile even after the Talos II arrives, and there's still going to be improvements to TenFourFox for the foreseeable future because I'll still be using it personally for the foreseeable future. PowerPC forever.

Planet MozillaA Security Audit of Firefox Accounts

To provide transparency into our ongoing efforts to protect your privacy and security on the Internet, we are releasing a security audit of Firefox Accounts (FxA) that Cure53 conducted last fall. At Mozilla, we sponsor security audits of core open source software underpinning the Web and Internet, recently relaunched our web bug bounty program, find and fix vulnerabilities ourselves, and open source our code for anyone to review. Despite being available to more reviewers, open source software is not necessarily reviewed more thoroughly or frequently than closed source software, and the extra attention from third party reviewers can find outstanding issues and vulnerabilities. To augment our other initiatives and improve the overall security of our web services, we engage third party organizations to audit the security and review the code of specific services.

As Firefox’s central authentication service, FxA is a natural first target. Its security is critical to millions of users who rely on it to authenticate with our most sensitive services, such as and Sync. Cure53 ran a comprehensive security audit that encompassed the web services powering FxA and the cryptographic protocol used to protect user accounts and data. They identified 15 issues, none of which were exploited or put user data at risk.

We thank Cure53 for reviewing FxA and increasing our trust in the backbone of Firefox’s identity system. The audit is a step toward providing higher quality and more secure services to our users, which we will continue to improve through our various security initiatives. In the rest of this blog post, we discuss the technical details of the four highest severity issues. The report is available here and you can sign up or log into Firefox Accounts on your desktop or mobile device at:


FXA-01-001 HTML injection via unsanitized FxA relier Name

The one issue Cure53 ranked as critical, FXA-01-001 HTML injection via unsanitized FxA relier Name, resulted from displaying the name of a relier without HTML escaping on the relier registration page. This issue was not exploitable from outside Mozilla, because the endpoint for registering new reliers is not open to the public. A strict Content Security Policy (CSP) blocked most Cross-Site-Scripting (XSS) on the page, but an attacker could still exfiltrate sensitive authentication data via scriptless attacks and deface or repurpose the page for phishing. To fix the vulnerability soon after Cure53 reported it to us, we updated the template language to escape all variables and use an explicit naming convention for unescaped variables. Third party relier names are now sanitized and escaped.

FXA-01-004 XSS via unsanitized Output on JSON Endpoints

The first of three issues ranked high, FXA-01-004 XSS via unsanitized Output on JSON Endpoints, affected legacy browsers handling JSON endpoints with user-controlled fields at the beginning of the response. For responses like the following:

        {
          "id": "81730c8682f1efa5",
          "name": "<img src=x onerror=alert(1)>",
          "trusted": false,
          "image_uri": "",
          "redirect_uri": "javascript:alert(1)"
        }

an attacker could set the name or redirect_uri such that legacy browsers sniff the initial bytes of a response, incorrectly guess the MIME type as HTML instead of JSON, and execute user-defined scripts. We added the HTTP header X-Content-Type-Options: nosniff (XCTO) to disable MIME type sniffing, and wrote middleware and patches for the web frameworks to unicode-escape <, >, and & characters in JSON responses.
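
As a rough sketch of that escaping approach (the helper name here is made up; the real middleware lives in the FxA web frameworks), the idea is to serialize to JSON and then unicode-escape the dangerous characters:

```javascript
// Hypothetical helper illustrating the mitigation: serialize to JSON, then
// unicode-escape <, > and & so a sniffing browser never sees raw HTML at
// the start of the response body.
function safeJsonStringify(value) {
  return JSON.stringify(value)
    .replace(/</g, '\\u003c')
    .replace(/>/g, '\\u003e')
    .replace(/&/g, '\\u0026');
}

// The escapes are plain JSON, so JSON.parse round-trips the original value.
const body = safeJsonStringify({ name: '<img src=x onerror=alert(1)>' });
```

Because `\uXXXX` escapes are valid JSON, clients parsing the response see exactly the original strings; only the raw bytes on the wire change.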

FXA-01-014 Weak client-side Key Stretching

The second issue with a high severity ranking, FXA-01-014 Weak client-side Key Stretching, is “a tradeoff between security and efficiency”. The onepw protocol threat model includes an adversary capable of breaking or bypassing TLS. Consequently, we run 1,000 iterations of PBKDF2 on user devices to avoid sending passwords directly to the server, which runs a further 2^16 scrypt iterations on the PBKDF2-stretched password before storing it. Cure53 recommended storing PBKDF2 passwords with a higher work factor of roughly 256,000 iterations, but concluded “an exact recommendation on the number of iterations cannot be supplied in this instance”. To keep performance acceptable on less powerful devices, we have not increased the work factor yet.

FXA-01-010 Possible RCE if Application is run in a malicious Path

The final high severity issue, FXA-01-010 Possible RCE if Application is run in a malicious Path, affected people running FxA web servers from insecure paths in development mode. In development mode, the servers exposed an endpoint that executes shell commands to determine the release version and git commit they’re running. For example, the command below returns the current git commit:

var gitDir = path.resolve(__dirname, '..', '..', '.git')
var cmd = util.format('git --git-dir=%s rev-parse HEAD', gitDir)
exec(cmd, …)

Cure53 noted that malicious commands like rm -rf * in the directory path (the __dirname global) would be executed, and recommended filtering and quoting parameters. We modified the script to use the cwd option and avoid filtering the parameter entirely:

var cmd = 'git rev-parse HEAD'
exec(cmd, { env: { GIT_CONFIG: gitDir } } ...)

Mozilla does not run servers from insecure paths, but some users host their own FxA services and it is always good to consider malicious input from all sources.


We reviewed the higher ranked issues from the report, circumstances limiting their impact, and how we fixed and addressed them. We invite you to contribute to developing Firefox Accounts and report security issues through our bug bounty program as we continue to improve the security of Firefox Accounts and other core services.

The post A Security Audit of Firefox Accounts appeared first on Mozilla Security Blog.

Planet MozillaIntern Presentations: Round 1: Tuesday, July 18th

Intern Presentations: Round 1: Tuesday, July 18th Intern Presentations 4 presenters Time: 1:00PM - 2:00PM (PDT) - each presenter will start every 15 minutes 2 in MTV, 2 in TOR


Planet MozillaFirefox data platform & tools update, Q2 2017

<figure>Beta “main” ping submission delay analysis by :chutten.</figure>

The data platform and tools teams are working on our core Telemetry system, the data pipeline, providing core datasets and maintaining some central data viewing tools.

To make new work more visible, we provide quarterly updates.

What’s new in the last few months?

A lot of work in the last months was on reducing latency, supporting experimentation and providing a more reliable experience of the data platform.

On the data collection side, we have significantly improved reporting latency from Firefox 55, with preliminary results from Beta showing we receive 95% of the “main” ping within 8 hours (compared to previously over 90 hours). Curious for more detail? #1 and #2 should have you covered.

We also added a “new-profile” ping, which gives a clear and timely signal for new clients.

There is a new API to record active experiments in Firefox. This allows annotating experiments or interesting populations in a standard way.

The record_in_processes field is now required for all histograms. This removes ambiguity about which process they are recorded in.

The data documentation moved to a new home: Are there gaps in the documentation you want to see filled? Let us know by filing a bug.

For datasets, we added telemetry_new_profile_parquet, which makes the data from the “new-profile” ping available.

Additionally, the main_summary dataset now includes all scalars and uses a whitelist for histograms, making it easy to add them. Important fields like active_ticks and Quantum release criteria were also added and backfilled.

For custom analysis on ATMO, cluster lifetimes can now be extended self-serve in the UI. Scheduled job stability also saw major improvements.

There were first steps towards supporting Zeppelin notebooks better; they can now be rendered as Markdown in Python.

The data tools work is focused on making our data available in a more accessible way. Here, our main tool re:dash saw multiple improvements.

Large queries should no longer show the slow script dialog and scheduled queries can now have an expiration date. Finally, a new Athena data source was introduced, which contains a subset of our Telemetry-based derived datasets. This brings huge performance and stability improvements over Presto.

What is up next?

For the next few months, interesting projects in the pipeline include:

  • The experiments viewer & pipeline, which will make it much easier to run pref-flipping experiments in Firefox.
  • Recording new probes from add-ons into the main ping (events, scalars, histograms).
  • We are working on defining and monitoring basic guarantees for the Telemetry client data (like reporting latency ranges).
  • A re-design of about:telemetry is currently ongoing, with more improvements on the way.
  • A first version of Mission Control will be available, a tool for more real-time release monitoring.
  • Analyzing the results of the Telemetry survey (thanks everyone!) to inform our planning.
  • Extending the main_summary dataset to include all histograms.
  • Adding a pre-release longitudinal dataset, which will include all measures on those channels.
  • Looking into additional options to decrease the Firefox data reporting latency.

How to contact us.

Please reach out to us with any questions or concerns.

Firefox data platform & tools update, Q2 2017 was originally published in Georg Fritzsche on Medium, where people are continuing the conversation by highlighting and responding to this story.

Planet MozillaMozilla Announces “Net Positive: Internet Health Shorts” – A Film Screening About Society’s Relationship With The Internet

Mozilla, the non-profit behind the Firefox browser, is excited to support Rooftop Films in bringing a memorable evening of film and discussion to The Courtyard of Industry City, in beautiful Brooklyn, New York on Saturday, July 29 starting at 8 PM ET. As a part of Rooftop Films’ Annual Summer Series, hitRECord will premiere a film produced by Joseph Gordon-Levitt about staying safe online.

Mozilla believes the Internet is the most fantastically fun, awe-inspiring place we’ve ever built together. It’s where we explore new territory, build innovative products and services, swap stories, get inspired, and find our way in the world. It was built with the intention that everyone is welcome.

Right now, however, we’re at a tipping point. Big corporations want to privatize our largest public resource. Fake news and filter bubbles are making it harder for us to find our way. Online bullies are silencing inspired voices. And our desire to explore is hampered by threats to our safety and privacy.

“The Internet is a vast, vibrant ecosystem,” said Jascha Kaykas-Wolff, Mozilla’s Chief Marketing Officer. “But like any ecosystem, it’s also fragile. If we want the Internet to thrive as a diverse, open and safe place where all voices are welcome, it’s going to take committed citizens standing tall to protect it. Mozilla is proud to support the artists and filmmakers who are raising awareness for Internet health through creativity and storytelling.”

Dan Nuxoll, Program Director at Rooftop Films said, “In such a pivotal year for the Internet, we are excited to be working with Mozilla in support of films that highlight with such great detail our relationship with the web. As a non-profit, we are thrilled to be collaborating with another non-profit in support of consumer education and awareness about issues that matter most.”

Joseph Gordon-Levitt, actor and filmmaker said, “Mozilla is really a great organization, it’s all about keeping the Internet free, open and neutral — ideas very near and dear to my heart. I was flattered when Mozilla knocked on hitRECord’s door and asked us to collaborate.”

Join us as we explore, through short films, what’s helping and what’s hurting the Web. We are calling the event, “Net Positive: Internet Health Shorts.” People can register now to secure a spot.

Featured Films:
Harvest – Kevin Byrnes
Hyper Reality – Keiichi Matsuda
I Know You From Somewhere – Andrew Fitzgerald
It Should Be Easy – Ben Meinhardt
Lovestreams – Sean Buckelew
Project X – Henrik Moltke and Laura Poitras
Too Much Information – Joseph Gordon-Levitt & hitRECord
Price of Certainty – Daniele Anastasion
Pizza Surveillance – Micah Laaker

Saturday, July 29
Venue: The Courtyard of Industry City
Address: 274 36th Street (Sunset Park, Brooklyn)
8:00 PM: Doors Open
8:30 PM: Live Music
9:00 PM: Films Begin
10:30 PM: Post-Screening Discussion with Filmmakers
11:00 PM: After-party sponsored by Corona Extra, Tanqueray, Freixenet, DeLeón Tequila, and Fever-Tree Tonic

In the past year, Mozilla has supported the movement to raise awareness for Internet Health by launching the IRL podcast, hosting events around the country, and collaborating with change-makers such as Joseph Gordon-Levitt to educate the public about a healthy and safe Internet environment.

About Mozilla

Mozilla has been a pioneer and advocate for the open web for more than 15 years. We promote open standards that enable innovation and advance the Web as a platform for all. Today, hundreds of millions of people worldwide use Mozilla Firefox to experience the Web on computers, tablets, and mobile devices. For more information, visit

About Rooftop Films

Rooftop Films is a non-profit organization whose mission is to engage and inspire the diverse communities of New York City by showcasing the work of emerging filmmakers and musicians. In addition to their annual Summer Series – which takes place in unique outdoor venues every weekend throughout the summer – Rooftop provides grants to filmmakers, rents equipment at low-cost to artists and non-profits, and supports film screenings citywide with the Rooftop Films Community Fund. At Rooftop Films, we bring underground movies outdoors. For more information and updates please visit their website at

The post Mozilla Announces “Net Positive: Internet Health Shorts” – A Film Screening About Society’s Relationship With The Internet appeared first on The Mozilla Blog.

Planet MozillaAdd-on Compatibility for Firefox 56

Firefox 56 will be released on September 26th. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 56 for Developers, so you should also give it a look. Also, if you haven’t yet, please read our roadmap to Firefox 57.

Compatibility changes

Let me know in the comments if there’s anything missing or incorrect on these lists. We’d like to know if your add-on breaks on Firefox 56.

The automatic compatibility validation and upgrade for add-ons on AMO will run in a few weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 55.

Last stop!

LEGO end of train line

Firefox 56 will be the last version of Firefox to support legacy add-ons. It’s the last release cycle you’ll have to port your add-ons to WebExtensions. Many planned APIs won’t make the cut for 57, so make sure that you plan your development timeline accordingly.

This is also the last compatibility overview I’ll write. I started writing these 7 years ago, the first one covering Firefox 4. Looking ahead, backwards-incompatible changes in WebExtensions APIs should be rare. When and if they occur, we’ll post one-offs about them, so please keep following this blog for updates.

The post Add-on Compatibility for Firefox 56 appeared first on Mozilla Add-ons Blog.

Planet MozillaPicasso Tower 360º tour with A-Frame

A 360º tour refers to an experience that simulates an in-person visit through the surrounding space. This “walkthrough” visit is composed of scenes in which you can look around at any point, similar to how you can look around in Google Street View. In a 360º tour, different scenes are accessible via discrete hotspots that users can enable or jump into, transporting themselves to a new place in the tour.

The magenta octahedron represents the user’s point of view. The image covers the inner surface of the sphere.


With A-Frame, creating such an experience is a surprisingly simple task.

360º panoramas

In photography, panoramas are essentially wide-angle images. Wide-angle means wide field of view, so the region of the physical space captured by the camera is wider than in regular pictures. A 360º panorama captures the space all the way around the camera.

In the same way that wide-angle photography requires special lenses, 360º panoramas require special cameras. You can read Kevin Ngo’s guide to 360º photography for advice and recommendations when creating panoramas.

Trying to represent a sphere in a rectangular format results in what we call a projection. Projection introduces distortion: straight lines become curves. You will probably be able to recognize panoramic pictures thanks to the effects of distortion that occur when panoramic views are represented in a two-dimensional space:

To undo the distortion, you have to project the rectangle back into a sphere. With A-Frame, that means using the panorama as the texture of a sphere facing the camera. The simplest approach is to use the a-sky primitive. The projection of the image must be equirectangular in order to work in this setup.

See the Pen 360º panorama viewer by Salvador de la Puente González (@lodr) on CodePen.

By adding some bits of JavaScript, you can modify the src attribute of the sky primitive to change the panorama texture and enable the user to teleport to a different place in your scene.
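
A minimal sketch of that idea (the helper name and stub element are made up for illustration; in a real page you would pass document.querySelector('a-sky')):

```javascript
// "Teleport" the viewer by swapping the texture on the sky primitive.
function teleportTo(skyEl, imageUrl) {
  skyEl.setAttribute('src', imageUrl);
}

// Stand-in for the <a-sky> DOM element so the sketch runs outside a browser.
const sky = {
  attrs: {},
  setAttribute(name, value) { this.attrs[name] = value; },
};

teleportTo(sky, 'images/corner.jpg');
```

Since the sphere geometry never changes, only the texture, switching scenes is as cheap as loading the new image.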

Getting equirectangular images actually depends on the capturing device. For instance, the Samsung Gear 360 camera requires the use of official Samsung stitching software to combine the native dual-fisheye output into the equirectangular version; while the Ricoh Theta S outputs both equirectangular and dual-fisheye images without further interaction.

A dual-fisheye image arranges two fisheye images side by side

A dual-fisheye image is the common output of 360º cameras. Stitching software can convert this into an equirectangular image.

A 360º tour template

To create such an experience, you can use the 360 tour template that comes with aframe-cli. The aframe-360-tour-template encapsulates the concepts mentioned above in reusable components and meaningful primitives, enabling a developer to write semantic 360º tours in just a few steps.

aframe-cli has not been released yet (this is bleeding edge A-Frame tooling) but you can install a pre-release version with npm by running the following command:

npm install -g aframevr-userland/aframe-cli

Now you can access aframe-cli using the aframe command. Go to your workspace directory and start a new project by specifying the name of the project folder and the template:

$ aframe new tour --template 360-tour
$ cd tour

Start the experience with the following command:

$ aframe serve

And visit to experience the tour.

Adding panoramas

Visit my Picasso Tower 360 gallery on Flickr and download the complete gallery. (Images are public domain so don’t worry about licensing issues.)

Decompress the file and paste the images inside the app/assets/images/ folder. I will use just three images in this example. After you finish this article, you can experiment with the complete tour. Be sure to notice that the panorama order matches naming: 360_0071_stitched_injected_35936080846_o goes before 360_0072_stitched_injected_35936077976_o, which goes before 360_0073_stitched_injected_35137574104_o and so on…

Edit index.html to locate the panoramas section inside the a-tour primitive. Change current panoramas by modifying their src attribute or add new ones by writing new a-panorama primitives. Replace the current panoramas with the following ones:

<a-panorama id="garden" src="images/360_0071_stitched_injected_35936080846_o.jpg"></a-panorama>
<a-panorama id="corner" src="images/360_0074_stitched_injected_35936077166_o.jpg"></a-panorama>
<a-panorama id="facade" src="images/360_0077_stitched_injected_35137573294_o.jpg"></a-panorama>

Save and reload your browser tab to see the new results.

It is possible you’ll need to correct the rotation of the panorama to make the user face in the direction you want. Change the rotation component of the panorama to do so, and remember to save and reload to see your changes:

<a-panorama id="garden" src="images/360_0071_stitched_injected_35936080846_o.jpg" rotation="0 90 0"></a-panorama>

Now you need to connect the new sequence to the other panoramas with positioned hotspots. Replace current hotspots with the following one and look at the result by reloading the tab:

<a-hotspot id="garden-to-corner" for="garden" to="corner" mixin="hotspot-target" position="-3.86 -0.01 -3.18" rotation="-0.11 50.47 -0.00">
  <a-text value="CORNER" align="center" mixin="hotspot-text"></a-text>
</a-hotspot>
Remember that in order to activate a hotspot, while in desktop mode, you have to place the black circle over the magenta octahedron and click on the screen.

Placing hotspots

Positioning hotspots can be a frustrating endeavour. Fortunately, the template comes with a useful component to help with this task. Simply add the hotspot-helper component to your tour, referencing the hotspot you want to place as the value for the target property: <a-tour hotspot-helper="target: #corner-to-garden">. The component will move the hotspot as you look around and will display a widget in the top-left corner showing the world position and rotation of the hotspot, allowing you to copy these values to the clipboard.

Custom hotspots

You can customise the hotspot using mixins. Edit index.html and locate hotspot-text and hotspot-target mixin primitives inside the assets section.

For instance, to avoid the need to copy the world rotation values, we are going to use ngokevin’s lookAt component which is already included in the template.

Modify the entity with the hotspot-text id to look like this:

<a-mixin id="hotspot-text" look-at="[camera]" text="font: exo2bold; width: 5" geometry="primitive: plane; width: 1.6; height: 0.4" material="color: black;" position="0 -0.6 0"></a-mixin>

Cursor feedback

If you enter VR mode, you will realise that teleporting to a new place requires you to fix your gaze on the hotspot you want to reach for an interval of time. We can change the duration of this interval by modifying the cursor component. Try increasing the timeout to two seconds:

<a-entity cursor="fuse: true; fuse-timeout: 2000" position="0 0 -1"
          geometry="primitive: ring; radiusInner: 0.02; radiusOuter: 0.03"
          material="color: black; shader: flat"></a-entity>

Once you add fuse: true to your cursor component, you won’t need to click on the screen, even out of VR mode. A click event will trigger after fuse-timeout milliseconds.
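
In other words (the handler and stub element below are invented for illustration), the gaze-triggered click is handled exactly like any other click on the entity:

```javascript
// With fuse enabled, A-Frame synthesizes a 'click' event on the gazed-at
// entity after fuse-timeout milliseconds; no physical click is needed.
function onHotspotClick(hotspotEl, handler) {
  hotspotEl.addEventListener('click', handler);
}

// Minimal stand-in for a DOM entity so the sketch runs outside a browser.
const hotspot = {
  handlers: {},
  addEventListener(type, fn) { this.handlers[type] = fn; },
  emit(type) { this.handlers[type] && this.handlers[type](); },
};

let teleported = false;
onHotspotClick(hotspot, () => { teleported = true; });
hotspot.emit('click'); // what A-Frame does once the fuse timeout elapses
```

This is why the same tour code works on desktop (real clicks) and in a headset (fused gaze) without branching.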

Following the suggestion in the article about the cursor component, you can create the perception that something is about to happen by attaching an a-animation primitive inside the cursor entity:

<a-entity cursor="fuse: true; fuse-timeout: 2000" position="0 0 -1"
          geometry="primitive: ring; radiusInner: 0.02; radiusOuter: 0.03"
          material="color: black; shader: flat">
      <a-animation begin="fusing" end="mouseleave" easing="ease-out" attribute="scale"
                   fill="backwards" from="1 1 1" to="0.2 0.2 0.2"></a-animation>
</a-entity>

Fix the gaze on a hotspot for 2 seconds to activate the hotspot and teleport.

Click on the picture above to see fuse and the animation feedback in action.

Ambient audio

Sound is a powerful tool for increasing the illusion of presence. You can find several places on the Internet offering royalty-free sounds. Once you decide on the perfect ambient noise for the experience you’re creating, grab the file URL, or download the file and serve it locally. Create a new folder sounds under app/assets and put the audio file inside.

Add an audio tag that points to the sound file URL inside the <a-assets> element in order for the file to load:

   <audio id="ambient-sound" src="sounds/environment.mp3"></audio>

And use the sound component referencing the audio element id to start playing the audio:

<a-tour sound="src: #ambient-sound; loop: true; autoplay: true; volume: 0.4"></a-tour>

Adjust the volume by modifying the volume property which ranges from 0 to 1.
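If you expose a volume control in your own UI, it is worth clamping user input into that 0 to 1 range before handing it to the component. A small sketch; the setAttribute call mirrors A-Frame's component API, and the slider wiring is hypothetical:

```javascript
// Keep any requested volume within the sound component's valid 0-1 range.
function clampVolume(v) {
  return Math.min(1, Math.max(0, v));
}

// Hypothetical usage with an A-Frame entity and an <input type="range">:
//   tourEl.setAttribute('sound', 'volume', clampVolume(Number(slider.value)));
```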


360º tours offer first-time WebVR creators a perfect starting project that does not require exotic or expensive gear to begin VR development. Panoramic 360º scenes naturally fall back to regular 2D visualization on a desktop or mobile screen, and with a cardboard headset or VR head-mounted display, users will enjoy an improved sense of immersion.

With aframe-cli and the 360º tour template you can now quickly set up the basics to customise and publish your 360º VR tour. Create a new project to show us your favourite places (real or imaginary!) by adding panoramic views, or start hacking on the template to extend its basic functionality. Either way, don’t forget to share your project with the A-Frame community on Slack and Twitter.

Planet Mozilla: 60,000,000 Clicks for Copyright Reform

More than 100,000 people—and counting—are demanding Internet-friendly copyright laws in the EU


60,000,000 digital flyers.

117,000 activists.

12,000 tweets to Members of the European Parliament (MEPs).

Europe has been Paperstormed.

Earlier this year, Mozilla and our friends at Moniker launched Paperstorm, a digital advocacy tool that urges EU policymakers to update copyright laws for the Internet age. Users drop digital flyers onto maps of European landmarks, like the Eiffel Tower and the Reichstag Building in Berlin. When users drop a certain number of flyers, they trigger impassioned tweets to European lawmakers:

“We built Paperstorm as a fun (and mildly addictive) way for Internet users to learn about and engage with a serious issue: the EU’s outdated copyright laws,” says Mozilla’s Brett Gaylor, one of Paperstorm’s creators.

“The Parliament has a unique opportunity to reform copyright,” says Raegan MacDonald, Mozilla’s Senior EU Policy Manager. “We hope this campaign served as a reminder that EU citizens want a modern framework that will promote — not hinder — innovation and creativity online. The success of this reform hinges on whether the interests of these citizens — whether creators, innovators, teachers, librarians, or anyone who uses the internet — are truly taken into account in the negotiations.”

Currently, lawmakers are crafting amendments to the proposal for a new copyright law, a process that will end this year. Now is the time to make an impact. And we are.

Over the last two months, more than 100,000 Internet users visited Paperstorm. They sent 12,000 tweets to key MEPs, like France’s Jean-Marie Cavada, Germany’s Angelika Niebler, and Lithuania’s Antanas Guoga. In total, Paperstormers contacted 13 MEPs in 10 countries: Austria, France, Germany, Italy, Lithuania, Malta, Poland, Romania, Sweden and the UK.

Then, we created custom MEP figurines inside Paperstorm snowglobes. A Mozilla community member from Italy hand-delivered these snowglobes right to MEPs’ offices in Brussels, alongside a letter urging a balanced copyright reform for the digital age. Here’s the proof:

Angelika Niebler, Member, ITRE (left) and Jean-Marie Cavada, Vice-Chair, JURI

JURI Committee Vice-Chair, MEP Laura Ferrara, Italy (center) with Mozilla’s Raegan MacDonald and Edoardo Viola

Thanks for clicking. We’re looking forward to what’s ahead: 100,000,000 clicks—and common-sense copyright laws for the Internet age.

The post 60,000,000 Clicks for Copyright Reform appeared first on The Mozilla Blog.

Planet Mozilla: Mozilla statement on Supreme Court hearings on Aadhaar

The Supreme Court of India is setting up a nine judge bench to consider whether the right to privacy is a fundamental right under the Indian Constitution. This move is a result of multiple legal challenges to Aadhaar, the Indian national biometric identity database, which the Government of India is currently operating without any meaningful privacy protections.

We’re pleased to see the Indian Supreme Court take this important step forward in considering the privacy implications of Aadhaar. At a time when the Government of India is increasingly making Aadhaar mandatory for everything from getting food rations, to accessing healthcare, to logging into a wifi hotspot, a strong framework protecting privacy is critical. Indians have been waiting for years for a Constitutional Bench of the Supreme Court to take up these Aadhaar cases, and we hope the Right to Privacy will not be in question for much longer.

The post Mozilla statement on Supreme Court hearings on Aadhaar appeared first on Open Policy & Advocacy.

Planet Mozilla: This Week in Rust 191

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is extfsm, a crate to help build finite state machines. Thanks to Tony P. for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

103 pull requests were merged in the last week

New Contributors

  • Luca Barbato
  • Lynn
  • Sam Cappleman-Lynes
  • Valentin Brandl
  • William Brown
  • Yorwba

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

The RFC style is now the default style in Rustfmt - try it out and let us know what you think!

An interesting issue:

Good first issues:

We're happy to mentor these, please reach out to us in #rust-style if you'd like to get involved

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Good farmers use their bare hands, average farmers use a combine harvester.

/u/sin2pifx in response to "Good programmers write C, average programmers write Rust".

Thanks to Rushmore for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Planet Mozilla: The Fundamental Philosophy of Debugging

Sometimes people have a very hard time debugging. Mostly, these are people who believe that in order to debug a system, you have to think about it instead of looking at it.

Let me give you an example of what I mean. Let’s say you have a web server that is silently failing to serve pages to users 5% of the time. What is your reaction to this question: “Why?”

Do you immediately try to come up with some answer? Do you start guessing? If so, you are doing the wrong thing.

The right answer to that question is: “I don’t know.”

So this gives us the first step to successful debugging:

When you start debugging, realize that you do not already know the answer.

It can be tempting to think that you already know the answer. Sometimes you can guess and you’re right. It doesn’t happen very often, but it happens often enough to trick people into thinking that guessing the answer is a good method of debugging. However, most of the time, you will spend hours, days, or weeks guessing the answer and trying different fixes with no result other than complicating the code. In fact, some codebases are full of “solutions” to “bugs” that are actually just guesses—and these “solutions” are a significant source of complexity in the codebase.

Actually, as a side note, I’ll tell you an interesting principle. Usually, if you’ve done a good job of fixing a bug, you’ve actually caused some part of the system to go away, become simpler, have better design, etc. as part of your fix. I’ll probably go into that more at some point, but for now, there it is. Very often, the best fix for a bug is a fix that actually deletes code or simplifies the system.

But getting back to the process of debugging itself, what should you do? Guessing is a waste of time, imagining reasons for the problem is a waste of time—basically most of the activity that happens in your mind when first presented with the problem is a waste of time. The only things you have to do with your mind are:

  1. Remember what a working system behaves like.
  2. Figure out what you need to look at in order to get more data.

Because you see, this brings us to the most important principle of debugging:

Debugging is accomplished by gathering data until you understand the cause of the problem.

The way that you gather data is, almost always, by looking at something. In the case of the web server that’s not serving pages, perhaps you would look at its logs. Or you could try to reproduce the problem so that you can look at what happens with the server when the problem is happening. This is why people often want a “reproduction case” (a series of steps that allow you to reproduce the exact problem)—so that they can look at what is happening when the bug occurs.

Sometimes the first piece of data you need to gather is what the bug actually is. Often users file bug reports that have insufficient data. For example, let’s say a user files the bug, “When I load the page, the web server doesn’t return anything.” That’s not sufficient information. What page did they try to load? What do they mean by “doesn’t return anything?” Is it just a white page? You might assume that’s what the user meant, but very often your assumptions will be incorrect. The less experienced your user is as a programmer or computer technician, the less well they will be able to express specifically what happened without you questioning them. In these cases, unless it’s an emergency, the first thing that I do is just send the user back specific requests to clarify their bug report, and leave it at that until they respond. I don’t look into it at all until they clarify things. If I did go off and try to solve the problem before I understood it fully, I could be wasting my time looking into random corners of the system that have nothing to do with any problem at all. It’s better to go spend my time on something productive while I wait for the user to respond, and then when I do have a complete bug report, to go research the cause of the now-understood bug.

As a note on this, though, don’t be rude or unfriendly to users just because they have filed an incomplete bug report. The fact that you know more about the system and they know less about the system doesn’t make you a superior being who should look down upon all users with disdain from your high castle on the shimmering peak of Smarter-Than-You Mountain. Instead, ask your questions in a kind or straightforward manner and just get the information. Bug filers are rarely intentionally being stupid—rather, they simply don’t know and it’s part of your job to help them provide the right information. If people frequently don’t provide the right information, you can even include a little questionnaire or form on the bug-filing page that makes them fill in the right information. The point is to be helpful to them so that they can be helpful to you, and so that you can easily resolve the issues that come in.

Once you’ve clarified the bug, you have to go and look at various parts of the system. Which parts of the system to look at is based on your knowledge of the system. Usually it’s logs, monitoring, error messages, core dumps, or some other output of the system. If you don’t have these things, you might have to launch or release a new version of the system that provides the information before you can fully debug the system. Although that might seem like a lot of work just to fix a bug, in reality it often ends up being faster to release a new version that provides sufficient information than to spend your time hunting around the system and guessing what’s going on without information. This is also another good argument for having fast, frequent releases—that way you can get out a new version that provides new debugging information quickly. Sometimes you can get a new build of your system out to just the user who is experiencing the problem, too, as a shortcut to get the information that you need.

Now, remember above that I mentioned that you have to remember what a working system looks like? This is because there is another principle of debugging:

Debugging is accomplished by comparing the data that you have to what you know the data from a working system should look like.

When you see a message in a log, is that a normal message or is it actually an error? Maybe the log says, “Warning: all the user data is missing.” That looks like an error, but really your web server prints that every single time it starts. You have to know that a working web server does that. You’re looking for behavior or output that a working system does not display. Also, you have to understand what these messages mean. Maybe the web server optionally has some user database that you aren’t using, which is why you get that warning—because you intend for all the “user data” to be missing.
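That comparison can even be mechanized: if you keep a sample of logs from a known-good run, filtering out every line the working system also produces leaves you with just the anomalies worth investigating. A rough sketch of the idea, with illustrative names and the article's own example messages:

```javascript
// Return log lines from a failing run that a known-good run never produced.
// Treats each distinct line as a signature; real logs would need timestamps
// and other variable fields normalized away first.
function unexpectedLines(workingLog, failingLog) {
  const normal = new Set(workingLog);
  return failingLog.filter(line => !normal.has(line));
}
```

This is only a data-gathering aid, of course: the harmless "user data is missing" warning disappears from the output because the working system emits it too, while the genuinely anomalous lines remain for you to interpret.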

Eventually you will find something that a working system does not do. You shouldn’t immediately assume you’ve found the cause of the problem when you see this, though. For example, maybe it logs a message saying, “Error: insects are eating all the cookies.” One way that you could “fix” that behavior would be to delete the log message. Now the behavior is like normal, right? No, wrong—the actual bug is still happening. That’s a pretty stupid example, but people do less-stupid versions of this that don’t fix the bug. They don’t get down to the basic cause of the problem and instead they paper over the bug with some workaround that lives in the codebase forever and causes complexity for everybody who works on that area of the code from then on. It’s not even sufficient to say “You will know that you have found the real cause because fixing that fixes the bug.” That’s pretty close to the truth, but a closer statement is, “You will know that you have found a real cause when you are confident that fixing it will make the problem never come back.” This isn’t an absolute statement—there is a sort of scale of how “fixed” a bug is. A bug can be more fixed or less fixed, usually based on how “deep” you want to go with your solution, and how much time you want to spend on it. Usually you’ll know when you’ve found a decent cause of the problem and can now declare the bug fixed—it’s pretty obvious. But I wanted to warn you against papering over a bug by eliminating the symptoms but not handling the cause.

And of course, once you have the cause, you fix it. That’s actually the simplest step, if you’ve done everything else right.

So basically this gives us four primary steps to debugging:

  1. Familiarity with what a working system does.
  2. Understanding that you don’t already know the cause of the problem.
  3. Looking at data until you know what causes the problem.
  4. Fixing the cause and not the symptoms.

This sounds pretty simple, but I see people violate this formula all the time. In my experience, most programmers, when faced with a bug, want to sit around and think about it or talk about what might be causing it—both forms of guessing. It’s okay to talk to other people who might have information about the system or advice on where to look for data that would help you debug. But sitting around and collectively guessing what could cause the bug isn’t really any better than sitting around and doing it yourself, except perhaps that you get to chat with your co-workers, which could be good if you like them. Mostly though what you’re doing in that case is wasting a bunch of people’s time instead of just wasting your own time.

So don’t waste people’s time, and don’t create more complexity than you need to in your codebase. This debugging method works. It works every time, on every codebase, with every system. Sometimes the “data gathering” step is pretty hard, particularly with bugs that you can’t reproduce. But at the worst, you can gather data by looking at the code and trying to see if you can see a bug in it, or draw a diagram of how the system behaves and see if you can perceive a problem there. I would only recommend that as a last resort, but if you have to, it’s still better than guessing what’s wrong or assuming you already know.

Sometimes, it’s almost magical how a bug resolves just by looking at the right data until you know. Try it for yourself and see. It can actually be fun, even.


Planet Mozilla: MozMEAO SRE Status Report - July 18, 2017

Here’s what happened on the MozMEAO SRE team from July 11th - July 18th.

Current work


Decommissioning old infrastructure

We’re planning on decommissioning our Deis 1 infrastructure starting with Ireland, as our apps are all running on Kubernetes in multiple regions. Once the Ireland cluster has been shut down, we’ll continue on to our Portland cluster.

Additionally, we’ll be scaling down our Virginia cluster, as our apps are being moved to regions with lower latencies for the majority of our users.


Planet Mozilla: The 2017 Rust Conference Lineup

The Rust Community is holding three major conferences in the near future!

Aug 18-19: RustConf

RustConf is a two-day event held in Portland, OR, USA on August 18-19. The first day offers tutorials on Rust given directly by members of the Rust core team, ranging from absolute basics to advanced ownership techniques. In addition to the training sessions, on Friday there will be a RustBridge workshop session for people from underrepresented groups in tech, as well as a session on Tock, the secure embedded operating system.

The second day is the main event, with talks at every level of expertise, covering basic and advanced techniques, experience reports, guidance on teaching, and interesting libraries.

Tickets are still on sale! We offer a scholarship for those who would otherwise find it difficult to attend. Join us in lovely Portland and hear about the latest developments in the Rust world!

Follow us on Twitter @rustconf.

Sept 30-Oct 1: RustFest

Hot off another successful event in Kyiv earlier this year, we invite you to join us at RustFest, the European Rust community conference series. Over the weekend of the 30th of September we’ll gather in Zürich, Switzerland to talk Rust, its ecosystem and community. All day Saturday will have talks, with topics ranging from hardware and testing, through concurrency and disassemblers, all the way to important topics like community, learning and empathy. Sunday has a focus on learning and connecting, either at one of the many workshops we are hosting or in the central meet-n-greet-n-hack area.

Thanks to the many awesome sponsors, we are able to offer affordable tickets, which go on sale in a couple of weeks! Get all the updates on the blog and don’t forget to follow us on Twitter @rustfest. Want to get a glimpse into what it’s like? Check out the videos from Kyiv or Berlin!

Oct 26-27: Rust Belt Rust

For Rust Belt Rust’s second year, we’ll be in Columbus, OH, USA at the Columbus Athenaeum, and tickets are on sale now! We will have a day of workshops on Thursday and a day of single track talks on Friday. Speakers include Nell Shamrell, who works on Habitat at Chef, Emma Gospodinova, who is doing a GSoC project working on the Rust plugin for the KDevelop IDE, and Core Team members Aaron Turon, Niko Matsakis, and Carol Nichols. We’d love for YOU to be a speaker as well - our CFP is open now until Aug 7. We hope to see you at the Rustiest conference in the eastern US! Follow us on Twitter @rustbeltrust for the latest news.

Planet Mozilla: Add-ons at Mozilla All Hands San Francisco

Firefox add-on staff and contributors gathered at Mozilla’s recent All Hands meeting in San Francisco to spend time as a group focusing on our biggest priority this year: the Firefox 57 release in November.

During the course of the week, Mozillians could be found huddled together in various conference spaces discussing blocker issues, making plans, and hacking on code. Here’s a recap of the week and a glance at what we have in store for the second half of 2017.

Add-on Engineering

Add-on engineers Luca Greco and Kumar McMillan take a break to model new add-on jackets.

For most of the engineering team, the week was a chance to catch up on the backlog of bugs. (The full list of bugs closed during the week can be found here.)

We also had good conversations about altering HTTP responses in the webRequest API, performance problems with the blocklist on Firefox startup, and sketching out a roadmap for web-ext, the command line tool for extension development. We also had a chance to make progress on the browser.proxy API.

Improving addons.mozilla.org (AMO)

Having recently completed the redesign of AMO for Android, we’ve now turned our attention to refreshing the desktop version. Goals for the next few months include modernizing the homepage and making it easier to find great add-ons. Here’s a preview of the new look:


Another area of focus was migrating to Django 1.11. Most of the work on the Django upgrade involved replacing and removing incompatible libraries and customizations, and a lot of progress was made during the week.

Add-on Reviews

Former intern Elvina Valieva helped make improvements to the web-ext command line tool, in addition to doing some impressive marine-themed photoshopping.

Review queue wait times have dramatically improved in the past few weeks, and we’re on track to deliver even more improvements in the next few months. During our week together, we also discussed ideas for improving the volunteer reviewer program and evolving it to stay relevant to the new WebExtensions model. We’ll be reaching out to the review team for feedback in the coming weeks.

Get Involved

Interested in contributing to the add-ons community? Check out our wiki to see a list of current opportunities.


The post Add-ons at Mozilla All Hands San Francisco appeared first on Mozilla Add-ons Blog.

Planet Mozilla: Turns Out, Custom T-Shirts Are Cheap

The final party at the recent Mozilla All Hands, organized by the ever-awesome Brianna Mark, had a “Your Favourite Scientist” theme. I’ve always been incredibly impressed by Charles Babbage, the English father of the digital programmable computer. And he was a Christian, as well. However, I didn’t really want to drag formal evening wear all the way to San Francisco.

Instead, I made some PDFs in 30 minutes and had a Babbage-themed t-shirt made up by VistaPrint, for the surprising and very reasonable sum of around £11, with delivery inside a week. I had no idea one-off custom t-shirts were so cheap. I must think of other uses for this information. Anyway, here’s the front:

and the back:

The diagram is, of course, part of his original plans for his Difference Engline. Terrible joke, but there you go. The font is Tangerine. Sadly, the theme was not as popular as the Steampunk one we did a couple of All Hands ago, and there weren’t that many people in costume. And the Academy of Sciences was cold enough that I had my hoodie on most of the time…

Planet Mozilla: Win32 Gotchas

For the second time since I have been at Mozilla I have encountered a situation where hooks are called for notifications of a newly created window, but that window has not yet been initialized properly, causing the hooks to behave badly.

The first time was inside our window neutering code in IPC, while the second time was in our accessibility code.

Every time I have seen this, there is code that follows this pattern:

HWND hwnd = CreateWindowEx(/* ... */);
if (hwnd) {
  // Do some follow-up initialization to hwnd (using SetProp as an example):
  SetProp(hwnd, "Foo", bar);
}

This seems innocuous enough, right?

The problem is that CreateWindowEx calls hooks. If those hooks then try to do something like GetProp(hwnd, "Foo"), that call is going to fail because the “Foo” prop has not yet been set.

The key takeaway from this is that, if you are creating a new window, you must do any follow-up initialization from within your window proc’s WM_CREATE handler. This will guarantee that your window’s initialization will have completed before any hooks are called.

You might be thinking, “But I don’t set any hooks!” While this may be true, you must not forget about hooks set by third-party code.

“But those hooks won’t know anything about my program’s internals, right?”

Perhaps, perhaps not. But when those hooks fire, they give third-party software the opportunity to run. In some cases, those hooks might even cause the thread to reenter your own code. Your window had better be completely initialized when this happens!

In the case of my latest discovery of this issue in bug 1380471, I made it possible to use a C++11 lambda to simplify this pattern.

CreateWindowEx accepts a lpParam parameter which is then passed to the WM_CREATE handler as the lpCreateParams member of a CREATESTRUCT.

By setting lpParam to a pointer to a std::function<void(HWND)>, we may then supply any callable that we wish for follow-up window initialization.

Using the previous code sample as a baseline, this allows me to revise the code to safely set a property like this:

std::function<void(HWND)> onCreate([](HWND aHwnd) -> void {
  SetProp(aHwnd, "Foo", bar);
});

HWND hwnd = CreateWindowEx(/* ... */, &onCreate);
// At this point it is already too late to further initialize hwnd!

Note that since lpParam is always passed during WM_CREATE, which always fires before CreateWindowEx returns, it is safe for onCreate to live on the stack.

I liked this solution for the a11y case because it preserved the locality of the initialization code within the function that called CreateWindowEx; the window proc for this window is implemented in another source file and the follow-up initialization depends on the context surrounding the CreateWindowEx call.

Speaking of window procs, here is how that window’s WM_CREATE handler invokes the callable:

switch (uMsg) {
  case WM_CREATE: {
    auto createStruct = reinterpret_cast<CREATESTRUCT*>(lParam);
    auto createProc = reinterpret_cast<std::function<void(HWND)>*>(
        createStruct->lpCreateParams);

    if (createProc && *createProc) {
      (*createProc)(hwnd);
    }

    return 0;
  }
  // ...

TL;DR: If you see a pattern where further initialization work is being done on an HWND after a CreateWindowEx call, move that initialization code to your window’s WM_CREATE handler instead.

Planet Mozilla: Mozilla files comments to save the internet… again

Today, we filed Mozilla’s comments to the FCC. Just want to take a look at them? They’re right here – or read on for more.

Net neutrality is critical to the internet’s creators, innovators, and everyday users. We’ve talked a lot about the importance of net neutrality over the years, both in the US and globally — and there have been many positive developments. But today there’s a looming threat: FCC Chairman Pai’s plan to roll back enforceable net neutrality protections in his so-called “Restoring Internet Freedom” proceeding.

Net neutrality — enforceable and with clear rules for providers — is critical to the future of the internet. Our economy and society depend on the internet being open. For net neutrality to work, it must be enforceable. In the past, when internet service providers (ISPs) were not subject to enforceable rules, they violated net neutrality. ISPs prevented users from chatting on FaceTime and streaming videos, among other questionable business practices. The 2015 rules fixed this: the Title II classification of broadband protected access to the open internet and made all voices free to be heard. The 2015 rules preserved, and made enforceable, the fundamental principles and assumptions on which the internet has always been rooted. To abandon these core assumptions about how the internet works and is regulated has the potential to wreak havoc. It would hurt users and stymie innovation. It could very well see the US fall behind the other 47 countries around the world that have enforceable net neutrality rules.

We’ve asked you to comment, and we’ve been thrilled with your response. Thank you! Keep it coming! Now it’s our turn. Today, we are filing Mozilla’s comments on the proceeding, arguing against this rollback of net neutrality protections. Net neutrality is a critical part of why the internet is great, and we need to protect it:

  • Net neutrality is fundamental to free speech. Without it, big companies could censor anyone’s voice and make it harder to speak up online.
  • Net neutrality is fundamental to competition. Without it, ISPs can prioritize their businesses over newcomer companies trying to reach users with the next big thing.
  • Net neutrality is fundamental to innovation. Without it, funding for startups could dry up, as established companies that can afford to “pay to play” become the only safe tech investments.
  • And, ultimately, net neutrality is fundamental to user choice. Without it, ISPs can choose what you access — or how fast it may load — online.

The best way to protect net neutrality is with what we have today: clear, lightweight rules that are enforceable by the FCC. There is no basis to change net neutrality rules, as there is no clear evidence of a negative impact on anything, including ISPs’ long-term infrastructure investments. We’re concerned that user rights and online innovation have become a political football, when really most people and companies agree that net neutrality is important.

There’s more to come in this process — many will write “reply comments” over the next month. After that, the Commission should consider these comments (and we hope they reconsider the plan entirely) and potentially vote on the proposal later this year. We fully expect the courts to weigh in here if the new rule is enacted, and we’ll engage there too. Stay tuned!

The post Mozilla files comments to save the internet… again appeared first on Open Policy & Advocacy.

Planet Mozilla: Antenna: post-mortem and project wrap-up


Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the Breakpad crash reporter asks the user if the user would like to send a crash report. If the user answers "yes!", then the Breakpad crash reporter collects data related to the crash, generates a crash report, and submits that crash report as an HTTP POST to Socorro--specifically the Socorro collector.

The Socorro collector is one of several components that comprise Socorro. Each of the components has different uptime requirements and different security risk profiles. However, all the code is maintained in a single repository and we deploy everything every time we do a deploy. This is increasingly inflexible and makes it difficult for us to make architectural changes to Socorro without affecting everything and incurring uptime risk for components that have high uptime requirements.

Because of that, in early 2016, we embarked on a rearchitecture to split out some components of Socorro into separate services. The first component to get split out was the Socorro collector, since it has the highest uptime requirements of all the Socorro components but rarely changes, so it'd be a lot easier to meet those requirements if it were separate from the rest of Socorro.

Thus I was tasked with splitting out the Socorro collector, and this blog post covers that project. It's a bit stream-of-consciousness, because I think there's some merit in explaining the thought process behind how I did the work over the course of the project, for other people tackling similar projects.

Read more… (15 mins to read)

Planet MozillaEasy Passwords released as a Web Extension

I’ve finally released Easy Passwords as a Web Extension (not yet through AMO review at the time of writing), so that it can continue working after Firefox 57. To be precise, this is an intermediate step, a hybrid extension meant to migrate data out of the Add-on SDK into the Web Extension part. But all the functionality is in the Web Extension part already, and the data migration part is tiny. Why did it take me so long? After all, Easy Passwords was created when Mozilla’s Web Extensions plan was already announced. So I was designing the extension with Web Extensions in mind, which is why it could be converted without any functionality changes now. Also, Easy Passwords has been available for Chrome for a while already.

The trouble was mostly the immaturity of the Web Extensions platform, which is why I chose to base the extension on the Add-on SDK initially (hey, Mozilla used to promise continued support for the Add-on SDK, so it looked like the best choice back then). Even now I had to fight a bunch of bugs before things were working as expected. Writing to clipboard is weird enough in Chrome, but in Firefox there is also a bug preventing you from doing so in the background page. Checking whether one of the extension’s own pages is open? Expect this to fail, fixed only very recently. Presenting the user with a file download dialog? Not without hacks. And then there are some strange keyboard focus issues that I didn’t even file a bug for yet.

There are still plenty more bugs and inconsistencies. For example, I managed to convert my Enforce Encryption extension so that it would work in Chrome, but it won’t work in Firefox due to a difference in the network stack. But Mozilla’s roadmap is set in stone, Firefox 57 it is. The good news: it could have been worse, Microsoft Edge shipped with an even more immature extensions platform. I complained about difficulties presenting a file download dialog to the user? In Edge, there are three bugs playing together nicely to make this task completely impossible: 1, 2, 3.

Planet MozillaPreview Storage API in Firefox Nightly

We are happy to announce that the Storage API feature is ready for testing in Firefox Nightly Desktop!

Storage API

The Storage API allows Web sites to find out how much space users can use (quota), how much they are already using (usage), and can also tell Firefox to store this data persistently, per origin. This feature is available only in secure contexts (HTTPS). You can also use the Storage API via Web Workers.
There are plenty of APIs that can be used for storage, e.g., localStorage and IndexedDB. The data stored for a Web site managed by the Storage API — which is defined by the Storage Standard specification — includes:
  • IndexedDB databases
  • Cache API data
  • Service Worker registrations
  • Web Storage API data
  • History state information saved using pushState()
  • Notification data
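The quota and usage figures mentioned above can be queried with navigator.storage.estimate(). Here is a minimal sketch, assuming a browser that implements the Storage API in a secure context; the formatEstimate helper is just for illustration:

```javascript
// Formats an estimate object ({ usage, quota } in bytes) for display.
function formatEstimate({ usage, quota }) {
  const percent = quota ? ((usage / quota) * 100).toFixed(2) : "0.00";
  return `Using ${usage} of ${quota} bytes (${percent}%)`;
}

// Feature-detect so this is a no-op outside supporting browsers.
if (typeof navigator !== "undefined" && navigator.storage &&
    navigator.storage.estimate) {
  navigator.storage.estimate().then(estimate => {
    console.log(formatEstimate(estimate));
  });
}
```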

Storage limits

The maximum browser storage space is dynamic: it is based on your hard drive size when Firefox launches. The global limit is calculated as 50% of free disk space. There is also another limit called the group limit: to prevent individual sites from using exorbitant amounts of storage when free space is plentiful, it is defined as 20% of the global limit, capped at a maximum of 2GB. Each origin is part of a group (a group of origins).
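The arithmetic above can be sketched as follows. This is an illustration only; the real numbers are computed internally by Firefox and may change:

```javascript
const GB = 1024 ** 3;

// Sketch of the limits described above: the global limit is 50% of free
// disk space, and the group limit is 20% of that, capped at 2GB.
function storageLimits(freeDiskBytes) {
  const globalLimit = freeDiskBytes * 0.5;
  const groupLimit = Math.min(globalLimit * 0.2, 2 * GB);
  return { globalLimit, groupLimit };
}

console.log(storageLimits(100 * GB));
```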

Site Storage

Basically, each origin has an associated site storage unit; site storage consists of zero or more site storage units. A site storage unit contains a single box, and a box has a mode which is either “best-effort” or “persistent”.
Traditionally, when users run out of storage space on their device, data stored with these APIs gets lost without the user being able to intervene. Modern browsers manage storage space automatically: they store data until the quota is reached and do the housekeeping work on their own. This is the “best-effort” mode.
But this doesn’t fit web games or music streaming sites, which may have offline storage use cases (for example, caching content for a commute): best-effort data is evicted when available storage space runs low, and having to re-download it all is an awful experience. Such sites may also need more space than “best-effort” mode allows, since it is bound by the group limit with an upper bound of just 2GB.
With the Storage API, if a site has the “persistent-storage” permission, it can call navigator.storage.persist() to let the user decide to keep the site data; that is “persistent” mode.

if (navigator.storage && navigator.storage.persist) {
  navigator.storage.persist().then(persisted => {
    if (persisted)
      console.log("Persistent mode. Quota is bound by global limit (50% of disk space).");
    else
      console.log("Best-effort mode. Storage may be evicted under storage pressure.");
  });
}

Site Storage Units

  • Each example is independent here.
  • If a user allows the site to store persistently, the site can store more data on disk, and the site storage quota for that origin is limited by the global limit rather than the group limit.
  • Site Storage Unit of Origin A consists of three different types of storage: IndexedDB databases, Local Storage and Cache API data; Site Storage Unit of Origin B consists of Cache API data only. The quotas of Origin A’s and Origin B’s site storage units are limited by the global limit.
  • Site Storage Unit of Origin C is full: it has reached its quota (the global limit) and can’t store any more data without removing existing site storage data. The UA will start to evict “best-effort” site storage units under a least recently used (LRU) policy; if all best-effort site storage units have been freed and there is still not enough space, the user agent will show a storage pressure notification so the user can clear storage explicitly. See the storage pressure notification screenshot below. Firefox may notify users to clear storage when data usage exceeds 90% of the global limit.
  • Site Storage Unit of Origin D is also full, but its box mode is “best-effort”, so its quota is the per-origin storage limit (in Firefox 56 this is still bound by the group limit), which is smaller than persistent storage. The user agent will try to retain the data contained in the box for as long as it can, but will not warn users if storage space runs low and it becomes necessary to clear the box to relieve the storage pressure.
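The best-effort eviction described above can be sketched as a simple LRU pass. The data model here is hypothetical; Firefox’s actual bookkeeping is internal:

```javascript
// Toy LRU eviction: free at least `needed` bytes by evicting
// best-effort site storage units, least recently used first.
// Persistent units are never touched.
function evictLRU(units, needed) {
  const candidates = units
    .filter(u => u.mode === "best-effort")
    .sort((a, b) => a.lastUsed - b.lastUsed);
  const evicted = [];
  let freed = 0;
  for (const u of candidates) {
    if (freed >= needed) break;
    freed += u.bytes;
    evicted.push(u.origin);
  }
  return { evicted, freed };
}
```

If evicting every best-effort unit still cannot free enough space, the user agent falls back to the storage pressure notification described above.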

Prompting the user to clear persistent storage



Persistent Storage Permission

Preferences – Site Data


If the user “persists” a site, the site data for that origin won’t be evicted until the user manually deletes it in Preferences. With the new ‘Site Data Manager’, the user can now manage site data easily and delete persistent site data manually, all in one place. Although cookies are not part of site storage, site storage should be cleared along with cookies to prevent user tracking or data inconsistency.
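Before prompting, a site can check where it stands via the Permissions API. A sketch, with an illustrative describeState helper:

```javascript
// Maps a permission state to a short description of what it means for
// the "persistent-storage" permission.
function describeState(state) {
  return {
    granted: "Site data is persistent until the user deletes it.",
    prompt: "Calling persist() will ask the user.",
    denied: "Storage stays in best-effort mode.",
  }[state] || "Unknown state.";
}

// Feature-detect so this is a no-op outside supporting browsers.
if (typeof navigator !== "undefined" && navigator.permissions) {
  navigator.permissions
    .query({ name: "persistent-storage" })
    .then(result => console.log(describeState(result.state)));
}
```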

Storage API is now available for testing in Firefox Nightly 56.

What’s currently supported

  • new Site Data Manager in Preferences
  • IndexedDB, asm.js caching, Cache API data are managed by Storage API

Storage API V1.5

  • Local Storage/History state information/Notification data are managed by Storage API

Storage API V2

  • Multiple Boxes

Try it Out

To use the new Site Data Manager, open “Privacy Preferences” (you can type about:preferences#privacy in the address bar). Click the “Settings…” button beside “Site Data”.

Take a look at the Storage API wiki page for much more information and to get involved.

Planet MozillaConfession Of A C/C++ Programmer

I've been programming in C and C++ for over 25 years. I have a PhD in Computer Science from a top-ranked program, and I was a Distinguished Engineer at Mozilla where for over ten years my main job was developing and reviewing C++ code. I cannot consistently write safe C/C++ code. I'm not ashamed of that; I don't know anyone else who can. I've heard maybe Daniel J. Bernstein can, but I'm convinced that, even at the elite level, such people are few and far between.

I see a lot of people assert that safety issues (leading to exploitable bugs) with C and C++ only afflict "incompetent" or "mediocre" programmers, and one need only hire "skilled" programmers (such as, presumably, the asserters) and the problems go away. I suspect such assertions are examples of the Dunning-Kruger effect, since I have never heard them made by someone I know to be a highly skilled programmer.

I imagine that many developers successfully create C/C++ programs that work for a given task, and no-one ever fuzzes or otherwise tries to find exploitable bugs in those programs, so those developers naturally assume their programs are robust and free of exploitable bugs, creating false optimism about their own abilities. Maybe it would be useful to have an online coding exercise where you are given some apparently-simple task, you write a C/C++ program to solve it, and then your solution is rigorously fuzzed for exploitable bugs. If any such bugs are found then you are demoted to the rank of "incompetent C/C++ programmer".

Planet MozillaAn Inflection Point In The Evolution Of Programming Languages

Two recent Rust-related papers are very exciting.

Rustbelt formalizes (a simplified version of) Rust's semantics and type system and provides a soundness proof that accounts for unsafe code. This is a very important step towards confidence in the soundness of safe Rust, and towards understanding what it means for unsafe code to be valid — and building tools to check that.

This systems paper is about exploiting Rust's remarkably strong control of aliasing to solve a few different OS-related problems.

It's not often you see a language genuinely attractive to the systems research community (and programmers in general, as the Rust community shows) being placed on a firm theoretical foundation. (It's pretty common to see programming languages being advocated to the systems community by their creators, but this is not that.) Whatever Rust's future may be, it is setting a benchmark against which future systems programming languages should be measured. Key Rust features — memory safety, data-race freedom, unique ownership, and minimal space/time overhead, justified by theory — should from now on be considered table stakes.

Planet MozillaAdd-ons Update – 2017/07

Here’s the monthly update of the state of the add-ons world.

The Road to Firefox 57 explains what developers should look forward to in regards to add-on compatibility for the rest of the year. So please give it a read if you haven’t already.

The Review Queues

In the past month, our team reviewed 1,597 listed add-on submissions:

  • 1294 in fewer than 5 days (81%).
  • 110 between 5 and 10 days (7%).
  • 193 after more than 10 days (12%).

301 listed add-ons are awaiting review.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Visit our wiki page for more information.

Compatibility Update

We published the blog post for 55 and the bulk validation has been run. Additionally, the compatibility post for 56 is coming up.

Make sure you’ve tested your add-ons and either use WebExtensions or set the multiprocess compatible flag in your manifest. As always, we recommend that you test your add-ons on Beta.

If you’re an add-ons user, you can install the Add-on Compatibility Reporter. It helps you identify and report any add-ons that aren’t working anymore.


We would like to thank the following people for their recent contributions to the add-ons world:

  • Aayush Sanghavi
  • Santiago Paez
  • Markus Strange
  • umaarabdullah
  • Ahmed Hasan
  • Fiona E Jannat
  • saintsebastian
  • Atique Ahmed
  • Apoorva Pandey
  • Cesar Carruitero
  • J.P. Rivera
  • Trishul Goel
  • Santosh
  • Christophe Villeneuve

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/07 appeared first on Mozilla Add-ons Blog.

Planet MozillaIntroducing sphinx-js, a better way to document large JavaScript projects

Until now, there has been no good tool for documenting large JavaScript projects. JSDoc, long the sole contender, has some nice properties:

  • A well-defined set of tags for describing common structures
  • Tooling like the Closure Compiler which hooks into those tags

But the output is always a mere alphabetical list of everything in your project. JSDoc scrambles up and flattens out your functions, leaving new users to infer their relationships and mentally sort them into comprehensible groups. While you can get away with this for tiny libraries, it fails badly for large ones like Fathom, which has complex new concepts to explain. What I wanted for Fathom’s manual was the ability to organize it logically, intersperse explanatory prose with extracted docs, and add entire sections which are nothing but conceptual overview and yet link into the rest of the work.1

The Python world has long favored Sphinx, a mature documentation tool with support for many languages and output formats, along with top-notch indexing, glossary generation, search, and cross-referencing. People have written entire books in it. Via plugins, it supports everything from Graphviz diagrams to YouTube videos. However, its JavaScript support has always lacked the ability to extract docs from code.

Now sphinx-js adds that ability, giving JavaScript developers the best of both worlds.

sphinx-js consumes standard JSDoc comments and tags—you don’t have to do anything weird to your source code. (In fact, it delegates the parsing and extraction to JSDoc itself, letting it weather future changes smoothly.) You just have Sphinx initialize a docs folder in the root of your project, activate sphinx-js as a plugin, and then write docs to your heart’s content using simple reStructuredText. When it comes time to call in some extracted documentation, you use one of sphinx-js’s special directives, modeled after the Python-centric autodoc’s mature example. The simplest looks like this:

.. autofunction:: linkDensity

That would go and find this function…

/**
 * Return the ratio of the inline text length of the links in an element to
 * the inline text length of the entire element.
 * @param {Node} node - The node whose density to measure
 * @throws {EldritchHorrorError|BoredomError} If the expected laws of the
 *     universe change, raise EldritchHorrorError. If we're getting bored of
 *     said laws, raise BoredomError.
 * @returns {Number} A ratio of link length to overall text length: 0..1
 */
function linkDensity(node) {

…and spit out a nicely formatted block like this:

(the previous comment block, formatted nicely)

Sphinx begins to show its flexibility when you want to do something like adding a series of long examples. Rather than cluttering the source code around linkDensity, the additional material can live in the reStructuredText files that comprise your manual:

.. autofunction:: linkDensity
   Anything you type here will be appended to the function's description right
   after its return value. It's a great place for lengthy examples!

There is also a sphinx-js directive for classes, either the ECMAScript 2015 sugared variety or the classic functions-as-constructors kind decorated with @class. It can optionally iterate over class members, documenting as it goes. You can control ordering, turn private members on or off, or even include or exclude specific ones by name—all the well-thought-out corner cases Sphinx supports for Python code. Here’s a real-world example that shows a few truly public methods while hiding some framework-only “friend” ones:

.. autoclass:: Ruleset(rule[, rule, ...])
   :members: against, rules

(Ruleset class with extracted documentation, including member functions)

Going beyond the well-established Python conventions, sphinx-js supports references to same-named JS entities that would otherwise collide: for example, one foo that is a static method on an object and another foo that is an instance method on the same. It does this using a variant of JSDoc’s namepaths. For example…

  • someObject#foo is the instance method.
  • someObject.foo is the static method.
  • And someObject~foo is an inner member, the third possible kind of overlapping thing.

Because JSDoc is still doing the analysis behind the scenes, we get to take advantage of its understanding of these JS intricacies.

Of course, JS is a language of heavy nesting, so things can get deep and dark in a hurry. Who wants to type this full path in order to document innerMember?

some/module.SomeClass#someInstanceMethod.staticMethod~innerMember
Yuck! Fortunately, sphinx-js indexes all such object paths using a suffix tree, so you can use any suffix that unambiguously refers to an object. You could likely say just innerMember. Or, if there were 2 objects called “innerMember” in your codebase, you could disambiguate by saying staticMethod~innerMember and so on, moving to the left until you have a unique hit. This delivers brevity and, as a bonus, saves you having to touch your docs as things move around your codebase.
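As a toy illustration of that suffix lookup (not sphinx-js’s actual implementation, which uses a suffix tree), one could match paths by their trailing segments:

```javascript
// Toy suffix lookup: given full JSDoc-style namepaths, find the single
// path ending with the requested suffix; throw if ambiguous or missing.
function resolve(paths, suffix) {
  const want = suffix.split(/([#~.])/).filter(Boolean);
  const matches = paths.filter(p => {
    const segs = p.split(/([#~.])/).filter(Boolean);
    return segs.slice(-want.length).join("") === want.join("");
  });
  if (matches.length !== 1) {
    throw new Error(`ambiguous or unknown suffix: ${suffix}`);
  }
  return matches[0];
}
```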

With the maturity and power of Sphinx, backed by the ubiquitous syntactical conventions and proven analytic machinery of JSDoc, sphinx-js is an excellent way to document any large JS project. To get started, see the readme. Or, for a large-scale example, see the Fathom documentation. A particularly juicy page is the Rule and Ruleset Reference, which intersperses tutorial paragraphs with extracted class and function docs; its source code is available behind a link in its upper right, as for all such pages.

I look forward to your success stories and bug reports—and to the coming growth of rich, comprehensibly organized JS documentation!

1JSDoc has tutorials, but they are little more than single HTML pages. They have no particular ability to cross-link with the rest of the documentation nor to call in extracted comments.

Planet MozillaGetting Firefox data faster: introducing the ‘new-profile’ ping

Let me state this clearly, again: data latency sucks. This is especially true when working on Firefox: a nicely crafted piece of software that ships worldwide to many people. When something affects the experience of our users we need to know and react fast. The story so far… We started improving the latency of the … 

Planet MozillaIf using ES6 `extends`, call `super()` before accessing `this`

I am working on rewriting some code that used an ES5 “Class” helper, to use actual ES6 classes.

I soon stumbled upon a weird error in which apparently valid code would be throwing a |this| used uninitialized in A class constructor error:

class A extends B {
  constructor() {
    this.someVariable = 'some value'; // fails
  }
}

I was absolutely baffled as to why this was happening… until I found the answer in a stackoverflow post: I had to call super() before accessing this.

With that, the following works perfectly:

class A extends B {
  constructor() {
    super(); // ☜☜☜ ❗️❗️❗️
    this.someVariable = 'some value'; // works!
  }
}
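For anyone who wants to run it, here is a self-contained version with a stand-in base class B (the base class name and its field are illustrative):

```javascript
class B {
  constructor() {
    this.base = true; // stand-in state set by the parent constructor
  }
}

class A extends B {
  constructor() {
    super(); // must run before `this` is touched
    this.someVariable = 'some value';
  }
}

const a = new A();
console.log(a.someVariable); // 'some value'
```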

Edit: filed a bug in Firefox to at least get a better error message!


Planet MozillaThoughts on the module system

For a long long time Mozilla has governed its code (and a few other things) as a series of modules. Each module covers an area of code in the source and has an owner and a list of peers, folks that are knowledgeable about that module. The full list of modules is public. In the early days the module system was everything. Module owners had almost complete freedom to evolve their module as they saw fit including choosing what features to implement and what bugs to fix. The folks who served as owners and peers came from diverse areas too. They weren’t just Mozilla employees, many were outside contributors.

Over time things have changed somewhat. Most of the decisions about what features to implement and many of the decisions about the priority of bugs to be fixed are now decided by the product management and user experience teams within Mozilla. Almost all of the module owners and most of the peers are now Mozilla employees. It might be time to think about whether the module system still works for Mozilla and if we should make any changes.

In my view the current module system provides two things that it’s worth talking about. A list of folks that are suitable reviewers for code and a path of escalation for when disagreements arise over how something should be implemented. Currently both are done on a per-module basis. The module owner is the escalation path, the module peers are reviewers for the module’s code.

The escalation path is probably something that should remain as-is. We’re always going to need experts to defer decisions to, those experts often need to be domain specific as they will need to understand the architecture of the code in question. Maintaining a set of modules with owners makes sense. But what about the peers for reviewing code?

A few years ago both the Toolkit and Firefox modules were split into sub-modules with peers for each sub-module. We were saying that we trusted folks to review some code but didn’t trust them to review other code. This frequently became a problem when code changes touched more than one sub-module, a developer would have to get review from multiple peers. That made reviews go slower than we liked. So we dropped the sub-modules. Instead every peer was trusted to review any code in the Firefox or Toolkit module. The one stipulation we gave the peers was that they should turn away reviews if they didn’t feel like they knew the code well enough to review it themselves. This change was a success. Of course for complicated work certain reviewers will always be more knowledgeable about a given area of code but for simpler fixes the number of available reviewers is much larger than it was otherwise.

I believe that we should make the same change for the entire code-base. Instead of having per-module peers simply designate every existing peer as a “Mozilla code reviewer” able to review code anywhere in the tree so long as they feel that they understand the change well enough. They should escalate architectural concerns to the module’s owner.

This does bring up the question of how patch authors find reviewers for their code if there is just one massive list of reviewers. That is where Bugzilla’s suggested reviewers feature comes in. We have per-component lists in Bugzilla of reviewers who are the appropriate choices for that component. Patches that cover more than one component can choose whoever they like to do the review.


Planet MozillaDefending Net Neutrality: Millions Rally to Save the Internet, Again

We’re fighting for net neutrality, again, because it is crucial to the future of the internet. Net neutrality serves to enable free speech, competition, innovation and user choice online.

On July 12, it was great to see such a diversity of voices speak up and join together to support a neutral internet. We need to protect the internet as a shared global public resource for us all. This Day of Action makes it clear, yet again, that net neutrality is a mainstream issue, which the majority of Americans (76% from our recent survey) care about and support.

We were happy to see a lot of engagement with our Day of Action activities:

  • Mozilla collected more than 30,000 public comments on July 12 alone — bringing our total number of public comments to more than 75,000. We’ll be sharing these with the FCC
  • Our nine hour Soothing Sounds of Activism: Net Neutrality video, along with interviews from Senators Al Franken and Ron Wyden, received tens of thousands of views
  • The net neutrality public comments displayed on the U.S. Firefox snippet made 6.8 million impressions
  • 30,000 listeners tuned in for the net neutrality episode of our IRL podcast

The Day of Action was timed a few days before the first deadline for comments to the FCC on the proposed rollback of existing net neutrality protections. This is just the first step though. Mozilla takes action to protect net neutrality every day, because it’s obviously not a one day battle.

Net neutrality is not the sole responsibility of any one company, individual or political party. We need to join together because the fight for net neutrality impacts the future of the internet and everyone who uses it.

What’s Next?

Right now, we’re finishing our FCC comments to submit on July 17. Next, we’ll continue to advocate for enforceable net neutrality through all FCC deadlines and we’ll defend the open internet, just like we did with our comments and efforts to protect net neutrality in 2010 and 2014.

The post Defending Net Neutrality: Millions Rally to Save the Internet, Again appeared first on The Mozilla Blog.

Planet MozillaFirefox Developer Edition 55 Beta 11 Testday, July 21st

Hello Mozillians,

We are happy to let you know that Friday, July 21st, we are organizing Firefox Developer Edition 55 Beta 11 Testday. We’ll be focusing our testing on the following features: Screenshots, Shutdown Video Decoder and Customization.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Planet MozillaReps Weekly Meeting Jul. 13, 2017

Reps Weekly Meeting Jul. 13, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.


