Planet Mozilla – Ottawa Python Authors Meetup: Artificial Intelligence with Python

Last night, I attended my first Ottawa Python Authors Meetup.  I had wanted to go for a long time.  (Mr. Releng also works with Python, so every time there's a meetup we discuss who gets to go and who gets to stay home and take care of little Releng.  It depends on whose work interests the talk is more relevant to.)

The venue was across the street from Confederation Park aka land of Pokemon.


I really enjoyed it.  The people I chatted with were very friendly and welcoming. Of course, I ran into some people I used to work with, as seems to happen at any tech event in Ottawa. It was nice to catch up!

The venue had the Canada Council for the Arts as a tenant, thus the quintessentially Canadian art.


The speaker that night was Emily Daniels, a developer from Halogen Software, who spoke on Artificial Intelligence with Python (slides here, GitHub repo here).  She mentioned that she writes Java during the day but works on fun projects in Python at night.  She started the talk by going through some examples of artificial intelligence on the web.  The one I found most interesting was a recurrent neural network called Benjamin, which generates movie script ideas and was trained on existing sci-fi movies and movie scripts.  A short film called Sunspring was even made from one of the generated scripts.  The dialogue is kind of stilted, but it is an interesting concept.

 After the examples, Emily then moved on to how it all works. 

Deep learning is a type of machine learning that derives meaning from data using a hierarchy of multiple layers that mimics the neural networks of our brain.

She then spoke about a project she wrote to create generative poetry from an RNN (recurrent neural network).  It was based on an RNN tutorial that she heavily refactored to meet her needs.  She went through the code that she developed to generate artificial prose from the works of H.G. Wells and Jane Austen.  She talked about how she cleaned up the text to remove EOL delimiters, page breaks, chapter numbers and so on. Training the network on the data then took a week.
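
As a rough idea of what that kind of cleanup can look like, here is a minimal sketch (my own illustration, not Emily's code; the filename and regex patterns are purely hypothetical):

import re

def clean_text(raw):
    # Illustrative cleanup only: drop page breaks, collapse hard line breaks,
    # and strip "CHAPTER XII"-style headings before feeding the text to an RNN.
    text = raw.replace("\f", " ")
    text = re.sub(r"\r?\n", " ", text)
    text = re.sub(r"\bCHAPTER\s+[IVXLC\d]+\b", " ", text, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", text).strip()

with open("wells_time_machine.txt", encoding="utf-8") as fh:  # hypothetical file
    corpus = clean_text(fh.read())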

She then talked about another example which used data from Jack Kerouac and Virginia Woolf novels; she posts some of the results to Twitter.


She also created a Twitter account which posts generated text from her RNN that consumes the works of Walt Whitman and Emily Dickinson. (I should mention at this point that she chose these authors for her projects because the copyrights on these works have expired and they are available from Project Gutenberg.)

After the talk, she fielded a number of really insightful audience questions. There was discussion of the inherent bias in the data, because it was written by humans who are sexist and racist.  She mentioned that she doesn't post the results of the model to Twitter automatically because some of them are really inappropriate: the models learned from text written by humans who are inherently biased.

One thing I found really interesting is that Emily mentioned that she felt a need to ensure that the algorithms and data continue to exist, and that they were faithfully backed up.  I began to think about all the Amazon instances that Mozilla releng had automatically killed that day as our capacity had peaked and declined.  And of the great joy I feel ripping out code when we deprecate a platform.  I personally feel no emotional attachment to bringing down machines or deleting unused code.
 
Perhaps the sense that these recurrent neural networks and the data they create need a caretaker is related to the fact that the algorithms output text that is a simulacrum of the work of an author we enjoy reading.  And perhaps that is why we aren't as attached to an ephemeral pool of build machines as we are to our phones: the phone provides a sense of human connection to the larger world when we may be sitting alone.

Thank you Emily for the very interesting talk, to the Ottawa Python Authors Group for organizing the meetup, and Shopify for sponsoring the venue.  Looking forward to the next one!

Further reading

Planet Mozilla – Eclipse Committer Emeritus

I received this very kind email in my inbox this morning.

"David Williams has expired your commit rights to the
eclipse.platform.releng project.  The reason for this change is:

We have all known this day would come, but it does not make it any easier.
It has taken me four years to accept that Kim is no longer helping us with
Eclipse. That is how large her impact was, both on myself and Eclipse as a
whole. And that is just the beginning of why I am designating her as
"Committer Emeritus". Without her, I humbly suggest that Eclipse would not
have gone very far. Git shows her active from 2003 to 2012 -- longer than
most! She is (still!) user number one on the build machine. (In Unix terms,
that is UID 500). The original admin, when "Eclipse" was just the Eclipse
Project.

She was not only dedicated to her job as a release engineer, she was
passionate about doing all she could to make other committers' jobs easier
so they could focus on their code and specialties. She did (and still does)
know that release engineering is a field of its own; a specialized
profession (not something to "tack on" at the end that just anyone can do)
and good, committed release engineers are critical to the success of any
project.

For anyone reading this that did not know Kim, it is not too late: you can
follow her blog at

http://relengofthenerds.blogspot.com/

You will see that she is still passionate about release engineering and
influential in her field.

And, besides all that, she was (I assume still is :) a well-rounded, nice
person, that was easy to work with! (Well, except she likes running for
exercise. :)

Thanks, Kim, for all that you gave to Eclipse and my personal thanks for
all that you taught me over the years (and I mean before I even tried to
fill your shoes in the Platform).

We all appreciate your enormous contribution to the success of Eclipse and
are happy to see your successes continuing.

To honor your contributions to the project, David Williams has nominated
you for Committer Emeritus status."


Thank you David! I really appreciate your kind words.  I learned so much working with everyone in the Eclipse community.  I had intended to keep contributing to Eclipse when I left IBM, but really felt that I had given all I had to give.  Few people have the chance to contribute to two fantastic open source communities during their career.  I'm lucky to have had that opportunity.


My IBM friends made this neat Eclipse poster when I left.  The Mozilla dino displays my IRC handle.

Planet Mozilla – Extension Signing: Availability of Unbranded Builds

With the release of Firefox 48, extension signing can no longer be disabled in the release and beta channel builds by using a preference. As outlined when extension signing was announced, we are publishing specialized builds that support this preference so developers can continue to test against the code that beta and release builds are generated from. These builds do not use Firefox branding, do not update automatically, and are available for the en-US locale only.

You can find links to the latest unbranded beta and release builds on the Extension Signing page as they come out. Additional information on Extension Signing, including a Frequently Asked Questions section, is also available on this page.

Update: We’re aware of a problem with the builds where they’re getting updated to branded releases/betas. We’re looking into it, and a workaround is outlined on the Extension Signing page while we correct the issue.

Planet Mozilla – a git pre-commit hook for tooltool manifest checking

I’ve recently been uploading packages to tooltool for my work on Rust-in-Gecko and Android toolchains. The steps I usually follow are:

  1. Put together tarball of files.
  2. Call tooltool.py from build-tooltool to create a tooltool manifest.
  3. Upload files to tooltool with said manifest.
  4. Copy bits from said manifest into one of the manifest files automation uses.
  5. Do try push with new manifest.
  6. Admire my completely green try push.

That would be the ideal, anyway.  What usually happens at step 4 is that I forget a comma, or I forget a field in the manifest, and so step 5 winds up going awry, and I end up taking several times as long as I would have liked.

After running into this again today, I decided to implement some minimal validation for automation manifests.  I use a fork of gecko-dev for development, as I prefer Git to Mercurial. Git supports running programs when certain things occur; these programs are known as hooks and are usually implemented as shell scripts. The hook I’m interested in is the pre-commit hook, which is looked for at .git/hooks/pre-commit in any git repository. Repositories come with a sample hook for every hook supported by Git, so I started with:

cp .git/hooks/pre-commit.sample .git/hooks/pre-commit

The sample pre-commit hook checks trailing whitespace in files, which I sometimes leave around, especially when I’m editing Python, and can check for non-ASCII filenames being added.  I then added the following lines to that file:

if git diff --cached --name-only | grep -q releng.manifest; then
    for f in $(git diff --cached --name-only | grep releng.manifest); do
        if ! python - <<EOF
import json
import sys
try:
    with open("$f", 'r') as f:
        json.loads(f.read())
    sys.exit(0)
except:
    sys.exit(1)
EOF
        then
            echo "$f is not valid JSON"
            exit 1
        fi
    done
fi

In prose, we’re checking to see if the current commit has any releng.manifest files being changed in any way. If so, then we’ll try parsing each of those files as JSON, and throwing an error if one doesn’t parse.

There are several ways this check could be more robust:

  • The check will error if a commit is removing a releng.manifest, because that file won’t exist for the script to check;
  • The check could ensure that the unpack field is set for all files, as the manifest file used for the upload in step 3, above, doesn’t include that field: it needs to be added manually (a rough sketch of such a check follows this list).
  • The check could ensure that all of the digest fields are the correct length for the specified digest in use.
  • …and so on.
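
As a rough sketch of what that second check might look like, here is a standalone script. Treat it as illustrative only: it assumes the usual tooltool manifest layout, a JSON list of entries with filename, algorithm, digest and an optional unpack field.

import json
import sys

def check_manifest(path):
    # Assumes a tooltool-style manifest: a JSON list of entries with
    # "filename", "algorithm", "digest" and (optionally) "unpack" fields.
    with open(path) as fh:
        entries = json.load(fh)
    for entry in entries:
        if "unpack" not in entry:
            return "%s: entry for %s is missing 'unpack'" % (path, entry.get("filename"))
        if entry.get("algorithm") == "sha512" and len(entry.get("digest", "")) != 128:
            return "%s: digest for %s has the wrong length" % (path, entry.get("filename"))
    return None

if __name__ == "__main__":
    error = check_manifest(sys.argv[1])
    if error:
        print(error)
        sys.exit(1)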

So far, though, simple syntax errors are the greatest source of pain for me, so that’s what’s getting checked for.  (Mismatched sizes have also been an issue, but I’m unsure of how to check that…)

What pre-commit hooks have you found useful in your own projects?

Planet Mozilla – Differential Privacy for Dummies

Technology allows companies to collect more data and with more detail about their users than ever before. Sometimes that data is sold to third parties, other times it’s used to improve products and services.

In order to protect users’ privacy, anonymization techniques can be used to strip away any piece of personally identifiable data and let analysts access only what’s strictly necessary. As the Netflix competition in 2007 showed though, that can go awry. The richness of the data makes it possible to identify users through a sometimes surprising combination of variables, like the dates on which an individual watched certain movies. A simple join between an anonymized dataset and a non-anonymized one can re-identify anonymized data.

Aggregated data is not much safer either! Given two sets of aggregated data S_1 and S_2 that count how many individuals have property p and knowing that individual u is missing from S_2 , one can infer whether individual u has property p by subtracting S_2 from S_1.

Differential Privacy to the rescue

Differential privacy formalizes the idea that a query should not reveal whether any one person is present in a dataset, much less what their data are. Imagine two otherwise identical datasets, one with your information in it, and one without it. Differential Privacy ensures that the probability that a query will produce a given result is nearly the same whether it’s conducted on the first or second dataset. The idea is that if an individual’s data doesn’t significantly affect the outcome of a query, then he might be OK in giving his information up as likely no harm will come from it. The result of the query can damage an individual regardless of his presence in a dataset though. For example, if an analysis on a medical dataset finds a correlation between lung cancer and smoking, then the health insurance cost for a particular smoker might increase regardless of his presence in the study.

More formally, differential privacy requires that the probability of a query producing any given output changes by at most a multiplicative factor when a record (e.g. an individual) is added or removed from the input. The largest multiplicative factor quantifies the amount of privacy difference. This sounds harder than it actually is and the next sections will iterate on the concept with various examples, but first we need to define a few terms.

Dataset

We will think of a dataset d as being a collection of records from a universe U . One way to represent a dataset d is with a histogram in which each entry d_i represents the number of elements in the dataset equal to u_i \in U . For example, say we collected data about coin flips of three individuals; then, given the universe U = \{head, tail\} , our dataset d would have two entries: d_{head} = i and d_{tail} = j , where i + j = 3 . Note that in reality a dataset is likely to be an ordered list of rows (i.e. a table) but the former representation makes the math a tad easier.

Distance

Given the previous definition of dataset, we can define the distance between two datasets x, y with the l_1 norm as:

\displaystyle ||x - y||_1 = \sum\limits_{i=1}^{|U|}  |x_i - y_i|
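
As a quick sanity check of the definition, here is a small sketch (plain Python dictionaries standing in for the histogram representation described above):

# l1 distance between two histogram-style datasets.
def l1_distance(x, y):
    universe = set(x) | set(y)
    return sum(abs(x.get(u, 0) - y.get(u, 0)) for u in universe)

# Two coin-flip datasets that differ by one individual are at distance 1.
assert l1_distance({"head": 2, "tail": 1}, {"head": 1, "tail": 1}) == 1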

Mechanism

A mechanism is an algorithm that takes as input a dataset and returns an output, so it can really be anything, like a number or a statistical model. Using the previous coin-flipping example, if mechanism C counts the number of individuals in the dataset, then C(d) = 3 .

Differential Privacy

A mechanism M satisfies \epsilon differential privacy if for every pair of datasets x, y such that ||x - y||_1 \leq 1 , and for every subset S \subseteq \text{Range}(M) :

\displaystyle \frac{Pr[M(x) \in S]}{Pr[M(y) \in S]} \leq e^{\epsilon}

What’s important to understand is that the previous statement is just a definition. The definition is not an algorithm, but merely a condition that must be satisfied by a mechanism to claim that it satisfies \epsilon differential privacy. Differential privacy allows researchers to use a common framework to study algorithms and compare their privacy guarantees.

Let’s check if our mechanism C satisfies 1 differential privacy. Can we find a counter-example for which:

\displaystyle \frac{Pr[C(x) \in S]}{Pr[C(y) \in S]} \leq e

is false? Given x, y such that ||x - y||_1 = 1 and ||x||_1 = k , then:

\displaystyle \frac{Pr[C(x) = k]}{Pr[C(y) = k]} \leq e

i.e. \frac{1}{0} \leq e , which is clearly false, hence this proves that mechanism C doesn’t satisfy 1 differential privacy.

Composition theorems

A powerful property of differential privacy is that mechanisms can easily be composed.

Let d be a dataset and g an arbitrary function. Then the sequential composition theorem asserts that if M_i(d) is \epsilon_i differentially private, then M(d) = g(M_1(d), M_2(d), ..., M_N(d)) is \sum_{i=1}^{N}\epsilon_i differentially private. Intuitively this means that, given an overall fixed privacy budget, the more mechanisms are applied to the same dataset, the smaller the privacy budget available to each individual mechanism becomes.

The parallel composition theorem asserts that given N partitions of a dataset d , if for an arbitrary partition d_i , M_i(d_i) is \epsilon differentially private, then M(d) = g(M_1(d_1), M_2(d_2), ..., M_N(d_N)) is \epsilon differentially private. In other words, if a set of \epsilon differentially private mechanisms is applied to a set of disjoint subsets of a dataset, then the combined mechanism is still \epsilon differentially private.
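
As a tiny illustration of the budget accounting these theorems imply (my own sketch, not from the original derivations):

# Sequential composition: mechanisms with eps = 0.1, 0.2 and 0.3 applied to
# the SAME dataset consume 0.1 + 0.2 + 0.3 = 0.6 of the privacy budget.
sequential_eps = sum([0.1, 0.2, 0.3])

# Parallel composition: an eps = 0.3 mechanism applied to each of several
# DISJOINT partitions (say, one per country) still costs only eps = 0.3 overall.
parallel_eps = max([0.3, 0.3, 0.3])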

The randomized response mechanism

The first mechanism we will look into is “randomized response”, a technique developed in the sixties by social scientists to collect data about embarrassing or illegal behavior. The study participants have to answer a yes-no question in secret using the following mechanism M_R(d, \alpha, \beta) :

  1. Flip a biased coin with probability of heads \alpha ;
  2. If heads, then answer truthfully with d ;
  3. If tails, flip a coin with probability of heads \beta and answer “yes” for heads and “no” for tails.

In code:

from random import random

def randomized_response_mechanism(d, alpha, beta):
    # d is the participant's true answer: 1 for "yes", 0 for "no".
    if random() < alpha:
        return d
    elif random() < beta:
        return 1
    else:
        return 0

Privacy is guaranteed by the noise added to the answers. For example, when the question refers to some illegal activity, answering “yes” is not incriminating as the answer occurs with a non-negligible probability whether or not it reflects reality, assuming \alpha and \beta are tuned properly.

Let’s try to estimate the proportion p of participants that have answered “yes”. Each participant can be modeled with a Bernoulli variable X_i which takes a value of 0 for “no” and a value of 1 for “yes”. We know that:

\displaystyle P(X_i = 1) = \alpha p + (1 - \alpha) \beta

Solving for p yields:

\displaystyle p = \frac{P(X_i = 1) - (1 - \alpha) \beta}{\alpha}

Given a sample of size n , we can estimate P(X_i = 1)  with \frac{\sum_{i=1}^{i=n} X_i}{n} .  Then, the estimate \hat{p} of p is:

\displaystyle \hat{p} = \frac{\frac{\sum\limits_{i=1}^{i=n} X_i}{n} - (1 - \alpha) \beta}{\alpha}

To determine how accurate our estimate is we will need to compute its standard deviation. Assuming the individual responses X_i are independent, and using basic properties of the variance,

\displaystyle \mathrm{Var}(\hat{p}) = \mathrm{Var}\biggl({\frac{\sum_{i=1}^{i=n} X_i}{n \alpha}}\biggr) = {\frac{\mathrm{Var}(X_i)}{n \alpha^2}}

By taking the square root of the variance we can determine the standard deviation of \hat{p} . It follows that the standard deviation s is proportional to \frac{1}{\sqrt{n}} , since the other factors are not dependent on the number of participants. Multiplying both \hat{p} and s by n yields the estimate of the number of participants that answered “yes” and its relative accuracy expressed in number of participants, which is proportional to \sqrt{n} .

The next step is to determine the level of privacy that the randomized response method guarantees. Let’s pick an arbitrary participant. The dataset d is represented with either 0 or 1 depending on whether the participant answered truthfully with a “no” or “yes”. Let’s call the two possible configurations of the dataset respectively d_{no} and d_{yes} . We also know that ||d_i - d_j||_1 \leq 1 for any i, j . All that’s left to do is to apply the definition of differential privacy to our randomized response mechanism M_R(d, \alpha, \beta) :

\displaystyle \frac{Pr[M_R(d_{i}, \alpha, \beta) = k]}{Pr[M_R(d_{j}, \alpha, \beta) = k]} \leq e^{\epsilon}

The definition of differential privacy applies to all possible configurations of i, j \text{ and } k , e.g.:

\displaystyle \begin{gathered}\frac{Pr[M_R(d_{yes}, \alpha, \beta) = 1]}{Pr[M_R(d_{no}, \alpha, \beta) = 1]} \leq e^{\epsilon} \\ \ln \left( \frac{\alpha + (1 - \alpha)\beta}{(1 - \alpha)\beta} \right) \leq \epsilon \\ \ln \left( \frac{\alpha + \beta - \alpha \beta}{\beta - \alpha \beta} \right) \leq \epsilon \end{gathered}

The privacy parameter \epsilon can be tuned by varying \alpha \text{ and } \beta . For example, it can be shown that the randomized response mechanism with \alpha = \frac{1}{2} and \beta = \frac{1}{2} satisfies \ln{3} differential privacy.
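
A quick numeric check of that last claim (a sketch that just plugs \alpha = \beta = \frac{1}{2} into the ratio derived above):

from math import log

alpha = beta = 0.5
p_yes_given_yes = alpha + (1 - alpha) * beta   # Pr[answer "yes" | truth is "yes"] = 0.75
p_yes_given_no = (1 - alpha) * beta            # Pr[answer "yes" | truth is "no"]  = 0.25
epsilon = log(p_yes_given_yes / p_yes_given_no)
print(epsilon, log(3))                         # both print ~1.0986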

The proof applies to a dataset that contains only the data of a single participant, so how does this mechanism scale with multiple participants? It follows from the parallel composition theorem that the combination of \epsilon differentially private mechanisms applied to the datasets of the individual participants is \epsilon differentially private as well.

The Laplace mechanism

The Laplace mechanism is used to privatize a numeric query. For simplicity we are going to assume that we are only interested in counting queries f , i.e. queries that count individuals, hence we can make the assumption that adding or removing an individual will affect the result of the query by at most 1.

The way the Laplace mechanism works is by perturbing a counting query f with noise distributed according to a Laplace distribution centered at 0 with scale b = \frac{1}{\epsilon} ,

\displaystyle Lap(x | b) = \frac{1}{2b}e^{- \frac{|x|}{b}}

Then, the Laplace mechanism is defined as:

\displaystyle M_L(x, f, \epsilon) = f(x) + Z

where Z is a random variable drawn from Lap(\frac{1}{\epsilon}) .

In code:

import numpy as np

def laplace_mechanism(data, f, eps):
    return f(data) + np.random.laplace(0, 1.0/eps)

It can be shown that the mechanism preserves \epsilon differential privacy. Given two datasets x, y such that ||x - y||_1 \leq 1 and a function f which returns a real number from a dataset, let p_x denote the probability density function of M_L(x, f, \epsilon) and p_y the probability density function of M_L(y, f, \epsilon) . Given an arbitrary real point z ,

\displaystyle \frac{p_x(z)}{p_y(z)} = \frac{e^{- \epsilon |f(x) - z|}}{e^{- \epsilon |f(y) - z|}} =

\displaystyle e^{\epsilon (|f(y) - z| - |f(x) - z|)} \leq

\displaystyle e^{\epsilon |f(x) - f(y)|}

by the triangle inequality. Then,

\displaystyle e^{\epsilon |f(x) - f(y)|} \leq e^\epsilon

What about the accuracy of the Laplace mechanism? From the cumulative distribution function of the Laplace distribution it follows that if Z \sim Lap(b) , then Pr[|Z| \ge t \times b] = e^{-t} . Hence, let y = M_L(x, f, \epsilon) and \forall \delta \in (0, 1] :

\displaystyle Pr \left[|f(x) - y| \ge \ln{\left (\frac{1}{\delta} \right)} \times \frac{1}{\epsilon} \right] =

\displaystyle Pr \left[|Z| \ge \ln{\left (\frac{1}{\delta} \right)} \times \frac{1}{\epsilon} \right] = \delta

where Z \sim Lap(\frac{1}{\epsilon}) . The previous equation sets a probabilistic bound on the accuracy of the Laplace mechanism that, unlike the randomized response, does not depend on the number of participants n .
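
To make the bound concrete, here is a small illustrative computation (not from the original post): with \epsilon = \ln{3} and \delta = 0.05 , the noisy count is within about 2.7 of the true count with probability 0.95, regardless of n .

from math import log

eps, delta = log(3), 0.05
print(log(1 / delta) / eps)   # ~2.73: with probability 0.95 the noisy count is within ~3 of the truth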

Counting queries

The same query can be answered by different mechanisms with the same level of differential privacy. Not all mechanisms are born equally though; performance and accuracy have to be taken into account when deciding which mechanism to pick.

As a concrete example, let’s say there are n individuals and we want to implement a query that counts how many possess a certain property p . Each individual can be represented with a Bernoulli random variable:

participants = np.random.binomial(1, p, n)  # n Bernoulli(p) draws; e.g. n, p = 10000, 0.3

We will implement the query using both the randomized response mechanism M_R(d, \frac{1}{2}, \frac{1}{2}) , which we know by now to satisfy \ln{3} differential privacy, and the Laplace mechanism M_L(d, f, \ln{3}) which satisfies \ln{3} differential privacy as well.

def randomized_response_count(data, alpha, beta):
    # Perturb each individual response, then invert the bias of the mechanism.
    randomized_data = np.array([randomized_response_mechanism(d, alpha, beta) for d in data])
    return len(data) * (randomized_data.mean() - (1 - alpha)*beta)/alpha

def laplace_count(data, eps):
    return laplace_mechanism(data, np.sum, eps)

r = randomized_response_count(participants, 0.5, 0.5)
l = laplace_count(participants, np.log(3))

Note that while M_R is applied to each individual response and the results are later combined into a single estimated count, M_L is applied directly to the count, which is intuitively why M_R is noisier than M_L . How much noisier?  We can easily simulate the distribution of the accuracy for both mechanisms with:

def randomized_response_accuracy_simulation(data, alpha, beta, n_samples=1000):
    return [randomized_response_count(data, alpha, beta) - data.sum()
           for _ in range(n_samples)]

def laplace_accuracy_simulation(data, eps, n_samples=1000):
    return [laplace_count(data, eps) - data.sum()
            for _ in range(n_samples)]

r_d = randomized_response_accuracy_simulation(participants, 0.5, 0.5)
l_d = laplace_accuracy_simulation(participants, np.log(3))

As mentioned earlier, the accuracy of M_R grows with the square root of the number of participants:

(Figure: Randomized Response Mechanism Accuracy)

while the accuracy of M_L is a constant:

(Figure: Laplace Mechanism Accuracy)

You might wonder why one would use the randomized response mechanism if it’s worse in terms of accuracy compared to the Laplace one. The thing about the Laplace mechanism is that the private data about the users has to be collected and stored, as the noise is applied to the aggregated data. So even with the best of intentions there is the remote possibility that an attacker might get access to it.  The randomized response mechanism though applies the noise directly to the individual responses of the users and so only the perturbed responses are collected! With the latter mechanism any individual’s information cannot be learned, but an aggregator can still infer population statistics.

Real world use-cases

The algorithms presented in this post can be used to answer simple counting queries. There are many more mechanisms out there used to implement complex statistical procedures like machine learning models. The concept behind them is the same though: there is a certain function that needs to be computed over a dataset in a privacy preserving manner and noise is used to mask how much information about an arbitrary individual is leaked.

One such mechanism is RAPPOR, an approach pioneered by Google to collect frequencies of an arbitrary set of strings. The idea behind it is to collect vectors of bits from users where each bit is perturbed with the randomized response mechanism.  The bit-vector might represent a set of binary answers to a group of questions, a value from a known dictionary or, more interestingly, a generic string encoded through a Bloom filter. The bit-vectors are aggregated and the expected count for each bit is computed in a similar way as shown previously in this post. Then, a statistical model is fit to estimate the frequency of a candidate set of known strings. The main drawback with this approach is that it requires a known dictionary.
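
The per-bit perturbation can be pictured as the randomized response mechanism from earlier applied independently to every position of the bit-vector. A minimal sketch of the idea (my own illustration, not Google's RAPPOR implementation, which adds memoization and several other refinements; the sizes and hash choices here are arbitrary):

import hashlib
from random import random

def bloom_bits(value, num_bits=16, num_hashes=2):
    # Encode a string into a tiny Bloom-filter bit-vector (illustrative sizes).
    bits = [0] * num_bits
    for i in range(num_hashes):
        h = hashlib.sha256(("%d:%s" % (i, value)).encode()).hexdigest()
        bits[int(h, 16) % num_bits] = 1
    return bits

def perturb(bits, alpha=0.5, beta=0.5):
    # Apply randomized response independently to each bit before reporting it.
    return [b if random() < alpha else (1 if random() < beta else 0) for b in bits]

report = perturb(bloom_bits("example.com"))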

The approach has since been improved to infer the collected strings without the need for a known dictionary, at the cost of accuracy and performance. To give you an idea: to estimate a distribution over an unknown dictionary of 6-letter strings, in the worst case a sample size on the order of 300 million is required, and the sample size grows quickly as the length of the strings increases. That said, the mechanism consistently finds the most frequent strings, which makes it possible to learn the dominant trends of a population.

Even though the theoretical frontier of differential privacy is expanding quickly, there are only a handful of implementations out there that, like RAPPOR, ensure privacy without the need for a trusted third party and are well suited to the kind of data collection schemes commonly used in the software industry.

References


Planet Mozilla – How to switch to a 64-bit Firefox on Windows

I recently wrote about 64-bit Firefox builds on Windows, explaining why you might want to switch — it can reduce the likelihood of out-of-memory crashes — and also some caveats.

However, I didn’t explain how to switch, so I will do that now.

First, if you want to make sure that you aren’t already running a 64-bit Firefox, type “about:support” in the address bar and then look at the User Agent field in the Application Basics table near the top of the page.

  • If it contains the string “Win64”, you are already running a 64-bit Firefox.
  • If it contains the string “WOW64”, you are running a 32-bit Firefox on a 64-bit Windows installation, which means you can switch to a 64-bit build.
  • Otherwise, you are running a 32-bit Firefox on a 32-bit Windows installation, and cannot switch to a 64-bit Firefox.

Here are links to pages containing 64-bit builds for all the different release channels.

  • Release
  • Beta
  • Developer Edition
  • Nightly:  This is a user-friendly page, but it only has the en-US locale.
  • Nightly: This is a more intimidating page, but it has all locales. Look for a file with a name of the form firefox-<VERSION>.<LOCALE>.win64.installer.exe, e.g. firefox-50.0a1.de.win64.installer.exe for Nightly 50 in German.

By default, 32-bit Firefox and 64-bit Firefox are installed to different locations:

  • C:\Program Files (x86)\Mozilla Firefox\
  • C:\Program Files\Mozilla Firefox\

If you are using a 32-bit Firefox and then you download and install a 64-bit Firefox, by default you will end up with two versions of  Firefox installed. (But note that if you let the 64-bit Firefox installer add shortcuts to the desktop and/or taskbar, these shortcuts will replace any existing shortcuts to 32-bit Firefox.)

Both the 32-bit Firefox and the 64-bit Firefox will use the same profile, which means all your history, bookmarks, extensions, etc., will be available in either version. You’ll be able to run both versions, though not at the same time with the same profile. If you decide you don’t need both versions you can simply remove the unneeded version through the Windows system settings, as normal; your profile will not be touched when you do this.

Finally, there is a plan to gradually roll out 64-bit Firefox to Windows users in increasing numbers.

Planet Mozilla – Bay Area Rust Meetup July 2016

Bay Area Rust Meetup for July 2016. Topics TBD.

Planet Mozilla – Counting function calls per second

Say you want to know how often you're allocating tiles in Firefox or the rate of some other thing. There's an easy way to do this using dtrace. The following dtrace script counts calls to any functions matching the pattern '*SharedMemoryBasic*Create*' in XUL in the target process.

#pragma D option quiet
dtrace:::BEGIN
{
rate = 0;
}

profile:::tick-1sec
{
printf("%d/sec\n", rate);
rate = 0;
}

pid$target:XUL:*SharedMemoryBasic*Create*:entry
{
rate++;
}

You can run this script with the following command:

$ dtrace -s $SCRIPT_NAME -p $PID

I'd be interested in knowing if anyone else has a similar technique for OSes that don't have dtrace.

Planet Mozilla – What’s Up with SUMO – 28th July

Hello, SUMO Nation!

July’s almost over… but our updates are not, obviously :-) How have you been? Are you melting in the shade or freezing in the sun? Maybe both? ;-) Here are the hotte… no, wait, the coolest news on the web, for your eyes only!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on the 3rd of August!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

Firefox

  • for Desktop
    • Version 48 coming August 2nd.

Thus, between a Firefox for iOS and a Firefox for Desktop and Android released into the wild wide web – that’s a lot of heat! Now I know why it’s so easy to sweat nowadays. Well, maybe there will be some relief in the weeks between releases ;-) Winter is coming! Slowly, but surely…

Keep rocking the helpful web, SUMO Nation!

Planet Mozilla – Web QA Team Meeting, 28 Jul 2016

They say a Mozilla Web QA team member is the most fearless creature in the world. They say their jaws are powerful enough to crush...

Planet Mozilla – Reps weekly, 28 Jul 2016

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet Mozilla – Privacy Lab - July 2016 - Student Privacy

Join us for presentations and a lively discussion around Student Privacy. Guest speakers include: Alex Smolen, head of security/privacy for Clever; Andrew Rock, online privacy...

Planet Mozilla – Why ChakraCore matters

People who have been in the web development world for a long time might remember A List Apart’s posts from 16 years ago talking about “Why $browser matters”. In these posts, Zeldman explained how the compliance with web standards and support for new features made some browsers stand out, even if they don’t have massive market share yet.

fruit stand
Variety is the spice of life

These browsers became the test bed for future-proof solutions: solutions that later on were already ready for a larger group of new browsers, and functionality that made the web much more exciting than the web of old, which we had built for one browser and where we relied on tried-and-true yet hacky solutions like table layouts. These articles didn’t praise a new browser with flashy new functionality. The browsers featured in this series were the ones compliant with upcoming and agreed standards. That’s what made them important.

The web thrives on diversity. Not only in people, but also in engines. We would not be where we are today if we had stuck with one browser engine. We would not enjoy the openness and free availability of our technologies if Mozilla hadn’t shown that you can be open and a success. The web thrives on user choice and on tool choice for developers.

Competition makes us better and our solutions more creative. Standardisation makes it possible for users of our solutions to maintain them and to upgrade them without having to re-write them from scratch. Monoculture brings quick success, but in the long run it always ends in dead code on the web that has nowhere to execute, as the controlled solution-to-end-all-solutions changes from underneath developers without backwards compatibility.

Today my colleague Arunesh Chandra announced at the NodeSummit in San Francisco that ChakraCore, the open source JavaScript engine of Microsoft that in part powers the Microsoft Edge browser, is now available for Linux and OS X.

Screen captures showing ChakraCore running inside terminal windows on Ubuntu 16.04 and OS X
ChakraCore on Linux and OS X

This is a huge step for Microsoft, who – like any other company – is a strong believer in its own products and does well by keeping its developers happy and sticking to what they are used to. This is nothing they needed to do to stay relevant, but it is the right thing to do to ensure that the world of Node also has more choice and is not dependent on one predominant VM. Many players in the market see the benefits of Node and want to support it, but are not sold on a dependency on one JavaScript VM. A few are ready to roll out their own VMs, which cater to special needs, for example in the IoT space.

This angers a few people in the Node world. They worry that with several VMs, the “browser hell” of having to support all kinds of environments will come to Node. Yes, it will mean having to support more engines. But it is also an opportunity to understand that by using standardised code, ratified by the TC39, your solutions will be much sturdier. Relying on specialist functionality of one engine always means that you are dependent on it not changing. And we are already seeing far too many Node based solutions that can’t upgrade to the latest version because breaking changes would mean a complete re-write.

ChakraCore matters the same way browsers that dared to support web standards mattered. It is a choice showing that to be future proof, Node developers need to be ready to allow their solutions to run on various VMs. I’m looking forward to seeing how this plays out. It took the web a few years to understand the value of standards and choice. Much rhetoric was thrown around on either side. I hope that with the great opportunity that Node is to innovate and use ECMAScript for everything we will get there faster and with less dogmatic messaging.

Photo by Ian D. Keating

Planet Mozilla – A third day of deep HTTP inspection

The workshop room

This fine morning started off with some news: Patrick is now our brand new official co-chair of the IETF HTTPbis working group!

Subodh then sat down and took us off on a presentation that really triggered a long and lively discussion. “Retry safety extensions” was the name he gave it, but it covered everything from what browsers and HTTP clients do when retrying with no response, and went on to also include replay problems for 0-RTT protocols such as TLS 1.3.

Julian did a short presentation on http headers and his draft for JSON in new headers and we quickly fell down a deep hole of discussions around various formats with ups and downs on them all. The general feeling seems to be that JSON will not be a good idea for headers in spite of a couple of good characteristics, partly because of its handling of duplicate field entries and how it handles or doesn’t handle numerical precision (ie you can send “100” as a monstrously large floating point number).

Mike did a presentation he called “H2 Regrets” in which he covered his work on a draft for support of client certs, which was basically forbidden due to h2’s ban on TLS renegotiation. He brought up the idea of extended settings and discussed the lack of special handling of dates in HTTP headers (why we send 29 bytes instead of 4). It shows there are improvements to be had in the future too!

Martin talked to us about Blind caching and how the concept works. Put very simply: it is a way to make it possible to offer cached content to clients using HTTPS, by storing the data on a third host and pointing the client at that data. There was a lengthy discussion around this and I think one of the outstanding questions is whether this feature really provides enough value to motivate the rather high cost in complexity…

The list of remaining lightning talks had grown to ten, and we fired them all off at a pace of five minutes per topic. I brought up my intention and hope that we’ll do a QUIC library soon to experiment with. I personally particularly enjoyed EKR’s TLS 1.3 status summary. I heard appreciation from others too, and I agree that the idea to feature lightning talks was really good.

With this, the HTTP Workshop 2016 was officially ended. There will be a survey sent out about this edition and what people want to do for the next/future ones, and there will be some sort of  report posted about this event from the organizers, summarizing things.

Attendees numbers

http workshop

The companies with the most attendees present were: Mozilla 5, Google 4, Facebook, Akamai and Apple 3.

The attendees were from the following regions of the world: North America 19, Europe 15, Asia/pacific 6.

38 participants were male and 2 female.

23 of us were also at the 2015 workshop, 17 were newcomers.

15 people did lightning talks.

I believe 40 is about as many as you can put in a single room and still have discussions. Going larger will make it harder to make yourself heard as easily and would probably force us to have to switch to smaller groups more and thus not get this sort of great dynamic flow. I’m not saying that we can’t do this smaller or larger, just that it would have to make the event different.

Some final words

I had an awesome few days and I loved all of it. It was a pleasure organizing this, and I’m happy that Stockholm showed its best face weather-wise during these days. I was also happy to hear that so many people enjoyed their time here in Sweden. The hotel and its facilities, including food, coffee and so on, worked out smoothly I think, with no complaints at all.

Hope to see you again at the next HTTP Workshop!

Planet Mozilla – The Joy of Coding - Episode 65

mconley livehacks on real Firefox bugs while thinking aloud.

Planet Mozilla – Rep of the Month – July 2016

Please join us in congratulating Christophe Villeneuve as Rep of the Month for July 2016!

Christophe Villeneuve has been a Rep for more than 9 months now and has reported more than 100 activities in the program. From talks on security to writing articles and organizing events, Christophe is active in many different areas. Did we already mention that he also bakes Firefox cookies?

Christophe

His energy and drive to promote the Open Web and web security are astonishing. Even when external factors intervene and some activities hit blockers, he neither gets disappointed nor quits; he looks for the next possibility out there. He truly is an open source believer, contributing to other open source communities (like Drupal, PHP, MariaDB) as well, and he tries to combine those activities for bigger audiences.

Please don’t forget to congratulate him on Discourse!

Planet Mozilla – Linting and Automatically Reloading WebExtensions

We recently announced web-ext 1.0, a command line tool that makes developing WebExtensions more of a breeze. Since then we’ve fixed numerous bugs and added two new features: automatic extension reloading and a way to check for “lint” in your source code.

You can read more about getting started with web-ext or jump in and install it with npm like this:

npm install --global web-ext

Automatic Reloading

Once you’ve built an extension, you can try it out with the run command:

web-ext run

This launches Firefox with your extension pre-installed. Previously, you would have had to manually re-install your extension any time you changed the source. Now, web-ext will automatically reload the extension in Firefox when it detects a source file change, making it quick and easy to try out a new icon or fiddle with the CSS in your popup until it looks right.

Automatic reloading is only supported in Firefox 49 or higher but you can still run your extension in Firefox 48 without it.

Checking For Code Lint

If you make a mistake in your manifest or any other source file, you may not hear about it until a user encounters the error or you try submitting the extension to addons.mozilla.org. The new lint command will tell you about these mistakes so you can fix them before they bite you. Run it like this:

web-ext lint

For example, let’s say you are porting an extension from Chrome that used the history API, which hasn’t fully landed in Firefox at the time of this writing. Your manifest might declare the history permission like this:

{
  "manifest_version": 2,
  "name": "My Extension",
  "version": "1.0",
  "permissions": [
    "history"
  ]
}

When running web-ext lint from the directory containing this manifest, you’ll see an error explaining that history is an unknown permission.

Try it out and let us know what you think. As always, you can submit an issue if you have an idea for a new feature or if you run into a bug.

Planet Mozilla – A Summer to Mentor

This summer I am mentoring Justin Potts – a university intern working on improving Mozilla’s add-ons related test automation, and Ana Ribeiro – an Outreachy participant working on enhancing the pytest-html plugin.

It’s not my first time working with Justin, who has been a regular team contributor for a few years now, and last summer helped me to get the pytest-selenium plugin released. It certainly helped to have previous experience working with Justin when deciding to take on the official role as mentor for his first internship. Unfortunately, his project is rather difficult to define, as he’s been working on a number of things, though mostly they are related to Firefox add-ons and running automated tests. There’s no shortage of challenging tasks for Justin to work on, and he’s taking them on with the enthusiasm that I expected he would. You can read more about Justin’s internship on his blog.

Ana’s project grew out of a security exploit discovered in Jenkins, which led to the introduction of the Content-Security-Policy header for static files being served. This meant that the fancy HTML reports generated by pytest-html were broken due to the use of JavaScript, inline CSS, and inline images. Along with a few other enhancements, providing a CSP-friendly report from the plugin became a perfect candidate project for Outreachy. As part of her application, Ana contributed a patch for pytest-variables, and I was impressed with her level of communication over the patch. To get Ana familiar with the plugin, her initial contributions were not related to the CSP issue, but she’s now making good progress on this. You can read more about Ana’s Outreachy project on her dedicated blog.

So far I have mostly enjoyed the experience of being a mentor – it especially feels great to see the results that Justin and Ana are producing. Probably the most challenging aspect for me is being remote – Justin is based in Mountain View, California, and Ana is based in Brazil. It’s hard to feel a connection when you’re dependent on instant messages and video conferencing, though I suspect it’s probably harder for them than it is for me. Fortunately, I did get to work with them a little in London during the all hands, and then some more with Ana in Freiburg during the pytest sprint.

There are still a few weeks left for their projects, and I’m hoping they’ll both be able to conclude them to their satisfaction!

Planet Mozilla – a basic browser app in Positron

Over in Positron, we’ve implemented enough of Electron’s <webview> element to run the basic browser app in this collection of Electron sample apps. To try it out:

git clone https://github.com/hokein/electron-sample-apps
git clone https://github.com/mozilla/positron
cd positron
./mach build
./mach run ../electron-sample-apps/webview/browser/

Planet Mozilla – And now for something completely different: Join me at Vintage Computer Festival XI

A programming note: My wife and I will be at the revised, resurrected Vintage Computer Festival XI August 6 and 7 in beautiful Mountain View, CA at the Computer History Museum (just down the street from the ominous godless Googleplex). I'll be demonstrating my very first home computer, the Tomy Tutor (a weird partial clone of the Texas Instruments 99/4A), and its Japanese relatives. Come by, enjoy the other less interesting exhibits, and bask in the nostalgic glow when 64K RAM was enough and cassette tape was king.

I'm typing this in a G5-optimized build of 45 and it seems to perform pretty well. JavaScript benches over 20% faster than 38 due to improvements in the JIT (and possibly some marginal improvement from gcc 4.8), and this is before I start doing further work on PowerPC-specific improvements which will be rolled out during 45's lifetime. Plus, the AltiVec code survived without bustage in our custom VP8, UTF-8 and JPEG backends, and I backported some graphics performance patches from Firefox 48 that improve throughput further. There's still a few glitches to be investigated; I spent most of tonight figuring out why I got a big black window when going to fullscreen mode (it turned out to be several code regressions introduced by Mozilla removing old APIs), and Amazon Music still has some weirdness moving from track to track. It's very likely there will be other such issues lurking next week when you get to play with it, but that's what a beta cycle is for.

38.10 will be built over the weekend after I'm done doing the backports from 45.3. Stay tuned for that.

Planet Mozilla – MDN pro tip: Watch for changes

The Web moves pretty fast. Things are constantly changing, and the documentation content on the Mozilla Developer Network (MDN) is constantly changing, too. The pace of change ebbs and flows, and often it can be helpful to know when changes occur. I hear this most from a few categories of people:

  • Firefox developers who work on the code which implements a particular technology. These folks need to know when we’ve made changes to the documentation so they can review our work and be sure we didn’t make any mistakes or leave anything out. They often also like to update the material and keep up on what’s been revised recently.
  • MDN writers and other contributors who want to ensure that content remains correct as changes are made. With so many people making changes to some of our content, keeping up and being sure mistakes aren’t made and that style guides are followed is important.
  • Contributors to specifications and members of technology working groups. These are people who have a keen interest in knowing how their specifications are being interpreted and implemented, and in the response to what they’ve designed. The text of our documentation and any code samples, and changes made to them, may be highly informative for them to that end.
  • Spies. Ha! Just kidding. We’re all about being open in the Mozilla community, so spies would be pretty bored watching our content.

There are a few ways to watch content for changes, from the manual to the automated. Let’s take a look at the most basic and immediately useful tool: MDN page and subpage subscriptions.

Subscribing to a page

Animation showing how to subscribe to a single MDN page

After logging into your MDN account (creating one if you don’t already have one), make your way to the page you want to subscribe to. Let’s say you want to be sure nobody messes around with the documentation about <marquee> because, honestly, why would anyone need to change that anyway?

Find the Watch button near the top of the MDN page; it’s a drawing of an eye. In the menu that opens when you hover over that icon, you’ll find the option “Subscribe to this page.” Simply click that. From then on, each time someone makes a change to the page, you’ll get an email. We’ll talk about that email in a moment.

First, we need to consider another form of content subscriptions: subtree or sub-article subscriptions.

Subscribing to a subtree of pages

 

Planet Mozilla – Workshop day two

HTTP Workshop

At 5pm we rounded off another fully featured day at the HTTP workshop. Here’s some of what we touched on today:

Moritz started the morning with an interesting presentation about experiments with running the exact same site and contents on h1 vs h2 over different kinds of networks, with different packet loss scenarios and with different ICWND set and more. Very interesting stuff. If he makes his presentation available at some point I’ll add a link to it.

I then got the honor to present the state of the TCP Tuning draft (which I’ve admittedly been neglecting a bit lately), the slides are here. I made it brief but I still got some feedback and in general this is a draft that people seem to agree is a good idea – keep sending me your feedback and help me improve it. I just need to pull myself together now and move it forward. I tried to be quick to leave over to…

Jana, who was back again to tell us about QUIC and the state of things in that area. His presentation was apparently a subset of the slides he had presented the week before at the Berlin IETF. One interesting take-away for me was that they’ve noticed that the number of connections on which they detect UDP rate limiting has decreased by two-thirds during the last year!

Here’s my favorite image from his slide set. Apparently TCP/2 is not a name for QUIC that everybody appreciates! ;-)

(Image: “call it TCP/2 one more time”)

While the topic of QUIC piqued the interest of most people in the room and there were a lot of questions, thoughts and ideas around it, we still managed to get to the lunch break pretty much on time and could run off and have another lovely buffet lunch. There’s certainly no risk of us losing weight during this event…

After lunch we got a series of lightning talks presented to us: seven short talks on various subjects that people had signed up to do.

One of the lightning talks that stuck with me was an extended Happy Eyeballs approach that I’d like to call Even Happier Eyeballs: make the client TCP connect to all IPs in a DNS response, race them against each other, and use the one that responds with a SYN-ACK first. There was interest expressed in the room in getting this concept tested out for real in at least one browser.

We then fell over into the area of HTTP/3 ideas and what the people in the room think we should be working on for that. It turned out that the list of stuff we created last year at the workshop was still actually a pretty good list and while we could massage that a bit, it is still mostly the same as before.

Anne presented fetch and how browsers use HTTP. Perhaps a bit surprisingly, that soon brought us over into the subject of trailers, how to support them, and voilà, in the end we possibly even agreed that we should perhaps consider handling them somehow in browsers and even in JavaScript APIs… (nah, curl/libcurl doesn’t have any particular support for trailers, but will of course get that if we actually see things out there start to use it for real)

I think we deserved a few beers after this day! The final workshop day is tomorrow.

Planet Mozilla – Passports for Community Leadership

This is #1 of 5 posts I identified as perhaps being worth finishing and sharing.  Writing never feels finished, and it’s a vulnerable thing… to share ideas – but perhaps better than never sharing them at all?

I wrote most of this post in April of this year (making this outdated with the current work of the Participation Team), thinking about ways the learning format of the Leadership Summit in Singapore could evolve into a valuable tool for community leadership development and credentialing.  Community Leadership Passport(s) perhaps…



At the Participation Leadership Summit in Singapore, we designed the schedule in time blocks sorted by the Leadership Framework.  This meant that everyone attended at least one session identified under each of the building blocks.  The schedule was structured something like this…

(Image: sample schedule, with time blocks sorted by the Leadership Framework)

As you can see, the structure  ensured that everyone experienced learning outcomes of the entire framework, while still providing choice in what felt most relevant, exciting or interesting in their personal development.  You can find some of this content here.

I started wondering..

How might we evolve the schedule design and content into a format for leadership development that also provides real world credentials?

I don’t think the answer is to take this schedule and make it a static ‘course’ or offering, and I don’t think it is about an ‘event in a box’, but I do think there’s something in using the framework to enforce quality leadership development while giving power to what people want to learn and how they prefer to learn.

Merging this idea + my previous work with participation ‘steps & ladders’ into something like a passport, or series of passports for leadership.


Really, this is about creating a mechanism for helping people build leadership credentials in a way that intersects what they want to learn and do, and what the project needs. It could be used for anything from developing strong mentors, to project leads in areas like IoT and Rust, to governance and diversity & inclusion. I’m imagining passports with 3 attributes:

Experience – Taking action, completing tasks, generating experiences associated with learning and project outcomes. Should be clear, and feel doable without too much detail.

Mozilla Content – Completing a course either developed by Mozilla or approved as Mozilla content.  These could be online courses or in-person events.

Learner Choice – Encouraging exploration, and learning that feels valuable, interesting and fun – but with some guidelines for topics, outcomes and likely recommendations to make things easier.  For example, some people might want to complete a Coursera Course on IOT and Embedded systems, while others might prefer a ‘learning by doing’ approach via YouTube channels.

Something like a Leadership Passport would obviously require more thought around implementation, tracking and issuing certification. It could also be used to test and evolve the Leadership Framework. I prefer it over a participation ladder because it feels less prescriptive about ‘how’ we step up as leaders and more supportive of the ways we want to learn and lead – and it would ultimately help us recognize and invest in emerging leaders sooner.

Image Credit:  Kate Harding – Quilt of Nations.

Planet MozillaMozilla Delivers Improved User Experience in Firefox for iOS

When we rolled out Firefox for iOS late last year, we got a tremendous response and millions of downloads. Lots of Firefox users were ecstatic they could use the browser they love on the iPhone or iPad they had chosen. Today, we’re thrilled to release some big improvements to Firefox for iOS. These improvements will give users more speed, flexibility and choice, three things we care deeply about.

A Faster Firefox Awaits You

 

It’s summer intern season and our Firefox for iOS Engineer intern Tyler Lacroix pulled out all the stops this month when he unveiled the results of his pet project – making Firefox faster. In Tyler’s testing, he saw up to 40% reduction in CPU usage and up to 30% reduction in memory usage when using this latest version of Firefox. What this means is that users can get to their Web pages faster while seeing battery life savings. Of course, all devices and humans are different so results may vary. Either way, we are psyched to roll out these improvements to you today.

It’s Now Easy to Add Any Website-Specific Search Engine. And Change Your Mind.

 

We’ve already included a set of the most popular search engines in Firefox for iOS, but users may want to search other sites right from the address bar. Looking for that perfect set of moustache handlebars for a vintage road bike? Users can add sites like Craigslist and eBay. Want to become a trivia champ? Get one-tap access to Wikipedia. Simply go to a website with a search box and tap on the magnifying glass to add that search to your list of search engines.

New Menu for Easier Navigation

 

Navigation in iOS browsers is a huge pain point for users who have come to expect the same seamless experience that’s available on their desktop or laptop. Firefox for iOS features a brand new menu on the toolbar that allows for easier navigation and quick access to frequently used features – from adding a bookmark to finding text in page.

Recover Closed Tabs and Quickly Flip Through Open Tabs

 

Browser tabs on mobile devices have traditionally been difficult to use. Hard to see, hard to manage, hard to navigate, and gone forever in a tap. In this upgraded Firefox for iOS, users can easily recover all closed tabs and navigate through open tabs.

Home Again, Home Again, With One Quick Tap

 

Almost everyone has one page they go to first and return to often. Further expanding the ability to customize their Firefox for iOS experience, users can now set their favorite site as their homepage. The designated website will open immediately with a tap of the “home” button. This makes it easier than ever before to visit preferred sites in a matter of seconds.

We created these new features in Firefox for iOS because of what we heard from our users, and we look forward to more feedback on the updates. To check out our handiwork, download Firefox for iOS from the App Store and let us know what you think.

From iOS to Android to Windows to Linux, we are supporting a healthy and open web by building a better Firefox. And we couldn’t do it without our hundreds of millions of active users across all platforms and our vibrant community. Thanks, everyone!

For more information:

Planet MozillaConnected Devices Weekly Program Update, 26 Jul 2016

Weekly project updates from the Mozilla Connected Devices team.

Planet MozillaUpdate from WebDriver WG meeting in July 2016

The W3C Browser Testing and Tools Working Group met again on 13-14 July 2016 in Redmond, WA to discuss the progress of the WebDriver specification. I will try to summarise the discussions, but if you’re interested in all the details, the meetings have been meticulously scribed.

I wrote about the progress from our TPAC 2015 meeting previously, and we appear to have made good progress since then. The specification text is nearing completion, although it is missing a few important chapters: some particularly obvious omissions are the complete lack of input handling, and a big, difficult void where advanced user actions are meant to be.

Actions

James has been hard at work drafting a proposal for action semantics, which we went over in great detail. I think it’s fair to say there had been conceptual agreement in the working group on what the actions were meant to accomplish, but that the details of how they were going to work were extremely scarce.

WebDriver tries to innovate on the actions as they appear in Selenium. Actions in Selenium were originally meant to provide a way to pipeline a sequence of interactions—such as pressing down a mouse button, moving the mouse, and releasing it—through a complex data structure to a single command endpoint. The idea was that this would help address some of the race conditions that are intrinsically part of the one-directional design of the protocol, and reduce latency which may be critical when interacting with a document.

Unfortunately the pipelining design to reduce the number of HTTP requests was never quite implemented in Selenium, and the API design suffered from over-specialisation of different types of input devices and actions. The specification attempts to rectify this by generalising the range of input device classes, and by associating the actions that can be performed with a certain class. This means we are moving away from a flat sequence of types, such as [{type: "mouseDown"}, {type: "mouseMove"}, {type: "mouseUp"}], to a model where each input device has its own “track”. This limits the actions you can perform with each device, which makes some conceptual sense because it would be impossible to e.g. type keys with a mouse or press a mouse button with a stylus/pen input device.

The side-effect of this design is that it allows for parallelisation of actions from one or more types of input devices. This is an important development, as it makes it possible to combine primitives for input methods such as touch: in reality, a device cannot determine whether two fingers are “associated” with the same hand. So instead of defining high-level actions such as pinch and flick, the model gives you the granularity to combine actions from two or more touch “finger” devices to synthesise more complex movements. We believe this is a good approach with the right level of granularity that doesn’t try to over-specify or shoehorn in primitives that might not make sense in a cross-browser automation setting.
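
To make the per-device “track” model a little more concrete, here is a rough sketch of what a two-finger pinch might look like as two parallel action sequences. This is my own illustration of the concept, not the wire format from the draft, and the field names are made up for readability:

{"actions": [
  {"id": "finger1", "type": "pointer", "parameters": {"pointerType": "touch"},
   "actions": [{"type": "pointerDown"},
               {"type": "pointerMove", "x": 100, "y": 200, "duration": 250},
               {"type": "pointerUp"}]},
  {"id": "finger2", "type": "pointer", "parameters": {"pointerType": "touch"},
   "actions": [{"type": "pointerDown"},
               {"type": "pointerMove", "x": 300, "y": 200, "duration": 250},
               {"type": "pointerUp"}]}
]}

Each entry in the outer list is one input device, and the steps at the same index across devices are dispatched together, which is what makes a pinch (two fingers moving towards or away from each other) expressible without a dedicated pinch primitive.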

I’m looking forward to seeing James’ work land in the specification text. I think some explanatory notes and examples will probably be required to fully explain this concept for both implementors and users.

Input locality

A known limitation of Selenium that we are not proud of is that it does not have a good story for input with alternative keyboard layouts. We have explicitly phrased the specification in such a way that it doesn’t make it impossible to retrofit in support for multiple layouts in the future. But right now we want to finish the baseline of the specification before we try moving into this.

The current design ideas floating around are to have some way of setting a keyboard layout, either through a command or a capability. This would allow / to generate key events for Shift and ? on an American layout, and Shift and 7 on a Norwegian layout. The biggest reason this is hard is that we need to find the right key code conversion tables for what would happen when typing for example .

Untrusted SSL certificates

We had a big discussion on invalid, self-signed, and untrusted SSL certificates. The general agreement in the WG is that it would be good to have functionality to allow a WebDriver session to bypass the security checks associated with them, as WebDriver may be run in an environment where it is difficult or even impossible to instrument the browser/environment in such a way that they are accepted implicitly (e.g. by modifying the root store).

Different browser vendors raised questions over whether this would pass security review as implementing such a feature increases the attack surface in one of the most critical components in web browsers. A counterargument is that by the point your browser has WebDriver enabled, you probably have bigger things to worry about than the fact that untrusted certificates are implicitly accepted.

We also found that this is highly inconsistently implemented in Selenium. For the two drivers that support it, FirefoxDriver (written and maintained by Selenium) has an acceptSslCerts capability that takes a boolean to switch off security checks, and chromedriver (by Google) by contrast accepts all certificates by default. The remaining drivers have no support for it.

This leaves the working group free to decide on a new and consistent approach. One point of concern is that a boolean to disable all security checks seems like an overly coarse design. A suggested alternative is to provide a list of domains to disable the checks for, where wildcards can be expanded to cover every domain or every subdomain, so that e.g. ["*"] would be equivalent to setting acceptSslCerts to true in today’s Firefox implementation, but ["*.sny.no"] would only disable untrusted-certificate checks for that domain.
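
As a sketch of what that might look like in a new session request (the capability name here is hypothetical; the working group has not settled on one):

{"desiredCapabilities": {
  "untrustedCertificateDomains": ["*.sny.no", "localhost"]
}}

A value of ["*"] would then behave like today’s acceptSslCerts: true, while an empty list (or omitting the capability) would keep all certificate checks enabled.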

Navigation and URLs

Because WebDriver taps into the browser’s navigation algorithm at a much later point than when a user interacts with the address bar, we decided that malformed URLs should consistently return an error. We have also changed the prose to no longer mislead users to think that navigating in effect means the same as using the address bar; the address bar is not a concept of the web platform.

There was a proposal from Mozilla to allow navigation to relative URLs, so that one could navigate to e.g. "/foo" to go to that path on the current domain, similar to how window.location = "/foo" works. This was unfortunately voted down. I feel it would be useful, even just for consistency, for the WebDriver navigation command to mirror the platform API, modulo security checks.

Desired vs. required capabilities

A big discussion during the meeting was around the continuing confusion around capabilities: many feel they are an intermediary node concept that is best left undefined in the core specification text itself, because the specification explicitly does not define any qualities or expectations about local ends (client bindings) or intermediary nodes (the Selenium server or a proxy that gives you a session).

There was however consensus around the fact that having a way to pick a browser configuration from some matrix was a good idea. The uncertainty, I think, comes largely from driver implementors who feel that once capabilities reach the driver there is very little that can be done about the sort of conflict resolution that required and desired capabilities warrant.

For example, what does it mean to desire a profile and how do you know if the provided profile is valid? We were unable to reach any agreement on this and decided to punt the topic for our next meeting in Lisbon.

Test coverage

In order to push the specification to “Rec” (short for Recommendation) one must have at least two interoperable implementations by two separate vendors. To determine that they are interoperable, one needs a test suite. I’ve written previously about the test harness I wrote for the Web Platform Tests that integrates WebDriver spec tests with wptrunner.

We have a few exhaustive tests for a couple of chapters, but I hope to continue this work this quarter.

Next meeting

The working group is meeting again at TPAC, which this year is in Lisbon (how civilised!), in late September. I’m enormously looking forward to visiting, as I’ve never been.

We hope to resolve the outstanding capabilities discussion and make final decisions on a few more minor outstanding issues then.

<footer>sny.no/a</footer>

Planet MozillaThe Evolution of Signatures in TLS

This post will take a look at the evolution of signature algorithms and schemes in the TLS protocol since version 1.0. I at first started taking notes for myself but then decided to polish and publish them, hoping that others will benefit as well.

(Let’s ignore client authentication for simplicity.)

Signature algorithms in TLS 1.0 and TLS 1.1

In TLS 1.0 as well as TLS 1.1 there are only two supported signature schemes: RSA with MD5/SHA-1 and DSA with SHA-1. The RSA here stands for the PKCS#1 v1.5 signature scheme, naturally.

<figure class="code">
select (SignatureAlgorithm)
{
    case rsa:
        digitally-signed struct {
            opaque md5_hash[16];
            opaque sha_hash[20];
        };
    case dsa:
        digitally-signed struct {
            opaque sha_hash[20];
        };
} Signature;
</figure>

An RSA signature signs the concatenation of the MD5 and SHA-1 digests; a DSA signature signs only the SHA-1 digest. The hashes are computed as follows:

<figure class="code">
h = Hash(ClientHello.random + ServerHello.random + ServerParams)
</figure>

The ServerParams are the actual data to be signed, the *Hello.random values are prepended to prevent replay attacks. This is the reason TLS 1.3 puts a downgrade sentinel at the end of ServerHello.random for clients to check.

The ServerKeyExchange message containing the signature is sent only when static RSA/DH key exchange is not used, that means we have a DHE_* cipher suite, an RSA_EXPORT_* suite downgraded due to export restrictions, or a DH_anon_* suite where neither party authenticates.

Signature algorithms in TLS 1.2

TLS 1.2 brought bigger changes to signature algorithms by introducing the signature_algorithms extension. This is a ClientHello extension allowing clients to signal supported and preferred signature algorithms and hash functions.

<figure class="code">
enum {
    none(0), md5(1), sha1(2), sha224(3), sha256(4), sha384(5), sha512(6)
} HashAlgorithm;

enum {
    anonymous(0), rsa(1), dsa(2), ecdsa(3)
} SignatureAlgorithm;

struct {
    HashAlgorithm hash;
    SignatureAlgorithm signature;
} SignatureAndHashAlgorithm;
</figure>

If a client does not include the signature_algorithms extension then it is assumed to support RSA, DSA, or ECDSA (depending on the negotiated cipher suite) with SHA-1 as the hash function.

Besides adding all SHA-2 family hash functions, TLS 1.2 also introduced ECDSA as a new signature algorithm. Note that the extension does not allow restricting the curve used for a given scheme; P-521 with SHA-1 is therefore perfectly legal.

A new requirement for RSA signatures is that the hash has to be wrapped in a DER-encoded DigestInfo sequence before passing it to the RSA sign function.

<figure class="code">
DigestInfo ::= SEQUENCE {
    digestAlgorithm DigestAlgorithm,
    digest OCTET STRING
}
</figure>
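
As a concrete illustration (my addition, not from the original post): for SHA-256 the DER-encoded DigestInfo is just a fixed 19-byte prefix followed by the 32-byte digest, so the byte string handed to the PKCS#1 v1.5 signing operation can be built like this:

<figure class="code">
// Fixed DER prefix for a DigestInfo carrying a SHA-256 digest.
const SHA256_DIGESTINFO_PREFIX = [
  0x30, 0x31,                   // SEQUENCE, 49 bytes of content
  0x30, 0x0d,                   //   SEQUENCE (AlgorithmIdentifier), 13 bytes
  0x06, 0x09, 0x60, 0x86, 0x48, //     OID 2.16.840.1.101.3.4.2.1 (sha256)
  0x01, 0x65, 0x03, 0x04, 0x02, 0x01,
  0x05, 0x00,                   //     NULL parameters
  0x04, 0x20                    //   OCTET STRING, 32 bytes (the digest follows)
];
// digestInfo = prefix || SHA-256(data); this is what gets padded and signed.
</figure>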

This unfortunately led to attacks like Bleichenbacher’06 and BERserk because it turns out handling ASN.1 correctly is hard. As in TLS 1.1, a ServerKeyExchange message is sent only when static RSA/DH key exchange is not used. The hash computation did not change either:

<figure class="code">
h = Hash(ClientHello.random + ServerHello.random + ServerParams)
</figure>

Signature schemes in TLS 1.3

The signature_algorithms extension introduced by TLS 1.2 was revamped in TLS 1.3 and MUST now be sent if the client offers a single non-PSK cipher suite. The format is backwards compatible and keeps some old code points.

<figure class="code">
enum {
    /* RSASSA-PKCS1-v1_5 algorithms */
    rsa_pkcs1_sha1 (0x0201),
    rsa_pkcs1_sha256 (0x0401),
    rsa_pkcs1_sha384 (0x0501),
    rsa_pkcs1_sha512 (0x0601),

    /* ECDSA algorithms */
    ecdsa_secp256r1_sha256 (0x0403),
    ecdsa_secp384r1_sha384 (0x0503),
    ecdsa_secp521r1_sha512 (0x0603),

    /* RSASSA-PSS algorithms */
    rsa_pss_sha256 (0x0700),
    rsa_pss_sha384 (0x0701),
    rsa_pss_sha512 (0x0702),

    /* EdDSA algorithms */
    ed25519 (0x0703),
    ed448 (0x0704),

    /* Reserved Code Points */
    private_use (0xFE00..0xFFFF)
} SignatureScheme;
</figure>

Instead of SignatureAndHashAlgorithm, a code point is now called a SignatureScheme and tied to a hash function (if applicable) by the specification. TLS 1.2 algorithm/hash combinations not listed here are deprecated and MUST NOT be offered or negotiated.

New code points for RSA-PSS schemes, as well as Ed25519 and Ed448-Goldilocks were added. ECDSA schemes are now tied to the curve given by the code point name, to be enforced by implementations. SHA-1 signature schemes SHOULD NOT be offered, if needed for backwards compatibility then only as the lowest priority after all other schemes.

The current draft-13 lists RSASSA-PSS as the only valid signature algorithm allowed to sign handshake messages with an RSA key. The rsa_pkcs1_* values solely refer to signatures which appear in certificates and are not defined for use in signed handshake messages.

To prevent various downgrade attacks like FREAK and Logjam the computation of the hashes to be signed has changed significantly and covers the complete handshake, up until CertificateVerify:

<figure class="code">
h = Hash(Handshake Context + Certificate) + Hash(Resumption Context)
</figure>

This includes amongst other data the client and server random, key shares, the cipher suite, the certificate, and resumption information to prevent replay and downgrade attacks. With static key exchange algorithms gone the CertificateVerify message is now the one carrying the signature.

Planet MozillaUser friendly website analytics with Sandstorm Oasis and Piwik

Piwik is a great FLOSS website analytics platform. I've been self hosting it for different small websites I've managed through the years. Although it's fairly easy to setup and maintain at this level of use, I want to avoid having another service in my maintenance list.

While looking for user- and web-respecting alternatives (read: looking for something other than Google Analytics), I realized that Sandstorm Oasis does support Piwik.

I logged in and set up my Piwik instance, or Grain as Sandstorm calls instances, in less than 30 seconds. The tricky part is to copy the code provided by Sandstorm instead of using the code in the Piwik documentation, since Sandstorm’s version is customized to work with its special API interface. Paste it into your HTML and you’re done!

So if you're looking for decent solutions that respect your users and the web, give Sandstorm a try. They are on a "mission to make open source and indie web applications viable as an ecosystem" and they are doing so by developing a platform which makes it super easy to run many open source web apps, like Piwik, Rocket.Chat, Ghost, GitLab, Wordpress and others. Their hosted Oasis platform also comes with a free plan.

Planet MozillaThis Week in Rust 140

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Crate of the Week

In what seems to be becoming a kind of tradition, user gsingh93 suggested his trace crate, a syntax extension that inserts print! statements into functions to help trace execution. Thanks, gsingh93!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

76 pull requests were merged in the last two weeks.

New Contributors

  • Evgeny Safronov
  • Matt Horn

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

you have a problem. you decide to use Rust. now you have a Rc<RefCell<Box<Problem>>>

kmc on #rust.

Thanks to Alex Burka for the tip. Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Planet MozillaFirefox 64-bit for Windows can take advantage of more memory

By default, on Windows, Firefox is a 32-bit application. This means that it is limited to using at most 4 GiB of memory, even on machines that have more than 4 GiB of physical memory (RAM). In fact, depending on the OS configuration, the limit may be as low as 2 GiB.

Now, 2–4 GiB might sound like a lot of memory, but it’s not that unusual for power users to use that much. This includes:

  • users with many (dozens or even hundreds) of tabs open;
  • users with many (dozens) of extensions;
  • users of memory-hungry web sites and web apps; and
  • users who do all of the above!

Furthermore, in practice it’s not possible to totally fill up this available space because fragmentation inevitably occurs. For example, Firefox might need to make a 10 MiB allocation and there might be more than 10 MiB of unused memory, but if that available memory is divided into many pieces all of which are smaller than 10 MiB, then the allocation will fail.

When an allocation does fail, Firefox can sometimes handle it gracefully. But often this isn’t possible, in which case Firefox will abort. Although this is a controlled abort, the effect for the user is basically identical to an uncontrolled crash, and they’ll have to restart Firefox. A significant fraction of Firefox crashes/aborts are due to this problem, known as address space exhaustion.

Fortunately, there is a solution to this problem available to anyone using a 64-bit version of Windows: use a 64-bit version of Firefox. Now, 64-bit applications typically use more memory than 32-bit applications. This is because pointers, a common data type, are twice as big; a rough estimate for 64-bit Firefox is that it might use 25% more memory. However, 64-bit applications also have a much larger address space, which means they can access vast amounts of physical memory, and address space exhaustion is all but impossible. (In this way, switching from a 32-bit version of an application to a 64-bit version is the closest you can get to downloading more RAM!)

Therefore, if you have a machine with 4 GiB or less of RAM, switching to 64-bit Firefox probably won’t help. But if you have 8 GiB or more, switching to 64-bit Firefox probably will help the memory usage situation.
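
If you are not sure which flavour you are currently running, one rough way to check from a web page is the user agent string: 64-bit Firefox on Windows reports “Win64; x64”, while 32-bit Firefox on 64-bit Windows reports “WOW64”. (This is a quick sketch of mine, not something from the post above.)

// Rough build-flavour detection for Firefox on Windows via the UA string.
function firefoxWindowsFlavour(ua) {
  if (!/Firefox\//.test(ua) || !/Windows/.test(ua))
    return "not Firefox on Windows";
  if (/Win64; x64/.test(ua))
    return "64-bit Firefox";
  if (/WOW64/.test(ua))
    return "32-bit Firefox on 64-bit Windows (could switch to 64-bit)";
  return "32-bit Firefox on 32-bit Windows";
}
console.log(firefoxWindowsFlavour(navigator.userAgent));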

Official 64-bit versions of Firefox have been available since December 2015. If the above discussion has interested you, please try them out. But note the following caveats.

  • Flash and Silverlight are the only supported 64-bit plugins.
  • There are some Flash content regressions due to our NPAPI sandbox (for content that uses advanced features like GPU acceleration or microphone APIs).

On the flip side, as well as avoiding address space exhaustion problems, a security feature known as ASLR works much better in 64-bit applications than in 32-bit applications, so 64-bit Firefox will be slightly more secure.

Work is ongoing to fix or minimize the caveats mentioned above, and it is expected that 64-bit Firefox will be rolled out to increasing numbers of users in the not-too-distant future.

UPDATE: Chris Peterson gave me the following measurements about daily active users on Windows.

  • 66.0% are running 32-bit Firefox on 64-bit Windows. These users could switch to a 64-bit Firefox.
  • 32.3% are running 32-bit Firefox on 32-bit Windows. These users cannot switch to a 64-bit Firefox.
  • 1.7% are running 64-bit Firefox already.

UPDATE 2: Also from Chris Peterson, here are links to 64-bit builds for all the channels:

Dev.OperaProgressive web apps running as native OS X apps

Progressive Web Apps are getting ready for desktop

If you follow the Progressive Web App scene you’ve probably already seen multiple examples of the Air Horner app in mobile browsers such as Opera Mobile and Chrome for Android. However, it is not easy to get your favorite PWA (such as Air Horner) to be a first-class citizen on your favorite desktop operating system.

Browser makers are working on getting better support for PWAs on desktop, but it’s not quite ready yet. This is why I started the PWAify project. This project is powered by Node.js and Electron, which already have a huge community and provide ease of use for new and experienced developers. The PWAify project is still in the early experimental stages — use it at your own risk.

The goal of the PWAify project is to provide a solution for PWAs on desktop while we wait for browsers to develop support. In addition this project allows developers to experiment, give better feedback to browser makers, and improve the Web App Manifest specification.

<figure block="figure"> Installed Air Horner app on Android and OS X <figcaption elem="caption">What the installed Air Horner app looks like on Android and OS X</figcaption> </figure>

Bringing your PWA to desktop

To create a sample app with PWAify follow the latest documentation on GitHub.

Currently the simplest use case is to just provide an HTTPS URL to the remote app. For example:

pwaify https://voice-memos.appspot.com/

There are some advanced options for the tool as well. You can specify the platform of your choice and an icon file once you are ready to distribute your app:

pwaify https://voice-memos.appspot.com/ \
	--platforms=darwin \
	--icon chrome-touch-icon-384x384.icns

When you run the pwaify command it fetches and processes the manifest.json for the given PWA. Some early work is in place to simplify the process of generating desktop icons by looking at the icons provided in the manifest. The current downside of the pwaify approach is the size of the executable for each application, but there is already a huge list of Electron apps out there that require such a download. In my mind I try not to worry about the download size, and instead focus on the future of progressive apps.
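
For reference, a minimal manifest.json of the kind pwaify picks up might look something like this (an illustrative sketch only; the exact fields and values depend on your app):

{
  "name": "Voice Memos",
  "short_name": "VoiceMemos",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#2196f3",
  "icons": [
    { "src": "chrome-touch-icon-384x384.png", "sizes": "384x384", "type": "image/png" }
  ]
}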

Here’s an example of a simple progressive web app working on Windows, Ubuntu and OS X:

<figure block="figure"> Voice Memos PWA running on Windows, Linux, and OS X <figcaption elem="caption">Voice Memos PWA running on Windows, Linux, and OS X</figcaption> </figure>

For the purposes of the screenshot I rebuilt the Voice Memos app on all platforms. It is also possible to set up a build server for your application once you want to add multi-platform support. You can use the PWAify tool today by pointing it at your own app or one of the apps in the pwa.rocks list.

Besides the Voice Memos application, my second favorite example is the 2048 tile game: by using PWAify with 2048-opera-pwa.surge.sh I can easily jump back to the saved game state after I restart the application.

The PWAify project is open source, so you can fork it and extend functionality. For example you can add auto-update and custom UI for your application. In the spirit of keeping as much as possible of the application on the web I suggest only introducing minimal changes to the application wrapper. Treat the PWAify’d version of the app just like any other desktop browser would treat it.

The Future

Recently Microsoft announced that the Windows Store will start supporting the W3C Web App Manifest. This will help you distribute applications to Windows Store users, but you don’t have to wait for Microsoft. There is nothing stopping you today from building one version of your web application and getting it into the hands of your users on desktop. You can even go beyond Windows users and distribute on other operating systems.

Some future ideas for the PWAify project itself include:

  • implementing more of the Web App Manifest spec.
  • reloading manifest.json from the server, which would allow dynamically changing certain properties of the application by updating the manifest.json remotely.
  • experimenting with new manifest features, such as window size and minimal UI.
  • working with the Electron app community to make it easier to generate application icons for cross-platform use.

Tools like Electron and PWAify put you in control of application distribution while allowing you to use the latest web technologies. Write one awesome progressive web application for multiple distribution targets!

Feel free to file an issue against the PWAify project if you have other ideas that will benefit the future of progressive web apps.

Planet MozillaL20n in Firefox: A Summary for Developers

L20n is a new localization framework for Firefox and Gecko. Here’s what you need to know if you’re a Firefox front-end developer.

Gecko’s current localization framework hasn’t changed in the last two decades. It is based on file formats which weren’t designed for localization. It offers crude APIs. It tasks developers with things they shouldn’t have to do. It doesn’t allow localizers to use the full expressive power of their languages.

L20n is a modern localization and internationalization infrastructure created by the Localization Engineering team in order to overcome these limitations. It was successfully used in Firefox OS. We’ve put parts of it on the ECMA standardization path. Now we intend to integrate it into Gecko and migrate Firefox to it.

Overview of How L20n Works

For Firefox, L20n is most powerful when it’s used declaratively in the DOM. The localization happens at runtime and gracefully falls back to the next language in case of errors. L20n doesn’t force developers to programmatically create string bundles, request raw strings from them and manually interpolate variables. Instead, L20n uses a Mutation Observer which is notified about changes to data-l10n-* attributes in the DOM tree. The complexity of language negotiation, resource loading, error fallback and string interpolation is hidden in the mutation handler. It is still possible to use the JavaScript API to request a translation manually in rare situations when the DOM is not available (e.g. OS notifications).
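
As a small illustration of the declarative style (my own sketch based on L20n’s documented DOM bindings; the message identifier and arguments are made up):

// In the markup, the element only carries a message identifier and its arguments:
//   <p id="notification-status" data-l10n-id="unread-notifications" data-l10n-args='{"num": 3}'></p>
// Updating the UI is then just a matter of changing attributes; the Mutation
// Observer set up by L20n notices the change and retranslates the element.
const status = document.querySelector("#notification-status");
status.setAttribute("data-l10n-id", "unread-notifications");
status.setAttribute("data-l10n-args", JSON.stringify({ num: 7 }));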

What problems does L20n solve?

The current localization infrastructure is tightly coupled: it touches many different areas of the codebase. It also requires many decisions from the developer. Every time someone wants to add a new string they need to go through the following mental checklist:

  1. Is the translation embedded in HTML or XUL? If so, use the DTD format. Be careful to only use valid entity references or you’ll end up with a Yellow Screen of Death. Sure enough, the list of valid entities is different for HTML and for XUL. (For instance &hellip; is valid in HTML but not in XUL.)
  2. Is the translation requested dynamically from JavaScript? If so, use the .properties format.
  3. Does the translation use interpolated variables? If so, refer to the documentation on good practices and use #1, %S, %1$S, {name} or &name; depending on the use-case. (That’s five different ways of interpolating data!) For translations requested from JavaScript, replace the interpolation placeables manually with String.prototype.replace.
  4. Does the translation depend on a number in any of the supported languages? If so, use the PluralForm.jsm module to choose the correct variant of the translation. Specify all variants on a single line of the .properties file, separated by semicolons.
  5. Does the translation comprise HTML elements? If so, split the copy into smaller parts surrounding the HTML elements and put each part in its own translation. Remember to keep them in sync in case of changes to the copy. Alternatively write your own solution for replacing interpolation specifiers with HTML markup.

What a ride! All of this just to add a simple You have no new notifications message to the UI. How do we fix this tight coupling?
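
For comparison, here is a rough sketch of what steps 2–4 of that checklist look like in practice today (the bundle URL and string name are made up; the APIs, nsIStringBundleService and PluralForm.jsm, are the existing ones):

// notifications.properties:
//   notificationCount=You have #1 new notification;You have #1 new notifications
Components.utils.import("resource://gre/modules/Services.jsm");
Components.utils.import("resource://gre/modules/PluralForm.jsm");

const bundle = Services.strings.createBundle(
  "chrome://browser/locale/notifications.properties");  // made-up URL
const count = 3;
// Pick the right plural form for the user's locale...
let message = PluralForm.get(count, bundle.GetStringFromName("notificationCount"));
// ...then interpolate the number by hand; a typo in "#1" would fail silently.
message = message.replace("#1", count);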

L20n is designed around the principle of separation of concerns. It introduces a single syntax for all use-cases and offers a robust fallback mechanism in case of missing or broken translations.

Let’s take a closer look at some of the features of L20n which mitigate the headaches outlined above.

Single syntax

In addition to DTD and .properties files Gecko currently also uses .ini and .inc files for a total of four different localization formats.

L20n introduces a single file format based on ICU’s MessageFormat. It’s designed to look familiar to people who have previous experience with .properties and .ini. If you’ve worked with .properties or .ini before you already know how to create simple L20n translations.

Fig. 1. A primer on the FTL syntax

A single localization format greatly reduces the complexity of the ecosystem. It’s designed to keep simple translations simple and readable. At the same time it allows for more control from localizers when it comes to defining and selecting variants of translations for different plural categories, genders, grammatical cases etc. These features can be introduced only in translations which need them and never leak into other languages. You can learn more about L20n’s syntax in my previous blog post and at http://l20n.org/learn. An interactive editor is also available at https://l20n.github.io/tinker.

Separation of Concerns: Plurals and Interpolation

In L20n all the logic related to selecting the right variant of the translation happens inside of the localization framework. Similarly L20n takes care of the interpolation of external variables into the translations. As a developer, all you need to do is declare which translation identifier you are interested in and pass the raw data that is relevant.

Fig. 2. Plurals and interpolation in L20n

In the example above you’ll note that in the BEFORE version the developer had to manually call the PluralForm API. Furthermore, the calling code is also responsible for replacing #1 with the relevant datum. There is no error checking: if the translation contains an error (perhaps a typo in #1) the replace() will silently fail and the final message displayed to the user will be broken.

Separation of Concerns: Intl Formatters

L20n builds on top of existing standards like ECMA 402’s Intl API (itself based in large part on Unicode’s ICU). The Localization team has also been active in advancing proposals and specifications for new formatters.

L20n provides an easy way to use Intl formatters from within translations. Oftentimes the Intl API completely removes the need to go through the localization layer at all. In the example below the logic for displaying relative time (“2 days ago”) has been replaced by a single call to a new Intl formatter, Intl.RelativeTimeFormat.

Fig. 3. Intl API in use
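
To give a flavour of the formatter itself (Intl.RelativeTimeFormat was still a proposal at the time of writing, so treat this as a sketch of the intended API rather than something shipping today):

// No strings in the localization files; the formatter handles plural and phrasing rules.
const rtf = new Intl.RelativeTimeFormat("en", { numeric: "auto" });
console.log(rtf.format(-2, "day"));   // "2 days ago"
console.log(rtf.format(1, "week"));   // "next week"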

Separation of Concerns: HTML in Translations

L20n allows for some semantic markup in translations. Localizers can use safe text-level HTML elements to create translations which obey the rules of typography and punctuation. Developers can also embed interactive elements inside of translations and attach event handlers to them in HTML or XUL. L20n will overlay translations on top of the source DOM tree preserving the identity of elements and the event listeners.

Fig. 4. Semantic markup in L20n

In the example above the BEFORE version must resort to splitting the translation into multiple parts, each for a possible piece of translation surrounding the two <label> elements.  The L20n version only defines a single translation unit and the localizer is free to position the text around the <label> elements as they see fit.  In the future it will be possible to reorder the <label> elements themselves.

Resilient to Errors

L20n provides a graceful and robust fallback mechanism in case of missing or broken translations. If you’re a Firefox front-end developer you might be familiar with this image:

Fig. 5. Yellow Screen of Death

This error happens whenever a DTD file is broken. The way a DTD file can be broken might be as subtle as a translation using the &hellip; entity, which is valid in HTML but not in XUL.

In L20n, broken translations never break the UI. L20n tries its best to display a meaningful message to the user in case of errors. It may try to fall back to the next language preferred by the user if it’s available. As the last resort L20n will show the identifier of the message.

New Features

L20n allows us to re-think major design decisions related to localization in Firefox. The first area of innovation that we’re currently exploring is the experience of changing the browser’s UI language. A runtime localization framework allows the change to happen seamlessly on the fly without restarts. It will also become possible to go back and forth between languages for just a part of the UI, a feature often requested by non-English users of Developer Tools.

Another innovation that we’re excited about is the ability to push updates to the existing translations independent of the software updates which currently happen approximately every 6 weeks. We call this feature Live Updates to Localizations.

We want to decouple the release schedule of Firefox from the release schedule of localizations. The whole release process can then become more flexible and new translations can be delivered to users outside of regular software updates.

Recap

L20n’s goal is to improve Mozilla’s ability to create quality multilingual user interfaces, simplify the localization process for developers, improve error recovery and allow us to innovate.

The migration will result in a cleaner and easier-to-maintain code base. It will improve the quality and the security of Firefox. It will provide a resilient runtime fallback, loosening the ties between code and localizations. And it will open up many new opportunities to innovate.

Planet MozillaA workshop Monday

I decided I’d show up a little early at the Sheraton, as I’ve been handling the interactions with the hotel locally here in Stockholm where the workshop will run for the coming three days. Things were on track, if we ignore how they got the wrong name of the workshop on the info screens in the lobby, instead saying “Haxx Ab”…

Mark welcomed us with a quick overview of what we’re here for and quick run-through of the rough planning for the days. Our schedule is deliberately loose and open to allow for changes and adaptations as we go along.

Patrick talked about the 1 1/2 years of HTTP/2 in Firefox so far, and we discussed a lot around the numbers and telemetry. What do they mean and why do they look like this, etc. HTTP/2 is now at 44% of all HTTPS requests, and connections using HTTP/2 are used for more than 8 requests at the median (compared to slightly over 1 in the HTTP/1 case). What’s almost not used at all? HTTP/2 server push, Alt-Svc and HTTP 308 responses. Patrick’s presentation triggered a lot of good discussions. His slides are here.

RTT distribution for Firefox running on desktop and mobile, from Patrick’s slide set.

The lunch was lovely.

Vlad then continued to talk about experiences from implementing and providing server push at Cloudflare. That talk and the associated discussions helped emphasize that we need better guidance for users on how to use server push, and that there might be reasons for browsers to change how pushed resources are stored in the current “secondary cache”. Also, discussions around how to access pushed resources and get information about pushes from javascript were briefly touched on.

After a break with some sweets and coffee, Kazuho continued by describing cache digests and how this concept can help servers do better or more accurate server pushes. Back to more discussions around push, what it actually solves, how much complexity it is worth and so on. I thought I could sense hesitation in the room on whether this is really something to proceed with.

We intend to have a set of lightning talks after lunch each day, and we already have twelve such suggested talks listed in the workshop wiki, but the discussions were so lively and extensive that we missed them today and we even had to postpone the last talk of today until tomorrow. I can already sense how these three days will not be enough for us to cover everything we have listed and planned…

We ended the evening with a great dinner sponsored by Mozilla. I’d say it was a great first day. I’m looking forward to day 2!

Planet MozillaSusan Chen, Promoted to Vice President of Business Development

I’m excited to announce that Susan Chen has been appointed Vice President of Business Development at Mozilla, a new role we are creating to recognize her achievements.

Susan joined Mozilla in 2011 as Head of Strategic Development. During her five years at Mozilla, Susan has worked with the Mozilla team to conceive and execute multiple complex negotiations and has concluded hundreds of millions of dollars in revenue and partnership deals for Mozilla products and services.

As Vice President of Business Development, Susan is now responsible for planning and executing major business deals and partnerships for Mozilla across its product lines including search, commerce, content, communications, mobile and connected devices. She is also in charge of managing the business development team working across the globe.

We are pleased to recognize Susan’s achievements and expanded scope with the title of Vice President. Please join me in welcoming Susan to the leadership team at Mozilla!

Background:

Susan’s bio & Mozillians profile

LinkedIn profile

High-resolution photo

Planet MozillaUpdate on the United Nations High Level Panel on Women’s Economic Empowerment

It is critical to ensure that women are active participants in digital life. Without this we won’t reach full economic empowerment. This is the perspective and focus I bring to the UN High Level Panel for Women’s Economic Empowerment (HLP), which met last week in Costa Rica, hosted by President Luis Guillermo Solis.

(Here is the previous blog post on this topic.)

Many thanks to President Solis, who led with both commitment and authenticity. Here he shows his prowess with selfie-taking:

Members of the High Level Panel – From Left to Right: Tina Fordham, Citi Research; Laura Tyson, UC Berkeley; Alejandra Mora, Government of Costa Rica; Ahmadou Ba, AllAfrica Global Media; Renana Jhabvala, WIEGO; Elizabeth Vazquez, WeConnect; Jeni Klugman, Harvard Business School; Mitchell Baker, Mozilla; Gwen Hines, DFID-UK; Phumzile Mlambo, UN Women; José Manuel Salazar Xirinachs, International Labour Organization; Simona Scarpaleggia, Ikea; Winnie Byanyima, Oxfam; Fiza Farhan, Buksh Foundation; Karen Grown, World Bank; Margo Thomas, HLP Secretariat.

Photo Credit: Luis Guillermo Solis, President, Costa Rica

In the meeting we learned about actions the Panel members have initiated, and provided feedback and guidelines on the first draft of the HLP report. The goal for the report is to be as concrete as possible in describing actions in women’s economic empowerment which have shown positive results so that interested parties could adopt these successful practices. An initial version of the report will be released in September, with the final report in 2017.  In the meantime, Panel members are also initiating, piloting and sometimes scaling activities that improve women’s economic empowerment.

As Phumzile Mlambo-Ngcuka, the Executive Director of UN Women often says, the best report will be one that points to projects that are known to work. One such example is a set of new initiatives, interventions and commitments to be undertaken in the Punjab, announced by the Panel Member and Deputy from Pakistan, Fiza Farhan and Mahwish Javaid.

Mozilla, too, is engaged in a set of new initiatives. We’ve been tuning our Mozilla Clubs program, which are on-going events to teach Web Literacy, to be interesting and more accessible to women and girls. We’ve entered into a partnership with UN Women to deepen this work and the pilots are underway. If you’d like to participate, consider applying your organizational, educational, or web skills to start a Mozilla Club for women and girls in your area. Here are examples of existing clubs for women in Nairobi and Cape Town.

Mozilla is also involved in the theme of digital inclusion as a cross-cutting, overarching theme of the HLP report. This is where Anar Simpson, my official Deputy for the Panel, focuses her work. We are liaising with companies in Silicon Valley who are working in the fields of connectivity and distribution of access to explore if, when and how their projects can empower women economically. We’re looking to gather everything they have learned about what has been effective. In addition to this information/content gathering task, Mozilla is working with the Panel on the advocacy and publicity efforts for the report.

I joined the Panel because I see it as a valuable mechanism for driving both visibility and action on this topic. Women’s economic empowerment combines social justice, economic growth benefits and the chance for more stability in a fragile world. I look forward to meeting with the UN Panel again in September and reporting back on practical and research-driven initiatives.

Planet MozillaFirefox 49.0 Aurora Testday Results

Hello mozillians!

Last week on Friday (July 22nd), we held another successful event – Firefox 49.0 Aurora Testday.

Thank you all for helping us make Mozilla a better place – Moin Shaikh, Georgiu Ciprian, Marko Andrejić, Dineesh Mv, Iryna Thompson.

From Bangladesh: Rezaul Huque Nayeem, Nazir Ahmed Sabbir, Hossain Al Ikram, Azmina Akter Papeya, Md. Rahimul Islam, Forhad Hossain, Akash, Roman Syed, Niaz Bhuiyan Asif, Saddam Hossain, Sajedul Islam, Md.Majedul islam, Fahim, Abdullah Al Jaber Hridoy, Raihan Ali, Md.Ehsanul Hassan, Sauradeep Dutta, Mohammad Maruf Islam, Kazi Nuzhat Tasnem, Maruf Rahman, Fatin Shahazad, Tanvir Rahman, Rakib Rahman, Tazin Ahmed, Shanjida Tahura Himi, Anika Nawar and Md. Nazmus Shakib (Robin).

From India: Nilima, Paarttipaabhalaji, Ashly Rose Mathew M, Selva Makilan R, Prasanth P, Md Shahbaz Alam and Bhuvana Meenakshi.K

A big thank you goes out to all our active moderators too!

Results:

I strongly advise every one of you to reach out to us, the moderators, via #qa during the events whenever you encounter any kind of failure. Keep up the great work!

Keep an eye on QMO for upcoming events! 😉

Planet WebKitNew <video> Policies for iOS

<video class="alignleft" loop="loop"><source src="/wp-content/uploads/forever.mp4"><source src="/wp-content/uploads/forever.webm"></video>

Since before your sun burned hot in space and before your race was born, Safari on iOS has required a user gesture to play media in a <video> or <audio> element. When Safari first supported <video> in iPhoneOS 3, media data loaded only when the user interacted with the page. But with the goal of returning more control over media playback to web developers, we relaxed this restriction in iOS 8: Safari began honoring the preload="metadata" attribute, allowing <video> and <audio> elements to load enough media data to determine that media’s size, duration, and available tracks. For Safari in iOS 10, we are further relaxing this user gesture requirement for silent <video> elements.

Motivation

It turns out that people these days really like GIFs. But the GIF format turns out to be a very expensive way to encode animated images when compared to a modern video codec like H.264. We’ve found that GIFs can be up to twelve times as expensive in bandwidth and twice as expensive in energy use. It’s so expensive that many of the largest GIF providers have been moving away from GIFs and toward the <video> element. Since most of these GIFs started out their lives as video clips, were converted into animated GIFs, and were again converted back to video clips, you might say that the circle is complete.

But while this move does spare websites’ bandwidth costs as well as saving users’ batteries, it comes at a usability cost. On iOS 9, <video>s will only begin playing as a result of a user gesture. So pages which replace an <img> with a <video> will require a user gesture before displaying their animated content, and, on iPhone, the <video> will enter fullscreen when starting playback.

A note about the user gesture requirement: when we say that an action must have happened “as a result of a user gesture”, we mean that the JavaScript which resulted in the call to video.play(), for example, must have directly resulted from a handler for a touchend, click, doubleclick, or keydown event. So, button.addEventListener('click', () => { video.play(); }) would satisfy the user gesture requirement. video.addEventListener('canplaythrough', () => { video.play(); }) would not.

Similarly, web developers are doing some seriously amazing stuff by integrating <video> elements into the presentation of their pages. And these pages either don’t work at all on iOS due to the user gesture requirement, or the <video> elements completely obscure the page’s presentation by playing an animated background image in fullscreen.

WebKit’s New policies for video

Starting in iOS 10, WebKit relaxes its inline and autoplay policies to make these presentations possible, but still keeps in mind sites’ bandwidth and users’ batteries.

By default, WebKit will have the following policies:

  • <video autoplay> elements will now honor the autoplay attribute, for elements which meet the following conditions:
    • <video> elements will be allowed to autoplay without a user gesture if their source media contains no audio tracks.
    • <video muted> elements will also be allowed to autoplay without a user gesture.
    • If a <video> element gains an audio track or becomes un-muted without a user gesture, playback will pause.
    • <video autoplay> elements will only begin playing when visible on-screen such as when they are scrolled into the viewport, made visible through CSS, and inserted into the DOM.
    • <video autoplay> elements will pause if they become non-visible, such as by being scrolled out of the viewport.
  • <video> elements will now honor the play() method, for elements which meet the following conditions:
    • <video> elements will be allowed to play() without a user gesture if their source media contains no audio tracks, or if their muted property is set to true.
    • If a <video> element gains an audio track or becomes un-muted without a user gesture, playback will pause.
    • <video> elements will be allowed to play() when not visible on-screen or when out of the viewport.
    • video.play() will return a Promise, which will be rejected if any of these conditions are not met.
  • On iPhone, <video playsinline> elements will now be allowed to play inline, and will not automatically enter fullscreen mode when playback begins.
    <video> elements without playsinline attributes will continue to require fullscreen mode for playback on iPhone.
    When exiting fullscreen with a pinch gesture, <video> elements without playsinline will continue to play inline.

For clients of the WebKit framework on iOS, these policies can still be controlled through API, and clients who are using existing API to control these policies will see no change. For more fine-grained control over autoplay policies, see the new WKWebViewConfiguration property mediaTypesRequiringUserActionForPlayback. Safari on iOS 10 will use WebKit’s default policies.

A note about the playsinline attribute: this attribute has recently been added to the HTML specification, and WebKit has adopted this new attribute by unprefixing its legacy webkit-playsinline attribute. This legacy attribute has been supported since iPhoneOS 4.0, and in accordance with our updated unprefixing policy, we’re pleased to have been able to unprefix webkit-playsinline. Unfortunately, this change did not make the cut-off for iOS 10 Developer Seed 2. If you would like to experiment with this new policy with iOS Developer Seed 2, the prefixed attribute will work, but we would encourage you to transition to the unprefixed attribute when support for it is available in a future seed.

Examples

So how would the humble web developer take advantage of these new policies? Suppose one had a blog post or article with many GIFs which one would prefer to serve as <video> elements instead. Here’s an example of a simple GIF replacement:

<video autoplay loop muted playsinline>
  <source src="image.mp4">
  <source src="image.webm" onerror="fallback(parentNode)">
  <img src="image.gif">
</video>
function fallback(video)
{
  var img = video.querySelector('img');
  if (img)
    video.parentNode.replaceChild(img, video);
}

On iOS 10, this provides the same user experience as using a GIF directly with graceful fallback to that GIF if none of the <video>‘s sources are supported. In fact, this code was used to show you that awesome GIF.

If your page design requires different behavior if inline playback is allowed vs. when fullscreen playback is required, use the -webkit-video-playable-inline media query to differentiate the two:

<div id="either-gif-or-video">
  <video src="image.mp4" autoplay loop muted playsinline></video>
  <img src="image.gif">
</div>
#either-gif-or-video video { display: none; }
@media (-webkit-video-playable-inline) {
    #either-gif-or-video img { display: none; }
    #either-gif-or-video video { display: initial; }
}

These new policies mean that more advanced uses of the <video> element are now possible, such as painting a playing <video> to a <canvas> without taking that <video> into fullscreen mode.

  var video;
  var canvas;

  function startPlayback()
  {
    if (!video) {
      video = document.createElement('video');
      video.src = 'image.mp4';
      video.loop = true;
      video.addEventListener('playing', paintVideo);
    }
    video.play();
  }

  function paintVideo()
  {
    if (!canvas) {
      canvas = document.createElement('canvas');
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      document.body.appendChild(canvas);
    }
    canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
    if (!video.paused)
      requestAnimationFrame(paintVideo);
  }
<button onclick="startPlayback()">Start Playback</button> 

The same technique can be used to render into a WebGL context. Note in this example that a user gesture–the click event–is required, as the <video> element is not in the DOM, and thus is not visible. The same would be true for a <video style="display:none"> or <video style="visibility:hidden">.

We believe that these new policies really make video a much more useful tool for designing modern, compelling websites without taxing users’ bandwidth or batteries. For more information, contact me at @jernoble, Jonathan Davis, Apple’s Web Technologies Evangelist, at @jonathandavis, or the WebKit team at @WebKit.

Planet MozillaAnnouncing git-cinnabar 0.4.0 beta 2

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.4.0b1?

  • Some more bug fixes.
  • Updated git to 2.9.2 for cinnabar-helper.
  • Now supports `git push --dry-run`.
  • Added a new `git cinnabar fetch` command to fetch a specific revision that is not necessarily a head.
  • Some improvements to the experimental native wire protocol support.

Planet MozillaThis Week In Servo 72

In the last week, we landed 79 PRs in the Servo organization’s repositories.

The team working on WebBluetooth in Servo has launched a new site! It has two demo videos and very detailed instructions and examples on how to use standards-based Web Platform APIs to connect to Bluetooth devices.

We’d like to especially thank UK992 this week for their AMAZING work helping us out with Windows support! We are really eager to get the Windows development experience from Servo up to par with that of other platforms, and UK992’s work has been essential.

Connor Brewster (cbrewster) has also been on an incredible tear, working with Alan Jeffrey, on figuring out how session history is supposed to work, clarifying the standard and landing some great fixes into Servo.

Planning and Status

Our overall roadmap is available online and now includes the initial Q3 plans. From now on, we plan to include the quarterly plan with a high-level breakdown in the roadmap page.

This week’s status updates are here.

Notable Additions

  • UK992 added support for tinyfiledialogs on Windows, so that we can prompt there, too!
  • UK992 uncovered the MINGW magic to get AppVeyor building again after the GCC 6 bustage
  • jdm made it possible to generate the DOM bindings in parallel, speeding up some incremental builds by nearly a minute!
  • aneesh restored better error logging to our BuildBot configuration and provisioning steps
  • canaltinova fixed the reference test for text alignment in input elements
  • larsberg fixed up some issues preventing the Windows builder from publishing nightlies
  • upsuper added support for generating bindings for MSVC
  • heycam added FFI glue for 1-arg CSS supports() in Stylo
  • manish added Stylo bindings for calc()
  • johannhof ensured we only expose Worker interfaces to workers
  • cbrewster implemented joint session history
  • shinglyu optimized dirty flags for viewport percentage units based on viewport changes
  • stshine blockified some children of flex containers, continuing the work to flesh out flexbox support
  • creativcoder integrated a service worker manager thread
  • izgzhen fixed Blob type-strings
  • ajeffrey integrated logging with crash reporting
  • malisas allowed using ByteString types in WebIDL unions
  • emilio ensured that transitions and animations can be tested programmatically

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

See the aforementioned demos from the team at the University of Szeged.

Planet MozillaThe 2016 Rust Conference Lineup

The Rust Community is holding three major conferences in the near future, and we wanted to give a shout-out to each, now that all of the lineups are fully announced.

Sept 9-10: RustConf

RustConf is a two-day event held in Portland, OR, USA on September 9-10. The first day offers tutorials on Rust given directly by members of the Rust core team, ranging from absolute basics to advanced ownership techniques. The second day is the main event, with talks at every level of expertise, covering core Rust concepts and design patterns, production use of Rust, reflections on the RFC process, and systems programming in general. We offer scholarships for those who would otherwise find it difficult to attend. Join us in lovely Portland and hear about the latest developments in the Rust world!

Follow us on Twitter @rustconf.

Sept 17-18: RustFest

Join us at RustFest, Europe’s first conference dedicated to the Rust programming language. Over the weekend of 17-18th September we’ll gather in Berlin to talk Rust, its ecosystem and community. All day Saturday will have talks, with topics ranging from hardware and testing, through concurrency and disassemblers, all the way to important topics like community, learning and empathy. Sunday has a focus on learning and connecting, either at one of the many workshops we are hosting or in the central meet-n-greet-n-hack area provided.

Thanks to the many awesome sponsors, we are able to offer affordable tickets, going on sale this week, with an optional combo ticket including both Viewsource and RustFest. Keep an eye on http://www.rustfest.eu/, get all the updates on the blog, and don’t forget to follow us on Twitter @rustfest.

Oct 27-28: Rust Belt Rust

Rust Belt Rust is a two-day conference in Pittsburgh, PA, USA on October 27 and 28, 2016, and people with any level of Rust experience are encouraged to attend. The first day of the conference has a wide variety of interactive workshops to choose from, covering topics like an introduction to Rust, testing, code design, and implementing operating systems in Rust. The second day is a single track of talks covering topics like documentation, using Rust with other languages, and efficient data structures. Both days are included in the $150 ticket! Come learn Rust in the Rust Belt, and see how we’ve been transforming the region from an economy built on manufacturing to an economy built on technology.

Follow us on Twitter @rustbeltrust.

Planet Mozilla[worklog] Edition 028. appearance on Enoshima

Each time I "set up my office" (I moved to a new place for the next 3 months, due to construction work on the main home), I'm mesmerized by how easy it is to set up a work environment. Laptop, wifi and electricity are the main things needed to start. A table and a chair are useful but non-essential. Eventually an additional screen gives more working surface to be comfortable. Basically in 5 minutes we are ready to work. That's one of the lucky sides of our line of work. How long does it take before you can start working?

Working with a view on Enoshima for the next 3 months. Tune of the week: Omoide no Enoshima.

Webcompat Life

Progress this week:

Today: 2016-07-25T06:21:33.702789
296 open issues
----------------------
needsinfo       5
needsdiagnosis  71
needscontact    14
contactready    34
sitewait        164
----------------------

You are welcome to participate

Webcompat issues

(a selection of some of the bugs worked on this week).

  • From time to time, people report usability issues which are more or less cross-browser. They basically hinder every browser. It's out of scope for the Web Compatibility project, but it hints at something interesting about browsers and users' perception. Often, I wonder if browsers should do more than just support the legacy Web site (aka it displays), and also adjust the content to a more palatable experience. Somewhat like what Reader mode does on user request: a beautify button for legacy content.
  • Google Image Search and black arrow. A kind of cubist arrow for Firefox. Modern design?
  • I opened an issue on Tracking Protection and Webcompat. Adam pointed me this morning to a project on moving tracking protection to a Web extension.
  • Because we have more issues on Firefox Desktop and Firefox Android, we focus our energy there, so we need someone in the community to focus on Firefox OS issues.
  • When I test Web sites on Firefox Android, I usually do it through remote debugging in WebIDE, and instead of typing a long URI on the device itself, I go to the console and paste the address I want: window.location = 'http://example.com/long/path/with/strange/356374389dgjlkj36s'.
  • Starting to test a bit more in depth what appearance means in different browsers. Specifically to determine what is needed for Web compatibility and/or Web standards.
  • A WONTFIX which is good news: Bug 1231829 - Implement -webkit-border-image quirks for compatibility. It means it has been fixed by the site owners.
  • On this Find my phone issue on Google search, the wrong order of CSS properties creates a layout issue where the XUL -moz-box was finally interpreted, but it triggered a good question from Xidorn. Should we expose the XUL display values to Web content? Add to that that some properties in the CSS never existed.
  • Hangouts doesn't work the same way for Chrome and Firefox. There's something happening either on the Chrome side or on the servers which creates the right path of actions. I haven't determined it yet.

WebCompat.com dev

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Planet MozillaWindows Mobile (or Windows Phone) and FastMail

I’ve been a big fan of Windows Phone (now Windows Mobile) for a while and have had a few phones across versions 7, 8, and now 10. A while ago I switched to FastMail as my e-mail provider [1], but had been stuck using Google as my calendar provider still (and my contacts were on my Windows Live account). I had a desire to move all these onto a single account, but Windows 10 Mobile only officially supports e-mail from arbitrary providers. Calendar and contacts are limited to a few special providers.

Below I’ve outlined how I’ve gotten all three services (email, contacts, and calendar) from my FastMail account onto my Windows Mobile device.

Email

Email is the easy one; FastMail even has a guide to setting up email on Windows Phone. That guide does not handle sending emails with a custom domain name; if you don’t have that situation, probably just use the FastMail guide.

  1. Add a new account, choose “other account”.
  2. Type in your email address (e.g. you@yourcustomdomain.com) and password.
  3. It will complain about being unable to find proper account settings. Click “try again”.
  4. It will complain again, but not give you an option for “advanced”, click it.
  5. Choose “Internet email account”.
  6. Enter any “Account name” and “Your name” that you want.
  7. Choose “IMAP4” as the “Account type”.
  8. Change the incoming mail server to mail.messagingengine.com.
  9. Change the username to your FastMail username (e.g. you@fastmail.com).
  10. Change the outgoing mailserver to mail.messagingengine.com.

Now when you send email it should show up properly as you@yourcustomdomain.com, but be sent via FastMail’s servers!

Contacts

FastMail added support for CardDAV last year and Windows Phone added support back in 2013, so why is this hard? Well…turns out that there isn’t a way to make a CardDAV account on Windows Mobile; it’s just used for certain account types. Luckily, there is a forum post about hooking up CardDAV via a hack. Steps are reproduced below:

  1. Add a new account, choose “iCloud”.
  2. Type in your FastMail username, but add +Default before the @ (e.g. you+Default@fastmail.com), note that this isn’t anything special, just the scheme FastMail uses for CardDAV usernames.
  3. Put in your password. [2]
  4. Click “sign in”, it will fail.
  5. Go back into the account settings (click “Manage”) and modify the advanced settings (“Change mailbox sync settings”). Choose manually for when to download new email. Disable syncing of email and calendar.
  6. Go to “Advanced account settings”. Change the “Incoming email server”, “Outgoing (SMTP) email server” and “Calendar server (CalDAV)” to localhost. [3]
  7. Change “Contacts server (CardDAV)” to carddav.messagingengine.com:443/dav/addressbooks/user/you@fastmail.com/Default, changing you@fastmail.com to your FastMail username.
  8. Click “Done”!

Your contacts should eventually appear in your address book! I couldn’t figure out a way to force my phone to sync contacts, but they appeared fairly quickly.

Calendar

FastMail added support for CalDAV back in the beginning of 2014 [4]. These steps are almost identical to the Contacts section above, but using information from the guide for setting up Calendar.app.

  1. Add a new account, choose “iCloud”.
  2. Type in your FastMail username (e.g. you@fastmail.com).
  3. Put in your password.
  4. Click “sign in”, it will fail.
  5. Go back into the account settings (click “Manage”) and modify the advanced settings (“Change mailbox sync settings”). Choose manually for when to download new email. Disable syncing of email and contacts.
  6. Go to “Advanced account settings”. Change the “Incoming email server”, “Outgoing (SMTP) email server” and “Contacts server (CardDAV)” to localhost.
  7. Change “Calendar server (CalDAV)” to caldav.messagingengine.com/dav/principals/user/you@fastmail.com/, changing you@fastmail.com to your FastMail username.
  8. Click “Done”!

My default calendar appeared very quickly, but additional calendars took a bit to sync onto my phone.

Good luck and let me know if there are any errors, easier ways, or other tricks to getting the most of FastMail on a Windows Mobile device!

[1]There are a variety of reasons why I switched: I had recently bought a domain name to get better control over my online presence (email, website, etc.). I was also tired of my email being used to serve me advertisements, and of various other issues with free webmail. I highly recommend FastMail; they have awesome security and privacy policies. They also have amazing support, give back (a lot) to open source, and do a whole slew of other things.
[2]I put a dummy one in and then changed it after I updated the servers in step 6. This was to not send my password to iCloud servers. The password is hopefully encrypted and hashed, but I don’t know for sure.
[3]We’re just ensuring that our credentials for these other services will not hit Apple servers for any reason.
[4]That article talks about beta.fastmail.fm, but this is now available on the production FastMail servers too!

Planet MozillaHTTP Workshop 2016, day -1

The HTTP Workshop 2016 will take place in Stockholm starting tomorrow Monday, as I’ve mentioned before. Today we’ll start off slowly by having a few pre workshop drinks and say hello to old and new friends.

I did a casual count, and out of the 40 attendees coming, I believe slightly less than half are newcomers that didn’t attend the workshop last year. We’ll see browser people come, more independent HTTP implementers, CDN representatives, server and intermediary developers as well as some friends from large HTTP operators/sites. I personally view my attendance to be primarily with my curl hat on rather than my Firefox one. Firmly standing in the client side trenches anyway.

Visitors to Stockholm these days are also lucky enough to arrive when the weather is possibly as good as it can get here with the warmest period through the summer so far with lots of sun and really long bright summer days.

News this year includes the @http_workshop twitter account. If you have questions or concerns for HTTP workshoppers, do send them that way and they might get addressed or at least noticed.

I’ll try to take notes and post summaries of each workshop day here. Of course I will fully respect our conference rules about what to reveal or not.

stockholm castle and ship

Planet MozillaTenFourFox 45 is more of a thing

Since the initial liftoff of TenFourFox 45 earlier this week, much progress has been made, and this blog post, ceremonially, is being typed in it. I ticked off most of the basic tests including printing, YouTube, social media will eat itself, webcam support, HTML5 audio/video, canvas animations, font support, forums, maps, Gmail, blogging and the major UI components, and fixed a number of critical bugs and assertions; the browser is now basically usable and able to function usefully. Still left to do is collecting the TenFourFox-specific strings into their own DTD for the localizers to translate (which will include the future features I intend to add during the feature parity phase) and porting our MP3 audio support forward, and then, once that's working, compiling some opt builds and testing the G5 JavaScript JIT pathways and the AltiVec acceleration code. After that it'll finally be time for the first beta once I'm confident enough to start dogfooding it myself. We're a little behind on the beta cycle, but I'm hoping to have 45 beta 1 ready shortly after the release of 38.10 on August 2nd (the final 38 release, barring a serious showstopper with 45), a second beta around the three week mark, and 45 final ready for general use by the next scheduled release on September 13th.

A couple folks have asked if there will still be a G3 version and I am pleased to announce the answer will very likely be yes; the JavaScript JIT in 45 does not mandate SIMD features in the host CPU, so I don't see any technical reason why not (for that matter, the debug build I'm typing this on isn't AltiVec accelerated either). Still, if you're bravely rocking a Yosemite in 2016 you might want to think about a G4 for that ZIF socket.

I've been slack on some other general interest posts such as the Power Mac security rollup and the state of the user base, but I intend to write them when 45 gets a little more stabilized since there have been some recurring requests from a few of you. Watch for those soon also.

Planet MozillaSUMO Show & Tell: How I Got Involved With Mozilla

Hey SUMO Nation!

During the Work Week in London we had the utmost pleasure of hanging out with some of you (we’re still a bit sad about not everyone making it… and that we couldn’t organize a meetup for everyone contributing to everything around Mozilla).

Among the numerous sessions, working groups, presentations, and demos we also had a SUMO Show & Tell – a story-telling session where everyone could showcase one cool thing they think everyone should know about.

I have asked those who presented to help me share their awesome stories with everyone else – and here you go, with the second one presented by Andrew, a jack-of-all-trades and Bugzilla tamer.

Take a look below and relive the origin story of a great Mozillian – someone just like you!

It all started… with an issue that I had with Firefox on my desktop computer running Windows XP, back in 2011. Firefox wouldn’t stop crashing! I then discovered the support site for Firefox. There I found help with my issue through support articles, and at the same time, I was also intrigued by the ability to help other users through the very same site as well.

As I looked into the available opportunities to contribute to the support team, I landed upon live chat. Live chat was a 1-on-1 chat to help out users with the issues they had. Unfortunately, after I joined the team, the live chat was placed on a hiatus. It was recommended that I move on to the forums and knowledge base, because rather than just helping one user and only them benefiting, on the forums I could help many more people through a single suggestion. For some, this floated well and with others it didn’t, because we weren’t taking care of the user personally (like on the chat).

It definitely took some time for me to adjust to this new setting, as things were (and are) handled differently on the forum and on the knowledge base. Users on the forum sometimes do respond immediately, but most of the time they respond later, and some actually don’t respond at all. This is one of the differences between helping out through live chat and through the forums.

The knowledge base on the other hand, can be really complex. There is markup being used to present text in a different way to different users. We must be as clear and precise as possible when writing the article, since although we may know really well what we are talking about, the article reader (usually a user in need of helpful information) may not. It is definitely challenging for some Mozillians to get involved with writing, but once you do, you get the hang of it and truly enjoy it.

From there on, I kept contributing to the forum and knowledge base, but I also went to find out how I could contribute to other areas of Mozilla. I landed upon triaging bugs within Mozilla sites thanks to the help of Liz Henry and Tyler Downer. Furthermore, as Firefox OS rolled out, I started to provide support to the users, write more articles and file bugs in regards to the OS.

As things moved forward so did life – at the moment I am contributing through the Social Support team. Contributing through Social helps our users on social media realise that we are listening to them and that their comments and woes are not falling on deaf ears. We respond to all types of concerns, be they praises or complaints. Helping users on Twitter while being restricted to 140 characters is difficult, whereas on Facebook we can provide a more detailed explanation and response. With Social Support, a single response from us sometimes reaches only a single person – other times it can reach thousands through re-sharing.

Social media makes it easy to identify issues, crises, and hot topics – it is where people nowadays go to seek assistance, rant, and share their experiences. Also, as posts and tweets can spread easily on social media, it is a double-edged sword: if something positive is spreading, we hope it spreads more. However, if something negative is spreading, we must contain it, and identify and address the root cause of the issue. The bottom line is: we must help our users while keeping everything in balance and being constantly vigilant.

In 2013, I was very thankful that I was able to attend the Summit that was held in 3 places across the world. I was invited to Toronto, where I held a session called “What does ‘Mozillian’ mean?” In that session, we defined what the term “Mozillian” meant, who was included, not included, and what roles and capabilities were necessary to classify an individual as a Mozillian. At the end of the session, we touched base via email to finalize our thoughts and gather the necessary information to pass along to others. Although we made some progress, defining who a Mozillian is, who can (or can’t) be one, and setting specific criteria is somewhat impossible. We must be accepting of those who come and go, those with different backgrounds, personal preferences regarding getting things done, and (sometimes highly) different opinions. All that said, we are a huge family – a huge Mozilla family.

Thank you Andrew for sharing your story with us. I personally appreciate your relaxed and flexible perspective on (sometimes inevitable) changes and challenges we all face when trying to make Mozilla work for the users of the web.

Here’s to many more great chances for you to rock the (helpful, but not only) web with Mozilla and others!

Planet MozillaWebDriver F2F - July 2016

Last week saw the latest WebDriver F2F to work on the specification. We held the meeting at the Microsoft campus in Redmond, Washington.

The agenda for the meeting was placed, as usual, on the W3 Wiki. We had quite a lot to discuss and, as always, it was a very productive meeting.

The meeting notes are available for Wednesday and Thursday. The most notable items are:

  • Finalising Actions in the specification
  • newSession
  • Certificate handling on navigation
  • Specification tests

We also welcomed Apple to their first WG meeting. You may have missed it, but there is going to be a Safari Driver built into macOS.

Planet WebKitImprovements in MathML Rendering

MathML is a W3C recommendation and an ISO/IEC standard that can be used to write mathematical content on HTML5 web pages, EPUB3 e-books and many other XML-based formats. Although it has been supported by WebKit for a long time, the rendering quality of mathematical formulas was not as high as one would expect and several important MathML features were missing. However, Igalia has recently contributed to WebKit’s MathML implementation and big improvements are already available in Safari Technology Preview 9. We give an overview of these recent changes as well as screenshots of beautiful mathematical formulas rendered by the latest Safari Technology Preview. The MathML demos from which the screenshots were taken are also provided and you are warmly invited to try them yourself.

Mathematical Fonts

We continue to rely on Open Font Format features to improve the quality of the mathematical rendering. The default user agent stylesheet will try to find known mathematical fonts, but you can always set the font-family on the <math> element to pick your favorite font. In the following screenshot, the first formula is rendered with Latin Modern Math while the second one is rendered with Libertinus Math. The last formula is rendered with an obsolete version of the STIX fonts lacking many Open Font Format features. For example, one can see that integral and summation symbols are too small or that their scripts are misplaced. Scheduled for the third quarter of 2016, STIX 2 will hopefully have many issues fixed and hence become usable for WebKit’s MathML rendering.

Screenshot of MathML with different fonts in Safari Technology Preview 9

Hyperlinks

WebKit now supports the href attribute which can be used to set hyperlinks on any part of a formula. This is useful to provide references (e.g. notations or theorems) as shown in the following example.

Screenshot of MathML hyperlinks in Safari Technology Preview 9

Mathvariants

Unicode contains Mathematical Alphanumeric Symbols to convey special meaning. WebKit now uses the italic characters from this Unicode block for mathematical variables and the mathvariant attribute can also be used to easily access these special characters. In the example below, you can see italic, fraktur, blackboard bold and script variables.

Screenshot of MathML mathvariants in Safari Technology Preview 9

Operators

A big refactoring has been performed on the code handling stretchy operators, large symbols and radicals. As a consequence the rendering quality is now much better and many weird bugs have been fixed.

Screenshot of MathML operators in Safari Technology Preview 9

Displaystyle

Mathematical formulas can be integrated inside a paragraph of text (inline math in TeX terminology) or displayed in their own horizontally centered paragraph (display math in TeX terminology). In the latter case, the formula is in displaystyle and does not have any restrictions on vertical spacing. In the former case, the layout of the mathematical formula is modified a bit to minimize this vertical spacing and to better integrate within the surrounding text. The displaystyle property can also be set using the corresponding attribute or can change automatically in subformulas (e.g. in fractions or scripts). The screenshot below shows the layout difference according to whether the equation is in displaystyle or not. Note that the displaystyle property should also affect the font-size but WebKit does not support the scriptlevel yet.

Screenshot of MathML displaystyle in Safari Technology Preview 9

OpenType MATH Parameters

Many new OpenType MATH features have been implemented following the MathML in HTML5 Implementation Note:

  • Use of the AxisHeight parameter to set the vertical position of fractions, tables and symmetric operators.
  • Use of layout constants for radicals, scripts and fractions in order to improve math spacing and positioning.
  • Use of the italic correction of large operator glyph to set the position of subscripts.

The screenshots below illustrate some of these improvements. For the first one, the use of AxisHeight allows the fraction bars to be better aligned with the plus, minus and equal signs. For the second one, the use of layout constants for scripts as well as the italic correction of the surface integral improves the placement of its subscript.

Screenshot of MathML examples with OpenType MATH parameters in Safari Technology Preview 9

Arabic Mathematics

WebKit already had support for the right-to-left mathematical layout used to write Arabic mathematical notations. Although glyph-level mirroring is not supported yet, we added support for right-to-left radicals. This allows basic arithmetic notations to be used, as shown in the image that follows.

Screenshot of Arabic Mathematics in Safari Technology Preview 9

Conclusion

Great changes have happened in WebKit’s MathML support recently and Igalia’s web platform team is excited about seeing beautiful mathematical formulas in WebKit-based browsers and e-readers! If you have any comments, questions or feedback, you can contact Frédéric at fwang@igalia.com or Manuel at @regocas. Detailed bug reports are also welcome to WebKit’s bug tracker. We are still refining the implementation so stay tuned for more awesome news!

Planet MozillaIllusion of atomic reference counting

Most people believe that having an atomic reference counter makes it safe to use RefPtr on multiple threads without any further synchronization.  The opposite may be true, though!

Imagine some simple code using our commonly used helper classes: RefPtr<> and an object Type with a ThreadSafeAutoRefCnt reference counter and standard AddRef and Release implementations.
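
For illustration, here is a minimal sketch of roughly what such a type looks like. It uses std::atomic directly instead of Mozilla's actual ThreadSafeAutoRefCnt helper, and the names and memory orderings are illustrative assumptions rather than the real implementation; the only point that matters here is that the reference counter itself is atomic, and nothing else is.

#include <atomic>
#include <cstdint>

// Illustrative sketch only; the real ThreadSafeAutoRefCnt and
// NS_INLINE_DECL_THREADSAFE_REFCOUNTING differ in the details.
class Type
{
public:
  void AddRef()
  {
    mRefCnt.fetch_add(1, std::memory_order_relaxed);
  }

  void Release()
  {
    // If this drops the last reference, destroy the object.
    if (mRefCnt.fetch_sub(1, std::memory_order_acq_rel) == 1) {
      delete this;
    }
  }

protected:
  ~Type() = default; // only destroyed via Release()

private:
  std::atomic<std::uintptr_t> mRefCnt{0};
};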

Sounds safe, but there is a glitch most people may not realize.  See an example where one piece of code is doing this, no additional locks involved:

RefPtr<Type> local = mMember; // mMember is RefPtr<Type>, holding an object

And other piece of code then, on a different thread presumably:

mMember = new Type(); // mMember's value is rewritten with a new object

Usually, people believe this is perfectly safe.  But it’s far from it.

Just break this down into the actual atomic operations and put the two threads side by side:

Thread 1                          Thread 2
--------                          --------
local.value = mMember.value;
/* context switch */
                                  Type* temporary = new Type();
                                  temporary->AddRef();
                                  Type* old = mMember.value;
                                  mMember.value = temporary;
                                  old->Release();
                                  /* context switch */
local.value->AddRef();

The same applies to clearing a member (or a global, while we are at it) while some other thread may try to grab a reference to it:

RefPtr<Type> service = sService;
if (!service) {
  return; // service being null is our 'after shutdown' flag
}

And another thread doing, usually during shutdown:

sService = nullptr; // while sService was holding an object

And here is what actually happens:

Thread 1                          Thread 2
--------                          --------
service.value = sService.value;
/* context switch */
                                  Type* old = sService.value;
                                  sService.value = nullptr;
                                  old->Release();
                                  /* context switch */
service.value->AddRef();

And where is the problem?  Clearly, if the Release() call on the second thread is the last one on the object, the AddRef() on the first thread will do its job on a dying or already dead object.  The only correct way is to have both the in and out assignments protected by a mutex, or to ensure that there cannot be anyone trying to grab a reference from a globally accessed RefPtr when it's finally being released or just being re-assigned.  The latter may not always be easy or even possible.
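
To make the mutex option concrete, here is a minimal sketch of that pattern. It is an illustration only: it uses std::mutex and std::lock_guard rather than Mozilla's own Mutex class, and the GetService/SetService helpers are names invented for the example rather than anything from the post; Type stands for a thread-safely ref-counted class as used throughout.

#include <mutex>
#include <utility>

#include "mozilla/RefPtr.h" // Mozilla's RefPtr, as used in this post

// Illustrative sketch: a globally shared RefPtr guarded by a mutex.  Every
// read that takes a strong reference and every write that re-assigns or
// clears the pointer goes through the same lock, so a reader's AddRef()
// always happens before a writer's Release() can drop the last reference.
static std::mutex sServiceMutex;
static RefPtr<Type> sService;

RefPtr<Type> GetService()
{
  std::lock_guard<std::mutex> lock(sServiceMutex);
  return sService; // the returned copy AddRefs while the lock is still held
}

void SetService(Type* aNewService)
{
  RefPtr<Type> old;
  {
    std::lock_guard<std::mutex> lock(sServiceMutex);
    old = std::move(sService); // take the previous value out under the lock
    sService = aNewService;    // AddRefs the new value (may be nullptr)
  }
  // 'old' goes out of scope and Releases here, outside the lock; no other
  // thread can reach that object through sService any more, and running a
  // potentially expensive destructor while holding the mutex is avoided.
}

Every caller then takes its own strong reference through GetService(), and shutdown code simply calls SetService(nullptr).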

Anyway, if somebody has a suggestion how to solve this universally without using an additional lock, I would be really interested!

The post Illusion of atomic reference counting appeared first on mayhemer's blog.

Planet MozillaSamsung’s L-ish Model Numbers

A slow hand clap for Samsung, who have managed to create versions of the S4 Mini phone with model numbers (among others):

  • GT-i9195
  • GT-i9195L (big-ell)
  • GT-i9195i (small-eye)
  • GT-i9195l (small-ell)

And of course, the small-ell variant, as well as being case-confusable with the big-ell variant and visually confusable with the small-eye variant if it’s written with a capital I as, say, here, is in fact an entirely different phone with a different CPU and doesn’t support the same aftermarket firmware images that all of the other variants do.

See this post for the terrible details.

Planet MozillaTenFourFox 45 is a thing

The browser starts. Lots of problems but it boots. More later.

Planet MozillaMozci and pulse actions contributions opportunities

We've recently finished a season of feature development adding TaskCluster support to add new jobs to Treeherder on pulse_actions.

I'm now looking at what optimizations or features are left to complete. If you would like to contribute feel free to let me know.

Here's some highlighted work (based on pulse_actions issues and bugs):
This will help us save money in Heroku since using Buildapi + buildjson files is memory hungry and requires us to use bigger Heroku nodes.
This is important to help us change the behaviour of the Heroku app without having to commit any code. I've used this in the past to modify the logging level when debugging an issue.

This is also useful if we want to have different pipelines in Heroku. 
Having Heroku pipelines help us to test different versions of the software.
This is useful if we want to have a version running from 'master' against the staging version of Treeherder.
It would also help contributors to have a version of their pull requests running live.
We don't have any tests running. We need to determine how to run a minimum set of tests to have some confidence in the product.

This needs integration tests of Pulse messages.
The comment in the bug is rather accurate and it shows that there are many small things that need fixing.
Manual backfilling uses Buildapi to schedule jobs. If we switched to scheduling via TaskCluster/Buildbot-bridge we would get better results since we can guarantee proper scheduling of a build + associated dependent jobs. Buildapi does not give us this guarantee. This is mainly useful when backfilling PGO test and talos jobs.

If instead you're interested in contributing to mozci, you can have a look at the issues.


This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Planet MozillaWhat’s Up with SUMO – 21st July

Hello, SUMO Nation!

Chances are you have noticed that we had some weird temporal issues, possibly caused by a glitch in the spacetime continuum. I don’t think we can pin the blame on the latest incarnation of Dr Who, but you never know… Let’s see what the past of the future brings then, shall we?

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on the 27th of July!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

  • If you’re an active localizer in one of the top 20+ locales, expect a list of high priority articles coming your way within the next 24 hours. Please make sure that they are localized as soon as possible – our users rely on your awesomeness!
  • Final reminder: remember the discussion about the frequency & necessity of KB updates and l10n notifications? We’re trying to address this for KB editors and localizers alike. Give us your feedback!
  • Reminder: L10n hackathons everywhere! Find your people and get organized! If you have questions about joining, contact your global locale team.

Firefox

  • for Android
    • Version 48 is still on track – release in early August.
  • for Desktop
    • Version 48 is still on track – release in early August.

Now that we’re safely out of the dangerous vortex of a spacetime continuum loop, I can only wish you a great weekend. Take it easy and keep rocking the helpful web!

Planet MozillaMetrics, Metrics, Metr.. wait what do you do?

Metrics, Metrics, Metr.. wait what do you do? 2016 Intern presentation - Sharon Mui

Planet MozillaNew WebExtensions Guides and How-tos on MDN

The official launch of WebExtensions is happening in Firefox 48, but much of what you need is already supported in Firefox and AMO (addons.mozilla.org). The best place to get started with WebExtensions is MDN, where you can find a trove of helpful information. I’d like to highlight a couple of recent additions that you might find useful:

Thank you to Will Bamberg for doing the bulk of this work. Remember that MDN is a community wiki, so anyone can help!

Planet MozillaWeb QA Team Meeting, 21 Jul 2016

Web QA Team Meeting They say a Mozilla Web QA team member is the most fearless creature in the world. They say their jaws are powerful enough to crush...

Planet MozillaReps weekly, 21 Jul 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
