Planet Mozilla: TenFourFox FPR4b1 available

TenFourFox Feature Parity Release 4 beta 1 is now available (downloads, hashes, release notes).

I didn't get everything into this release that I was hoping to; CSS Grid and some additional DOM features are going to have to wait until FPR5. Still, there's quite a bit in FPR4, including more AltiVec conversions (this time the library function we're making over is strchr()), layout speed enhancements and hopefully a final fix for issue 72. That was a particularly interesting fix because it turns out there are actually two OS bugs in 10.5 that not only caused the issue but made it a little more involved to mitigate; read the issue if you're interested in the gory technical details, but basically we can overwhelm Leopard with our popup window events, and on top of that we can't even detect the misplaced clicks that result because the NSEvent's underlying CGEvent has incorrectly displaced coordinates. Since it does much the same work to patch around the OS as the fix for issue 248 (which also affects 10.4), even though the two issues have completely different root causes, I mostly combined the code for the two fixes to simplify the situation. It's not well tested, however, so I haven't uploaded it to the tree yet in case I have to back it out like I did the last time. Once we've determined it fixes the problem and it doesn't regress anything, I'll commit and push it.

The two major user-facing changes relate to fonts and HTML5 video. On the font side, we now have the same versions of the Brotli, OTS, WOFF2 and Harfbuzz libraries as Firefox 57, meaning we now support the latest iteration of WOFF2 webfonts as well and pick up all the rendering and performance improvements along the way. (This also upgrades Brotli decompression for the websites that support it, and I added some PowerPC-specific enhancements to the in-tree Brotli to use our native assembly byteswapping instructions for endian conversion. I should try to push this upstream when I get a round tuit.) This version of TenFourFox also fixes a longstanding issue where we couldn't display Graphite fonts for minority writing systems; they just wouldn't load due to a compiler issue where one of the key structs was computed with the wrong size, causing the browser to bail out. Before you upgrade, look at that link in FPR3 and note that because of this fallback the Burmese Padauk font has the wrong washwes and the Nastaʿlīq font at the bottom is missing all the ligatures and glyph substitutions shown in the comparison screenshot. In FPR4, this is all corrected and everything appears perfectly. As a formally trained linguist (BA, University of California) and a Christian, I find the work SIL International is doing with writing systems to be fascinating and hopefully this will make TenFourFox more useful to our users in foreign climes.

On the video side, the YouTube redesign has been an unmitigated dumpster fire for performance on older machines. Not only does it require a lot more system resources, it also broke a lot of older video-downloading addons that depended on the prior layout (on purpose?). It's not entirely misguided, though: while the lazy loader they appear to be using makes it very hard to deterministically reason about what loads when, after the first video finally grinds through, subsequent ones do require much less work. (This is part of Google's attempt to get you to just "leave YouTube on" like your TV, I suspect.) I retuned the media decoder state machine to deal with these changes, and the balance I hit on makes the browser pre-render a lot more frames (not just buffer, but actually pre-decode prior to playback) and pushes much smaller sets to the compositor instead of drowning it in frames that arrive too late and then have to be taken back out. With this change my Quad G5 is able to play most videos in Reduced mode nearly as well as before -- it does not completely erase the loss in performance, but it does improve things.

This retuning also benefits HTML5 video playback in general, not just on YouTube. You can see the difference on other WebM and Theora videos, such as the ones on Mozilla's own pages, or Wikipedia (WebM VP8 example, Theora VP3 example) — although there is an initial delay while the video pre-decodes, playback should be a fair bit less choppy. Even full-screen playback is no longer "LOL" in theory, though in practice still probably more stuttery than people would like. The same general limitations apply as before; for example, my Quad G5 handles VP9 with MSE fine, but my 10.5 DLSD PowerBook G4 becomes a slideshow due to VP9's higher bitrate and strongly prefers VP8. As such, the default setting is still to disable MSE, and I discourage enabling it except on low-spec G4 systems near the 1.25GHz cutoff (to use the lower 144p and 240p resolutions) and high-end 2.5GHz/2.7GHz G5 systems (to use the 360p and 480p options if desired).

FPR4 also introduces an experimental (disabled by default) set of features specifically for YouTube but possibly beneficial elsewhere, namely decode delay and Mach monitoring. Decode delay adds a "wait state" between page load and video playback so that the rest of the page can (eventually) load and the video won't get stomped on by other page display tasks requiring the CPU. In a similar fashion, Mach monitoring looks at the kernel-provided Mach factor at various intervals and if not enough CPU resources are available, inserts a "wait state" right then and there to temporarily delay playback until the CPU load goes down.

The reason these aren't enabled by default is that 1) I'm not sure what the proper values should be, or what a reasonable default is, and 2) longer values can cause some issues on YouTube with very short clips (particularly the interstitial ads) because their code doesn't expect the browser to suddenly take a timeout during playback. When this happens and an ad won't play, you can probably get around it by reloading the page. But you can still play with these settings and see what works for you. Post your findings in the comments along with your system specs, speed, RAM, etc. NB: You may need to restart the browser for some of these settings to stick, as they are cached for performance.

To introduce a decode delay, create a new integer preference in about:config and set it to the number of seconds you want. If you set it to zero (0), or delete the preference, there is no decode delay (the default). Every video played will incur the decode delay, but only once, upon initial playback. The idea with YouTube is to use a nice long decode delay to let all the other crap lazy-load, and then your video can queue up in peace.

Mach monitoring is based on Mach factor: the lower the factor, the more load is on the system (the reverse of load average in concept); zero, then, means all cores are 100% occupied. The defaults are a critical Mach factor of 450, a delay of five (5) seconds, and zero (0) maximum tries, which essentially disables the feature. If the preferences do not exist (the default), these defaults are used, meaning monitoring is not in effect. At various times the state machine will sample the Mach factor for the entire computer. If the Mach factor is less than the critical point, such as when the browser is trying to load YouTube comments, a playback delay is introduced (note that a delay of zero may still cause the browser to buffer video without an explicit delay, so this is not the same thing as disabling the feature entirely). The browser will only do this up to the maximum number of tries per video to prevent playback thrashing. Systems that are at their limit decoding video, or very busy otherwise, will likely need the Mach factor set rather low or the browser will blow through all the tries back to back before it even plays a single frame. Likewise, more maximum tries rather than longer delays may reduce problems with short clips but can cause irritating stalls later on; you'll have to find the balance that works for you. A tool like iStat or MenuMeters can give you an idea of how much processing headroom your system has.
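To make the mechanics concrete, here is a minimal standalone sketch of that retry logic (TenFourFox itself is C++; the names here are hypothetical, and a real sampler would query the kernel, e.g. via host_statistics with HOST_LOAD_INFO on Mach-based systems):

```rust
// Illustrative defaults mirroring the values described above.
const CRITICAL_MACH_FACTOR: u32 = 450; // below this the system is "busy"
const DELAY_SECONDS: u64 = 5;          // length of each inserted wait state
const MAX_TRIES: u32 = 3;              // the shipped default of 0 disables this

/// Insert wait states until the load eases or we run out of tries.
/// Returns how many wait states were actually inserted.
fn maybe_delay_playback(
    sample_mach_factor: impl Fn() -> u32, // stands in for the kernel query
    mut insert_wait: impl FnMut(u64),     // e.g. pause the decoder
) -> u32 {
    let mut tries = 0;
    while tries < MAX_TRIES && sample_mach_factor() < CRITICAL_MACH_FACTOR {
        insert_wait(DELAY_SECONDS); // temporarily pause playback
        tries += 1;
    }
    tries
}
```

With an idle system (high Mach factor) no wait states are inserted; on a saturated system the loop burns through all the tries back to back, which is the thrashing behavior described above.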

Finally, this version removes the "Get Add-ons" tab from the Add-ons Manager, as threatened, er, promised. Since the future is WebExtensions, and TenFourFox isn't compatible, there's no point in advertising them to our userbase. You can still download older legacy addons from AMO; I do still support them (remember: "best effort" only), and they will still install. I may resurrect this tab if someone(tm) develops a site to host these old addons.

For FPR5 my plan is to expand the use of our VMX-accelerated strchr() to more places, add CSS Grid, add some additional DOM features, and maybe start work on date and time pickers. The other major change I'd like to make is an overhaul of the session store system. The argument is that session stores run too frequently and chew up SSDs by writing the state of the browser to disk too often. As a fellow SSD user (a Samsung 512GB 850 PRO) I agree with this concern up to a point (which is why we have a 25-second interval instead of the default 15-second interval used in Firefox), but I think it's more profitable to reduce the size of the writes instead of making the interval excessively long: our systems aren't getting any younger and some of the instability we get reports on turned out to be undiagnosed system conflicts or even failing hardware. If we had a very long interval, it's possible these people might have lost data. The session store, like any backup method, only works if you use it!

Like everything else, you can tune this to your taste (though don't come crying to me if you muck it up). However, I think a reasonable first pass would be to do some code cleanup and then reduce the amount of history that gets serialized by reducing the number of back and forward documents to 5 (currently 10 and no limit respectively), and automatically purging closed windows and tabs after a certain timeframe, maybe a couple hours (see issue 444 for the relevant prefs). Making the interval 30 seconds instead of 25 shouldn't be a big loss either. But if you have other ideas, feel free to post them in the comments.
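As a sketch of the size-reduction idea (not TenFourFox's actual session store code; the types and limits here are hypothetical), trimming each tab's serialized history before it is written might look like:

```rust
// A tab's serialized session history, with documents behind and ahead
// of the current one. Strings stand in for serialized document state.
struct TabHistory {
    back: Vec<String>,
    forward: Vec<String>,
}

/// Cap the number of back/forward documents that get serialized,
/// keeping the entries nearest the current document.
fn trim_history(tab: &mut TabHistory, max_back: usize, max_forward: usize) {
    if tab.back.len() > max_back {
        // Drop the oldest back entries, keeping the most recent ones.
        let excess = tab.back.len() - max_back;
        tab.back.drain(..excess);
    }
    // Forward entries nearest the current document come first, so truncate.
    tab.forward.truncate(max_forward);
}
```

Capping both lists bounds the write size per interval no matter how deep a tab's history gets, which is the point of shrinking writes rather than stretching the interval.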

FPR4 final is scheduled for release November 14.

Planet Mozilla: Chalk meets SLG

For the last month or so, I’ve gotten kind of obsessed with exploring a new evaluation model for Chalk. Specifically, I’ve been looking at adapting the SLG algorithm, which is used in the XSB Prolog engine. I recently opened a PR that adds this SLG-based solver as an alternative, and this blog post is an effort to describe how that PR works, and explore some of the advantages and disadvantages I see in this approach relative to the current solver that I described in my previous post.


For those who don’t want to read all the details, let me highlight the things that excite me most about the new solver:

  • There is a very strong caching story based on tabling.
  • It handles negative reasoning very well, which is important for coherence.
  • It guarantees termination by relying not on overflow but on a notion of maximum size.
  • There is a lot of work on how to execute SLG-based designs very efficiently (including virtual machine designs).

However, I also have some concerns. For one thing, we have to figure out how to include coinductive reasoning for auto traits and a few other extensions. Secondly, the solver as designed always enumerates all possible answers up to a maximum size, and I am concerned that in practice this will be very wasteful. I suspect both of these problems can be solved with some tweaks.

What is this SLG algorithm anyway?

There is a lot of excellent work exploring the SLG algorithm and extensions to it. In this blog post I will just focus on the particular variant that I implemented for Chalk, which was heavily based on the paper “Efficient Top-Down Computation of Queries Under the Well-Founded Semantics” by Chen, Swift, and Warren (JLP ‘95), though with some extensions from other work (and some of my own).

Like a traditional Prolog solver, this new solver explores all possibilities in a depth-first, tuple-at-a-time fashion, though with some extensions to guarantee termination[1]. Unlike a traditional Prolog solver, however, it natively incorporates tabling and has a strong story for negative reasoning. In the rest of the post, I will go into each of those bolded terms in more detail (or you can click on one of them to jump directly to the corresponding section).

All possibilities, depth-first, tuple-at-a-time

One important property of the new SLG-based solver is that it, like traditional Prolog solvers, is complete, meaning that it will find all possible answers to any query[2]. Moreover, like Prolog solvers, it searches for those answers in a so-called depth-first, tuple-at-a-time fashion. What this means is that, when we have two subgoals to solve, we will fully explore the implications of one answer through multiple subgoals before we turn to the next answer. This stands in contrast to our current solver, which instead breaks down goals into subgoals and processes each of them entirely before turning to the next. As I’ll show you now, our current solver can sometimes fail to find solutions as a result (but, as I’ll also discuss, our current solver’s approach has advantages too).

Let me give you an example to make it more concrete. Imagine this program:

// sour-sweet.chalk
trait Sour { }
trait Sweet { }

struct Vinegar { }
struct Lemon { }
struct Sugar { }

impl Sour for Vinegar { }
impl Sour for Lemon { }

impl Sweet for Lemon { }
impl Sweet for Sugar { }

Now imagine that we had a query like:

exists<T> { T: Sweet, T: Sour }

That is, find me some type T that is both sweet and sour. If we plug this into Chalk’s current solver, it gives back an “ambiguous” result (this is running on my PR):

> cargo run -- --program=sour-sweet.chalk
?- exists<T> { T: Sour, T: Sweet }
Ambiguous; no inference guidance

This is because of the way that our solver handles such compound queries; specifically, the way it breaks them down into individual queries and performs each one recursively, always looking for a unique result. In this case, it would first ask “is there a unique type T that is Sour?” Of course, the answer is no – there are two such types. Then it asks about Sweet, and gets the same answer. This leaves it with nowhere to go, so the final result is “ambiguous”.

The SLG solver, in contrast, tries to enumerate individual answers and see them all the way through. If we ask it the same query, we see that it indeed finds the unique answer Lemon (note the use of --slg in our cargo run command to enable the SLG-based solver):

> cargo run -- --program=sour-sweet.chalk --slg
?- exists<T> { T: Sour, T: Sweet }     
1 answer(s) found:
- ?0 := Lemon

This result is saying that the value for the 0th (i.e., first) existential variable in the query (i.e., T) is Lemon.[3]

In general, the SLG solver proceeds in a kind of nested loop. To solve a query like exists<T> { T: Sour, T: Sweet }, it is effectively doing something like this:

for T where (T: Sour) {
  if (T: Sweet) {
    report_answer(T);
  }
}

(The actual structure is a bit more complex because of the possibility of cycles; this is where tabling, the subject of a later section, comes in, but this will do for now.)

As we have seen, a tuple-at-a-time strategy finds answers that our current strategy, at least, does not. If we adopted this strategy wholesale, this could have a very concrete impact on what the Rust compiler is able to figure out. Consider these two functions, for example (assuming that the traits and structs we declared earlier are still in scope):

fn foo() {
  let vec: Vec<_> = vec![];
  //           ^
  //           |
  // NB: We left the element type of this vector
  // unspecified, so the compiler must infer it.

  bar(vec);
  //  ^
  //  |
  // This call effectively generates the two constraints
  //     ?T: Sweet
  //     ?T: Sour
  // where `?T` is the element type of our vector.
}

fn bar<T: Sweet + Sour>(x: Vec<T>) {
}

Here, we wind up creating the very sort of constraint I was talking about earlier. rustc today, which follows a chalk-like strategy, will fail compilation, demanding a type annotation:

error[E0282]: type annotations needed
  --> src/
  15 |   let vec: Vec<_> = vec![];
     |       ---           ^^^^^^ cannot infer type for `T`
     |       |
     |       consider giving `vec` a type

An SLG-based solver of course could find a unique answer here. (Also, rustc could give a more precise error message here regarding which type you ought to consider giving.)

Now, you might ask, is this a realistic example? In other words, here there happens to be a single type that is both Sour and Sweet, but how often does that happen in practice? Indeed, I expect the answer is “quite rarely”, and thus the extra expressiveness of the tuple-at-a-time approach is probably not that useful in practice. (In particular, the type-checker does not want to “guess” types on your behalf, so unless we can find a single, unique answer, we don’t typically care about the details of the result.) Still, I could imagine that in some narrow circumstances, especially in crates like Diesel that use traits as a complex form of meta-programming, this extra expressiveness may be of use. (And of course having the trait solver fail to find answers that exist kind of sticks in your craw a bit.)

There are some other potential downsides to the tuple-at-a-time approach. For example, there may be an awfully large number of types that implement Sweet, and we are going to effectively enumerate them all while solving. In fact, there might even be an infinite set of types! That brings me to my next point.

Guaranteed termination

Imagine we extended our previous program with something like a type HotSauce<T>. Naturally, if you add hot sauce to something sour, it remains sour, so we can also include a trait impl to that effect:

struct HotSauce<T> { }
impl<T> Sour for HotSauce<T> where T: Sour { }

Now if we have the query exists<T> { T: Sour }, there are actually an infinite set of answers. Of course we can have T = Vinegar and T = Lemon. And we can have T = HotSauce<Vinegar> and T = HotSauce<Lemon>. But we can also have T = HotSauce<HotSauce<Lemon>>. Or, for the real hot-sauce enthusiast[4], we might have:

T = HotSauce<HotSauce<HotSauce<HotSauce<Lemon>>>>

In fact, we might have an infinite number of HotSauce types wrapping either Lemon or Vinegar.

This poses a challenge to the SLG solver. After all, it tries to enumerate all answers, but in this case there are an infinite number! The way that we handle this is basically by imposing a maximum size on our answers. You could measure size various ways. A common choice is to use depth, but the total size of a type can still grow exponentially relative to the depth, so I am instead limiting the maximum size of the tree as a whole. So, for example, our really long answer had a size of 5:

T = HotSauce<HotSauce<HotSauce<HotSauce<Lemon>>>>

The idea then is that once an answer exceeds that size, we start to approximate the answer by introducing variables.[5] In this case, if we imposed a maximum size of 3, we might transform that answer into:

exists<U> { T = HotSauce<HotSauce<U>> }

The original answer is an instance of this – that is, we can substitute U = HotSauce<HotSauce<Lemon>> to recover it.

Now, when we introduce variables into answers like this, we lose some precision. We can now only say that exists<U> { T = HotSauce<HotSauce<U>> } might be an answer, we can’t say for sure. It’s a kind of “ambiguous” answer[6].
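To make the truncation step concrete, here is a toy sketch (not Chalk's actual data structures) that measures a type's total size and, past the maximum, replaces the remainder with a fresh variable, flagging the answer as ambiguous:

```rust
// A toy type tree for the running example.
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Lemon,
    HotSauce(Box<Ty>),
    Var(usize), // an existential variable introduced by truncation
}

/// Total number of nodes in the type tree.
fn size(ty: &Ty) -> usize {
    match ty {
        Ty::Lemon | Ty::Var(_) => 1,
        Ty::HotSauce(inner) => 1 + size(inner),
    }
}

/// Approximate `ty` so it fits within `max_size`, introducing fresh
/// variables where needed. The bool reports whether truncation occurred,
/// in which case the answer must be marked "ambiguous".
fn truncate(ty: &Ty, max_size: usize, next_var: &mut usize) -> (Ty, bool) {
    if size(ty) <= max_size {
        return (ty.clone(), false);
    }
    match ty {
        Ty::HotSauce(inner) if max_size > 1 => {
            // Keep this constructor and approximate what's underneath.
            let (t, _) = truncate(inner, max_size - 1, next_var);
            (Ty::HotSauce(Box::new(t)), true)
        }
        _ => {
            // No room left: replace the whole subtree with a variable.
            let v = Ty::Var(*next_var);
            *next_var += 1;
            (v, true)
        }
    }
}
```

Running this on the size-5 answer above with a maximum size of 3 yields HotSauce<HotSauce<?var>>, matching the exists<U> { T = HotSauce<HotSauce<U>> } form in the text.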

So let’s see it in action. If I invoke the SLG solver using a maximum size of 3, I get the following:[7]

> cargo run -- --program=sour-sweet.chalk --slg --overflow-depth=3
?- exists<T> { T: Sour }
7 answer(s) found:
- ?0 := Vinegar
- ?0 := Lemon
- ?0 := HotSauce<Vinegar>
- ?0 := HotSauce<Lemon>
- exists<U0> { ?0 := HotSauce<HotSauce<?0>> } [ambiguous]
- ?0 := HotSauce<HotSauce<Vinegar>>
- ?0 := HotSauce<HotSauce<Lemon>>

Notice that middle answer:

- exists<U0> { ?0 := HotSauce<HotSauce<?0>> } [ambiguous]

This is precisely the point where the abstraction mechanism kicked in, introducing a variable. Note that the two instances of ?0 here refer to different variables – the first one, in the “key”, refers to the 0th variable in our original query (what I’ve been calling T). The second ?0, in the “value” refers, to the variable introduced by the exists<> quantifier (the U0 is the “universe” of that variable, which has to do with higher-ranked things and I won’t get into here). Finally, you can see that we flagged this result as [ambiguous], because we had to truncate it to make it fit the maximum size.

Truncating answers isn’t on its own enough to guarantee termination. It’s also possible to set up an ever-growing number of queries. For example, one could write something like:

trait Foo { }
impl<T> Foo for T where HotSauce<T>: Foo { }

If we try to solve (say) Lemon: Foo, we will then have to solve HotSauce<Lemon>: Foo, and HotSauce<HotSauce<Lemon>>: Foo, and so forth ad infinitum. We address this by the same kind of tweak. After a point, if a query grows too large, we can just truncate it into a shorter one[8]. So e.g. trying to solve

exists<T> HotSauce<HotSauce<HotSauce<HotSauce<T>>>>: Foo

with a maximum size of 3 would wind up “redirecting” to the query

exists<T> HotSauce<HotSauce<HotSauce<T>>>: Foo

Interestingly, unlike the “answer approximation” we did earlier, redirecting queries like this doesn’t produce imprecision (at least not on its own). The new query is a generalization of the old query, and since we generate all answers to any given query, we will find the original answers we were looking for (and then some more). Indeed, if we try to perform this query with the SLG solver, it correctly reports that there exists no answer (because this recursion will never terminate):

> cargo run -- --program=sour-sweet.chalk --slg --overflow-depth=3
?- Lemon: Foo
No answers found.

(The original solver panics with an overflow error.)


Tabling

The key idea of tabling is to keep, for each query that we are trying to solve, a table of answers that we build up over time. Tabling came up in my previous post, too, where I discussed how we used it to handle cyclic queries in the current solver. But the integration into SLG is much deeper.

In SLG, we wind up keeping a table for every subgoal that we encounter. Thus, any time that you have to solve the same subgoal twice in the course of a query, you automatically get to take advantage of the cached answers from the previous attempt. Moreover, to account for cyclic dependencies, tables can be linked together, so that as new answers are found, the suspended queries are re-awoken.

Tables can be in one of two states:

  • Completed: we have already found all the answers for this query.
  • Incomplete: we have not yet found all the answers, but we may have found some of them.

By the time the SLG processing is done, all tables will be in a completed state, and thus they serve purely as caches. These tables can also be remembered for use in future queries. I think integrating this kind of caching into rustc could be a tremendous performance enhancement.
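In miniature (with strings standing in for Chalk's real goal and substitution types, so the names here are illustrative only), the forest of tables might look something like this:

```rust
use std::collections::HashMap;

// One table per canonical subgoal, holding the answers found so far.
#[derive(Default)]
struct Table {
    answers: Vec<String>, // answer substitutions, e.g. "?0 := Lemon"
    completed: bool,      // true once all answers have been found
}

// The set of all tables, keyed by the canonicalized goal.
#[derive(Default)]
struct Forest {
    tables: HashMap<String, Table>,
}

impl Forest {
    /// Look up the table for a canonical goal, creating an (incomplete,
    /// empty) one if this is the first time we've seen the goal.
    fn table_for(&mut self, canonical_goal: &str) -> &mut Table {
        self.tables.entry(canonical_goal.to_string()).or_default()
    }
}
```

The second time any subgoal comes up, table_for returns the existing table, so previously computed answers are reused automatically; completed tables are pure caches.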

Variant- versus subsumption-based tabling

I implemented “variant-based tabling” – in practical terms, this means that whenever we have some subgoal G that we want to solve, we first convert it into a canonical form. So imagine that we are in some inference context and ?T is a variable in that context, and we want to solve HotSauce<?T>: Sour. We would replace that variable ?T with ?0, since it is the first variable we encountered as we traversed the type, thus giving us a canonical query like:

HotSauce<?0>: Sour

This is then the key that we use to lookup if there exists a table already. If we do find such a table, it will have a bunch of answers; these answers are in the form of substitutions, like

  • ?0 := Lemon
  • ?0 := Vinegar

and so forth. At this point, this should start looking familiar: you may recall that earlier in the post I was showing you the output from the chalk repl, which consisted of stuff like this:

> cargo run -- --program=sour-sweet.chalk --slg
?- exists<T> { T: Sour, T: Sweet }     
1 answer(s) found:
- ?0 := Lemon

This printout is exactly dumping the contents of the table that we constructed for our exists<T> { T: Sour, T: Sweet } query. That query would be canonicalized to ?0: Sour, ?0: Sweet, and hence we have results in terms of this canonical variable ?0.
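Here is a toy sketch of that canonicalization step (operating on goal strings rather than Chalk's real type representation): each inference variable is renamed to ?0, ?1, … in order of first appearance, so goals that differ only in variable names map to the same table key:

```rust
use std::collections::HashMap;

/// Rename each `?name` variable to `?0`, `?1`, ... by order of first
/// appearance, producing a canonical key for table lookup.
fn canonicalize(goal: &str) -> String {
    let mut map: HashMap<String, usize> = HashMap::new();
    let mut out = String::new();
    let mut chars = goal.chars().peekable();
    while let Some(c) = chars.next() {
        if c == '?' {
            // Collect the variable name following the '?'.
            let mut name = String::new();
            while let Some(&n) = chars.peek() {
                if n.is_alphanumeric() {
                    name.push(n);
                    chars.next();
                } else {
                    break;
                }
            }
            // Reuse the index if we've seen this variable before.
            let next = map.len();
            let idx = *map.entry(name).or_insert(next);
            out.push('?');
            out.push_str(&idx.to_string());
        } else {
            out.push(c);
        }
    }
    out
}
```

With this, HotSauce<?T>: Sour and HotSauce<?U>: Sour both canonicalize to HotSauce<?0>: Sour and therefore share one table, which is exactly what variant-based tabling requires.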

However, this form of tabling that I just described has its limitations. For example, imagine that we have the table for exists<T> { T: Sour, T: Sweet } all set up, but then I do a query like Lemon: Sour, Lemon: Sweet. In the solver as I wrote it today, this will create a brand new table and begin computation again. This is somewhat unfortunate, particularly for a setting like rustc, where we often solve queries first in generic form (during type-checking) and then later, during trans, we solve them again for specific instantiations.

The paper about SLG that I pointed you at earlier describes an alternative approach called “subsumption-based tabling”, in which you can reuse a table’s results even if it is not an exact match for the query you are doing. This extension is not too difficult, and we could consider doing something similar, though we’d have to do some more experiments to decide if it pays off.

(In rustc, for example, subsumption-based tabling might not help us that much; the queries that we perform at trans time are often not the same as the ones we perform during type-checking. At trans time, we are required to “reveal” specialized types and take advantage of other details that type-checking does not do, so the query results are somewhat different.)

Negative reasoning and the well-founded semantics

One last thing that the SLG solver handles quite well is negative reasoning. In coherence – and maybe elsewhere in Rust – we want to be able to support negative queries, such as:

not { exists<T> { Vec<T>: Foo } }

This would assert that there is no type T for which Vec<T>: Foo is implemented. In the SLG solver, this is handled by creating a table for the positive query (Vec<?0>: Foo) and letting that execute. Once it completes, we can check whether the table has any answers or not.

There are some edge cases to be careful of though. If you start to allow negative reasoning to be used more broadly, there are logical pitfalls that start to arise. Consider the following Rust impls, in a system where we supported negative goals:

trait Foo { }
trait Bar { }

impl<T> Foo for T where T: !Bar { }
impl<T> Bar for T where T: !Foo { }

Now consider the question of whether some type T implements Foo and Bar. The trouble with these two impls is that the answers to these two queries (T: Foo, T: Bar) are no longer independent from one another. We could say that T: Foo holds, but then T: Bar does not (because T: !Foo is false). Alternatively, we could say that T: Bar holds, but then T: Foo does not (because T: !Bar is false). How is the compiler to choose?

The SLG solver chooses not to choose. It is based on the well-founded semantics, which ultimately assigns one of three results to every query: true, false, or unknown. In the case of negative cycles like the one above, the answer is “unknown”.
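A tiny sketch of what a three-valued truth domain looks like (illustrative, not Chalk's implementation): negation flips true and false but leaves "unknown" alone, which is exactly the value assigned to goals caught in a negative cycle like the one above:

```rust
// The three truth values of the well-founded semantics.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Truth {
    True,
    False,
    Unknown, // e.g. goals involved in a negative cycle
}

/// Negation under the well-founded semantics: `not(Unknown)` stays
/// `Unknown`, so the solver never has to arbitrarily break the tie.
fn not(t: Truth) -> Truth {
    match t {
        Truth::True => Truth::False,
        Truth::False => Truth::True,
        Truth::Unknown => Truth::Unknown,
    }
}
```

Under this scheme, both T: Foo and T: Bar in the example come out Unknown rather than the solver arbitrarily picking one model over the other.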

(In contrast, our current solver will answer that both T: Foo and T: Bar are false, which is clearly wrong. I imagine we could fix this – it was an interaction we did not account for in our naive tabling implementation.)

Extensions and future work

The SLG papers themselves describe a fairly basic set of logic programs. These do not include a number of features that we need to model Rust. My current solver already extends the SLG work to cover first-order hereditary harrop clauses (meaning the ability to have queries like forall<T> { if (T: Clone) { ... } }) – this was relatively straight-forward. But I did not yet cover some of the other things that the current solver handles:

  • Coinductive predicates: To handle auto traits, we need to support coinductive predicates like Send. I am not sure yet how to extend SLG to handle this.
  • Fallback clauses: If you normalize something like <Vec<u32> as IntoIterator>::Item, the correct result is u32. The SLG solver gives back two answers, however: u32 or the unnormalized form <Vec<u32> as IntoIterator>::Item. This is not wrong, but the current solver understands that one answer is “better” than the other.
  • Suggested advice: in cases of ambiguity, the current solver knows to privilege where clauses and can give “suggestions” for how to unify variables based on those.

The final two points I think can be done in a fairly trivial fashion (though the full implications of fallback clauses may require some careful thought), but coinductive predicates seem a bit harder and may require some deeper tinkering.


Conclusions

I’m pretty excited about this new SLG-based solver. I think it is a big improvement over the existing solver, though we still have to work out the story for auto traits. The things that excited me the most:

  • The deeply integrated use of tabling offers a very strong caching story.
  • There is a lot of work on efficiently executing the SLG solving algorithm. The work I did is only the tip of the iceberg: there are existing virtual machine designs and other things that we could adapt if we wanted to.

I am also quite keen on the story around guaranteed termination. I like that it does not involve a concept of overflow – that is, a hard limit on the depth of the query stack – but rather simply a maximum size imposed on types. The problem with overflow is that it means that the results of queries wind up dependent on where they were executed, complicating caching and other things. In other words, a query that may well succeed can wind up failing just because it was executed as part of something else. This does not happen with the SLG-based solver – queries always succeed or fail in the same way.

However, I am also worried – most notably about the fact that the new solver as designed always enumerates all the answers to a query, even when that is unhelpful. I worry that this may waste a ton of memory in rustc processes, as we are often asked to solve silly queries like ?T: Sized during type-checking, which would basically wind up enumerating nearly all types in the system up to the maximum size. Still, I am confident that we can find ways to address this shortcoming in time, possibly without deep changes to the algorithm.

Credit where credit is due

I also want to make sure I thank all the authors of the many papers on SLG whose work I gleefully stole, er, built upon. This is a list of the papers that described techniques that went into the new solver, in no particular order; I’ve tried to be exhaustive, but if I forgot something, I’m sorry about that.


  1. True confessions: I have never (personally) managed to make a non-trivial Prolog program terminate. I understand it can be done. Just not by me.

  2. Assuming termination. More on that later.

  3. Some might say that lemons are not, in fact, sweet. Well fooey. I’m not rewriting this blog post now, dang it.

  4. Try this stuff, it’s for real.

  5. This technique is called “radial restraint” by its authors.

  6. In terms of the well-founded semantics that we’ll discuss later, its truth value is considered “unknown”.

  7. Actually, in the course of writing this blog post, I found I sometimes only see 5 answers, so YMMV. Some kind of bug I suppose. (Update: fixed it.)

  8. This technique is called “subgoal abstraction” by its authors.

Planet WebKit: Adrián Pérez de Castro: Web Engines Hackfest, 2017 Edition

At the beginning of October I had the wonderful chance of attending the Web Engines Hackfest in A Coruña, hosted by Igalia. This year we were over 50 participants, which was great for putting even more faces to IRC nick names, but more importantly it allowed hackers working at all levels of the Web stack to share a common space for a few days, making it possible to discuss complex topics and figure out the future of the projects which allow humanity to see pictures of cute kittens — among many other things.

Mandatory fluff (CC-BY-NC).

During the hackfest I worked mostly on three things:

  • Preparing the code of the WPE WebKit port to start making preview releases.

  • A patch set which adds WPE packages to Buildroot.

  • Enabling support for the CSS generic system font family.

Fun trivia: Most of the WebKit contributors work from the United States, so the week of the Web Engines Hackfest is probably the only time during the whole year that there is a sizeable peak of activity during European daytime hours.

Watching repository activity during the hackfest.

Towards WPE Releases

At Igalia we are making an important investment in the WPE WebKit port, which is specially targeted towards embedded devices. An important milestone for the project was reached last May when the code was moved to the main WebKit repository, and it has been receiving the usual stream of improvements and bug fixes. We are now approaching the moment where we feel that it is ready to start making releases, which is another major milestone.

Our plan for WPE is to synchronize with WebKitGTK+, and produce releases for both in parallel. This is important because both ports share a good amount of their code and base dependencies (GStreamer, GLib, libsoup), so our efforts to stabilize the GTK+ port before each release will benefit the WPE one as well, and vice versa. In the coming weeks we will be publishing the first official tarball starting off the WebKitGTK+ 2.18.x stable branch.

Wild WEBKIT PORT appeared!

Syncing the releases for both ports means that:

  • Both stable and unstable releases are done in sync with the GNOME release schedule. Unstable releases start at version X.Y.1, with Y being an odd number.

  • About one month before the release dates, we create a new release branch and from there on we work on stabilizing the code. At least one testing release with version X.Y.90 will be made. This is also what GNOME does, and we will mimic it to avoid confusion for downstream packagers.

  • The stable release will have version X.Y+1.0. Further maintenance releases happen from the release branch as needed. At the same time, a new cycle of unstable releases is started based on the code from the tip of the repository.
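The numbering convention described above can be sketched as a small classifier. This is purely illustrative code based on the rules in this post, not anything from the WebKit codebase; the names are made up:

```c
#include <assert.h>

/* GNOME-style version classification, as described above:
 *   - even minor                -> stable release      (e.g. 2.18.0)
 *   - odd minor, micro >= 90    -> testing pre-release (e.g. 2.17.90)
 *   - odd minor otherwise       -> unstable release    (e.g. 2.17.1)
 */
enum release_kind { RELEASE_UNSTABLE, RELEASE_TESTING, RELEASE_STABLE };

enum release_kind classify_release(int major, int minor, int micro)
{
    (void)major; /* the major version does not affect stability */
    if (minor % 2 == 0)
        return RELEASE_STABLE;
    return micro >= 90 ? RELEASE_TESTING : RELEASE_UNSTABLE;
}
```

For example, 2.17.90 would be a testing release cut shortly before the 2.18.0 stable release.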

Believe it or not, preparing a codebase for its first releases involves quite a lot of work, and this is what took most of my coding time during the Web Engines Hackfest and also the following weeks: from small fixes for build failures all the way to making sure that public API headers (only the correct ones!) are installed and usable, that applications can be properly linked, and that release tarballs can actually be created. Exhausting? Well, do not forget that we need to set up a web server to host the tarballs, a small website, and the documentation. The latter has to be generated (there is still pending work in this regard), and the whole process of making a release scripted.

Still with me? Great. Now for a plot twist: we won't be making proper releases just yet.

APIs, ABIs, and Releases

There is one topic which I did not touch yet: API/ABI stability. Having done a release implies that the public API and ABI which are part of it are stable, and they are not subject to change.

Right after upstreaming WPE we switched over from the cross-port WebKit2 C API and added a new, GLib-based API to WPE. It is remarkably similar (if not the same in many cases) to the API exposed by WebKitGTK+, and this makes us confident that the new API is higher-level, more ergonomic, and better overall. At the same time, we would like third party developers to give it a try (which is easier having releases) while retaining the possibility of getting feedback and improving the WPE GLib API before setting it in stone (which is not possible after a release).

It is for this reason that at least during the first WPE release cycle we will make preview releases, meaning that there might be API and ABI changes from one release to the next. As usual we will not be making breaking changes in between releases of the same stable series, i.e. code written for 2.18.0 will continue to build unchanged with any subsequent 2.18.X release.

At any rate, we do not expect the API to receive big changes because —as explained above— it mimics the one for WebKitGTK+, which has already proven itself both powerful enough for complex applications and convenient to use for the simpler ones. Due to this, I encourage developers to try out WPE as soon as we have the first preview release fresh out of the oven.

Packaging for Buildroot

At Igalia we routinely work with embedded devices, and often we make use of Buildroot for cross-compilation. Having actual releases of WPE will allow us to contribute a set of build definitions for the WPE WebKit port and its dependencies — something that I have already started working on.

Lately I have been taking care of keeping the WebKitGTK+ packaging for Buildroot up-to-date and it has been delightful to work with such a welcoming community. I am looking forward to having WPE supported there, and to keep maintaining the build definitions for both. This will allow making use of WPE with relative ease, while ensuring that Buildroot users will pick our updates promptly.

Generic System Font

Some applications, like GNOME Web (Epiphany), use a WebKitWebView to display widget-like controls which try to follow the design of the rest of the desktop. Unfortunately for GNOME applications this means Cantarell gets hardcoded in the style sheet —it is the default font after all— and this results in mismatched fonts when the user has chosen a different font for the interface (e.g. in Tweaks). You can see this in the following screen capture of Epiphany:

Web using hardcoded Cantarell and (on hover) -webkit-system-font.

Here I have configured the beautiful Inter UI font as the default for the desktop user interface. Now, if you roll your mouse over the image, you will see how much better it looks to use a consistent font. This change also affects the list of plugins and applications, error messages, and in general all the about: pages.

If you are running GNOME 3.26, this is already fixed using font: menu (part of the CSS spec since ye olde CSS 2.1) — but we can do better: Safari has had support for a generic “system” font family, similar to sans-serif or cursive, since 2015:

/* Using the new generic font family (nice!). */
body {
    font-family: -webkit-system-font;
}

/* Using CSS 2.1 font shorthands (not so nice). */
body {
    font: menu;       /* Pick ALL font attributes... */
    font-size: 12pt;  /* ...then reset some of them. */
    font-weight: 400;
}
During the hackfest I implemented the needed moving parts in WebKitGTK+ by querying the GtkSettings::gtk-font-name property. This can be used in HTML content shown in Epiphany as part of the UI, and to make the Web Inspector use the system font as well.

Web Inspector using Cantarell, the default GNOME 3 font (full size).

I am convinced that users do notice and appreciate attention to detail, even if only unconsciously, and therefore it is worthwhile to work on this kind of improvement. Plus, as a design enthusiast with a slight case of typographic OCD, I cannot stop myself from noticing inconsistent usage of fonts, and my mind is now at ease knowing that opening the Web Inspector won't be such a jarring experience anymore.


But there's one more thing: on occasion we developers have to debug situations in which a process is seemingly stuck. One useful technique involves running the offending process under the control of a debugger (or, on an embedded device, under gdbserver and controlled remotely), interrupting its execution at intervals, and printing stack traces to try and figure out what is going on. Unfortunately, in some circumstances running a debugger can be difficult or impractical. Wouldn't it be grand if it were possible to interrupt the process and request a stack trace without needing a debugger? Enter “Out-Of-Band Stack Traces” (proof of concept):

  1. The process installs a signal handler using sigaction(2), with the SA_SIGINFO flag set.

  2. On reception of the signal, the kernel interrupts the process (even if it's in an infinite loop), and invokes the signal handler, passing it an extra pointer to a ucontext_t value, which contains a snapshot of the execution state of the thread that was on the CPU before the signal handler was invoked. This is true for many platforms, including Linux and most BSDs.

  3. The signal handler code can obtain the instruction and stack pointers from the ucontext_t value, and walk the stack to produce a stack trace of the code that was being executed. Jackpot! This is of course architecture-dependent, but not difficult to get right (and well tested) for the most common ones like x86 and ARM.

The nice thing about this approach is that the code that obtains the stack trace is built into the program (no rebuilds needed), and it does not even require relaunching the process under a debugger, which can be crucial for analyzing situations which are hard to reproduce, or which do not happen when running inside a debugger. I am looking forward to having some time to integrate this properly into WebKitGTK+ and especially WPE, because it will be most useful on embedded devices.
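The three steps above can be sketched in a few dozen lines. This is a minimal illustration, not the actual proof-of-concept code linked above; it assumes Linux with glibc on x86-64 or ARM64, and the names (`install_trace_handler`, `g_last_pc`) are made up. A real implementation would walk the whole stack (e.g. with libunwind) using only async-signal-safe calls, instead of just capturing the program counter:

```c
#define _GNU_SOURCE /* needed for REG_RIP on x86-64 glibc */
#include <assert.h>
#include <signal.h>
#include <stdint.h>
#include <string.h>
#include <ucontext.h>

/* Program counter of the interrupted thread, written only from the
 * signal handler. A full implementation would record a whole trace. */
static volatile uintptr_t g_last_pc = 0;

/* Steps 2-3: the kernel hands us a ucontext_t snapshot of the
 * interrupted thread; pull the instruction pointer out of it.
 * The register layout is architecture-specific. */
static void trace_handler(int signo, siginfo_t *info, void *ctx)
{
    (void)signo;
    (void)info;
    ucontext_t *uc = (ucontext_t *)ctx;
#if defined(__x86_64__)
    g_last_pc = (uintptr_t)uc->uc_mcontext.gregs[REG_RIP];
#elif defined(__aarch64__)
    g_last_pc = (uintptr_t)uc->uc_mcontext.pc;
#else
    g_last_pc = 1; /* unknown architecture: just note that we ran */
#endif
}

/* Step 1: install the handler with SA_SIGINFO so the kernel passes
 * the ucontext_t pointer as the third handler argument. */
static void install_trace_handler(int signo)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = trace_handler;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(signo, &sa, NULL);
}
```

To request a trace from a seemingly stuck process, one would then send it the chosen signal from outside, e.g. `kill -USR1 <pid>`.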

Planet Mozilla: Mission Control: Ready for contributions

One of the great design decisions that was made for Treeherder was a strict separation of the client and server portions of the codebase. While its backend was moderately complicated to get up and running (especially into a state that looked at all like what we were running in production), you could get its web frontend running (pointed against the production data) just by starting up a simple node.js server. This dramatically lowered the barrier to entry, for Mozilla employees and casual contributors alike.

I knew right from the beginning that I wanted to take the same approach with Mission Control. While the full source of the project is available, unfortunately it isn’t presently possible to bring up the full stack with real data, as that requires privileged access to the athena/parquet error aggregates table. But since the UI is self-contained, it’s quite easy to bring up a development environment that allows you to freely browse the cached data which is stored server-side (essentially: git clone && yarn install && yarn start).

In my experience, the most interesting problems when it comes to projects like these center around the question of how to present extremely complex data in a way that is intuitive but not misleading. Probably 90% of that work happens in the frontend. In the past, I’ve had pretty good luck finding contributors for my projects (especially Perfherder) by doing call-outs on this blog. So let it be known: If Mission Control sounds like an interesting project and you know React/Redux/D3/MetricsGraphics (or want to learn), let’s work together!

I’ve created some good first bugs to tackle in the github issue tracker. From there, I have a galaxy of other work in mind to improve and enhance the usefulness of this project. Please get in touch with me (wlach) on #missioncontrol if you want to discuss further.

Planet Mozilla: Bringing Mixed Reality to the Web

Today, Mozilla is announcing a new development program for Mixed Reality that will significantly expand its work in Virtual Reality (VR) and Augmented Reality (AR) for the web. Our initial focus will be on how to get devices, headsets, frameworks and toolsets to work together, so web developers can choose from a variety of tools and publishing methods to bring new immersive experiences online – and have them work together in a fully functional way.

In 2017, we saw an explosion of ways to create and experience Virtual Reality (VR) content on the web. Notable events included:

So the VR space is coalescing nicely, bringing VR models, games, and experiences online for anyone to enjoy and reuse. Unfortunately, the same is not yet true for AR. For instance, there is no way today to create a single web page that can be viewed by all these device types:

The Mixed Reality program aims to change that. We plan to work on the full continuum of specifications, browser implementations, and services required to create open VR and AR web experiences.

Proposing a WebXR API

We have created a draft WebXR API proposal for providing access to both augmented and virtual reality devices. The WebXR API formalizes the different ways these technologies expose views of reality around the user, and it exposes concepts common in AR platforms such as the Anchors found in Hololens, ARKit, and ARCore. You can take a look at an early implementation of this proposal, complete with examples that run on a range of AR- and VR-capable browsers.

WebXR is designed to make it easy for web developers to create web applications that adapt to the capabilities of each platform. These examples run in WebVR- and AR-enabled browsers, including desktop Firefox and experimental browsers such as one supporting ARCore on Android (although each small example is targeted at AR or VR for simplicity). We have developed an open-source WebXR Viewer iOS application that uses ARKit to implement AR support for these WebXR examples; it will be available in iTunes soon, but you can compile it yourself now if you have an iOS Developer account. We will be offering support for more browsers in the future, and welcome others to contribute to this effort and provide feedback on the proposal on GitHub.

Growing support for 3D Browsers

We are also expanding our browser support for Mixed Reality on the web. On desktop, we continue to evolve Firefox with broader 3D support, including recently announcing see-through AR support for Meta’s AR headset.

We are also developing a 3D mobile browser platform, based on our Servo project, that enables a new class of Mixed Reality head-worn displays, expected to come to market in the near term. We will share more on this work soon, but some early teases include Servo DOM-to-Texture support and integrated support for Qualcomm’s Snapdragon 835 standalone VR hardware.

Ways to Contribute

We look forward to your feedback on WebXR, as well as engaging with hardware and software developers who might wish to collaborate with us in this space or Servo. Stay tuned for upcoming updates from us on more ways to produce WebVR content from popular authoring tools, experimental browser features for better access to the GPU, in-headset content discovery, and open, cross-platform social services.

We welcome Mixed Reality hardware and software developers who may wish to collaborate with us on Servo or other projects to drop us an email. Follow us on Twitter for updates.

The post Bringing Mixed Reality to the Web appeared first on The Mozilla Blog.

Planet Mozilla: An Unofficial Guide to Unofficial Swag: Stickers

Mozillians like stickers.


However! Mozilla doesn’t print as many stickers as you might think it does. Firefox iconography, moz://a wordmarks, All Hands-specific rounds, and Mozilla office designs are the limit of official stickers I’ve seen come from official sources.

The vast majority of sticker designs are unofficial, made by humans like you! This guide contains tips that should help you create and share your own unofficial stickers.

<figure class="wp-caption aligncenter" id="attachment_5278" style="width: 209px;">Plan 9<figcaption class="wp-caption-text">(original poster by Tom Jung, modifications by :Yoric and myself. Use under CC-BY-SA 3.0)</figcaption></figure>


I’m not a designer. Luckily for my most recent printing project I was simply updating the existing design you see above. If you are adapting someone else’s design, ensure you either have permission or are staying within the terms of the design’s license. Basic Firefox product identity assets are released under generous terms for remixing, for instance.


The bigger they are, the harder they are to fit in a pocket or on the back of a laptop screen. Or in carry-on. The most successful stickers I’ve encountered have been at most 7cm on the longest side (or in diameter, for rounds), and many have been much smaller. With regards to size, less may in fact be more, but you have to balance this with any included text which must be legible. The design I used wouldn’t work much smaller than 7cm in height, and the text is already a little hard to read.


How will you distribute these? If your design is team-specific, a work week is a good chance to hand them out individually. If the design is for a location, then pick a good gathering point in that location (lunchrooms are a traditional and effective choice), fan out a dozen or two stickers, and distribution should take care of itself. All Hands are another excellent opportunity for individual and bulk distribution. If the timing doesn’t work out well to align with a work week or an All Hands, you may have to resort to mailing them around the globe yourself. In this case, foster contacts in Mozilla spaces around the world to help your stickers make it the last mile into the hands and onto the laptops of your appreciative audience.


50 is a bare minimum, both in what you’ll be permitted to purchase by the printer and in how many you’ll want to have on hand to give away. If your design is timeless (i.e. doesn’t have a year on it, doesn’t refer to a current event), consider ordering enough to have leftovers for the future. If your design is generic enough that there will be interest outside of your team, consider increasing supply to meet this demand. Generally the second 50 stickers cost a pittance compared to the first 50, so don’t be afraid to go for a few extra.


You’ll be paying for this yourself. If your design is team-specific and you have an amenable expense approver you might be able to gain reimbursement under team-building expenses… But don’t depend on this. Don’t spend any money you can’t afford. You’re looking at between 50 and 100 USD for just about any number of any kind of sticker, at current prices.


I’m in Canada. The sticker printer I chose most recently (stickermule) was in the US. Unsurprisingly, it was cheaper and faster to have the stickers delivered within the US. Luckily, :kparlante was willing to mule the result to me at the San Francisco All Hands, so I was able to save both time and money. Consider these logistical challenges when planning your swag.


Two weeks before an All Hands is probably too late to start the process of generating stickers. I’ve made it happen, but I was lucky. Be more prepared than I was and start at least a month ahead. (As of publication time you ought to have time to take care of it all before Austin).


After putting a little thought into the above areas it’s simply a matter of choosing a printing company (local to your region, or near your distribution venue) and sending in the design. They will likely generate a proof which will show you what their interpretation of your design on their printing hardware will look like. You then approve the proof to start printing, or make changes to the design and have the printer regenerate a proof until you are satisfied. Then you arrange for delivery and payment. Expect this part of the process to take at least a week.

And that’s all I have for now. I’ll compile any feedback I receive into an edit or a Part 2, as I’ve no doubt forgotten something essential that some member of the Mozilla Sticker Royalty will only too happily point out to me soonish.

Oh, and consider following or occasionally perusing the mozsticker Instagram account to see a sample of the variety in the Mozilla sticker ecosystem.

Happy Stickering!


Planet Mozilla: My night at the museum

Thursday October 19, 2017,

I arrived at the Technical Museum in Stockholm together with my two kids just a short while before 17:30. A fresh, cool and clear autumn evening. For this occasion I had purchased myself a brand new suit, as I hadn’t gotten one in almost twenty years and it had been almost that long since I last wore one. I went for a slightly less conservative purple colored shirt with the dark suit.

Apart from my kids, my wife was of course also present and so was my brother Björn and my parents in law. Plus a few hundred other visitors, most of them of course unknown to me.

My eleven year old son truly appreciates this museum so we took the opportunity to quickly check out parts of the exhibitions while the pre-event mingling went on and drinks were served. Not too long though as we were soon asked to proceed to the restaurant part and take our assigned seats. I was seated at table #6.

The whole evening was not entirely “mine”, but as I am the winner of this year’s Polhem Prize it was set up to eventually lead to the handing over of the award to me. An evening for me. Lots of attention on me and references to my work throughout the evening, which otherwise had the theme of traffic safety (my guess is that’s partly due to last year’s Prize winner, who was a lead person in the invention of seat belts in cars).

A three-course dinner, with some entertainment intermixed. At my table I sat next to some brilliant and interesting people and I had a great time and good conversations. Sitting across the table from His Majesty the king of Sweden was an unexpected and awesome honor.

Somewhere mid-through the evening, a short movie was presented on the big screens. A (Swedish-speaking) movie with me trying to explain what curl is, what it does and why I’ve made it. I think the movie was really great and I think it helps explain curl to non-techies (including my own family). The movie is the result of a perhaps 40-minute interview/talk we did on camera and then a fair amount of skilled editing by the production company. (I hope this film goes up on YouTube or otherwise becomes available at some point.)

At around 21:30 I was called on stage. I received a gold medal from the king and shook his hand. I also received a diploma and a paper with the award committee’s motivation for me getting the prize. And a huge bouquet of lovely flowers. A bit more than what I could hold in my arms, really.

(me, and Carl XVI Gustaf, king of Sweden)

As the king graciously offered to hold my diploma and medal, I took the microphone and expressed a few words of thanks. I was and I still am genuinely and deeply moved by receiving this prize. I’m happy and proud. I said my piece in which I explicitly mentioned my family members by name: Anja, Agnes and Rex for bearing with me.

(me, H.M the king and Cecilia Schelin Seidegård)

Afterwards I received several compliments on my short speech, which made me even happier. Who would’ve thought that was even possible?

I posed for pictures, shook many hands, received many congratulations and I even participated in a few selfies until the time came when it was time for me and my family to escape into a taxi and go home.

What a night. In the cab home we scanned social media and awed over pictures and mentions. I hadn’t checked my phone even once during the event so it had piled up a bit. It’s great to have so many friends and acquaintances who shared this award and moment with us!

I also experienced a strong “post award emptiness” sort of feeling. Okay, that was it. That was great. Now it’s over. Back to reality again. Back to fixing bugs and responding to emails.

Thank you everyone who contributed to this! In whatever capacity.

The Swedish motivation (shown in a picture above) goes like this, translated to English with Google and edited by me:

Motivation for the Polhem Prize 2017

Our modern era consists of more and more ones and zeroes. Each individual programming tool that instructs technical machines to do what we want has its own important function.

Everything that is connected needs to exchange information. Twenty years ago, Daniel Stenberg started working on what we now call cURL. Since then he has spent late evenings and weekends, doing unpaid work to refine his digital tool. It consists of open source code and allows you to retrieve data from home page URLs. The English letter c, see, makes it “see URL”.

In practice, its widespread use means that millions, up to billions of people, worldwide, every day benefit from cURL in their mobile phones, computers, cars and a lot more. The economic value created with this cannot be overestimated.

Daniel Stenberg initiated the project, holds it together, and leads the continuous development work on the tool. Completely voluntarily. In total, nearly 1400 individuals have contributed. It is a solid piece of engineering work and an expression of dedicated governance that has benefited many companies and society as a whole. For this, Daniel Stenberg is awarded the Polhem Prize 2017.

Planet Mozilla: HolyJit: A New Hope

tl;dr: We believe there is a safer and easier way of writing a Jit.

Current State

Today, all browsers’ Jits share a similar design. This design makes extending the language or improving its performance time-consuming and complex, especially while avoiding security issues.

For instance, at the time of this writing, our Jit relies upon ~15000 lines of carefully crafted, hand-written assembly code (~36000 in Chromium’s v8). The Jit directory represents 5% of all the C++ code of Firefox, and contains 3 of the top 20 largest files of Firefox, all written by hand.

Interestingly, these files all contain code that is derived by hand from the Interpreter and a limited set of built-in functions of the JavaScript engine. But why do it by hand, when we could automate the process, saving time and risk? HolyJit is exploring this possibility.

Introducing HolyJit (prototype)

This week, during the JS Team meetup, we demonstrated the first prototype of a Rust meta-Jit compiler, named HolyJit. If our experiment proves successful, we believe that employing a strategy based on HolyJit will let us avoid many potential bugs and let us concentrate on strategic issues. This means more time to implement JavaScript features quickly and further improve the speed of our Jit.

HolyJit library instrumenting the Rust compiler to add a meta-Jit for Rust code.

For instance, in a recent change, we extended the support of optimizations to Array.prototype.push. What should have been a trivial modification required diving into safety-critical code and adding 135 lines of code, and reading even more code to check that we were not accidentally breaking invariants.

With HolyJit, what should have been a trivial change would effectively have been a trivial change. The following change to a hypothetical JS Jit built with HolyJit does exactly the same thing as the previous patch, i.e. allowing the Jit to inline the Array.prototype.push function when it is being called with more than one argument.

 fn array_push(args: &CallArgs) -> Result<JSValue, Error> {
-    jit_inline_if!(args.len() == 1);
+    jit_inline_if!(args.len() >= 1);

By making changes self-contained and simple, we hope that HolyJit will improve the safety of our Jit engine, and let us focus on optimizations.

HolyJit Repository:

Thanks to David Teller, Jason Orendorff, Sean Stangl, and Jon Coppeard for proofreading this blog post.

Planet Mozilla: WebRender newsletter #8

Better late than never for the 8th newsletter. On the WebRender side, things keep getting faster and looking smoother, which is always nice. On Gecko’s side the work is, as always, hard to summarize, but there are some self-contained bits worth getting excited about, like the great progress on reducing the overhead of building and transferring the display list to the parent process.

Lin Clark wrote an excellent blog post about WebRender. Now that the post is out I’ll resume working on the series I started about WebRender on this blog, focusing on areas that were not included in Lin’s post and delving into some of the gory details of 2D rendering. Hopefully I’ll have time to work on this soon.

Notable WebRender changes

  • A large improvement in deserialization performance. This improved GMail drawing from 150 fps to 200 fps.
  • Nical improved (and fixed bugs in) the anti-aliasing of all rendering primitives.
  • Jerry added fallback paths to avoid crashing when some very large texture allocations fail.
  • Glenn made semi transparent text support sub-pixel anti-aliasing.
  • Glenn fixed text clipping.
  • Morris fixed a floating point precision issue in plane splitting (a method used to render 3d transforms with preserve-3d).
  • Gankro fixed several shadow rendering issues.
  • Martin fixed a bug with nested clips in position-sticky frames.

Notable Gecko changes

  • Further WebRender display list building time improvements
    • We now build the WebRender text display items directly during text paint, instead of a two-pass approach where we’d gather the information and then construct the WebRender display items in a second pass.
    • Inlining ToRelativeLayoutPoint to further speed up WebRender text display item construction.
  • Gankro made us stop hitting the fallback path for most elements in nsFieldSetFrame (in particular those used on GMail).
  • Sotaro ensured canvas updates are sent to the compositor on empty transactions.

Planet Mozilla: rob-bugson 1.0: or how I wrote a webextension

I work on Socorro and other projects which use GitHub for version control and code review and use Mozilla's Bugzilla for bug tracking.

After creating a pull request in GitHub, I attach it to the related Bugzilla bug, which is a contra-dance of clicking and copy-and-paste. Github tweaks for Bugzilla simplified that by adding a link to the GitHub pull request page that I could click on, edit, and then submit the resulting form. However, that's a legacy addon, I use Firefox Nightly, and it doesn't look like anyone wrote a webextension version of it, so I was out of luck.

Today, I had to bring in my car for service and was sitting around at the dealership for a few hours. I figured instead of working on Socorro things, I'd take a break and implement an attach-pr-to-bug webextension.

I've never written a webextension before. I had written a couple of addons years ago using the SDK and then Jetpack (or something like that). My JavaScript is a bit rusty, especially ES6 stuff. I figured this would be a good way to learn about webextensions.

It took me about 4 hours of puzzling through docs, writing code, and debugging and then I had something that worked. Along the way, I discovered exciting things like:

  • host permissions let you run content scripts in web pages
  • content scripts can't access browser.tabs--you need a background script for that
  • you can pass messages from content scripts to background scripts
  • seems like everything returns a promise, but async/await make that a lot easier to work with
  • the attachment page on Bugzilla isn't like the create-bug page and ignores querystring params

The MDN docs for writing webextensions and the APIs involved are fantastic. The webextension samples are also great--I started with them when I was getting my bearings.

I created a new GitHub repository. I threw the code into a pull request making it easier for someone else to review it. Mike Cooper kindly skimmed it and provided insightful comments. I fixed the issues he brought up.

TheOne helped me resurrect my AMO account which I created in 2012 back when Gaia apps were the thing.

I read through Publishing your webextension, generated a .zip, and submitted a new addon.

About 10 minutes later, the addon had been reviewed and approved.

Now it's a thing and you can install rob-bugson.

Planet Mozilla: Reps Weekly Meeting Oct. 19, 2017

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet MozillaHow we rebuilt the website

As a front-end developer at Mozilla, I end up working on big sites that have been around for a long time. There are a lot of interesting challenges when working with legacy code at a large scale, but rebuilding from scratch usually isn’t an option.

The View Source Conference website, on the other hand, is a small site. So when we decided to move away from WordPress, we had the chance to start fresh.

Here are a few highlights of the architectural decisions we made to make the site faster, more secure, and more reliable.

A Static Site

When a user requests a page from a CMS (content management system) like WordPress, the server puts it together from databases and templates. This takes the server a small amount of time. When a site is built on request like this we call it a “dynamic” website.

When a user requests a page from a static site the server only has to find and serve one file. It’s faster and takes fewer resources. We used a static site generator to generate our files before transferring them to the server.

Static files are also easier to copy than dynamic sites, which means we can copy our static site to different CDNs (content delivery networks) around the world. Getting our content closer to our users is a very effective way to reduce latency, which is one of the biggest hurdles to delivering a site fast.

Offline First

A service worker is JavaScript that runs in a browser but not as part of a page. The most common use for service workers is to watch network requests and respond instead of the server.

I wanted to be sure the conference attendees would have access to the event schedule, even if they didn’t have wifi. So, when a user arrives on the site, browsers that support service workers automatically cache the conference schedule.

If the user returns to the site without a network connection the service worker will reply to the request with the cached schedule.
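The control flow is roughly the following. This is a simulation with stand-ins for the Cache API and the network, not the site's actual service worker code:

```javascript
// Pre-filled "cache", standing in for the Cache API a real service
// worker would populate when the user first visits the site.
const cache = new Map([["/schedule", "cached schedule"]]);

// Stand-in for a network fetch that fails when offline.
async function networkFetch(url, online) {
  if (!online) throw new Error("offline");
  return `fresh ${url}`;
}

// The fetch-handler pattern: try the network first, and fall back to
// the cached copy when the request fails.
async function handleRequest(url, online) {
  try {
    return await networkFetch(url, online);
  } catch {
    return cache.get(url) ?? "offline page";
  }
}

handleRequest("/schedule", false).then((r) => console.log(r));
```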

I am very grateful for the documentation published by The Guardian, Jeremy Keith, and others who are already using Service Workers.

Mobile First

When responsive web design first became the norm, the industry standard was to serve the full desktop site to all browsers, with a bunch of extra code telling mobile browsers which pieces to remove to make the simplified mobile version. With the spread of mobile came the Mobile First development approach. Mobile first delivers the content and code for the mobile version of a site first, and then the larger, more powerful desktop computers do the work of creating a better large-screen experience.

The View Source Conf site starts as a minimal mobile-friendly version. Then media queries in CSS and media queries in JavaScript add more complicated layout instructions for larger screens.
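The mobile-first pattern can be sketched in CSS; the class names below are illustrative, not taken from the actual site:

```css
/* Base styles: the simple, small-screen layout every browser gets. */
.schedule {
  display: block;
}

/* Larger screens opt in to the more complicated layout. */
@media (min-width: 48em) {
  .schedule {
    display: grid;
    grid-template-columns: 1fr 1fr;
    grid-gap: 1em;
  }
}
```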


Inline SVG

I used inline SVGs for the logo and icons. They look crisper on retina screens and, because they’re inline, don’t require any extra assets to download. Inlining also meant that I could change the logo’s colour in our print styles. It was my first time creating accessible SVGs.
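A minimal accessible inline SVG along those lines might look like this; the id and shapes are placeholders, not the real logo:

```html
<!-- Inline, so no extra request; styleable with CSS (including print
     styles); labelled for assistive technology via role and title. -->
<svg role="img" aria-labelledby="logo-title" viewBox="0 0 100 100" class="logo">
  <title id="logo-title">View Source Conference logo</title>
  <circle cx="50" cy="50" r="40" fill="currentColor" />
</svg>
```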

No Script

All the content and functionality on the View Source site works with JavaScript disabled. Instead of sending shims and polyfills to older browsers to make them handle newer JavaScript features, we support those browsers by telling them not to load the JavaScript at all.

This meant we could write modern JavaScript! It also simplified testing. Less capable browsers just get functional, readable content, with no chance for odd JavaScript errors.

This isn’t a new idea, it’s progressive enhancement combined with the BBC News’ “Cut the Mustard” test.
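The “Cut the Mustard” idea reduces to a capability check that gates script loading. Here it is written as a plain function over window/document-like objects so it can run anywhere; this is a sketch, not the site's actual test:

```javascript
// Only browsers that pass the capability check are asked to load the
// JavaScript at all; everyone else gets the working no-script version.
function cutsTheMustard(win, doc) {
  return "querySelector" in doc &&
         "addEventListener" in win &&
         "localStorage" in win;
}

// In a real page this would gate a <script> injection, roughly:
//   if (cutsTheMustard(window, document)) loadScript("/js/app.js");
const modernish = { addEventListener: () => {}, localStorage: {} };
console.log(cutsTheMustard(modernish, { querySelector: () => {} })); // true
```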


HTTPS

HTTPS protects the privacy and security of your users, and with Let’s Encrypt it’s free. You can tell browsers to only load your site over HTTPS with the Strict-Transport-Security header.
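For example, with nginx the header can be sent like this (one possible configuration, not necessarily the one the site uses):

```nginx
# max-age is one year in seconds; "always" sends the header on every
# response, including error pages.
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```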

Do Not Track

We use Google Analytics to measure site traffic and try to improve our conversion rates, but we respect the privacy of users visiting with Do Not Track enabled. By detecting Do Not Track settings we can avoid serving them the Google Analytics file. If a user has not set Do Not Track but has an ad blocker installed, all our code runs without requiring Google Analytics to initialize.
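The detection can be sketched as a small pure function. The properties checked are the commonly used ones across browsers; the analytics-loading call is a hypothetical placeholder:

```javascript
// Returns true when the user has asked not to be tracked. Different
// browsers have historically exposed the setting in different places
// and with different values ("1" or "yes").
function dntEnabled(nav, win) {
  const dnt = nav.doNotTrack || win.doNotTrack || nav.msDoNotTrack;
  return dnt === "1" || dnt === "yes";
}

// In the page this would gate the analytics script, roughly:
//   if (!dntEnabled(navigator, window)) loadScript("analytics.js");
console.log(dntEnabled({ doNotTrack: "1" }, {})); // true
```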

View Source

Hear industry leaders speak about topics like web performance, security, reliability, CSS grids and more at the View Source Conference in London October 27, 2017. See the full schedule! Or watch videos of last year’s talks.

Planet MozillaTwo-Year Moziversary

Today marks two years since I became a Mozillian and MoCo Staff.

What did I do this year… well, my team was switched out from under me again. This time it was during the large Firefox + Platform reorg, and basically means my team (Telemetry Client Engineering) now has a name that more closely matches what I do: writing client-side Telemetry code, performing ad hoc data analysis, and reading a lot of email. I still lurk on #fce and answer questions for :ddurst about data matters from time to time, so it’s not a clean break by any means.

This means my work has been a little more client-focused. I completed my annual summer Big Refactor or Design Thing That Takes The Whole Summer For Some Reason. Last year it was bug 1218576 (whose bug number is lodged in my long-term memory and just won’t leave). This year it was bug 1366294 and its friends where, in support of Quantum, we reduced our storage overhead per-process by quite a fair margin. At the same time we removed excessive string hashes, fast-pathing most operations.

Ah, yes: Quantum. Every aspect of Firefox was under scrutiny… including from a data perspective. I’ve lost count of the number of times I’ve been called in to consult on data matters in support of the quickening of the new Firefox Quantum (coming this November to an Internet Near You!). I even spent a couple days in Toronto as part of a Quantum work week to nail down exactly what we could and should measure before and after shipping each build.

A pity I didn’t leave myself more time to just hang out with the MoCoTo folks.

In All Hands news we hit Hawai’i last December. Well, some of us did. With the unrest in the United States and the remoteness of the location this was a bit more of a Most Hands. Regardless, it was a productive time. Not sure how we managed to find so much rain and snow in a tropical desert, but we’re a special bunch I guess?

In June we were in San Francisco. There I ate some very spicy lunch and helped nail down some Telemetry Health metrics I’ve done some work on this autumn. Hopefully we’ll be able to get those metrics into Mission Control next year with proper thresholds for alerting if things go wrong.

This summer I mentored :flyingrub for Google Summer of Code. That was an interesting experience that ended up taking up quite a lot more time than I imagined it would when I started. I mean, sure, you can write down on paper how many hours a week you’ll spend mentoring an intern through a project, and how many hours beforehand you’ll spend setting it up… but it’s another thing to actually put in the work. It was well worth it, and :flyingrub was an excellent contributor.

In last year’s Moziversary post I resolved to blog more, think more, and mentor more. I’ve certainly mentored more, with handfuls of mentored bugs contributed by first-time community members and that whole GSoC thing. I haven’t blogged more, though: I’ve written 23 posts, with only April and July going without a single post on this here blog, whereas last year I posted 27. I’m also not sure I have thought more, as simple and stupid mistakes still cast long shadows in my mind when I let them.

So I guess that makes two New MozYear Resolutions (New Year Mozolutions?) easy:

  • actually blog more, even if they are self-indulgent vanity posts. (Let’s be honest, though: they’re all self-indulgent vanity posts).
  • actually think more. Make fewer stupid mistakes, or if that’s not feasible at least reduce the size of their influence on the world and my mind after I make them.

That might be enough to think about for a year, right?


Planet MozillaFOSDEM 2018 Real-Time Communications Call for Participation

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2018 takes place 3-4 February 2018 in Brussels, Belgium.

This email contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • related events around FOSDEM, including the XMPP summit,
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
  • the Planet aggregation sites for RTC blogs

Call for participation - Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Sunday, 4 February 2018. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username

Real-Time Communications dev-room: deadline 23:59 UTC on 30 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real Time Communications devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. You are encouraged to apply to more than one dev-room and also to consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.

You can find the full list of dev-rooms on this page and apply for a lightning talk at

Main track: the deadline for main track presentations is 23:59 UTC 3 November. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes: presentations aimed at developers of free and open source software, about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators based on the received proposals. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming; volunteers are needed to assist with this
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 3 February
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation (text version) to other mailing lists

Related events - XMPP and RTC summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 February 2018. XMPP Summit web site - please join the mailing list for details.

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 2 February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large, so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

Planet sites and admin contacts:

  • All projects: Free-RTC Planet (contact)
  • XMPP: Planet Jabber (contact)
  • SIP: Planet SIP (contact)
  • SIP (Español): Planet SIP-es (contact)

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.


For any private queries, contact us directly using the address and for any other queries please ask on the Free-RTC mailing list.

The dev-room administration team:

Planet MozillaMicrosoft's Chrome Exploitation And The Limitations Of Control Flow Integrity

Microsoft published an interesting blog post about exploiting a V8 bug to achieve arbitrary code execution in a Chrome content sandbox. They rightly point out that even if you don't escape the sandbox, you can break important Web security properties (e.g., assuming the process is allowed to host content from more than one origin, you can break same-origin restrictions). However, the message we're supposed to take away from this article is that Microsoft's CFI would prevent similar bugs in Edge from having the same impact. I think that message is basically wrong.

The problem is, once you've achieved arbitrary memory read/write from Javascript, it's very likely you can break those Web security properties without running arbitrary machine code, without using ROP, and without violating CFI at all. For example, if you want to violate same-origin restrictions, your JS code could find the location in memory where the origin of the current document is stored and rewrite it to be a different origin. In practice it would be quite a lot more complicated than that, but the basic idea should work, and once you've implemented the technique it could be used to exploit any arbitrary read/write bug. It might even be easier to write some exploits this way than using traditional arbitrary code execution; JS is a more convenient programming language than ROP gadgets.

The underlying technical problem is that once you've achieved arbitrary read/write you can almost completely violate data-flow integrity within the process. As I recently wrote, DFI is extremely important and (unlike CFI) it's probably impossible to dynamically enforce with low overhead in the presence of arbitrary read/write, with any reasonable granularity.

I think there's also an underlying cultural problem here, which is that traditionally "Remote Code Execution" — of unconstrained machine code — has been the gold standard for a working exploit, which is why techniques to prevent that, like CFI, have attracted so much attention. But Javascript (or some other interpreter, or even some Turing-complete interpreter-like behavior) armed with an arbitrary memory read/write primitive is just as bad in a lot of cases.

Planet MozillaBrowser Architecture Newsletter 4

A newsletter on architecture review, XBL Conversion, Storage and Sync, Workflow Improvements and a developer survey

Planet MozillaThunderbird is the next version of Instantbird

Ten years ago, on October 18th 2007, I released Instantbird 0.1. I was soon joined by a team of enthusiastic hackers, and I hoped we could make a better IM client that would replace the painfully broken ones that were dominant at the time.

The Internet has changed a lot since then. Messaging has moved significantly toward mobile apps. The clients we were competing with mostly died themselves. Even the services we were connecting to are closing down (AOL, MSN, ...), or moved away from standard protocols (Facebook).

While we made a pretty good product, we never managed to attract a critical mass of users, and we lost half of them when the Facebook XMPP gateway was closed. Instantbird still has some uses (especially as an IRC client), but its user interface has aged significantly.

I don't think maintaining our infrastructure to support only a few thousand users is a good use of my time, and I've lost motivation to do it. While Instantbird regularly received code contributions from several people and had a nice and friendly community, nobody stood up to replace me and take care of our build infrastructure. This means we haven't been able to produce nightly builds for the last couple months, and are extremely unlikely to be able to ship a new release any time soon. It's time to announce that we are stopping development of Instantbird as a standalone product.

The code base isn't dying though! A large part of it is shared with Thunderbird (since it received chat support in 2012). Thunderbird is actively maintained, and has lots of users.

Instead of working on Instantbird, we'll refocus our energy on improving the chat features in Thunderbird, so that it becomes friendly for users who loved Instantbird and will seek a replacement. This should allow us to focus on features and not worry about infrastructure that was sapping our energy and time. Thunderbird is the spiritual successor to Instantbird!

Planet MozillaA formal introduction to Ionut Goldan – Mozilla’s new Performance Sheriff and Tool hacker

About 8 months ago we started looking for a full time performance sheriff to help out with our growing number of alerts and needs for keeping the Talos toolchain relevant.

We got really lucky and ended up finding Ionut (:igoldan on irc, #perf).  Over the last 6 months, Ionut has done a fabulous job of learning how to understand Talos alerts, graphs, scheduling, and narrowing down root causes.  In fact, not only has he been able to easily handle all of the Talos alerts, he has also picked up alerts from Autophone (Android devices), Build Metrics (build times, installer sizes, etc.), AWSY (memory metrics), and Platform Microbenchmarks (tests run inside of gtest written by a few developers on the graphics and stylo teams).

While I could probably write a list of Ionut’s accomplishments and some tricky bugs he has sorted out, I figured your enjoyment of reading this blog is better spent on getting to know Ionut better, so I did a Q&A with him so we can all learn much more about him.

Tell us about where you live?

I live in Iasi. It is a gorgeous and colorful town, somewhere in the North-East of Romania.  It is full of great places and enchanting sunsets. I love how a casual walk leads me to new, beautiful and peaceful neighborhoods.

I have many things I very much appreciate about this town: the people here, its continuous growth, its historical resonance, the fact that its streets once echoed the steps of the most important cultural figures of our country. It also resembles ancient Rome, as it is also built on 7 hills.

It’s pretty hard not to act like a poet around here.

What inspired you to be a computer programmer?

I wouldn’t say I was inspired to be a programmer.

During my last years in high school, I occasionally consulted with my close ones. Each time we concluded that IT is just the best domain to specialize in: it will improve continuously, there will be jobs available; things that are evident nowadays.

I found much inspiration in this domain after the first year in college, when I noticed the huge advances and how they’re conducted.  I understood we’re living in a whole new era. Digital transformation is now the coined term for what’s going on.

Any interesting projects you have done in the past (school/work/fun)?

I had the great opportunity to work with brilliant teams on a full advertising platform, from almost scratch.

It got almost everything: it was distributed, highly scalable, completely written in Python 3.X, the frontend adopted material design, NoSQL databases in conjunction with SQL ones… It used some really cutting-edge libraries and it was a fantastic feeling.

Now it’s Firefox. The sound name speaks for itself and there are just so many cool things I can do here.

What hobbies do you have?

I like reading a lot. History and software technology are my favourite subjects. I enjoy cooking, when I have the time. My favourite dish definitely is the Hungarian goulash.

Also, I enjoy listening to classical music.

If you could solve any massive problem, what would you solve?

Greed. Laziness. Selfishness. Pride.

We can resolve all problems we can possibly encounter by leveraging technology.

Keeping non-values like those mentioned above would ruin every possible achievement.

Where do you see yourself in 10 years?

In a peaceful home, being a happy and caring father, spending time and energy with my loved ones. Always trying to be the best example for them.  I envision becoming a top notch professional programmer, leading highly performant teams on sound projects. Always familiar with cutting-edge tech and looking to fit it in our tool set.

Constantly inspiring values among my colleagues.

Do you have any advice or lessons learned for new students studying computer science?

Be passionate about IT technologies. Always be curious and willing to learn about new things. There are tons and tons of very good videos, articles, blogs, newsletters, books, docs…Look them out. Make use of them. Follow their guidance and advice.

Continuous learning is something very specific for IT. By persevering, this will become your second nature.

Treat every project as a fantastic opportunity to apply related knowledge you’ve acquired.  You need tons of coding to properly solidify all that theory, to really understand why you need to stick to the Open/Closed principle and all other nitty-gritty little things like that.

I have really enjoyed getting to know Ionut and working with him.  If you see him on IRC please ping him and say hi 🙂


Planet MozillaMozilla Festival 2017 - Volunteer Training

Mozilla Festival 2017 - Volunteer Training Training evening for those volunteering as part of Mozilla Festival 2017.

Planet MozillaOn finding productivity

Recently, I joined a new-to-me team at Mozilla and started working on Firefox. It’s not been an easy transition - from the stuff I was doing in the Connected Devices group to getting back to fixing bugs and writing code every day. And not just any code: the Firefox codebase is large and spread across a couple of decades. Any change is an exercise in code-sleuthing, to understand what it does today, why it was implemented that way, and how to implement a patch that doesn’t fix one thing while breaking a dozen others.

My intuition on how long a task should take has been proven so wildly wrong so many times in the last few months that I’ve had to step back and do some hard thinking. Do I just suck at this? Or am I pushing hard but in the wrong direction? Sometimes I think I’m just getting worse as a developer/software engineer over time, not better.

In truth, I have good days and bad days. On the bad days, the slightest snag, obfuscation of the problem, or ambiguity around how to proceed can freeze me up. I stare at it, futz with it. Procrastinate. Every possible action seems too complicated for my small brain, or seems to highlight something I haven’t learned well enough to proceed with. On these days, I count any movement forward at all as a success. Some trivial bug fixed, some observation noted down - it’s better than nothing.

Then there are the good days. By their nature they are not as note-worthy or memorable. I work through the tasks in front of me, fixing bugs and getting stuff done. I follow the trail to the end, note the solution and implement it. Maybe I see opportunities for future improvements or help out a colleague. The day ends and I go home feeling satisfied and ready to go at it again the next day.

Checklists and self-hacks

I’ve tried out lots of ways of turning bad days into good days. I have a list of check lists that I sometimes have the presence of mind to consult. One example goes like this:

For extrication from the weeds:

And there are others - for starting a new feature, for code reviews, for wrapping up and landing a patch. Check-lists are great - they are a concise way of distilling hard-won experience into something actionable and repeatable.

I keep notes on each task or bug I’m working on. I find a good first step is to write down all the questions that pertain to the problem, however obvious or simple. This list of questions then forms a task list and I can start filling in answers. Finding an answer to a question like “Q: wtf is this function supposed to do?” is a discrete, achievable task that removes an unknown and builds momentum. (A search of the code repository and bug database can tell me when it was introduced, by whom, and what problem it solved at the time.) Further questions start to close up the gaps in my knowledge and point to a path forward.

Sometimes, just re-writing out the problem as I understand it is enough to nudge me out of paralysis. It’s a kind of rubber duck debugging. Re-reading my earlier notes might jog something. Other times, the best thing I can do is stand up and walk away for a bit - breathe some outside air and observe other humans going about their business.


I’ve had stints of success with the Pomodoro technique. I find breaking the day up into chunks, and having this focus and rhythm, does sometimes help drive me forward. Again, it’s about building momentum. But my experience is that sometimes it’s just not a good fit. I no longer attempt to do this every day, but treat it as a useful tool to be employed when the time is right.

Riding in others’ slipstream

I’m a “remotee”. I work as part of a distributed team, spread across the globe and separated by distance and time-zones. I work alone most of the time. That has advantages and disadvantages. One of the things you miss is the collective energy of your co-workers and office neighbours that boosts you and helps ride out the bumps and troughs. When all the above has failed to light a spark, I sometimes go looking for that energy. It turns out watching someone else tackle problems engages those parts of the brain that have thus far failed to engage. It takes time out of the day, but if the day was otherwise shot, it’s time well spent. Handmade Hero and Mike Conley’s Joy of Coding are two “channels” I turn to at these times. Both hosts have a knack for taking objectively difficult problems and proceeding to dismantle them into smaller, easier problems in a way that seems obvious with hindsight. And simply sharing this journey for a while is usually enough to clear the fog in my own brain and allow me to get back into the groove with my own work.

The swan effect

Of course, history tends to only record successes. When you see a project launch, or a patch land - fully formed and functional - it represents the end-state of a process. There might have been many dead-ends, hours of head-scratching and frustration before finally finding success. This phenomenon is a variation of the drunkard’s walk: why does he always end up in the ditch rather than just bouncing off the wall on the other side? He doesn’t. But once he’s in the ditch he’s not getting out, and the ditch is the only place anyone ever notices him. Similarly with our efforts: to the observer we appear to glide gracefully on the surface, with the commit history showing neatly interlocking solutions stacking together until the goal is met, while the thrashing below the surface goes largely unrecorded.

These are the things I remind myself. It’s not supposed to be easy. I’ve done it before and I can do it again. I do know how to do this, and I’m privileged to work on a project where the outcome really matters.

Planet MozillaRust Berlin Meetup October 2017

Rust Berlin Meetup October 2017 Talks: Arvid E. Picciani ( Application container deployment for the Internet of Things with Rust Solving containerization on very constrained devices will enable a new...

Planet WebKitRelease Notes for Safari Technology Preview 42

Safari Technology Preview Release 42 is now available for download for macOS Sierra and macOS High Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 222556-223209.

If you recently updated from macOS Sierra to macOS High Sierra, you may need to install the High Sierra version of Safari Technology Preview manually.

File and Directory Entries API

  • Fixed a failure when calling fileSystemDirectoryEntry.getDirectory() with an empty path (r223118)
  • Fixed recognizing a path starting with 2 slashes as a valid absolute path (r223135)

Payment Request

  • Implemented PaymentRequest.canMakePayment() (r223160)
  • Implemented and PaymentRequest.hide() (r223076)

Clipboard API

  • Added support for custom pasteboard MIME types and hid unsafe MIME types (r222595 and r222830)
  • Fixed copying and pasting of image files on TinyMCE and GitHub (r222656)
  • Fixed DataTransfer.items to expose custom pasteboard types (r223034)
  • Prevented revealing the file URL when pasting an image (r222688)
  • Prevented dragenter and dragleave from using the same data transfer object (r223031)
  • Removed “text/html” from DataTransfer.items when performing paste and match style (r222956)
  • Started pasting images in RTF and RTFD contents using blob URLs (r222839)
  • Sanitized the URL in the pasteboard for other applications and cross-origin content (r223195)


Fonts

  • Added “display” to the FontFace JavaScript object (r222949)
  • Implemented font-display loading behaviors (r222926)
  • Upgraded Brotli to version 1.0.1 and WOFF2 to the latest upstream (r222903)


CSS

  • Removed constant() in favor of env() (r222627)


Web API

  • Added support for DOM aborting (r222692)
  • Added support for <link rel=preconnect> (r222613)
  • Changed to use the blob URL when pasting RTFD (r222839)
  • Changed XMLHttpRequest.setRequestHeader() to allow Content-Transfer-Encoding header (r222807, r222817)
  • Prevented submitting a form that is disconnected (r223117)
  • Updated Document.createEvent for recent DOM specification changes (r223023)


WebDriver

  • Added support for selecting an <option> element by sending keys to its parent <select> element.
  • Fixed an issue that caused driver.sendKeys("") to unexpectedly fail and throw an exception.

JavaScript

  • Addressed an issue with if (!await get(something)) (r223043)
  • Dropped instantiate hook in ES6 module loader (r223173)
  • Fixed object properties that are undefined in but not in (r223175)
  • Implemented polymorphic prototypes (r222827)
  • Implemented RegExp Unicode property escapes (r223081)
  • Introduced import.meta (r222895)

Accessibility

  • Exposed ARIA drag-and-drop attribute values via AtkObject attributes (r222787)
  • Exposed ARIA menu items with ATK_ROLE_MENU_ITEM even when it’s the child of a group role (r222822)
  • Fixed redundant layout on tables (r222790)
  • Fixed exposing aria-rowindex set on a row element (r222821)
  • Fixed exposing the value of aria-level on non-heading roles (r222765)

Media

  • Added basic support for getting an ImageBitmapRenderingContext (r222997)
  • Fixed slow WebGL compositing performance (r222961)
  • Fixed seek() command for encrypted content when the <video> element is not in the DOM at decode time (r222995)

Rendering

  • Fixed incorrect fullscreen animation when the element has a transform (r223051)
  • Fixed an issue where minimum font size may cause elements to have an infinite line-height (r222588)
  • Improved the progressive display of large images (r223091)

WebGL

  • Changed to allow async to be used as an imported binding name (r223124)
  • Changed the way WebGL is composited into the page significantly, providing much better performance on lower-end hardware with high-resolution displays (r222961)
  • Reduced the maximum samples used in Multi-Sample Anti-Aliasing (MSAA) for improved performance (r222963)

Web Inspector

  • Added a Canvas tab (r223011)
  • Added auto-completion for min() and max() within a CSS calc() (r223038)
  • Added support for keyboard navigation with Tab, Shift-Tab, Enter, and ESC in the redesigned styles sidebar (r222959)
  • Added support for editing rule selectors in the redesigned styles sidebar (r222799)
  • Added support for undo and redo of manual edits in the redesigned styles sidebar (r222678)
  • Added detail views for resources in Network tab (r222868)
  • Added a headers detail view for resources in the Network tab (r223006)
  • Added remote address in the headers detail view of the Network tab (r223078)
  • Added a cookies detail view in the Network tab (r223058)
  • Added support to search in the headers detail view of the Network tab (r223057)
  • Changed Layers tab sidebar DOM highlight to be by row hover, not row selection (r222801)
  • Changed the Network tab to filter resources based on URL and text content (r223065)
  • Changed the Network tab to show initially loaded resources even if network info was not logged (r223170)
  • Fixed jitter in timeline ruler labels (r223171)
  • Fixed an issue where clicking in the Web Inspector web view clears the selection in the inspected page (r223007)
  • Fixed Beacon and Ping grouping issues (r222865)
  • Fixed Layers tab sidebar popover (r222566)
  • Fixed a row wrapping issue causing waterfall graphs to display behind the next row’s name (r223059)
  • Fixed blurry quick open resource dialog icons (r222662)
  • Fixed misaligned popover when selecting child layers using the keyboard (r222759)
  • Fixed the table in the Network tab appearing blank when scrolling reduces the number of rows (r222899)
  • Enabled 3D objects to be selectable in the Layers visualization (r223209)
  • Ensured popovers are not malformed on window resize (r222742)
  • Escaped more characters in the command generated by “Copy as cURL” (r222762)
  • Improved Canvas recording events (r222888)
  • Improved setting the initial default sorting for tables (r222983)
  • Improved reliability of selection in a table in the Network tab (r222988)
  • Improved the quick open dialog to include source mapped files in the search results (r223164)
  • Included Beacon and Ping requests in Network tab (r222739)
  • Set initial column widths to allow the waterfall column to expand more by default in the Network tab (r223147)

Bug Fixes

  • Fixed an issue introduced in Safari Technology Preview 41 where the tab bar could get out of sync with which tab’s content is being displayed when opening links from another app

Planet MozillaThe Joy of Coding - Episode 117

The Joy of Coding - Episode 117 mconley livehacks on real Firefox bugs while thinking aloud.

Planet MozillaFirefox 57 Beta 8 Testday Results

Hello everyone,

As you may already know, last Friday – October 13th – we held a new Testday event, for Firefox 57 Beta 8.

Thank you all for helping us make Mozilla a better place: Surentharan R.A and Suren.
Thank you India community: Surentharan R.A and K. Bhuvana Meenakshi.
Thank you Bangladesh community: Huque Nayeem, Tanvir Rahman, Humayra Khanum, Saheda Reza Antora, Maruf Rahman, Md. Almas Hossain, Syed Nayeem Roman, Ratul Islam, Mizanur Rahman, Sontus Chandra Anik, Sajedul Islam, Sahara Samia Sam and Md. Rahimul Islam.

Results:
– several test cases executed for Activity Stream, Photon Structure and Photon Onboarding Tour Notifications & Tour Overlay 57 features;
– several bugs were verified: 1399963, 1396205, 1404286, 1395332 and 1404651

Thanks for another successful testday! 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Planet MozillaThe Future of Keyword Search

A lot of folks have been asking about the future of my Keyword Search extension with the switch to WebExtensions coming with Firefox 57. Unfortunately, because Keyword Search relies on the internals of browser search as well as access to privileged pages, it can’t be simply ported to a WebExtension.

What I’ve done instead is created a WebExtension that uses the omnibox API. In order to execute a keyword search, you simply type the letter “k” followed by a space and then the search. That search will be sent to the Keyword Search WebExtension.

Because I don’t have access to the list of search engines in the browser, I’ve just created a basic set based on the US version of Firefox. You can also set a custom search URL. If you’d like your favorite search engine added to Keyword Search, please let me know.

I’ve also decided to make this a completely different extension on AMO so that existing users are not migrated. You can get the new extension here.

Planet MozillaMozilla brings Microsoft, Google, the W3C, Samsung together to create cross-browser documentation on MDN

Today, Mozilla is announcing a plan that grows collaboration with Microsoft, Google, and other industry leaders on MDN Web Docs. The goal is to consolidate information about web development for multiple browsers – not just Firefox. To support this collaboration, we’re forming a Product Advisory Board that will formalize existing relationships and guide our progress in the years to come.

Why are we doing this? To make web development just a little easier.

“One common thread we hear from web developers is that documentation on how to build for the cross-browser web is too fragmented,” said Daniel Appelquist, Director of Developer Advocacy at Samsung Internet and Co-Chair of W3C’s Technical Architecture Group. “I’m excited to be part of the efforts being made with MDN Web Docs to address this issue and to bring better and more comprehensive documentation to developers worldwide.”

More than six million web developers and designers currently visit MDN Web Docs each month – and readership is growing at a spectacular rate of 40 percent, year over year. Popular content includes articles and tutorials on JavaScript, CSS and HTML, as well as detailed, comprehensive documentation of new technologies like Web APIs.

Community contributions are at the core of MDN’s success. Thousands of volunteers have helped build and refine MDN over the past 12 years. In this year alone, 8,021 users made 76,203 edits, greatly increasing the scope and quality of the content. Cross-browser documentation contributions include input from writers at Google and Microsoft; Microsoft writers have made more than 5,000 edits so far in 2017. This cross-browser collaboration adds valuable content on browser compatibility and new features of the web platform. Going forward, Microsoft writers will focus their Web API documentation efforts on MDN and will redirect relevant pages from Microsoft Developer Network to MDN.

A Broader Focus

Now, the new Product Advisory Board for MDN is creating a more formal way to absorb all that’s going on across browsers and standards groups. Initial board members include representatives from Microsoft, Google, Samsung, and the W3C, with additional members possible in the future. By strengthening our relationships with experts across the industry, the Product Advisory Board will ensure MDN documentation stays relevant, is browser-agnostic, and helps developers keep up with the most important aspects of the web platform.

“The reach of the web across devices and platforms is what makes it unique, and Microsoft is committed to helping it continue to thrive,” said Jason Weber, Partner Director of Program Management, Microsoft Edge. “We’re thrilled to team up with Mozilla, Google, and Samsung to create a single, great web standards documentation set on MDN for web developers everywhere.”

Mozilla’s vision for the MDN Product Advisory Board is to build collaboration that helps the MDN community, collectively, maintain MDN as the most comprehensive, complete, and trusted reference documenting the most important aspects of modern browsers and web standards.

The board’s charter is to provide advice and feedback on MDN content strategy, strategic direction, and platform/site features. Mozilla remains committed to MDN as an open source reference for web developers, and Mozilla’s team of technical writers will continue to work on MDN and collaborate with volunteers and corporate contributors.

“Google is committed to building a better web for both users and developers,” said Meggin Kearney, Lead Technical Writer, Web Developer Relations at Google. “We’re excited to work with Mozilla, Microsoft, and Samsung to help guide MDN towards becoming the best source of up-to-date, comprehensive documentation for developers on the web.”

MDN directly supports Mozilla’s overarching mission. We strive to ensure the Internet is a global public resource that is open and accessible to all. We believe that our award-winning documentation helps web developers build better web experiences – which also adhere to established standards and work across platforms and devices.

MDN Board Members

  • Ali Spivak, Chair, Mozilla
  • Daniel Appelquist, Samsung Internet
  • Dominique Hazael-Massieux, W3C
  • Meggin Kearney, Google
  • Patrick Kettner, Microsoft
  • Christopher Mills, Mozilla
  • Erika Doyle Navara, Microsoft
  • Robert Nyman, Google
  • Kadir Topal, Mozilla

The post Mozilla brings Microsoft, Google, the W3C, Samsung together to create cross-browser documentation on MDN appeared first on The Mozilla Blog.

Planet MozillaPrivacy as a Competitive Advantage with Gry Hasselbalch

Privacy as a Competitive Advantage with Gry Hasselbalch Today it's a competitive edge for companies to respect user privacy and their right to control their own data. The organizations who view data ethics...

Planet MozillaKRACK is wack on Power Macs

After WEP fell due to the 2001 Fluhrer-Mantin-Shamir attack, WPA2 became the standard way to secure a WiFi connection. Now, the mighty have fallen due to KRACK (Key Reinstallation AttACK), meaning no WiFi network is safe.

KRACK is particularly problematic because there are multiple varieties of the attack, and virtually every system tested was vulnerable to at least one of them.

The attacks concentrate primarily on the handshakes used to distribute keys, including the 4-way handshake used to bring up a new client ("supplicant"). This last point is particularly relevant because Mavericks and Sierra were both vulnerable to attacks on the 4-way handshake but iOS 10.3.1 is not.

We can confidently assume that 10.4 and 10.5 (and 10.6, for that matter) are vulnerable in the same or similar ways as at least 10.9.5 is (I'll dive into this in a moment), but the situation is really bad for Linux. wpa_supplicant 2.6 and prior are vulnerable to all of the variants, which affects current PPC Linux users and devices running Android 6.0+. These will almost certainly be patched eventually, even considering the shrinking support for 32-bit PowerPC in Linux. OpenBSD is also vulnerable, but patches emerged prior to the embargo, and its close relative NetBSD will likely be repaired in a similar fashion. Microsoft has quietly issued a Patch Tuesday update that covers KRACK. There are reports that the issue is already patched in current betas of macOS and iOS, but it's not clear yet if these patches will be backported to Sierra or El Capitan.

10.5 and earlier exclusively use the private framework Apple80211.framework for WiFi connectivity. Although the public wireless networking framework CoreWLAN was introduced with 10.6, the later private framework CoreWifi is not present and a comparison of symbols shows subsequent upgrades to Apple80211's functionality in Snow Leopard, so it is very likely in use invisibly there as well. Although this framework still exists in 10.12, it does not appear to be used or linked by CoreWLAN, implying it was since internally deprecated. Apple never documented this framework or made it open source, but there have been attempts to reverse engineer it. However, the necessary changes likely mean inserting more sanity checks during the key handshake, which would require a bit more than just patching the library in place. I've done a little preliminary disassembly of it but I haven't found where this critical section exists yet. However, there is a tool in this framework which will be very helpful to determine your actual risk; read on.

WPA2 has three major encryption protocols, only two of which are supported by PPC Mac OS X, namely TKIP (a legacy encryption protocol from WEP intended as an interim compatibility measure), and AES-CCMP, a more secure protocol which is supported in 10.3.3+ and is sometimes just abbreviated "AES" (incorrectly) or "CCMP." TKIP was deprecated in 2012, but is still often used. The last form is GCMP, which no Power Mac supports in OS X and is part of 802.11ac Gigabit WiFi. This turns out to be a blessing, because KRACK can actually recover the key from GCMP-based connections and forge packets in both directions. This is even worse than TKIP's exposure, despite being the older and historically more insecure means of encryption.

The router situation is probably worst of all. Many older WiFi access points will never receive firmware updates, and even if they do, just patching the router is insufficient; every connecting client must also be patched. Some information circulated earlier claimed that patching only the router is adequate to mitigate the risk, but the discoverer of the flaw is clear that both clients and the router must be updated to eliminate the risk completely.

Given it's not currently clear how we can patch OS X, then, what can you do with your Power Mac? Well, obviously, if you have the ability to hardwire your system, that would be preferable. All of my desktop Power Macs connect to a secured internal network over wired Ethernet that cannot directly route to the Internet.

If your connection to the router you control is still using TKIP (or any form of WPA or WEP), you should make sure it is WPA2 AES-CCMP. Log into your router and look at your security settings and change them if necessary; while you're at it, also see if your manufacturer has a firmware update for your router. While AES-CCMP is still vulnerable to some of these attacks and traffic secured by it can be decrypted, the actual key cannot be forged, so an attacker cannot actually join your network and attack it in place; they would have to clone your WiFi router's MAC address to a new access point with the same name on a different channel that's in range. This might be a risk in a hotel or apartment building but probably not in your house unless your neighbour is naughty and needs a baseball bat education. (If you have an Apple AirPort base station, this old TidBITS article can help you with the steps.) You can confirm your setup by opening a Terminal and entering these commands ([...] means not important for this usage):

% cd /System/Library/PrivateFrameworks/Apple80211.framework/Resources/
% ./airport -s
1 Infrastructure networks found:
SSID Security Ch [...] BSSID
yournetwork WPA2 PSK 6 [...] 20:4e:7f:ff:fe:fd [...](AES2),[...]
% ./airport -I
SSID: yournetwork
Security: WPA2 PSK cipher: AES2

These steps use a command line utility called airport that comes with Apple80211.framework; you can see other commands with ./airport --help. The first airport command scans for access points. In place of the SSID yournetwork, you should see the name you assigned your router in its settings; its channel may or may not be six, its BSSID will almost certainly differ from this example, and you may see any number of other access points your PowerBook is in range of. What you should not see under ordinary circumstances are multiple copies of your network SSID with the same BSSID on multiple channels. If you do, something might be wrong!

The second airport command tells you what access point you are currently associated with. Verify the SSID matches the one you expect and that the security is WPA2 and AES2 (notice this appeared in the first command, too). Periodically recheck these commands as you get suspicious-looking new neighbours or black vans and helicopters show up on your block. Consider replacing your router if there is no update; this won't help your Power Mac, but it would potentially help other connecting devices that were themselves updated.
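Because the scan output is line-oriented, the "multiple copies of your network SSID" check lends itself to a small script. The following is a hypothetical sketch, not a TenFourFox tool: it assumes whitespace-separated columns with the SSID first, the channel second-to-last, and the BSSID last, which does not match real airport output exactly, so the field indexes would need adjusting.

```python
# Hypothetical rogue-twin check over `airport -s`-style scan output.
# Column layout is an assumption: SSID first, channel second-to-last,
# BSSID last. Adjust the indexes for your actual airport output.
from collections import defaultdict

def find_suspect_ssids(scan_output):
    """Return SSIDs seen with more than one (BSSID, channel) pair."""
    seen = defaultdict(set)
    for line in scan_output.strip().splitlines()[1:]:  # skip header row
        fields = line.split()
        if len(fields) < 4:
            continue
        ssid, channel, bssid = fields[0], fields[-2], fields[-1]
        seen[ssid].add((bssid, channel))
    return {ssid for ssid, aps in seen.items() if len(aps) > 1}

sample = """SSID Security Ch BSSID
yournetwork WPA2 6 20:4e:7f:ff:fe:fd
yournetwork WPA2 11 02:13:37:aa:bb:cc
neighbour WPA2 1 00:11:22:33:44:55
"""
print(sorted(find_suspect_ssids(sample)))  # ['yournetwork']
```

One SSID on one BSSID and channel is normal; the same SSID broadcast from two BSSIDs or channels is the pattern described above, though a legitimate multi-access-point network will also trip this check.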

If you are connecting to a router you can't control, like a public access point in your coffee shop or hotel, you should treat any WiFi connection you make to it as if it were open and unencrypted and that an attacker can see and forge any traffic you generate. Though the commands above can give you an idea of your instantaneous risk, even if AES-CCMP is in use a wily attacker may choose to deploy their malicious access point intermittently or when you're not checking, so your best defense is to encrypt what you send and receive. Only use https:// URLs and prefer sites that use HTTP Strict Transport Security and HTTP Public Key Pinning, both of which TenFourFox supports, so that an initial HTTP-to-HTTPS redirect is less likely to be intercepted and stripped and it is much harder for an attacker to impersonate a HPKP-secured site. There are still some sophisticated ways to get around even these added precautions, however, so if you need to do something highly secure like banking or taxes I'd strongly advise going home and plugging into your router's Ethernet ports directly. Even a VPN might not be enough.

Meanwhile, I guess I'll be rewriting that Power Mac security rollup post again. Assuming the current state of AES-CCMP holds, though, there may be a way to design a tool to programmatically/automatically detect a forged connection even if the underlying vulnerability cannot be corrected. I have a few ideas about that. More later.

Planet MozillaTalos tests- summary of recent changes

I have done a poor job of communicating status on our performance tooling; this is something I am trying to rectify this quarter. Over the last 6 months many new Talos tests have come online, along with some differences in scheduling or measurement.

In this post I will highlight many of the test related changes and leave other changes for a future post.

Here is a list of new tests that we run:

* cpstartup – (content process startup: thanks :gabor)
* sessionrestore many windows – (instead of one window and many tabs, thanks :beekill)
* perf-reftest[-singletons] – (thanks bholley, :heycam)
* speedometer – (thanks :jmaher)
* tp6 (amazon, facebook, google, youtube) – (thanks :rwood, :armenzg)

These are also new tests, but slight variations on existing tests:

* tp5o + webextension, ts_paint + webextension (test web extension perf, thanks :kmag)
* tp6 + heavy profile, ts_paint + heavy profile (thanks :rwood, :tarek)

The following tests have been updated to be more relevant or reliable:

* damp (many subtests added, more upcoming, thanks :ochameau)
* tps – update measurements (thanks :mconley)
* tabpaint – update measurements (thanks :mconley)
* we run all talos tests on coverage builds (thanks :gmierz)

It is probably known to most, but earlier this year we stood up testing on Windows 10 and turned off our Talos coverage on Windows 8 (big thanks to Q for making this happen so fast)

Some changes that might not be so obvious, but worth mentioning:

* Added support for Time to first non-blank paint (only tp6)
* Investigated mozAfterPaint on non-empty rects; updated a few tests to measure properly
* Added support for comparing perf measurements between tests (perf-reftests) so we can compare rendering time of A vs B; in this case stylo vs non-stylo
* tp6 requires mitmproxy for record/replay; this allows us to have https and multi-host DNS resolution, which is much more real world than serving pages from http://localhost.
* Added support to wait for idle callback before testing the next page.

Stay tuned for updates on Sheriffing, non Talos tests, and upcoming plans.

Planet MozillaTechWomen 2017 Emerging Leader Presentations

TechWomen 2017 Emerging Leader Presentations As part of the TechWomen program, an Initiative of the U.S. Department of State's Bureau of Educational and Cultural Affairs, Mozilla has had the fortunate...

Planet MozillaAn Introduction to CSS Grid Layout: Part 1

This is the first post in a two-part series for getting started with CSS Grid Layout. If you are interested in learning more about CSS Grid and the new CSS Grid Layout feature in Firefox, visit the Firefox DevTools Playground.

CSS Grid Layout is completely changing the game for web design. It allows us to create complex layouts on the web using simple CSS.

“But wait! I can already create layouts with floats/hacks/tables/frameworks.”

This is true, but CSS Grid Layout is a two-dimensional grid system that is native to CSS. It is a web standard, just like HTML, and it works in all modern browsers. With CSS Grid Layout you can create precise layouts for the web. You can build orderly columns and rows, or artful overlapping content areas to create stunning new designs.

Ready? Let’s get started.

Before we dive into CSS Grid concepts, let’s cover some basic terminology.


Grid lines
The vertical and horizontal lines that divide the grid and separate the columns and rows.

Grid cell
A single unit of a CSS grid.

Grid area
A rectangular space surrounded by four grid lines. A grid area can contain any number of grid cells.

Grid track
The space between two grid lines. This space can be horizontal or vertical.

Grid row
A horizontal track of a grid.

Grid column
A vertical track of a grid.

Note: Rows and columns are switched if you are using a vertical writing mode.

Gutter
The space between rows and columns in a grid.

Grid container
The container that holds the entire CSS grid. It will be the element that has the display: grid or display: inline-grid property on it.

Grid item
Any element that is the direct child of a grid container.

…Got it? Let’s move on now to creating our first grid with CSS Grid Layout.

Create a grid

The first thing we want to do is create a grid container. We can do this by declaring display: grid on the container element. In this example we are using a div with the class of container.

Define rows and columns

There are several ways to define rows and columns. For our first grid, we will use the grid-template-columns and grid-template-rows properties. These properties allow us to define the size of the rows and columns for our grid. To create a grid where the first two rows have a fixed height of 150px and the first three columns have a fixed width of 150px, simply write:

grid-template-columns: 150px 150px 150px;
grid-template-rows: 150px 150px;

To set the fourth column as 70px wide, write:

grid-template-columns: 150px 150px 150px 70px;

…and so on to add more columns.

Note: In the above example, we defined an explicit grid of 3×2. If we place something outside of that defined grid, then CSS Grid Layout will create those rows and columns in the implicit grid. Implicit grids aren’t covered in this tutorial, but check out this article on MDN to learn more about implicit and explicit grids.

Add a gutter

Adding a gutter to your grid is amazingly easy with CSS Grid Layout. Simply add:

grid-gap: 1rem;

That simple line of code gives you an equal-sized gutter between all rows and columns. To define the gutter size for columns and rows individually, you can use the grid-column-gap and grid-row-gap properties instead.

Now let’s put that all together. Here is our HTML:

<div class="container">
  <div class="item"></div>
  <div class="item"></div>
  <div class="item"></div>
  <div class="item"></div>
  <div class="item"></div>
  <div class="item"></div>
</div>

With just a few lines of CSS, we can create a simple grid:

.container {
  display: grid;
  grid-template-columns: 150px 150px 150px;
  grid-template-rows: 150px 150px;
  grid-gap: 1rem;
}

Here is the result:

See the Pen CSS Grid Layout – Basic Grid by Mozilla Developers (@mozilladevelopers) on CodePen.

The fr Unit

Creating grids with a fixed px width is great, but it isn’t very flexible. Thankfully, CSS Grid Layout introduces a new unit of length called fr, which is short for fraction. MDN defines the fr unit as a unit which represents a fraction of the available space in the grid container. If we want to rewrite our previous grid to have three equal-width columns, we could change our CSS to use the fr unit:

.container {
  display: grid;
  width: 800px;
  grid-template-columns: 1fr 1fr 1fr;
  grid-template-rows: 150px 150px;
  grid-gap: 1rem;
}

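As a sanity check on the fr math, here is a quick back-of-the-envelope calculation for the example above. It assumes 1rem resolves to 16px (the common browser default; the article does not specify a root font size).

```python
# How the three 1fr columns of the 800px example resolve,
# assuming 1rem == 16px for the grid-gap.
container = 800                               # width of .container, in px
gap = 16                                      # grid-gap: 1rem
columns = 3
available = container - gap * (columns - 1)   # gaps sit only between tracks
per_fr = available / columns
print(per_fr)                                 # px per 1fr column
```

So each 1fr column gets (800 − 2×16) / 3 = 256px of the container.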
The repeat() notation

Handy tip: If you find yourself repeating length units, use the repeat() notation. Rewrite the above code like so:

.container {
  display: grid;
  width: 800px;
  grid-template-columns: repeat(3, 1fr);
  grid-template-rows: repeat(2, 150px);
  grid-gap: 1rem;
}

Here is the result:

See the Pen CSS Grid Layout – Fractional Unit by Mozilla Developers (@mozilladevelopers) on CodePen.

When declaring track sizes, you can use fixed sizes with units such as px and em. You can also use flexible sizes such as percentages or the fr unit. The real magic of CSS Grid Layout, however, is the ability to mix these units. The best way to understand is with an example:

.container {
  width: 100%;
  display: grid;
  grid-template-columns: 100px 30% 1fr;
  grid-template-rows: 200px 100px;
  grid-gap: 1rem;
}

Here, we have declared a grid with three columns. The first column is a fixed width of 100px. The second column will occupy 30% of the available space, and the third column is 1fr which means it will take up a fraction of the available space. In this case, it will take up all of the remaining space (1/1).
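To make the "remaining space" reasoning concrete, here is a simplified sketch of how those three tracks divide a container. The 1000px width and the 1rem = 16px gap are hypothetical values chosen for illustration, and real CSS track sizing has more rules (minimum sizes, auto tracks, content-based sizing) than this model covers.

```python
# Simplified model of resolving 'grid-template-columns: 100px 30% 1fr'
# for a hypothetical 1000px-wide container with 1rem (16px) gaps.
def resolve_tracks(container, tracks, gap):
    """tracks: list of (unit, value) pairs with unit in {'px', '%', 'fr'}."""
    gaps_total = gap * (len(tracks) - 1)
    fixed = sum(v for unit, v in tracks if unit == 'px')
    fixed += sum(container * v / 100 for unit, v in tracks if unit == '%')
    free = container - gaps_total - fixed     # space left over for fr tracks
    total_fr = sum(v for unit, v in tracks if unit == 'fr')
    sizes = []
    for unit, v in tracks:
        if unit == 'px':
            sizes.append(v)
        elif unit == '%':
            sizes.append(container * v / 100)
        else:
            sizes.append(free * v / total_fr)
    return sizes

print(resolve_tracks(1000, [('px', 100), ('%', 30), ('fr', 1)], 16))
# [100, 300.0, 568.0]
```

The single 1fr track soaks up everything left after the fixed column, the percentage column, and the gaps: 1000 − 32 − 100 − 300 = 568px.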

Here is our HTML:

<div class="container">
  <div class="item"></div>
  <div class="item"></div>
  <div class="item"></div>
  <div class="item"></div>
  <div class="item"></div>
  <div class="item"></div>
</div>

Here is the result:

See the Pen CSS Grid – Mixing Units by Mozilla Developers (@mozilladevelopers) on CodePen.

You can visit our playground for the full guide on how to get started with CSS Grid Layout, or check out Part 2 before you go. If you are ready to dive deeper into CSS Grid Layout, check out the excellent guides on MDN.

Planet MozillaAn Introduction to CSS Grid Layout: Part 2

This is the second post in a two-part article for getting started with CSS Grid Layout. If you are interested in learning more about CSS Grid and the new CSS Grid Layout feature in Firefox DevTools, visit the Firefox DevTools Playground.

Understanding grid lines

If you’ve read Part 1, you should now be comfortable creating a grid and defining the row and column sizes. We can now move on to placing items on a grid. There are several ways to place items, but we will start with a basic example. Consider a grid with six items:

Each item within this grid will be placed automatically in the default order.

If we wish to have greater control, we can position items on the grid using grid line numbers. Grid lines are numbered left to right and top to bottom (if you are working in a right-to-left language, then grid lines are numbered right to left). The above example would be numbered like so:

Position an item

Here is the HTML we will be using for this example:

<div class="container">
  <div class="item item1">1</div>
  <div class="item item2">2</div>
  <div class="item item3">3</div>
  <div class="item item4">4</div>
  <div class="item item5">5</div>
  <div class="item item6">6</div>
</div>

Say we want to position our first grid item (with a class of item1) to be in the second row and occupy the second column. This item will need to start at the second row line, and span to the third row line. It will also need to start at the second column line and span to the third column line. We could write our CSS like so:

.item1 {
  grid-row-start: 2;
  grid-row-end: 3;
  grid-column-start: 2;
  grid-column-end: 3;
}

Shorthand property

We can also rewrite this with shorthand properties:

.item1 {
  grid-row: 2 / 3;
  grid-column: 2 / 3;
}

Here is the result:

See the Pen CSS Grid Layout – Position Items by Mozilla Developers (@mozilladevelopers) on CodePen.

Creating a Basic Layout

Now that we have a basic understanding of how to position items, we can create a basic layout. Let’s create the same layout using three different methods.

Method 1: Position Items

For our first layout method, we won’t be introducing any new concepts. We’ll simply be using the grid-row and grid-column shorthand properties to manually place items such as a header, footer, and so on.

Here is the HTML:

<div class="container">
  <div class="header">header</div>
  <div class="sidebar">sidebar</div>
  <div class="content-1">Content-1</div>
  <div class="content-2">Content-2</div>
  <div class="content-3">Content-3</div>
  <div class="footer">footer</div>
</div>

Here is the CSS:

.container {
  display: grid;
  width: 750px;
  height: 600px;
  grid-template-columns: 200px 1fr 1fr;
  grid-template-rows: 80px 1fr 1fr 100px;
  grid-gap: 1rem;
}

.header {
  grid-row: 1 / 2;
  grid-column: 1 / 4;
}

.sidebar {
  grid-row: 2 / 4;
  grid-column: 1 / 2;
}

.content-1 {
  grid-row: 2 / 3;
  grid-column: 2 / 4;
}

.content-2 {
  grid-row: 3 / 4;
  grid-column: 2 / 3;
}

.content-3 {
  grid-row: 3 / 4;
  grid-column: 3 / 4;
}

.footer {
  grid-row: 4 / 5;
  grid-column: 1 / 4;
}

Here is the result:

See the Pen CSS Grid Layout – Basic Layout by Mozilla Developers (@mozilladevelopers) on CodePen.

Quick tip: If you are using Firefox Quantum, you can try out the ‘display line numbers’ setting on the Firefox CSS Grid Layout Panel. Inspect the result above and select the layout panel. Here you can activate the overlay grid and check the box to ‘display line numbers’. Handy, right? This tool makes it very easy to visualize your grid when positioning items. You’ll need to use Firefox Quantum to access this feature. Don’t have Quantum yet? Download Firefox Quantum Developer Edition here.

Method 2: Template Areas

Another method for positioning items is to use named grid areas with the grid-template-areas and grid-area properties. The best way to explain this is with an example. Let’s recreate the grid from our previous example with the grid-template-areas property:

.container {
  display: grid;
  width: 100%;
  height: 600px;
  grid-template-columns: 200px 1fr 1fr;
  grid-template-rows: 80px 1fr 1fr 100px;
  grid-gap: 1rem;
  grid-template-areas:
    "header header header"
    "sidebar content-1 content-1"
    "sidebar content-2 content-3"
    "footer footer footer";
}
Here we have defined three columns and four rows. Instead of placing each individual item, we can define the entire layout using the grid-template-areas property. We can then assign those areas to each grid item using the grid-area property.


Here is the HTML:

<div class="container">
  <div class="header">header</div>
  <div class="sidebar">sidebar</div>
  <div class="content-1">Content-1</div>
  <div class="content-2">Content-2</div>
  <div class="content-3">Content-3</div>
  <div class="footer">footer</div>
</div>

The rest of our CSS:

.header {
  grid-area: header;
}

.sidebar {
  grid-area: sidebar;
}

.content-1 {
  grid-area: content-1;
}

.content-2 {
  grid-area: content-2;
}

.content-3 {
  grid-area: content-3;
}

.footer {
  grid-area: footer;
}

Here is the result:

See the Pen CSS Grid Layout – Template Areas by Mozilla Developers (@mozilladevelopers) on CodePen.
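One detail worth knowing, per the CSS Grid specification (the demo above doesn’t rely on it): each named area also generates implicit named lines, such as header-start and header-end, in both axes. So a hypothetical extra item could still be positioned against the lines an area defines:

```css
/* Hypothetical extra item, placed using the implicit lines created by
   the named area "header" (header-start / header-end in both axes). */
.banner {
  grid-column: header-start / header-end;
  grid-row: header-start / header-end;
}
```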

Quick Tip: Did you know that Firefox DevTools can display the area names? Try it out! Inspect the grid above and open the layout panel. From here you can toggle the overlay grid and the ‘Display Area Names’ feature. You’ll need to use Firefox Quantum to access this feature. Don’t have Quantum yet? Download Firefox Quantum Developer Edition here.

Method 3: Named Lines

So far, we have placed items on the grid by providing the grid-column and grid-row properties with specific grid lines. We can also name some or all of those grid lines when defining a grid. This allows us to use those names instead of line numbers.

To name a grid line, we simply provide the name in square brackets:

.container {
  display: grid;
  width: 100%;
  height: 600px;
  grid-gap: 1rem;
  grid-template-columns:
    [main-start sidebar-start] 200px
    [sidebar-end content-start] 1fr
    [column3-start] 1fr
    [content-end main-end];
  grid-template-rows:
    [row1-start] 80px
    [row2-start] 1fr
    [row3-start] 1fr
    [row4-start] 100px
    [row4-end];
}

Now that we have line names, we can use those names when placing items. Let’s recreate our basic layout using named lines, instead of line numbers:

.header {
  grid-column: main-start / main-end;
  grid-row: row1-start / row2-start;
}

.sidebar {
  grid-column: sidebar-start / sidebar-end;
  grid-row: row2-start / row4-start;
}

.content-1 {
  grid-column: content-start / content-end;
  grid-row: row2-start / row3-start;
}

.content-2 {
  grid-column: content-start / column3-start;
  grid-row: row3-start / row4-start;
}

.content-3 {
  grid-column: column3-start / content-end;
  grid-row: row3-start / row4-start;
}

.footer {
  grid-column: main-start / main-end;
  grid-row: row4-start / row4-end;
}

Here is our HTML:

<div class="container">
  <div class="header">header</div>
  <div class="sidebar">sidebar</div>
  <div class="content-1">Content-1</div>
  <div class="content-2">Content-2</div>
  <div class="content-3">Content-3</div>
  <div class="footer">footer</div>
</div>

Here is the result:

See the Pen CSS Grid Layout – Named Lines by Mozilla Developers (@mozilladevelopers) on CodePen.

Quick Tip: Did you know you can customize the color of the grid overlay in Firefox DevTools? The above example is on a white background, and the default purple may not be the best color to use. When selecting an overlay grid to display, you will see a circle next to the grid name that indicates the color of the overlay. Click on that circle, and you can customize the color to whatever you’d like. Try a different color, such as red. You’ll need to use Firefox Quantum to access this feature. Don’t have Quantum yet? Download Firefox Quantum Developer Edition here.

That’s a wrap on getting started with CSS Grid Layout. CSS Grid Layout is completely changing the game for web design, allowing us to create complex layouts on the web using simple CSS. You can visit our playground for the full guide on how to get started with CSS Grid Layout, and if you are ready to dive deeper, check out the excellent guides on MDN.

Planet MozillaJoin the Featured Add-ons Advisory Board

Do you love add-ons? Have a keen appreciation for great functionality? Interested in making a huge impact on AMO? If so, consider applying to join our Featured Add-ons Community Board!

The board is comprised of a small group of add-ons enthusiasts from the community. During a six-month term, board members help nominate and select new featured extensions for AMO each month. Your participation will make a big difference to millions of users who look to AMO’s featured content to help them find great content!

As the current board wraps up their tour of duty, we are looking to assemble a new board for the months January – June.

Anyone from the add-ons community is welcome to apply: power users, developers, and advocates. Priority will be given to applicants who have not served on the board before, followed by those from previous boards, and finally from the outgoing board. This page provides more information on the duties of a board member.

To be considered, please send us an email at amo-featured [at] mozilla [dot] org with your name and a few sentences about how you’re involved with AMO and why you are interested in joining the board. The deadline is Monday, October 30, 2017 at 23:59 PDT. The new board will be announced shortly thereafter.

The post Join the Featured Add-ons Advisory Board appeared first on Mozilla Add-ons Blog.

Planet MozillaSaying thanks to teammates halfway around the world


Photo by Hanny Naibaho

One of the things I struggle with as a member of a distributed team is that a lot of the feedback on my work arrives in text form.  For instance, code reviews generally state what was good and what could be improved, specific to the code you submitted in a patch. There is often a lot of emotion associated with that patch, because you have spent a lot of time understanding how the existing code base works and the problem definition, iterating on patches for the best solution, and then implementing tests to ensure your code works.  So if the code review that your colleague gives you concentrates only on the negative, it can often be difficult to process.  Text often lacks nuance and emotion.  This is even more difficult if the code reviewer doesn’t work in a timezone that overlaps with your working hours, because it’s difficult to discuss the review in person.

On a recent bug, I noticed a simple statement that my colleague Joel made that explicitly states positive intent on code reviews: “Looking forward to this landing”.  With all the back and forth on code reviews, this statement is a way to center that you are happy that this problem will be solved soon.  In the case of this bug, it was one I had shifted to another coworker because I had too much on my plate with the 56 release, so I also tried to convey enthusiasm and gratitude in the comments for Alin’s work.  Thanks Alin!

Screen Shot 2017-10-16 at 5.23.05 PM

The other thing that I would mention about code reviews is that if you have a lot of changes to discuss with the person, or as a reviewer you feel that they should take a different approach, the best path forward is probably to discuss it face to face in a video call.  Again, you can convey thanks for their work, but it will probably save time if you communicate in a manner that allows any misunderstandings to be cleared up immediately, versus going back and forth in text.

I recently watched a talk by Mathias Meyer, who is the CEO of Travis CI from the Lead Developer conference. (Side note: All the talks from the Lead Dev conference are fantastic and worth watching)

It’s an excellent talk about how the culture of Travis CI has evolved over the years to be more remote friendly, sensitive to the timezones people work in, and incorporate continuous learning. Around the 19 minute mark, he talks about how the entire team has an online all hands every month, where they have shout outs: a person can thank an individual or an entire team for their work, celebrate achievements and discuss what they plan to ship over the next couple of months.  This is a great idea!  I really like the idea of thanking people on a regular basis.

I recently read a post by Cate Huston, Automattic’s mobile lead, about showing appreciation for her distributed team.  She asks her engineers to write something they are happy about having accomplished in the last month, and something that one of their teammates did that they really appreciate. She then summarizes the list in a post for the team.  My idea is that this could probably be extended to a weekly meeting: a shoutout to a team member for their work, and a mention of where you are looking for help.

On our team, we tend to thank people in irc/slack whenever they do something awesome. This was after an all hands on deck day of various problems crossing multiple timezones as well as AWS S3 experiencing a downtime.

Screen Shot 2017-09-15 at 2.08.25 PM

There are other ways to say thanks, like a spot award but the thing that makes the most impact is very simple.  An email to the person you are thanking, cc’ing their manager, which describes the work they have done, the impact it had, and why you are so happy.  Managers love to hear about the good work that their employees are doing.  If the manager has several emails about the great work this person is doing from team members or other teams, this can be very helpful for them at performance review time.

One note regarding giving thanks is that some people feel very uncomfortable receiving public feedback.  If you are a manager, Lara Hogan, VP of Engineering at Kickstarter, has a great post about conducting your first 1:1 with a new employee that includes explicitly asking how they prefer to receive feedback.

How do you express appreciation for the work of your team members on distributed teams?

Planet MozillaA Week-Long Festival for Internet Health

MozFest is convening technologists, activists and artists this October to tackle the biggest problems facing the web


The Internet is sick.

From ransomware and trolls to misinformation and mass surveillance, the very health of the Internet is at risk.

Says Mark Surman, Mozilla’s Executive Director: “The Internet is layered into our lives like we never could have imagined. Access is no longer a luxury — it’s a fundamental part of 21st century life. A virus is no longer a nuisance consigned to a single terminal — it’s an existential threat that can disrupt hospitals, governments and entire cities.”

But much of the Internet’s best nature is flourishing, too. Each day, new communities form despite members being separated by whole continents. Start-ups and artists have access to a global stage. And open-source projects put innovation and inclusion ahead of profit.

In an effort to heal the bad and uplift the good, Mozilla is reimagining MozFest, our annual London-based festival. We will address these issues head on.

This October, our eighth-annual festival will draw nearly 2,000 technologists, hackers and activists from around the world to experience:

A week-long festival

Three days isn’t enough time to heal the Internet. So for the first time ever, we’re making MozFest a week-long festival. Monday, October 23 through Friday, October 27 is “MozFest House” — workshops, talks and film screenings at the Royal Society of Arts (RSA) in London. Programming will include MisinfoCon London, an event exploring solutions to misinformation online; Detox & Defend for Women, an online privacy workshop; and much more. The week culminates with the traditional MozFest weekend at Ravensbourne College from October 27 to October 29.

An interactive exhibit at MozFest 2016

19 big-name keynote speakers

hailing from nine countries. Speakers include Audrey Tang (Digital Minister, Taiwan), Gisela Perez de Acha (journalist and lawyer, Derechos Digitales) and Alan Knott-Craig (founder of Project Isizwe). Speakers will discuss hacking, botnets, digital rights and misinformation. Meet all 19 speakers

320 hands-on sessions

On Saturday, October 28 and Sunday, October 29, sessions will be led by international experts and spread across five tracks: Privacy and Security; Digital Inclusion; Decentralization; Web Literacy; and Openness. Here’s a peek at just five of them:

The Glass Room, a sister event  hosted by Mozilla and Tactical Tech

The Glass Room is a London pop-up shop that’s like a ‘Black Mirror’ episode come to life. A faux technology store open for three weeks in central London, the Glass Room doesn’t sell products — it features interactive art about the role of technology in society. 69-71 Charing Cross Road, London WC2. Open October 25 to November 12 between 12 p.m. and 8 p.m.

A Glass Room exhibit in New York City in 2016

To learn more about MozFest, visit

The post A Week-Long Festival for Internet Health appeared first on The Mozilla Blog.

Planet WebKitEnrique Ocaña: Attending the GStreamer Conference 2017

This weekend I’ll be in Node5 (Prague) presenting our Media Source Extensions platform implementation work in WebKit using GStreamer.

The Media Source Extensions HTML5 specification allows JavaScript to generate media streams for playback and lets the web page have more control on complex use cases such as adaptive streaming.

My plan for the talk is to start with a brief introduction about the motivation and basic usage of MSE. Next I’ll show a design overview of the WebKit implementation of the spec. Then we’ll go through the iterative evolution of the GStreamer platform-specific parts, as well as its implementation quirks and challenges faced during the development. The talk continues with a demo, some clues about the future work and a final round of questions.

Our recent MSE work has been on desktop WebKitGTK+ (the WebKit version powering Epiphany, aka GNOME Web), but we also have MSE working on WPE and optimized for a Raspberry Pi 2. We will be showing it in the Igalia booth, in case you want to see it working live.

I’ll also be attending the GStreamer Hackfest the days before. There I plan to work on WebM support in MSE, focusing on any issues in the Matroska demuxer or the VP9/Opus/Vorbis decoders breaking our use cases.

See you there!

UPDATE 2017-10-22:

The talk slides are available at and the video is available at (the rest of the talks here).

Planet MozillaHelp wanted with HTML user interface for es7-membrane

I’ve continued to work on es7-membrane in my spare time, to the point where I released version 0.8 (“first beta”) without announcing it on my blog… oops.  (I also forgot to tag the 0.8.1 release on GitHub.)  For those who don’t know what it is about, just read the first few paragraphs of the 0.7 release announcement.

I also have a low-traffic Google Groups mailing list for release announcements and general support.

I’m looking for unpaid-intern-level help.  Not in the membrane implementation itself, but in crafting the es7-membrane distortions user interface.  (“Distortions” is a relatively new term in the Membranes lexicon:  it means altering proxies so that they don’t exactly match the original value, such as a whitelist for hiding properties.)   The distortions user interface is a subproject for configuring a Membrane instance, and for persisting that configuration for future edits… all with a static GitHub website.
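To give a sense of what a “distortion” means in practice, here is a much-simplified sketch of a property whitelist built with a bare Proxy. To be clear, this is not es7-membrane’s actual API — the `whitelist` helper and its shape are invented for illustration; only the underlying idea (a proxy exposing an approved subset of its target) matches the project:

```javascript
// Simplified illustration of a whitelist distortion: the proxy only
// exposes an approved subset of the target's properties.
// NOTE: this is NOT the es7-membrane API, just the underlying concept.
function whitelist(target, allowed) {
  const ok = new Set(allowed);
  return new Proxy(target, {
    get(t, key, receiver) {
      return ok.has(key) ? Reflect.get(t, key, receiver) : undefined;
    },
    has(t, key) {
      return ok.has(key) && Reflect.has(t, key);
    },
    ownKeys(t) {
      return Reflect.ownKeys(t).filter((k) => ok.has(k));
    },
    getOwnPropertyDescriptor(t, key) {
      return ok.has(key) ? Reflect.getOwnPropertyDescriptor(t, key) : undefined;
    },
  });
}

const secretive = whitelist({ name: 'doc', apiKey: 'hunter2' }, ['name']);
console.log(secretive.name);         // "doc"
console.log(secretive.apiKey);       // undefined -- hidden by the whitelist
console.log(Object.keys(secretive)); // ["name"]
```

A real membrane does far more (it keeps object identity consistent across the boundary in both directions), which is exactly why the project needs a UI for configuring and persisting these distortions.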

This means JavaScript, HTML, CSS, SVG, modern Web API’s (FileReader, Blob, CSS grids, etc.), build configuration, continuous integration, and more JavaScript (Jasmine, CodeMirror).  It means in particular almost no HTTP server code (so no Python, PHP, etc.)

It does not mean including a library like jQuery.  Call me biased if you want, but I think libraries like jQuery or YUI are unnecessary with recent advances in web browser technologies, and even less so in the future with Web Components evolving.  These libraries were written for older Web API’s, and have to support thousands of websites… I don’t mind reinventing the wheel a little bit, as long as it’s tightly written code.

I’m looking for help because while I could do all of this on my own, I have time constraints in the form of a full-time job and university classes.  On the other hand, I am an experienced Mozilla developer and JavaScript expert, so I can definitely mentor people… and this is cutting-edge JavaScript we’re dealing with here.  Already, I have two interested customers for this open-source project (besides myself, of course), and one fellow student who took over a small widget (a “multistate” HTML button).

What I’m looking for are people who don’t have a lot of experience, but do have the time, an open mind and the willingness to do some of the grunt work in exchange for mentorship and letters of recommendation and/or equivalent written credit good for a résumé.  I just recently added a few “good-first-bug” labels to the 0.9 milestone list of tickets.

If this fits your bill, please reach out through the Google Groups link above, or through my GitHub user page… and thank you.

Planet MozillaChecking Your Passwords Against the Have I Been Pwned List

Two months ago, Troy Hunt, the security professional behind Have I been pwned?, released an incredibly comprehensive password list in the hope that it would allow web developers to steer their users away from passwords that have been compromised in past breaches.

While the list released by HIBP is hashed, the plaintext passwords are out there and one should assume that password crackers have access to them. So if you use a password on that list, you can be fairly confident that it's very easy to guess or crack your password.

I wanted to check my active passwords against that list to check whether or not any of them are compromised and should be changed immediately. This meant that I needed to download the list and do these lookups locally since it's not a good idea to send your current passwords to this third-party service.

I put my tool up on Launchpad / PyPI and you are more than welcome to give it a go. Install Postgres and Psycopg2 and then follow the README instructions to setup your database.

Planet MozillaFastClick.js (more like Thing-of-the-Past-Click.js)

<select> elements have always been kind of awkard, IMO.

web-bug 12553 describes an issue where a pretty famous 3rd party library, FastClick.js turns <select>-level awkward to middle-school-dance-party-level awkward.

(In that the <select> doesn't function at all if you're on Android, unless you're using Chrome Mobile (depending on what FastClick version you're running). Just like middle school—trust me this analogy makes total sense, 7th grade was really hard for me and I'm still working through it, OK.)

The issue here is the result of a web standards interoperability failure (more on that in a second, there's a happy ending I swear) and the inability to predict the future of the web platform and browsers by FastClick (more on that now).

So if you don't know much about FastClick, or why it was so popular, put on 2012's "Now That's What I Call Music! Volume 44" to set the mood and read their GitHub page.

Cover of Now Thats What I Call Music Volume 44 album

Back in 2012, when tapping on things in mobile browsers was slow (because browsers always had a 300ms delay between a touchend event and a click event), it was cool to use FastClick to make tapping on things fast. Noted soda water critic Jake Archibald has a good article on this and how to get rid of it these days (tl;dr make your site's viewport mobile friendly).

(The article is a bit dated given that Firefox didn't support touch-action: manipulation when it was authored, but it does now, so consider that another option.)

OK, so. Anyways. The way this library works is to dispatch its own synthetic click event after a touchend event, without the 300ms delay. This is all great and fine for links. Things get trickier for inputs like <select>, because there's never really been a web standards way to programmatically open them via JS.

Per DOM level 3(000), untrusted events (think event.isTrusted == false...except for click) shouldn't trigger the default action. And the default action for <select> is to open the widget thingy and display the 400 countries you don't live in.
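The untrusted-event distinction is easy to observe: any event constructed from script has isTrusted set to false. A quick illustration (Event and EventTarget are available in browsers and in Node.js 15+; the default-action suppression itself happens inside the browser and isn't shown here):

```javascript
// Events created from script are untrusted. Only events generated by a
// real user gesture and dispatched by the browser have isTrusted == true.
const synthetic = new Event('click', { bubbles: true, cancelable: true });

let fired = false;
let trusted = null;
const target = new EventTarget();
target.addEventListener('click', (e) => { fired = true; trusted = e.isTrusted; });

// FastClick-style dispatch: the listener runs, but per the DOM spec a
// browser need not perform the default action for an untrusted event --
// which is exactly why a synthetic "click" may fail to open a <select>.
target.dispatchEvent(synthetic);

console.log(fired);   // true  -- listeners still see the event
console.log(trusted); // false -- but it is flagged as untrusted
```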

OK, back to FastClick in the year 2012. Things either broke on <select> elements or never worked properly in Chrome Mobile, but developers (being developers) found a workaround with mousedown events for Chrome Mobile (in a stackoverflow thread, naturally) and put it into FastClick. This unintentionally broke it for other browsers, and later on some fixes were introduced to unbreak that for Firefox and Blackberry. But... that's only some of the select-related bugs.

Fast forward a few years and Chrome fixed their untrusted events default action bug, and as a result broke <select>s on pages using FastClick. Fortunately for interop, they decided to not back out the fix (except for Android WebView!).

Anyways, this blog post is getting too long.

In conclusion, if you run into bugs with FastClick, you should probably delete it from your app. It's basically unmaintained and they don't want you to use it either.

And besides, we have touch-action: manipulation anyways.

Planet MozillaThis Week in Rust 204

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is if_chain, a macro that helps combat rightward drift where code nests many ifs and if-lets. Since the latter cannot be contracted with &&, this can be really helpful to make code more readable. Thanks to Michael Budde for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

163 pull requests were merged in the last week

New Contributors

  • Alexander Kuleshov
  • Bráulio Bezerra
  • Cameron Steffen
  • Christopher Vittal
  • Hoàng Đức Hiếu
  • Jean Lourenço
  • Jimmy Brisson
  • JLockerman
  • Joe Rattazzi
  • Joshua Lockerman
  • k0pernicus
  • Matt
  • Michael Hewson

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Planet MozillaAdd-ons Update – 2017/10

Here’s your monthly add-ons update.


We changed the way contributions are handled on AMO. This should be simpler to maintain, and offer more payment options for developers.

The Review Queues

We recently moved to a new review model, where developers don’t have to wait for long before their add-ons are reviewed. Legacy add-ons still go through the old model, but there are only a small number of updates awaiting review now. So I’m discontinuing this section of the monthly update for now.

Compatibility Update

Firefox 57 is now on the Beta channel and will be released on November 14th. It will only accept WebExtensions add-ons. In order to ease the transition to 57, here are some changes we’re implementing on AMO.


We would like to thank the following people for their recent contributions:

  • ian-henderso
  • Jp-Rivera
  • Apoorva Pandey
  • ilmanzo
  • Trishul Goel
  • Tom Schuster
  • Apoorva Singh
  • Tiago Morais Morgado
  • zombie
  • wouter
  • kwan
  • Kevin Jones
  • Aastha
  • Masatoshi Kimura
  • asamuzaK
  • Christophe Villeneuve

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/10 appeared first on Mozilla Add-ons Blog.

Planet MozillaMake LastPass Work Across App and Website

The Problem: My stock broker outsourced their Android app to a third-party company, so LastPass treats the desktop website and the Android app as different sites. Although I can save them separately in LastPass, their passwords won’t synchronize with each other.

The Solution: LastPass does have a way to “merge” two sites into one, but the UI for doing so is neither in the password vault page nor in the individual site’s configuration. Instead, the list of “equivalent” sites is under your account settings.

Here is the procedure to merge them:

  1. Log in to your LastPass desktop website. (I couldn’t find the setting in the mobile app.)
  2. Click your account icon on the top right and go to settings. account settings
  3. In your account settings, choose the “Equivalent Domains” tab. equivalent domains settings
  4. Here you can add to the list of equivalent sites. For example my broker’s desktop site is using the domain “”, but its app uses “”, which is the outsource company that built the app. So I’ll add them into one entry so they are treated as equivalent. my_site

  5. Ta-dah! Now I can login to both the app and desktop site using the same LastPass entry.

Side note: Why I decided to use a password manager?

I avoided using a password manager for quite a while. I always believed it’s a bad idea to have a single point of failure. But after listening to a presentation by the security expert in my company, I decided to give it a try. The presenter mentioned that although LastPass had suffered a few breaches in the past, the main password vault was never compromised, and they were very transparent about how and what had happened. So after I moved to Amsterdam and had to register tons of new online services, I simply couldn’t come up with a good algorithm to design passwords that work across all sites. The risk of LastPass being completely hacked is relatively low compared to the risk of one password used across all sites being breached. If I use one password everywhere, then the level of security I get is equal to the security level of the weakest site I use (e.g. Yahoo?), so I would rather bet on LastPass.

Planet WebKitWho knew we still had low-hanging fruits?

Earlier this month I had the pleasure of attending the Web Engines Hackfest, hosted by Igalia at their offices in A Coruña, and also sponsored by my employer, Collabora, Google and Mozilla. It has grown a lot and we had many new people this year.

Fun fact: I am one of the 3 or 4 people who have attended all of the editions of the hackfest since its inception in 2009, when it was called WebKitGTK+ hackfest \o/


It was a great get together where I met many friends and made some new ones. Had plenty of discussions, mainly with Antonio Gomes and Google’s Robert Kroeger, about the way forward for Chromium on Wayland.

We had the opportunity of explaining how we at Collabora cooperated with Igalians to implement and optimise a Wayland nested compositor for WebKit2 to share buffers between processes in an efficient way, even on broken drivers. Most of the discussions and some of the work that led to this were done in previous hackfests, by the way!


The idea seems to have been mostly welcomed, the only concern being that Wayland’s interfaces would need to be tested for security (fuzzed). So we may end up going that same route with Chromium for allowing process separation between the UI and GPU (being renamed Viz, currently) processes.

On another note, and going back to the title of the post, at Collabora we have recently adopted Mattermost to replace our internal IRC server. Many Collaborans have decided to use Mattermost through an Epiphany Web Application or through a simple Python application that just shows a GTK+ window wrapping a WebKitGTK+ WebView.


Some people noticed that when the connection was lost, Mattermost would take a very long time to notice and reconnect – its web sockets were taking a long, long time to time out, according to our colleague Andrew Shadura.

I did some quick searching on the codebase and noticed WebCore has a NetworkStateNotifier interface that it uses to get notified when connection changes. That was not implemented for WebKitGTK+, so it was likely what caused stuff to linger when a connection hiccup happened. Given we have GNetworkMonitor, implementation of the missing interfaces required only 3 lines of actual code (plus the necessary boilerplate)!


I was surprised to still find such a low-hanging fruit in WebKitGTK+, so I decided to look for more. Turns out WebCore also has a notifier for low power situations, implemented only by the iOS port, which causes the engine to throttle some timers and avoid some expensive checks it would do in normal situations. This required a few more lines to implement using upower-glib, but not that many either!

That was the fun I had during the hackfest in terms of coding. Mostly I had fun just lurking in break out sessions discussing the past, present and future of tech such as WebRTC, Servo, Rust, WebKit, Chromium, WebVR, and more. I also beat a few challengers in Street Fighter 2, as usual.

I’d like to say thanks to Collabora, Igalia, Google, and Mozilla for sponsoring and attending the hackfest. Thanks to Igalia for hosting and to Collabora for sponsoring my attendance along with two other Collaborans. It was a great hackfest and I’m looking forward to the next one! See you in 2018 =)

Planet MozillaWeb Truths: We need granular control over web APIs, not abstractions

This is part of the web truths series of posts. A series where we look at true sounding statements that we keep using to have endless discussions instead of moving on. Today I want to tackle the issue of offering new functionality to the web. Should we deliver low-level APIs to functionality to offer granular control? Or should we have abstractions that get people started faster? Or both?

In a perfect scenario, both is the obvious answer. We should have low-level APIs for those working “close to the metal”. And we should offer abstractions based on those APIs that allow for easier access and use.

In reality there is quite a disconnect between the two. There is no question that newer web standards learned a lot from abstractions. For example, jQuery influenced many additions to the DOM specification. When browsers finally got querySelector and classList we expected this to be the end of the need for abstractions. Except, it wasn’t and still isn’t. What abstractions also managed to do is to even out implementation bugs and offer terser syntax. Both of these things resonate well with developers. That’s why we have a whole group of developers that are happy to use an abstraction and trust it to do the right thing for them.

Before we had a standardised web, we had to develop to the whims of browser makers. With the emergence of standards this changed. Web standards were our safeguard. By following them we had a predictable way of debugging. We knew what browsers were supposed to do. Thus we knew when we made a mistake and when it was a bug in the platform. This worked well for a textual and forms-driven web. When HTML5 broke into the application space, web standards became much more complex. Add the larger browser and platform fragmentation, and working towards standards and on the web became much harder. It doesn’t help when some of the standards feel rushed. An API that returns an empty string, “maybe” or “probably” when asked if the current browser can play a video doesn’t fill you with confidence. For outsiders and beginners, web standards are no longer considered the “use this and it will work” approach. They seem convoluted in comparison with other offers, and keeping up with all the changes looks like a lot of work for developers. Maybe too much work.

Here’s what it boils down to:

  • Abstractions shield developers from a lot of implementation quirks and help them work on what they want to achieve instead
  • Low-level APIs allow for leaner solutions, but expect developers to know them, keep track of changes and to deal in a sensible way with non-supporting environments (see: Progressive Enhancement)

What do developers want?

As web developers in the know, you want to have granular control. We’ve been burnt too often by “magical” abstractions. We want to know what we use and see where it comes from. That way we can create a lot of different solutions and ensure that what we want to standardise works. We also want to be able to fix and replace parts of our solutions when a part becomes problematic. What we don’t want is to be unable to trace back where a certain issue comes from. We also want to ensure that new functionality of the web stays transparent and secure. We achieve this by creating smaller, specialised components that can get mixed and matched.

As new developers who haven’t gone through the pains of the browser wars, or who don’t need to know how browsers work, things look different. We want to code and reach our goal instead of learning about all the different parts along the way. We want to re-use what works and worry about our product instead. We’re not that bothered about the web as a platform and its future. For us, it is only one form factor to build against and to release products on. Much like iOS is or gaming platforms are.

This is also where our market is going: we’re not paid to understand what we do – we’re expected to already know. We’re paid to create a viable product in the shortest amount of time and with the least effort.

The problem is that the track record of the web shows that we often have to start over whenever there is a new technology. And that instead of creating web-specific functionality we got caught up trying to emulate what other platforms did.

The best case in point here is offline functionality. When HTML5 became the thing and Flash was declared dead we needed a fast solution to offer offline content. AppCache was born, and it looked like a simple solution to the issue. As it turns out, once again what looked too good to be true wasn’t that great at all. A lot of functionality of AppCache was unreliable. In retrospect it also turned out to be more of a security issue than we anticipated.

There was too much “magic” going on that browsers did for us and we didn’t have enough insight as implementers. That’s how Service Workers came about. We wanted to do the right thing and offer a much more granular way of defining what browsers cache where and when. And we wanted to give developers a chance to intercept network requests and act on them. This is a huge endeavour. In essence we replicate the networking stack of a browser with an API. By now, Service Workers are doing much more than just offline functionality. They are also meant to deal with push notifications and app updates in the background.

This makes Service Workers tougher to work with, as they seem complex. Add to that the lack of support in Safari (which is now changing) and you lose a lot of developer enthusiasm.

There is more use in abstractions like Workbox, as they promise to keep you up to date whilst the changes in the spec are ironed out. Instead of getting a “here are all the lego bricks, build your own car”, it has a “so you want to build a car, here are some ways to do so” approach.

This is a good thing. Of course we need to define more granular and transparent standards and solutions to build the web on. But there is a reluctance in developers to take part in the definition phase and keep an eye on changes. We cannot expect everybody who wants to build for the web to care that much. That is not how the web grew – not everybody had to be a low-level engineer or know JavaScript. We should consider that the web has outgrown the time when everyone was deeply involved with the standards world.

We need to face the fact that the web has become much more complex than it used to be. We demand a lot from developers if we want them all to keep up to date with standards. Work that often isn’t appreciated by employers or clients as much as shipping products is.

This isn’t good. This isn’t maintainable or future facing. And it shouldn’t have come to this. But it is a way of development we allowed to take over. Development has become a pretty exhausting and competitive environment. Deliver fast and in a short cadence. Move fast and break things. If you can re-use something, do it, don’t worry too much if you don’t know what it does or if it is secure to do so. If you don’t deliver it first to market someone else will.

This attitude is not healthy and we’re rubbing ourselves raw following it. It also ensures that diversity in our market is tough to achieve. It is an aggressive game that demands a lot of our time and an unhealthy amount of competitiveness.

We need to find a way to define what’s next on the web and make it available as soon as possible. Waiting for all players to support a new feature makes it hard for developers to use things in production.

Relying on abstractions seems to be the way things are going anyway. That means as standards creators and browser makers we need to work more with abstraction developers. It seems less and less likely that people are ready to give up their time to follow specs as they change and work with functionality behind flags. Sure, at conferences and in our talks everyone gets excited. The hardware and OS configurations we have support all the cool new features. But we need to get faster to market and reach those who aren’t already sold on our ideas.

So, the question isn’t about granular definition of specifications, small parts that work together or abstractions. It is about getting new and sensible, more secure and better performing solutions into production code quicker. And this means we need both. And abstractions should have a faster update cycle to incorporate new APIs under the hood. We should work on abstractions using standards, not patching them.

Planet MozillaFriend of Add-ons: Sylvain Giroux

Please meet our newest Friend of Add-ons, Sylvain Giroux! Sylvain has been creating extensions for Firefox since 2007 and began contributing to addons.mozilla.org (AMO) as an add-on reviewer in 2015. While he had originally planned to help with the add-on review queues for a few months before moving on to other activities, Sylvain quickly connected with Mozilla’s mission and found friends within the community, and has been an active contributor ever since.

Currently, Sylvain is an add-on reviewer and peer mentor for new reviewers. He is also helping the AMO team create an improved application process for prospective add-on reviewers. Additionally, Sylvain is finishing a six-month term as a member of the featured add-on advisory board, where he has helped nominate and select new extensions to feature on AMO each month.

Of his experience as a contributor, Sylvain says, “Being part of such a vast community helped me understand the underlying AMO process and its continuous evolution over time. This is especially important with the massive changes that Firefox 57 is bringing to this community. The knowledge I’ve gathered reviewing code after all those years has also strengthened my understanding of what ‘safe code’ should look like. This is especially important when creating web-based software, APIs or features that may affect the security and privacy of end-users.”

Sylvain also has a suggestion for anyone interested in getting more involved in Mozilla. “I strongly suggest people visit and see if there is something in there that could bring you closer to this awesome free-software community.”

In his spare time, Sylvain is an active home-brewer and has made nearly twenty batches of beer over the last three years from home-grown hops. He also frequently creates personal development projects, like fixing bugs for a Node.js card game program, to meet new challenges and keep his coding skills up to date.

On behalf of the AMO community, thank you for your many wonderful contributions, Sylvain!
If you are interested in contributing to the add-ons community, please see our wiki for a list of ways to get involved. If you’re currently a contributor, please be sure to add your contributions to our recognition wiki!

The post Friend of Add-ons: Sylvain Giroux appeared first on Mozilla Add-ons Blog.

Planet MozillaMWoS: Improving ssh_scan Scalability and Feature Set

Editors Note: This is a guest post by Ashish Gaurav, Harsh Vardhan, and Rishabh Saxena

Maintaining a large number of servers and keeping them secure is a tough job! System administrators rely on tools like Puppet and Ansible to manage system configurations.  However, they often lack the means of independently testing these systems to ensure expectations match reality.

ssh_scan was created in an effort to provide a “simple to configure and use” tool that fills this gap for system administrators and security professionals seeking to validate their ssh configurations against a predefined policy. It aims to provide control over what policies and configurations you self-identify as important.

As CS undergraduates, we had the opportunity to participate in the 2016-2017 edition of Mozilla Winter of Security (MWoS), where we volunteered to improve the scalability and feature set of ssh_scan.

The goal of the project was to improve the existing scanner to make securing your ssh servers easier. It scans ssh servers by initiating a remote unauthenticated connection, enumerates all the attributes about the service, and compares them against a user-defined policy. The Mozilla OpenSSH Security Guide was used to provide a sane baseline policy recommendation for SSH configuration parameters.
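A policy check of this kind can be sketched in plain Ruby. Note that the attribute names and policy layout below are illustrative assumptions for the sketch, not ssh_scan’s actual schema:

```ruby
# Hypothetical sketch: compare enumerated SSH attributes against a policy.
# Field names here are illustrative, not ssh_scan's real schema.
def policy_violations(scan_result, policy)
  violations = []

  # Anything the server offers that the policy doesn't allow is a violation
  weak_kex = scan_result[:kex_algorithms] - policy[:allowed_kex]
  violations << "weak KEX: #{weak_kex.join(', ')}" unless weak_kex.empty?

  weak_macs = scan_result[:mac_algorithms] - policy[:allowed_macs]
  violations << "weak MACs: #{weak_macs.join(', ')}" unless weak_macs.empty?

  violations
end

scan = {
  kex_algorithms: ["curve25519-sha256", "diffie-hellman-group1-sha1"],
  mac_algorithms: ["hmac-sha2-256"]
}
policy = {
  allowed_kex:  ["curve25519-sha256"],
  allowed_macs: ["hmac-sha2-256", "hmac-sha2-512"]
}

puts policy_violations(scan, policy) # prints: weak KEX: diffie-hellman-group1-sha1
```

The real tool enumerates these attributes from the unauthenticated SSH handshake; the comparison step itself is essentially set difference, as above.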

Early Work

Before we started working on the project, ssh_scan was a simple command-line tool. It had limited fingerprinting support and no logging capability. We started by introducing some key features to improve the CLI tool, like adding logging, making it multi-threaded, and extending its dev-ops usability. However, we really wanted to make the tool more accessible for everyone, so we decided to evolve ssh_scan into a web API. As soon as the initial CLI tool was leveled up, we moved on to architecture planning for the web API.


Since ssh_scan is written in Ruby, we looked at different Ruby web frameworks to implement the web API.  We finally settled on Sinatra, as it was a lightweight framework which gave us the power and flexibility to adapt and evolve quickly.

We started with providing a REST API around the existing command-line tool so that it could be integrated into the Mozilla Observatory as another module.  Because the Observatory receives a large number of scan requests per day, we had to make our API scale enough to keep pace with that high demand if it was ever to be enabled by default.

High-level Design Overview

Our high-level design centered on a producer/consumer model. We also tried to keep things simple and modular so that it was easy to trade out or upgrade components as needed, using HTTPS as a transport wherever possible. This flexibility was invaluable as we progressed throughout the project, learned where the bottlenecks were, and upgraded individual sub-components when they showed strain.

In our approach, a user makes a request to the API, which is queued in the database as a state machine to track a scan’s progress throughout. The worker then polls the API for work, takes the work off the database queue, performs the scan and sends the scan results back to the API server. The scan results are then stored in the database. As a starting point, an ssh_scan_api operator can have a single worker process running on the API/DB server. As the workload requirements increase and queues build up, which we can monitor through our stats API route, we simply scale workers horizontally with Docker to pick up the additional load with relative ease, without the need to disrupt other system components.
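The flow above can be sketched as a tiny in-memory model. The state names, record layout, and queue mechanics here are assumptions made for illustration, not ssh_scan_api’s actual implementation:

```ruby
# Illustrative sketch of the producer/consumer flow: the API enqueues a scan
# as a small state-machine record, and a worker polls the queue, performs
# the scan, and stores the result. (Real ssh_scan_api uses a database and
# HTTPS between these components; this model is single-process.)

db    = {}        # stands in for the real database
queue = Queue.new # stands in for the polled work queue

# "API" side: a scan request creates a QUEUED record and enqueues its id
def enqueue_scan(db, queue, id, target)
  db[id] = { target: target, state: "QUEUED", result: nil }
  queue << id
end

enqueue_scan(db, queue, 1, "ssh.example.com")

# "Worker" side: pull work, mark it RUNNING, scan, store, mark COMPLETED
worker = Thread.new do
  until queue.empty?
    id = queue.pop
    db[id][:state]  = "RUNNING"
    db[id][:result] = "scanned #{db[id][:target]}" # a real worker runs ssh_scan here
    db[id][:state]  = "COMPLETED"
  end
end
worker.join

puts db[1][:state] # prints: COMPLETED
```

Because each record carries its own state, the stats route only has to count records per state to show queue depth, which is what tells an operator when to add workers.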


Asynchronous job management was a totally new concept for us before we started this project. Because of this, it took us some time to settle on the components to efficiently handle our use-cases. Fortunately, with the help of our mentors, we settled on implementing many things from scratch to start, which gave us a more detailed insight into the following:

  • How asynchronous API systems work
  • How to make it scale by identifying and removing the bottlenecks

As the end-to-end scan time depends mainly on completing the scan itself, we achieved scalability with the help of multiple workers performing scans in parallel. To prevent abuse of essential functions, we also added authentication requirements around the API.

Current Status of Project

We have already integrated ssh_scan_api as a supporting sub-module of the Mozilla Observatory and it is deployed as a beta here. However, even as a beta service, we’ve already run over 4,000 free scans of public SSH services, which is far more than we could have ever done with the single-threaded command-line version we started with. We also expect usage to increase significantly as we raise awareness of this free tool.

Future Plans

We plan to do more performance testing of the API to continue to identify and plan for future scaling needs as demand presents itself. Outcomes of this effort might also include an even more robust work management strategy, as well as performance stressing the API.  The process continues to be iterative and we are solving challenges one step at a time.

Thanks Mozilla!

This project was a great opportunity to help Mozilla in building a more secure and open web and we believe we’ve done that. We’d like to give special thanks to claudijd, pwnbus and kang who supported us as mentors and helped guide us through the project.  Also, a very special thanks to April for doing all the front-end web development to add this as a submodule in the Observatory and helping make this real.

If you would like to contribute to ssh_scan or ssh_scan_api in any way, please reach out to us using GitHub issues on the following respective projects as we’d love your help:


Ashish Gaurav, Harsh Vardhan, and Rishabh Saxena

The post MWoS: Improving ssh_scan Scalability and Feature Set appeared first on Mozilla Security Blog.
