Planet Mozilla: Things I’ve Learned This Week (April 13 – April 17, 2015)

When you send a sync message from a frame script to the parent, the return value is always an array


// Some contrived code in the browser
let browser = gBrowser.selectedBrowser;
browser.messageManager.addMessageListener("GIMMEFUE,GIMMEFAI", function onMessage(message) {
  return "GIMMEDABAJABAZA";
});

// Frame script that runs in the browser
let result = sendSyncMessage("GIMMEFUE,GIMMEFAI");
console.log(result[0]);
// Writes to the console: GIMMEDABAJABAZA

From the documentation:

Because a single message can be received by more than one listener, the return value of sendSyncMessage() is an array of all the values returned from every listener, even if it only contains a single value.

I don’t use sync messages from frame scripts a lot, so this was news to me.
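The behavior is easy to model outside of Gecko. Here's a plain-JavaScript sketch (illustrative names, not the real messageManager API) of why the sender always gets an array back:

```javascript
// Minimal simulation of sync messaging: several listeners can respond
// to the same message name, so the sender receives an array of results.
const listeners = new Map();

function addMessageListener(name, fn) {
  if (!listeners.has(name)) listeners.set(name, []);
  listeners.get(name).push(fn);
}

function sendSyncMessage(name, data) {
  // Every listener's return value is collected - hence the array,
  // even when only one listener is registered.
  return (listeners.get(name) || []).map(fn => fn({ name, data }));
}

addMessageListener("PING", () => "pong-1");
addMessageListener("PING", () => "pong-2");

console.log(sendSyncMessage("PING")); // [ 'pong-1', 'pong-2' ]
```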

You can use [cocoaEvent hasPreciseScrollingDeltas] to differentiate between scrollWheel events from a mouse and a trackpad

scrollWheel events can come from a standard mouse or a trackpad1. According to this Stack Overflow post, one potential way of differentiating between scrollWheel events coming from a mouse and those coming from a trackpad is to call:

BOOL isTrackpad = [theEvent hasPreciseScrollingDeltas];

since a mouse scrollWheel usually produces line scrolls, whereas trackpads (and the Magic Mouse) produce pixel scrolls.
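On the web-platform side there is a loose analogue: WheelEvent.deltaMode distinguishes line scrolling from pixel scrolling. A sketch (deltaMode constants inlined so it runs standalone; the heuristic is just as approximate as the Cocoa one):

```javascript
// WheelEvent.deltaMode values from the DOM spec, inlined for standalone use.
const DOM_DELTA_PIXEL = 0; // fine-grained pixel deltas, typical of trackpads
const DOM_DELTA_LINE = 1;  // coarse line deltas, typical of mouse wheels

// Rough heuristic only: pixel-mode events usually come from a trackpad
// (or Magic Mouse), line-mode events from a classic scroll wheel.
function probablyTrackpad(event) {
  return event.deltaMode === DOM_DELTA_PIXEL;
}

console.log(probablyTrackpad({ deltaMode: DOM_DELTA_PIXEL })); // true
console.log(probablyTrackpad({ deltaMode: DOM_DELTA_LINE }));  // false
```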

The srcdoc attribute for iframes lets you easily load content into an iframe via a string

It’s been a while since I’ve done web development, so I hadn’t heard of srcdoc before. It was introduced as part of the HTML5 standard, and is defined as:

The content of the page that the embedded context is to contain. This attribute
is expected to be used together with the sandbox and seamless attributes. If a
browser supports the srcdoc attribute, it will override the content specified in
the src attribute (if present). If a browser does NOT support the srcdoc
attribute, it will show the file specified in the src attribute instead (if present).

So that’s an easy way to inject some string-ified HTML content into an iframe.
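One wrinkle worth noting: because srcdoc is an attribute, any markup you inline into it must be escaped as an attribute value. A small helper (hypothetical name, not a standard API) sketches this:

```javascript
// Escape a string-ified HTML document for use inside srcdoc="...".
// Only the ampersand and the attribute-delimiting double quote need escaping.
function toSrcdocAttribute(html) {
  return html.replace(/&/g, "&amp;").replace(/"/g, "&quot;");
}

const doc = '<p class="greeting">Fish &amp; chips</p>';
const markup = `<iframe srcdoc="${toSrcdocAttribute(doc)}"></iframe>`;
console.log(markup);
// <iframe srcdoc="<p class=&quot;greeting&quot;>Fish &amp;amp; chips</p>"></iframe>
```

Assigning frame.srcdoc from script needs no escaping; the helper is only for markup written inline.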

Primitives on IPDL structs are not initialized automatically

I believe this is true for structs in C and C++ (and probably some other languages) in general, but primitives on IPDL structs do not get initialized automatically when the struct is instantiated. That means that things like booleans carry random memory values until they’re set. Having spent most of my time in JavaScript, I found that a bit surprising, but I’ve gotten used to it. I’m slowly getting more comfortable working at a lower level.

This was the ultimate cause of this crasher bug that dbaron was running into while exercising the e10s printing code on a debug Nightly build on Linux.

This bug was opened to investigate initializing the primitives on IPDL structs automatically.

Networking is ultimately done in the parent process in multi-process Firefox

All network requests are proxied to the parent, which serializes the results back down to the child. Here’s the IPDL protocol for the proxy.

On bi-directional text and RTL

gw280 and I noticed that in single-process Firefox, a <select> dropdown set with dir="rtl", containing an <option> with the value "A)", would render the option as "(A".

If the value was "A) Something else", the string would come out unchanged.

We were curious to know why this flipping around was happening. It turned out that this is called “BiDi”, and some documentation for it is here.

If you want to see an interesting demonstration of BiDi, click this link, and then resize the browser window to reflow the text. Interesting to see where the period on that last line goes, no?

It might look strange to someone coming from a LTR language, but apparently it makes sense if you’re used to RTL.

I had not known that.

Some terminal spew


Now what’s all this?

My friend and colleague Mike Hoye showed me the above screenshot upon coming into work earlier this week. He had apparently launched Nightly from the terminal, and at some point, all that stuff just showed up.

“What is all of that?”, he had asked me.

I hadn’t the foggiest idea – but a quick DXR search placed the strings inside Breakpad, the tool used to generate crash reports when things go wrong.

I referred him to bsmedberg, since that fellow knows tons about crash reporting.

Later that day, mhoye got back to me, and told me that apparently this was output spew from Firefox’s plugin hang detection code. Mystery solved!

So if you’re running Firefox from the terminal, and suddenly see some stuff show up… a plugin you’re running probably locked up, and Firefox shanked it.

  1. And probably a bunch of other peripherals as well 

Planet Mozilla: The Joy of Coding (Ep. 10): The Mystery of the Cache Key

In this episode, I kept my camera off, since I was having some audio-sync issues1.

I was also under some time-pressure, because I had a meeting scheduled for 2:30 ET2, giving me exactly 1.5 hours to do what I needed to do.

And what did I need to do?

I needed to figure out why an nsISHEntry, when passed to nsIWebPageDescriptor’s loadPage, was not enough to get the document out from the HTTP cache in some cases. 1.5 hours to figure it out – the pressure was on!

I don’t recall writing a single line of code. Instead, I spent most of my time inside Xcode, walking through various scenarios in the debugger, trying to figure out what was going on. And I eventually figured it out! Read this footnote for the TL;DR:3

Episode Agenda


Bug 1025146 – [e10s] Never load the source off of the network when viewing source

Notes

  1. I should have those resolved for Episode 11! 

  2. And when the stream finished, I found out the meeting had been postponed to next week, meaning that next week will also be a short episode. :( 

  3. Basically, the nsIChannel used to retrieve data over the network is implemented by HttpChannelChild in the content process. HttpChannelChild is really just a proxy to a proper nsIChannel on the parent side. On the child side, HttpChannelChild does not implement nsICachingChannel, which means we cannot get a cache key from it when creating a session history entry. With no cache key comes no ability to retrieve the document from the network cache via nsIWebPageDescriptor’s loadPage. 

Planet Mozilla: My second year working at Mozilla

This week marked my second year Mozillaversary. I did plan to write this blog post on the 15th of April, which would have marked the day I started, but this week flew by so quickly I almost completely missed it!

Carrying on from last year’s blog post, much of my second year at Mozilla has been spent working on various parts of the website, to which I made a total of 196 commits this year.

Much of my time has been spent working on Firefox on-boarding. Following the success of the on-boarding flow we built for the Firefox 29 Australis redesign last year, I went on to work on several more on-boarding flows to help introduce new features in Firefox. These included introducing the Firefox 33.1 privacy features, Developer Edition firstrun experience, 34.1 search engine changes, and 36.0 for Firefox Hello. I also got to work on the first time user experience for when a user makes their first Hello video call, which initially launched in 35.0. It was all a crazy amount of work from a lot of different people, but something I really enjoyed getting to work on alongside various other teams at Mozilla.

In between all that I also got to work on some other cool things, including the 2015 homepage redesign. Something I consider quite a privilege!

On the travel front, I got to visit both San Francisco and Santa Clara a bunch more times (I’m kind of losing count now). I also got to visit Portland for the first time when Mozilla had their all-hands week last December, which was such a great city!

I’m looking forward to whatever year three has in store!

Planet Mozilla: Webdev Beer and Tell: April 2015

Once a month, web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Planet Mozilla: Webmaker Demos April 17 2015

Webmaker Demos April 17 2015

Planet Mozilla: My Current Thoughts on System Administration

I attended PyCon last week. It's a great conference. You should attend. While I should write up a detailed trip report, I wanted to quickly share one of my takeaways.

Ansible was talked about a lot at PyCon. Sitting through a few presentations and talking with others helped me articulate why I've been drawn to Ansible (over, say, Puppet, Chef, Salt, etc.) lately.

First, Ansible doesn't require a central server. Administration is done remotely: Ansible establishes an SSH connection to a remote machine and does stuff. Having Ruby, Python, support libraries, etc. installed on production systems just for system administration never really jived with me. I love Ansible's default hands-off approach. (Yes, you can use a central server for Ansible, but that's not the default behavior. While tools like Puppet could be used without a central server, it felt like they were optimized for central server use and thus local mode felt awkward.)

Related to central servers, I never liked how that model consists of clients periodically polling for and applying updates. I like the idea of immutable server images, and periodic updates work against this goal. The central model also has a major bazooka pointed at you: at any time, you are only one mistake away from completely hosing every machine doing continuous polling. E.g., if you accidentally update firewall configs and lock out the central server and SSH connectivity, every machine will pick up these changes during periodic polling, and by the time anyone realizes what's happened, your machines are all effectively bricked. (Yes, I've seen this happen.) I like having humans control exactly when my systems apply changes, thank you. I concede periodic updates and central control have some benefits.

Choosing not to use a central server by default means that hosts are modeled as a set of applied Ansible playbooks, not necessarily as a host with a set of Ansible playbooks attached (although Ansible does support both models). I can easily apply a playbook to a host in a one-off manner. This means I can have playbooks represent common, one-off tasks and I can easily run these tasks without having to muck around with the host-to-playbook configuration. More on this later.

I love the simplicity of Ansible's configuration. It is just YAML files. Not some Ruby-inspired DSL that takes hours to learn. With Ansible, I'm learning what modules are available and how they work, not complicated syntax. Yes, there is complexity in Ansible's configuration. But at least I'm not trying to figure out the file syntax as part of learning it.

Along that vein, I appreciate the readability of Ansible playbooks. They are simple, linear lists of tasks. Conceptually, I love the promise of full dependency graphs and concurrent execution. But I've spent hours debugging race conditions and cyclic dependencies in Puppet that I'm left unconvinced the complexity and power is worth it. I do wish Ansible could run faster by running things concurrently. But I think they made the right decision by following KISS.

I enjoy how Ansible playbooks are effectively high-level scripts. If I have a shell script or block of code, I can usually port it to Ansible pretty easily. One pass to do the conversion 1:1. Another pass to Ansibilize it. Simple.
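As an illustration of that linearity, here is a hypothetical playbook (illustrative names, not from any repository mentioned here) showing how "install nginx, copy a config, restart the service" ports almost 1:1 from a shell script:

```yaml
# Hypothetical playbook: a linear list of tasks, read top to bottom.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Deploy our site configuration
      copy:
        src: files/mysite.conf
        dest: /etc/nginx/conf.d/mysite.conf
      notify: restart nginx

  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted
```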

I love how Ansible playbooks can be checked in to source control and live next to the code and applications they manage. I frequently see people maintain separate source control repositories for configuration management from the code it is managing. This always bothered me. When I write a service, I want the code for deploying and managing that service to live next to it in version control. That way, I get the configuration management and the code versioned in the same timeline. If I check out a release from 2 years ago, I should still be able to use its exact configuration management code. This becomes difficult to impossible when your organization is maintaining configuration management code in a separate repository where a central server is required to do deployments (see Puppet).

Before PyCon, I was having an internal monologue about adopting the policy that all changes to remote servers be implemented with Ansible playbooks. I'm pleased to report that a fellow contributor to the Mercurial project has adopted this workflow himself and he only has great things to say! So, starting today, I'm going to try to enforce that every change I make to a remote server is performed via Ansible and that the Ansible playbooks are checked into version control. The Ansible playbooks will become implicit documentation of every process involved with maintaining a server.

I've already applied this principle to deploying MozReview. Before, there was some internal Mozilla wiki documenting commands to execute in a terminal to deploy MozReview. I have replaced that documentation with a one-liner that invokes Ansible. And, the Ansible files are now in a public repository.

If you poke around that repository, you'll see that I have Ansible playbooks referencing Docker. I have Ansible provisioning Docker images used by the test and development environment. That same Ansible code is used to configure our production systems (or is at least in the process of being used in that way). Having dev, test, and prod using the same configuration management has been a pipe dream of mine and I finally achieved it! I attempted this before with Puppet but was unable to make it work just right. The flexibility that Ansible's design decisions have enabled has made this finally possible.

Ansible is my go to system management tool right now. And I still feel like I have a lot to learn about its hidden powers.

If you are still using Puppet, Chef, or other tools invented in previous generations, I urge you to check out Ansible. I think you'll be pleasantly surprised.

Planet Mozilla: Firefox 38 beta4 to beta5

In this beta, we disabled the define EARLY_BETA_OR_EARLIER (used by some features to get testing during the first half of the beta cycle).

In this release, we took some changes related to the reading list, polish for the in-tab preferences, and various minor crash fixes.

We also landed the stability fixes which should ship with the release 37.0.2.

  • 52 changesets
  • 86 files changed
  • 3766 insertions
  • 2141 deletions



List of changesets:

Jon Coppeard: Bug 1149526 - Check HeapPtrs have GC lifetime r=terrence a=sylvestre - 7ca7e178de40
Sylvestre Ledru: Post Beta 4: disable EARLY_BETA_OR_EARLIER a=me - 4c2454564144
Bill McCloskey: Back out Bug 1083897 a=backout - 56f805ac34ce
Bill McCloskey: Back out Bug 1103036 to resolve shutdown hangs a=backout - 8a5486269821
JW Wang: Bug 1153739 - Make Log() usable outside EME test cases. r=edwin, a=test-only - bf3ca76f10c3
JW Wang: Bug 1080685 - Add more debug aids and longer timeout. r=edwin, a=test-only - b2d1be38dab1
Sami Jaktholm: Bug 1150005 - Don't wait for "editor-selected" event in browser_styleeditor_fetch-from-cache.js as it may have already been emitted. r=bgrins, a=test-only - d1e3ce033c7a
Mark Hammond: Bug 1151666 - Fix intermittent orange by reducing verified timer intervals and always using mock storage. r=zaach, a=test-only - 87f3453f6cc0
Shu-yu Guo: Bug 996982 - Fix Debugger script delazification logic to account for relazified clones. r=bz, a=sledru - 5ca4e237b259
Brian Grinstead: Bug 1151259 - Switch <toolbar> to <box> to get rid of -moz-appearance styles for devtools sidebar. r=jryans, a=sledru - 7af104b169fa
Jared Wein: Bug 1152327 - ReadingListUI.init() should be called from delayedStartup, not onLoad. r=gavin, a=sledru - 9e1bf10888cd
Tim Nguyen: Bug 1013714 - Remove old OSX focusring from links in in-content prefs. r=Gijs, a=sledru - 48976876cdb9
Chris Pearce: Bug 1143278 - Make gmp-clearkey not require a Win8 only DLL to decode audio on Win7. r=edwin, a=sledru - f9f96ba1dbdb
Chris Pearce: Bug 1143278 - Add more null checks in gmp-clearkey's decoders. r=edwin, a=sledru - 5779893b39a5
Chris Pearce: Bug 1143278 - Use a different CLSID to instantiate the H264 decoder MFT in gmp-clearkey, as Win 7 Enterprise N requires that. r=edwin, a=sledru - dfce472edd1e
Chris Pearce: Bug 1143278 - Support IYUV and I420 in gmp-clearkey on Windows, as Win 7 Enterprise N's H.264 decoder doesn't output I420. r=edwin, a=sledru - 3beb9cbddb3f
Cameron McCormack: Bug 1153693 - Only call ReleaseRef on nsStyle{ClipPath,Filter} once when setting a new value. r=dbaron, a=sledru - f5d0342230c0
Milan Sreckovic: Bug 1152331 - If we do not delete indices array, it gets picked up down the line and breaks some assumptions in aboutSupport.js. r=dvander, a=sledru - 4cc36a9a958b
Richard Newman: Bug 1153358 - Client mitigation: don't upload stored_on. r=nalexander, a=sledru - 1412c445ff0d
Mark Hammond: Bug 1148701 - React to Backoff and Retry-After headers from Reading List server. r=adw, a=sledru - 91df81e2edac
Ryan VanderMeulen: Backed out changeset d1e3ce033c7a (Bug 1150005) for leaks - 4f36d5aff5cf
Cameron McCormack: Bug 1146101 - Call ClearCachedInheritedStyleDataOnDescendants on more style contexts that had structs swapped out from them. r=dbaron, a=sledru - baa8222aaafd
Reed Loden: Bug 1152939 - Upgrade to SQLite 3.8.9. r=mak77, a=sledru - 01e0d4e09b6d
Mike Hommey: Bug 1146738 - Fix race condition between js/src/target and js/src/host. r=mshal, a=NPOTB - 7496d2eea111
Ben Turner: Bug 1114788 - Disable failing test on workers. r=mrbkap, a=test-only - c82fcbeb7194
Matthew Gregan: Bug 1144199 - Require multiple consecutive timeouts to accumulate before triggering timeout error handling in libcubeb's WASAPI backend; this avoids spurious timeout errors triggered by system sleep/wake cycles. r=padenot, a=sledru - ea342656f3cb
Xidorn Quan: Bug 1145448 - Avoid painting native frame on fullscreen window when activate/inactivate. r=jimm, a=sledru - a27fb9b83867
Bas Schouten: Bug 1151361 - Wrap WARP D3D11 creation in a try catch block like done with regular D3D11. r=jrmuizel, a=sledru - 4954faa47dd0
Jan-Ivar Bruaroey: Bug 1153056 - Fix about:webrtc to not blank on zero allocated PeerConnections. r=jesup, a=sledru - e487ace8d7f9
Ryan VanderMeulen: Bug 1154434 - Bump mozharness.json to revision 4567c42063b7. a=test-only - 97856a6ac44d
Richard Newman: Bug 1153357 - Don't set SYNC_STATUS_MODIFIED unless an update touches fields that we sync. r=nalexander, a=sledru - 199b60ec60dc
vivek: Bug 1145567 - Display toolbar only after Domcontentloaded is triggered. r=margaret, a=sledru - df47a99c442f
Mark Goodwin: Bug 1153090 - Unaligned access in cert bock list. r=keeler, a=sledru - 58f203b17be2
Michael Comella: Bug 1148390 - Dynamically add padding to share icon on GB devices. r=wesj, a=sledru - e10ddd2bc05f
Ben Turner: Bug 1154599 - Revert unintentional change to crash reporting infra in changeset ce2692d64bcf. a=sledru - 7b296a71b115
Edwin Flores: Bug 1148071 - Fix CDM update behaviour. r=cpearce, a=sledru, ba=jorgev - 6c7e8d9f955c
Gijs Kruitbosch: Bug 1154447 - add aero asset for update badge, r=me, a=sylvestre - 98703ce041e2
Gijs Kruitbosch: Bug 1150703, allow about: pages to be unlinkable even if "safe for content", r=mcmanus, IGNORE IDL, ba=sylvestre - 5c9df6adebed
Gijs Kruitbosch: Bug 1150862, make about:reader unlinkable from content on mobile, r=margaret, a=sylvestre - a5203cabcc04
Gijs Kruitbosch: Bug 1150862, make about:reader unlinkable from content on desktop, r=gavin, a=sylvestre - 062e49bcb2da
Ryan VanderMeulen: Bug 1092202 - Skip testGetUserMedia for frequent failures. a=test-only - 85106e95bcb8
Ryan VanderMeulen: Bug 1123563 - Annotate test-animated-image-layers.html and test-animated-image-layers-background.html as random on Android and Linux. a=test-only - fe141895d7ab
Ryan VanderMeulen: Bug 1097721 - Skip test_mozaudiochannel.html on OSX 10.6 due to intermittent crashes. a=test-only - 86b6cb966d95
Ryan VanderMeulen: Bug 1021174 - Skip test_bug495145.html on OSX 10.6 due to intermittent crashes. a=test-only - 34331bbc9575
Mark Goodwin: Bug 1120748 - Resolve intermittent failure of browser_ssl_error_reports.js. r=ttaubert, a=test-only - 22eb12ac64e9
Ryan VanderMeulen: Bug 847903 - Skip 691096-1.html on OSX 10.6 due to intermittent crashes. a=test-only - 348cc6be3ba0
Nicolas Silva: Bug 1145981 - Do not crash when a DIB texture is updated without a compositor. r=jrmuizel, a=sledru - 16d7e20d9565
Xidorn Quan: Bug 1141931 - Part 0: Fix unicode-bidi value of ruby elements in html.css. a=sledru - ccb54262291d
Margaret Leibovic: Bug 1152121 - Factor out logic to get original URL from reader URL into shared place, and handle malformed URI excpetions. r=Gijs, r=mcomella, a=sledru - 7a10ff7fd9e4
Richard Newman: Bug 1153973 - Don't blindly apply deletions as insertions. r=nalexander, a=sledru - f9d36adcdf51
Xidorn Quan: Bug 1154814 - Move font rules from 'rt' to 'rtc, rt' and make text-emphasis conditional. r=heycam, a=sledru - 8fd05ce16a5f
Gijs Kruitbosch: Bug 1148923 - min-width the font menulists. r=jaws, a=sledru - 45a5eaa7813b

Planet Mozilla: Offline localization by Sandra

Pontoon is a web application, which is great. You can run it on almost any device with any operating system. You can be sure you always have the latest version, so you don’t need to worry about updates. You don’t even need to download or install anything. There’s just one particular occasion when web applications aren’t so great.

When you’re offline.

Mostly that means the game is over. But it doesn’t need to be so. Application caching together with web storage has made offline web applications a reality. In its latest edition released yesterday, Pontoon now allows translating even when you’re offline. See full changelog for details.

There are many scenarios where offline localization is the only option our localizers have. A decent internet connection simply cannot be taken for granted in many parts of the world. If it’s hard for you to believe that, visit any local tech conference. :-) Or, if you started localizing at home, you can now continue with localization on your daily commute to work. And vice versa.

The way it works is very simple. When Pontoon detects you no longer have a connection, it saves translations to localStorage instead of to the server. Once you get online again, the translations are stored to the server. In the meantime, connection-dependent functionality like History and Machinery is of course unavailable.
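The pattern is simple enough to sketch in a few lines of plain JavaScript (illustrative names, not Pontoon's actual code; a Map stands in for the browser's localStorage so the sketch runs anywhere):

```javascript
// Sketch of the offline pattern: queue translations locally while
// disconnected, then drain the queue once connectivity returns.
const storage = new Map(); // stand-in for localStorage
const PENDING_KEY = "pontoon-pending-translations"; // hypothetical key

let online = false;

function saveTranslation(entity, translation) {
  if (online) {
    return "sent to server"; // would POST to the server here
  }
  // No connection: append to the locally stored queue instead.
  const queued = JSON.parse(storage.get(PENDING_KEY) || "[]");
  queued.push({ entity, translation });
  storage.set(PENDING_KEY, JSON.stringify(queued));
  return "queued offline";
}

function flushQueue() {
  // Called when we're back online: drain the local queue so the
  // translations can be stored on the server.
  const queued = JSON.parse(storage.get(PENDING_KEY) || "[]");
  storage.delete(PENDING_KEY);
  return queued;
}

console.log(saveTranslation("hello", "bonjour")); // queued offline
online = true;
console.log(flushQueue().length); // 1
```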

Offline mode was single-handedly developed by our new contributor Sandra Shklyaeva. She just joined the Mozilla community and has already fixed one of our oldest bugs. She’s attacking the bugs everybody was pushing away. I can’t wait to see what the future holds (shhhhh)!

Sandra has an interesting story on what got her attracted to Mozilla:

I was exploring some JS API when I noticed the pretty tabzilla on the top. I clicked it and my chrome became completely unresponsive XD. Maybe it was just a coincidence… Anyway, the tabzilla caught my attention and that’s how I found out about the Get Involved stuff in Mozilla.

If you also want to get involved, now you know where you can find us!

Planet Mozilla: Web Compatibility in Japan

I'm living in Japan, and from time to time I'm confronted with issues on the Japanese market when using Firefox on mobile (Firefox OS and Firefox for Android). The situation in Japan has similarities with the Chinese market. For example, many sites have been designed with old WebKit-only CSS properties, such as flexbox, and have not been updated to the new set of properties.

We started our testing with a list of around 100 Japanese Web sites. This list needs to be refined and improved. After a first batch of testing one year ago, we ended up with a list of about 50 sites having some issues. Most of them have been tested against a Firefox OS User Agent aka something like User-Agent: Mozilla/5.0 (Mobile; rv:40.0) Gecko/40.0 Firefox/40.0 (the version number is irrelevant).

Here I'm making a summary of the issues to help us

  1. refine our future testing
  2. have a better understanding of the issues at stake

We currently have 51 bugs on Bugzilla (json) related to Web Compatibility issues on Japanese Web sites. Of these 51 bugs, 1 is a duplicate and 13 are resolved.

Type Of Issues

  • HTTP Redirection to a mobile domain based on User-Agent: HTTP header
  • JavaScript redirection to a mobile domain based on navigator.userAgent on the client side through window.location
  • Content customization based on User-Agent: or navigator.userAgent
  • Display of a banner to switch to a mobile version of the site based on User-Agent: or navigator.userAgent. Example: Asahi Web site
  • Receiving a mobile site with outdated WebKit CSS only properties
  • Site using a Web framework or JavaScript library which is exclusively compatible with a set of browsers. Example: Sencha on Nezu Museum. Not a lot can be done here.

Todo List For Better Testing

  • Most of these tests only scratch the surface, as most of the time we have tested only the home page. We need to test a couple of subpages per site as well
  • The sites need to be tested again with screenshots for:
  • Firefox OS User Agent (User-Agent: Mozilla/5.0 (Mobile; rv:40.0) Gecko/40.0 Firefox/40.0).
  • Firefox for Android User Agent (User-Agent: Mozilla/5.0 (Android; Mobile; rv:40.0) Gecko/40.0 Firefox/40.0).
  • Firefox for Android Modified User Agent (User-Agent: Mozilla/5.0 (Android 5.0; Mobile; rv:40.0) Gecko/40.0 Firefox/40.0) (this is a fake Firefox for Android UA, but some sites keep sending different versions or no version at all based on detection like /.*Android ([0-9])\.([0-9])/ or .match(/Android\s+(\d\.\d)/i). DO NOT DO THIS AT HOME, at least not without a sensible fallback)
  • A recent Android Chrome User Agent
  • A recent iOS Safari User Agent
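The pitfall in those version-sniffing regexes is easy to demonstrate. The Firefox UA string is quoted from the list above; the Chrome-style UA is an illustrative example:

```javascript
// Firefox for Android's default UA carries no Android version number,
// so a sloppy version match silently fails.
const firefoxAndroid =
  "Mozilla/5.0 (Android; Mobile; rv:40.0) Gecko/40.0 Firefox/40.0";
const chromeish =
  "Mozilla/5.0 (Linux; Android 5.0; Nexus 5) AppleWebKit/537.36 Chrome/40.0.2214.109 Mobile Safari/537.36";

const versionRe = /Android\s+(\d\.\d)/i; // one of the patterns quoted above

console.log(versionRe.exec(firefoxAndroid)); // null - nothing to capture,
// so sites relying on the capture group serve a fallback (or nothing at all)
console.log(versionRe.exec(chromeish)[1]);   // "5.0"
```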

Some Ideas and Things We Can Do Together

There are a couple of things which can be done and where you can help.

  • Translating this article in Japanese.
  • Advocacy around you.
  • Publish an article about Web Compatibility and recipes in the Japanese press (Web Designing, Web Creators, etc.). I can help. Or maybe we could propose a monthly column in Web Designing on "let's fix it", where we would go through a known site issue and how to solve it.
  • Contact Web sites directly and point them to the bugs.
  • Share with us if you know a person or a friend of a friend working on these sites/companies. Talk about it around you! A CTO, a Web developer, someone who can help us negotiate a change on the site.
  • Report sites which are broken. It helps.

Old WebKit CSS, Flexbox Nightmare

Of all these efforts in contacting Web sites, the flexbox story is the most frustrating one. I talked a couple of times about it: Fix Your Flexbox Web site and Flexbox old syntax to new syntax converter. The frustration comes from two things:

  1. It's very easy to fix.
  2. The sites are using the outdated 1st version of Flexbox, which was developed for WebKit only.

Switching to the new standard syntax would actually improve their customers' reach and make them compatible with the future. It must also be frustrating for Apple and co., because it means they can't really retire the old code from their rendering engine without breaking sites. It's a chicken-and-egg situation. If you remove the support, you break sites but push sites to update. If you keep the support, sites don't get fixed, but users of other browsers can't use these sites. If they don't go to these sites, the browser doesn't show up in the stats, and so the site owners say: "We do not have to support this browser; nobody is using it on our site." Yes… you know. Running in circles.

In the end, it forces other browser vendors to do dirty things to make the Web usable for everyone.

Fixing Your CSS - Easy!

Hallvord Steen has developed a quick tool to help you fix your CSS. It's not perfect, but it will remove a big part of the hard work of figuring out how to convert WebKit-only flexbox or gradients to standard syntax supported everywhere.
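A converter of that sort boils down to a declaration-mapping table. A deliberately tiny sketch (a handful of hard-coded mappings, nowhere near the coverage of a real converter):

```javascript
// Hypothetical mini-converter: map a few old WebKit flexbox declarations
// to their standard equivalents via plain string substitution.
const oldToNew = {
  "display: -webkit-box": "display: flex",
  "-webkit-box-orient: horizontal": "flex-direction: row",
  "-webkit-box-pack: center": "justify-content: center",
  "-webkit-box-align: center": "align-items: center",
};

function convert(css) {
  // Apply each mapping everywhere it occurs in the stylesheet text.
  return Object.entries(oldToNew).reduce(
    (out, [oldDecl, newDecl]) => out.split(oldDecl).join(newDecl),
    css
  );
}

console.log(convert("display: -webkit-box; -webkit-box-pack: center;"));
// display: flex; justify-content: center;
```

A production tool would parse the CSS properly instead of substituting strings, but the mapping-table idea is the same.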


All of this is part of a much bigger effort for Web Compatibility in general. In the next couple of days, I will go through all the bugs we have already opened and check if there is anything new.

If we get the flexbox/gradient issues and the User Agent sniffing fixed, we will probably have solved 80% of the Web Compatibility issues in Japan.


Planet Mozilla: systemsetupusthebomb revisited: Vulnerable after all

Previously, previously. tl;dr: Apple uses a setuid binary called writeconfig to alter certain system settings which on at least 10.7+ could be used to write arbitrary files as setuid root, allowing almost instantaneous privilege escalation -- i.e., your computer is now pwned. This was fixed in Yosemite 10.10.3, but not any previous version. Originally I had not been able to exploit my 10.4 systems in the same fashion, so despite the binary being there, I concluded the actual vulnerability did not exist.

Well, Takashi Yoshi has succeeded where I failed (I'm still pretty confident on Darwin Nuke, though), and I have confirmed it on my systems using his RootPipe Tester tool. Please note, before you run, that this tool specifically exploits the vulnerability to write a setuid root file to disk, which if he weren't a nice guy means he now owns your system. Takashi is clearly a good guy but with any such tool you may wish to get in the habit of building from source you've closely examined, which he provides. The systemsetupusthebomb vulnerability is indeed successful on all versions of OS X going back to at least 10.2.8.

The workaround for this vulnerability is straightforward in concept -- disable writeconfig or neuter it -- but has side effects, because if you monkey with writeconfig the system will lose the capability to control certain configuration profiles (in 10.4, this generally affects the Sharing pane in System Preferences; 10.5+, which specifically exposes systemsetup, may be affected in other ways) and may also affect remote administration capabilities. Takashi and I exchanged E-mails on two specific solutions. Both of these possible solutions will alter system functionality, in a hopefully reversible fashion, but a blown command may interfere with administering your computer. Read carefully.

One solution is to rename (or remove, but this is obviously more drastic) writeconfig to something else. Admittedly this works a bit too well. RootPipe Tester actually crashed, which may be useful to completely stop a malicious app in its tracks, but it also made System Preferences unstable and will likely do the same to any app expecting to use Admin.framework. Although 10.4 seemed to handle this a bit better, it too locked up the Sharing pane after banging on it a bit. However, you can be guaranteed nothing will happen in this configuration because it's not possible for it to occur -- apps looking for the victim ToolLiaison class won't be able to find it. Since I'm rarely in that panel, this is the approach I've personally selected for my own systems, but I'm also fully comfortable with the limitations. You can control this with two commands in Terminal on 10.4-10.6 (make sure you fixed the issue with sudo first!):

go to a safe state: cd /System/Library/PrivateFrameworks/Admin.framework/Versions/A/Resources/ ; sudo mv writeconfig noconfig
go to original state: cd /System/Library/PrivateFrameworks/Admin.framework/Versions/A/Resources/ ; sudo mv noconfig writeconfig

For added security, make noconfig a custom filename only you know so an attacker won't be easily able to find it in an alternate location ... or, if you're nucking futs, archive or delete it entirely. (Not recommended except for the fascistic maniac.)

Takashi found the second approach to be gentler, but is slightly less secure: strip the setuid bits off. In this mode, the vulnerability can still be exploited to write arbitrary files, but as it lacks the setuid permission it cannot run as root and the file is only written as the current user (so no privilege escalation, just an unexpected file write). Applications that use Admin.framework simply won't work as expected; they shouldn't crash. For example, System Preferences will just "look at you" in the Sharing panel when you try to change or start a new system service -- nothing will happen. For many users, this will be the better option. Here are the Terminal commands for 10.4-10.6:

go to a safe state: cd /System/Library/PrivateFrameworks/Admin.framework/Versions/A/Resources/ ; sudo chmod u-s writeconfig
go to original state: cd /System/Library/PrivateFrameworks/Admin.framework/Versions/A/Resources/ ; sudo chmod u+s writeconfig

Choose one of these options. Most of the time, you should leave your system in the safe state. If you need to change Sharing or certain other settings with systemsetup or System Preferences, return to the original state, make the change, and return to the safe state.

Of course, one other option is to simply do nothing. This might be a surprising choice, but Takashi does make the well-taken point that this attack can only be perpetrated upon an administrative user where root is just your password away anyhow, and no implementation of this attack other than his runs on PowerPC. This isn't good enough for me personally, but his argument is reasonable, and if you have to do a lot of configuration changes on your system I certainly understand how inconvenient these approaches could be. (Perhaps someone will figure out a patch for System Preferences that does it for you. I'll leave that exercise to the reader.) As in many such situations, you alone will have to decide how much you're willing to put up with, but it's good to see other people are also working to keep our older Macs better protected on OS X.

Ob10.4Fx IonPower status report: 75% of V8 passing, interrupted briefly tonight to watch the new Star Wars trailer. I have cautious, cautious hope it won't suck, but J. J. Abrams, if you disappoint me, it will be for the last time (to paraphrase).

Planet MozillaMixing matching, mutation, and moves in Rust

One of the primary goals of the Rust project is to enable safe systems programming. Systems programming usually implies imperative programming, which in turn often implies side-effects, reasoning about shared state, et cetera.

At the same time, to provide safety, Rust programs and data types must be structured in a way that allows static checking to ensure soundness. Rust has features and restrictions that operate in tandem to ease writing programs that can pass these checks and thus ensure safety. For example, Rust incorporates the notion of ownership deeply into the language.

Rust's match expression is a construct that offers an interesting combination of such features and restrictions. A match expression takes an input value, classifies it, and then jumps to code written to handle the identified class of data.

In this post we explore how Rust processes such data via match. The crucial elements that match and its counterpart enum tie together are:

  • Structural pattern matching: case analysis with ergonomics vastly improved over a C or Java style switch statement.

  • Exhaustive case analysis: ensures that no case is omitted when processing an input.

  • match embraces both imperative and functional styles of programming: you can continue using break statements, assignments, et cetera, rather than being forced to adopt an expression-oriented mindset.

  • match "borrows" or "moves", as needed: Rust encourages the developer to think carefully about ownership and borrowing. To ensure that one is not forced to yield ownership of a value prematurely, match is designed with support for merely borrowing substructure (as opposed to always moving such substructure).

We cover each of the items above in detail below, but first we establish a foundation for the discussion: What does match look like, and how does it work?

The Basics of match

The match expression in Rust has this form:

match INPUT_EXPRESSION {
    PATTERNS_1 => RESULT_EXPRESSION_1,
    PATTERNS_2 => RESULT_EXPRESSION_2,
    ...
    PATTERNS_n => RESULT_EXPRESSION_n,
}
where each of the PATTERNS_i contains at least one pattern. A pattern describes a subset of the possible values to which INPUT_EXPRESSION could evaluate. The syntax PATTERNS => RESULT_EXPRESSION is called a "match arm", or simply "arm".

Patterns can match simple values like integers or characters; they can also match user-defined symbolic data, defined via enum.
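For instance, a match over plain integers might look like this (a hypothetical helper of mine, not taken from the post):

```rust
// Hypothetical example: matching simple integer values.
// `|` combines alternatives; `_` is the catch-all.
fn describe(n: i32) -> &'static str {
    match n {
        0 => "zero",
        1 | 2 | 3 => "small",
        _ => "other",
    }
}

fn main() {
    assert_eq!(describe(0), "zero");
    assert_eq!(describe(2), "small");
    assert_eq!(describe(42), "other");
}
```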

The below code demonstrates generating the next guess (poorly) in a number guessing game, given the answer from a previous guess.

enum Answer {
    Higher,
    Lower,
    Bingo,
}

fn suggest_guess(prior_guess: u32, answer: Answer) {
    match answer {
        Answer::Higher => println!("maybe try {} next", prior_guess + 10),
        Answer::Lower  => println!("maybe try {} next", prior_guess - 1),
        Answer::Bingo  => println!("we won with {}!", prior_guess),
    }
}

fn demo_suggest_guess() {
    suggest_guess(10, Answer::Higher);
    suggest_guess(20, Answer::Lower);
    suggest_guess(19, Answer::Bingo);
}

(Incidentally, nearly all the code in this post is directly executable; you can cut-and-paste the code snippets into a file, compile the file with --test, and run the resulting binary to see the tests run.)

Patterns can also match structured data (e.g. tuples, slices, user-defined data types) via corresponding patterns. In such patterns, one often binds parts of the input to local variables; those variables can then be used in the result expression.

The special _ pattern matches any single value, and is often used as a catch-all; the special .. pattern generalizes this by matching any series of values or name/value pairs.

Also, one can collapse multiple patterns into one arm by separating the patterns by vertical bars (|); thus that arm matches either this pattern, or that pattern, et cetera.

These features are illustrated in the following revision to the guessing-game answer generation strategy:

struct GuessState {
    guess: u32,
    answer: Answer,
    low: u32,
    high: u32,
}

fn suggest_guess_smarter(s: GuessState) {
    match s {
        // First arm only fires on Bingo; it binds `p` to last guess.
        GuessState { answer: Answer::Bingo, guess: p, .. } => {
     // ~~~~~~~~~~   ~~~~~~~~~~~~~~~~~~~~~  ~~~~~~~~  ~~
     //     |                 |                 |     |
     //     |                 |                 |     Ignore remaining fields
     //     |                 |                 |
     //     |                 |      Copy value of field `guess` into local variable `p`
     //     |                 |
     //     |   Test that `answer` field is equal to `Bingo`
     //     |
     //  Match against an instance of the struct `GuessState`

            println!("we won with {}!", p);
        }

        // Second arm fires if answer was too low or too high.
        // We want to find a new guess in the range (l..h), where:
        // - If it was too low, then we want something higher, so we
        //   bind the guess to `l` and use our last high guess as `h`.
        // - If it was too high, then we want something lower; bind
        //   the guess to `h` and use our last low guess as `l`.
        GuessState { answer: Answer::Higher, low: _, guess: l, high: h } |
        GuessState { answer: Answer::Lower,  low: l, guess: h, high: _ } => {
     // ~~~~~~~~~~   ~~~~~~~~~~~~~~~~~~~~~   ~~~~~~  ~~~~~~~~  ~~~~~~~
     //     |                 |                 |        |        |
     //     |                 |                 |        |    Copy or ignore
     //     |                 |                 |        |    field `high`,
     //     |                 |                 |        |    as appropriate
     //     |                 |                 |        |
     //     |                 |                 |  Copy field `guess` into
     //     |                 |                 |  local variable `l` or `h`,
     //     |                 |                 |  as appropriate
     //     |                 |                 |
     //     |                 |    Copy value of field `low` into local
     //     |                 |    variable `l`, or ignore it, as appropriate
     //     |                 |
     //     |   Test that `answer` field is equal
     //     |   to `Higher` or `Lower`, as appropriate
     //     |
     //  Match against an instance of the struct `GuessState`

            let mid = l + ((h - l) / 2);
            println!("lets try {} next", mid);
        }
    }
}

fn demo_guess_state() {
    suggest_guess_smarter(GuessState {
        guess: 20, answer: Answer::Lower, low: 10, high: 1000
    });
}

This ability to simultaneously perform case analysis and bind input substructure leads to powerful, clear, and concise code, focusing the reader's attention directly on the data relevant to the case at hand.

That is match in a nutshell.

So, what is the interplay between this construct and Rust's approach to ownership and safety in general?

Exhaustive case analysis

...when you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.

-- Sherlock Holmes (Arthur Conan Doyle, "The Blanched Soldier")

One useful way to tackle a complex problem is to break it down into individual cases and analyze each case individually. For this method of problem solving to work, the breakdown must be collectively exhaustive; all of the cases you identified must actually cover all possible scenarios.

Using enum and match in Rust can aid this process, because match enforces exhaustive case analysis: Every possible input value for a match must be covered by the pattern in at least one arm in the match.

This helps catch bugs in program logic and ensures that the value of a match expression is well-defined.

So, for example, the following code is rejected at compile-time.

fn suggest_guess_broken(prior_guess: u32, answer: Answer) {
    let next_guess = match answer {
        Answer::Higher => prior_guess + 10,
        Answer::Lower  => prior_guess - 1,
        // ERROR: non-exhaustive patterns: `Bingo` not covered
    };
    println!("maybe try {} next", next_guess);
}

Many other languages offer a pattern matching construct (ML and various macro-based match implementations in Scheme both come to mind), but not all of them have this restriction.

Rust has this restriction for these reasons:

  • First, as noted above, dividing a problem into cases only yields a general solution if the cases are exhaustive. Exhaustiveness-checking exposes logical errors.

  • Second, exhaustiveness-checking can act as a refactoring aid. During the development process, I often add new variants for a particular enum definition. The exhaustiveness-check helps point out all of the match expressions where I only wrote the cases from the prior version of the enum type.

  • Third, since match is an expression form, exhaustiveness ensures that such expressions always either evaluate to a value of the correct type, or jump elsewhere in the program.

Jumping out of a match

The following code is a fixed version of the suggest_guess_broken function we saw above; it directly illustrates "jumping elsewhere":

fn suggest_guess_fixed(prior_guess: u32, answer: Answer) {
    let next_guess = match answer {
        Answer::Higher => prior_guess + 10,
        Answer::Lower  => prior_guess - 1,
        Answer::Bingo  => {
            println!("we won with {}!", prior_guess);
            return;
        }
    };
    println!("maybe try {} next", next_guess);
}

fn demo_guess_fixed() {
    suggest_guess_fixed(10, Answer::Higher);
    suggest_guess_fixed(20, Answer::Lower);
    suggest_guess_fixed(19, Answer::Bingo);
}

The suggest_guess_fixed function illustrates that match can handle some cases early (and then immediately return from the function), while computing whatever values are needed from the remaining cases and letting them fall through to the remainder of the function body.

We can add such special case handling via match without fear of overlooking a case, because match will force the case analysis to be exhaustive.

Algebraic Data Types and Structural Invariants

Algebraic data types succinctly describe classes of data and allow one to encode rich structural invariants. Rust uses enum and struct definitions for this purpose.

An enum type allows one to define mutually-exclusive classes of values. The examples shown above used enum for simple symbolic tags, but in Rust, enums can define much richer classes of data.

For example, a binary tree is either a leaf, or an internal node with references to two child trees. Here is one way to encode a tree of integers in Rust:

enum BinaryTree {
    Leaf(i32),
    Node(Box<BinaryTree>, i32, Box<BinaryTree>)
}

(The Box<V> type describes an owning reference to a heap-allocated instance of V; if you own a Box<V>, then you also own the V it contains, and can mutate it, lend out references to it, et cetera. When you finish with the box and let it fall out of scope, it will automatically clean up the resources associated with the heap-allocated V.)
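To make those Box behaviors concrete, here is a tiny sketch of mine (not from the post):

```rust
// Sketch: a Box<i32> owns its heap-allocated integer.
fn main() {
    let mut b: Box<i32> = Box::new(41);
    *b += 1;           // we own the i32, so we can mutate through the box
    let r: &i32 = &*b; // ...or lend out references to it
    assert_eq!(*r, 42);
}   // `b` falls out of scope here; the heap allocation is freed automatically
```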

The above enum definition ensures that if we are given a BinaryTree, it will always fall into one of the above two cases. One will never encounter a BinaryTree::Node that does not have a left-hand child. There is no need to check for null.

One does need to check whether a given BinaryTree is a Leaf or is a Node, but the compiler statically ensures such checks are done: you cannot accidentally interpret the data of a Leaf as if it were a Node, nor vice versa.

Here is a function that sums all of the integers in a tree using match.

fn tree_weight_v1(t: BinaryTree) -> i32 {
    match t {
        BinaryTree::Leaf(payload) => payload,
        BinaryTree::Node(left, payload, right) => {
            tree_weight_v1(*left) + payload + tree_weight_v1(*right)
        }
    }
}

/// Returns a tree that looks like:
///      +----(4)---+
///      |          |
///   +-(2)-+      [5]
///   |     |   
///  [1]   [3]
fn sample_tree() -> BinaryTree {
    let l1 = Box::new(BinaryTree::Leaf(1));
    let l3 = Box::new(BinaryTree::Leaf(3));
    let n2 = Box::new(BinaryTree::Node(l1, 2, l3));
    let l5 = Box::new(BinaryTree::Leaf(5));

    BinaryTree::Node(n2, 4, l5)
}

fn tree_demo_1() {
    let tree = sample_tree();
    assert_eq!(tree_weight_v1(tree), (1 + 2 + 3) + 4 + 5);
}

Algebraic data types establish structural invariants that are strictly enforced by the language. (Even richer representation invariants can be maintained via the use of modules and privacy; but let us not digress from the topic at hand.)

Both expression- and statement-oriented

Unlike many languages that offer pattern matching, Rust embraces both statement- and expression-oriented programming.

Many functional languages that offer pattern matching encourage one to write in an "expression-oriented style", where the focus is always on the values returned by evaluating combinations of expressions, and side-effects are discouraged. This style contrasts with imperative languages, which encourage a statement-oriented style that focuses on sequences of commands executed solely for their side-effects.

Rust excels in supporting both styles.

Consider writing a function which maps a non-negative integer to a string rendering it as an ordinal ("1st", "2nd", "3rd", ...).

The following code uses range patterns to simplify things, but also, it is written in a style similar to a switch in a statement-oriented language like C (or C++, Java, et cetera), where the arms of the match are executed for their side-effect alone:

fn num_to_ordinal(x: u32) -> String {
    let suffix;
    match (x % 10, x % 100) {
        (1, 1) | (1, 21...91) => {
            suffix = "st";
        }
        (2, 2) | (2, 22...92) => {
            suffix = "nd";
        }
        (3, 3) | (3, 23...93) => {
            suffix = "rd";
        }
        _                     => {
            suffix = "th";
        }
    }
    return format!("{}{}", x, suffix);
}

fn test_num_to_ordinal() {
    assert_eq!(num_to_ordinal(   0),    "0th");
    assert_eq!(num_to_ordinal(   1),    "1st");
    assert_eq!(num_to_ordinal(  12),   "12th");
    assert_eq!(num_to_ordinal(  22),   "22nd");
    assert_eq!(num_to_ordinal(  43),   "43rd");
    assert_eq!(num_to_ordinal(  67),   "67th");
    assert_eq!(num_to_ordinal(1901), "1901st");
}

The Rust compiler accepts the above program. This is notable because its static analyses ensure both:

  • suffix is always initialized before we run the format! at the end of the function, and

  • suffix is assigned at most once during the function's execution (because if we could assign suffix multiple times, the compiler would force us to mark suffix as mutable).

To be clear, the above program certainly can be written in an expression-oriented style in Rust; for example, like so:

fn num_to_ordinal_expr(x: u32) -> String {
    format!("{}{}", x, match (x % 10, x % 100) {
        (1, 1) | (1, 21...91) => "st",
        (2, 2) | (2, 22...92) => "nd",
        (3, 3) | (3, 23...93) => "rd",
        _                     => "th"
    })
}

Sometimes expression-oriented style can yield very succinct code; other times the style requires contortions that can be avoided by writing in a statement-oriented style. (The ability to return from one match arm in the suggest_guess_fixed function earlier was an example of this.)

Each of the styles has its use cases. Crucially, switching to a statement-oriented style in Rust does not sacrifice every other feature that Rust provides, such as the guarantee that a non-mut binding is assigned at most once.

An important case where this arises is when one wants to initialize some state and then borrow from it, but only on some control-flow branches.

fn sometimes_initialize(input: i32) {
    let string: String; // a dynamically-constructed string value
    let borrowed: &str; // a reference to string data
    match input {
        0...100 => {
            // Construct a String on the fly...
            string = format!("input prints as {}", input);
            // ... and then borrow from inside it.
            borrowed = &string[6..];
        }
        _ => {
            // String literals are *already* borrowed references
            borrowed = "expected between 0 and 100";
        }
    }
    println!("borrowed: {}", borrowed);

    // Below would cause compile-time error if uncommented...

    // println!("string: {}", string);

    // ...namely: error: use of possibly uninitialized variable: `string`
}

fn demo_sometimes_initialize() {
    sometimes_initialize(23);  // this invocation will initialize `string`
    sometimes_initialize(123); // this one will not
}

The interesting thing about the above code is that after the match, we are not allowed to directly access string, because the compiler requires that the variable be initialized on every path through the program before it can be accessed. At the same time, we can, via borrowed, access data that may be held within string, because a reference to that data is held by the borrowed variable when we go through the first match arm, and we ensure borrowed itself is initialized on every execution path through the program that reaches the println! that uses borrowed.

(The compiler ensures that no outstanding borrows of the string data could possibly outlive string itself, and the generated code ensures that at the end of the scope of string, its data is deallocated if it was previously initialized.)

In short, for soundness, the Rust language ensures that data is always initialized before it is referenced, but the designers have strived to avoid requiring artificial coding patterns adopted solely to placate Rust's static analyses (such as requiring one to initialize string above with some dummy data, or requiring an expression-oriented style).

Matching without moving

Matching an input can borrow input substructure, without taking ownership; this is crucial for matching a reference (e.g. a value of type &T).

The "Algebraic Data Types" section above described a tree datatype, and showed a program that computed the sum of the integers in a tree instance.

That version of tree_weight has one big downside, however: it takes its input tree by value. Once you pass a tree to tree_weight_v1, that tree is gone (as in, deallocated).

fn tree_demo_v1_fails() {
    let tree = sample_tree();
    assert_eq!(tree_weight_v1(tree), (1 + 2 + 3) + 4 + 5);

    // If you uncomment this line below ...

    // assert_eq!(tree_weight_v1(tree), (1 + 2 + 3) + 4 + 5);

    // ... you will get: error: use of moved value: `tree`
}

This is not a consequence, however, of using match; it is rather a consequence of the function signature that was chosen:

fn tree_weight_v1(t: BinaryTree) -> i32 { 0 }
//                   ^~~~~~~~~~ this means this function takes ownership of `t`

In fact, in Rust, match is designed to work quite well without taking ownership. In particular, the input to match is an L-value expression; this means that the input expression is evaluated to a memory location where the value lives. match works by doing this evaluation and then inspecting the data at that memory location.

(If the input expression is a variable name or a field/pointer dereference, then the L-value is just the location of that variable or field/memory. If the input expression is a function call or other operation that generates an unnamed temporary value, then it will be conceptually stored in a temporary area, and that is the memory location that match will inspect.)
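As a small sketch of mine (not from the post), matching on a function call's result works exactly this way: the result lands in a temporary location, and match inspects that location:

```rust
// Hypothetical helper: the call's result is stored in a temporary,
// and `match` inspects that temporary location in place.
fn roll() -> u32 { 4 }

fn main() {
    let msg = match roll() % 2 {
        0 => "even",
        _ => "odd",
    };
    assert_eq!(msg, "even");
}
```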

So, if we want a version of tree_weight that merely borrows a tree rather than taking ownership of it, then we will need to make use of this feature of Rust's match.

fn tree_weight_v2(t: &BinaryTree) -> i32 {
    //               ^~~~~~~~~~~ The `&` means we are *borrowing* the tree
    match *t {
        BinaryTree::Leaf(payload) => payload,
        BinaryTree::Node(ref left, payload, ref right) => {
            tree_weight_v2(left) + payload + tree_weight_v2(right)
        }
    }
}

fn tree_demo_2() {
    let tree = sample_tree();
    assert_eq!(tree_weight_v2(&tree), (1 + 2 + 3) + 4 + 5);
}

The function tree_weight_v2 looks very much like tree_weight_v1. The only differences are: we take t as a borrowed reference (the & in its type), we added a dereference *t, and, importantly, we use ref-bindings for left and right in the Node case.

The dereference *t, interpreted as an L-value expression, is just extracting the memory address where the BinaryTree is represented (since the t: &BinaryTree is just a reference to that data in memory). The *t here is not making a copy of the tree, nor moving it to a new temporary location, because match is treating it as an L-value.

The only piece left is the ref-binding, which is a crucial part of how destructuring bind of L-values works.

First, let us carefully state the meaning of a non-ref binding:

  • When matching a value of type T, an identifier pattern i will, on a successful match, move the value out of the original input and into i. Thus we can always conclude in such a case that i has type T (or more succinctly, "i: T").

For some types T, known as copyable T (also pronounced "T implements Copy"), the value will in fact be copied into i for such identifier patterns. (Note that in general, an arbitrary type T is not copyable.)

Either way, such pattern bindings do mean that the variable i has ownership of a value of type T.

Thus, the bindings of payload in tree_weight_v2 both have type i32; the i32 type implements Copy, so the weight is copied into payload in both arms.
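As a hedged illustration (the values and names here are mine, not from the post), the move-versus-copy distinction looks like this in practice:

```rust
// Sketch: an identifier pattern moves a non-Copy value out of the input,
// but merely copies a Copy value such as i32.
fn main() {
    let opt = Some(String::from("hello"));
    let len = match opt {
        Some(s) => s.len(), // `s: String` — the String is moved into `s`
        None => 0,
    };
    assert_eq!(len, 5);
    // `opt` can no longer be used here: its String was moved out.

    let num = Some(7i32);
    let doubled = match num {
        Some(n) => n * 2, // `n: i32` — i32 is Copy, so only a copy is taken
        None => 0,
    };
    assert_eq!(doubled, 14);
    assert_eq!(num, Some(7)); // `num` is still usable
}
```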

Now we are ready to state what a ref-binding is:

  • When matching an L-value of type T, a ref-pattern ref i will, on a successful match, merely borrow a reference into the matched data. In other words, a successful ref i match of a value of type T will imply that i has the type of a reference to T (or more succinctly, "i: &T").

Thus, in the Node arm of tree_weight_v2, left will be a reference to the left-hand box (which holds a tree), and right will likewise reference the right-hand tree.

We can pass these borrowed references to trees into the recursive calls to tree_weight_v2, as the code demonstrates.
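In a smaller setting (a sketch of mine, not from the post), a ref-binding leaves the original value intact:

```rust
// Sketch: `ref` borrows instead of moving, so the matched value
// remains usable after the match.
fn main() {
    let pair = (String::from("hi"), 5);
    let length = match pair {
        (ref s, n) => s.len() + n as usize, // s: &String (borrowed), n: i32 (copied)
    };
    assert_eq!(length, 7);
    assert_eq!(pair.0, "hi"); // `pair` was only borrowed, not consumed
}
```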

Likewise, a ref mut-pattern (ref mut i) will, on a successful match, borrow a mutable reference into the input: i: &mut T. This allows mutation and ensures there are no other active references to that data at the same time. A destructuring binding form like match allows one to take mutable references to disjoint parts of the data simultaneously.

This code demonstrates this concept by incrementing all of the values in a given tree.

fn tree_grow(t: &mut BinaryTree) {
    //          ^~~~~~~~~~~~~~~ `&mut`: we have exclusive access to the tree
    match *t {
        BinaryTree::Leaf(ref mut payload) => *payload += 1,
        BinaryTree::Node(ref mut left, ref mut payload, ref mut right) => {
            *payload += 1;
            tree_grow(left);
            tree_grow(right);
        }
    }
}

fn tree_demo_3() {
    let mut tree = sample_tree();
    tree_grow(&mut tree);
    assert_eq!(tree_weight_v2(&tree), (2 + 3 + 4) + 5 + 6);
}

Note that the code above now binds payload by a ref mut-pattern; if it did not use a ref pattern, then payload would be bound to a local copy of the integer, while we want to modify the actual integer in the tree itself. Thus we need a reference to that integer.

Note also that the code is able to bind left and right simultaneously in the Node arm. The compiler knows that the two values cannot alias, and thus it allows both &mut-references to live simultaneously.
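These disjoint mutable borrows can be seen in a smaller setting too (a sketch of mine, not from the post):

```rust
// Sketch: destructuring with `ref mut` hands out two mutable borrows
// of provably disjoint parts of the same value at the same time.
fn main() {
    let mut pair = (1, 2);
    {
        let (ref mut a, ref mut b) = pair;
        *a += 10;
        *b += 20;
    }
    assert_eq!(pair, (11, 22));
}
```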

Conclusion

Rust takes the ideas of algebraic data types and pattern matching pioneered by the functional programming languages, and adapts them to imperative programming styles and Rust's own ownership and borrowing systems. The enum and match forms provide clean data definitions and expressive power, while static analysis ensures that the resulting programs are safe.

For more information on details that were not covered here, such as:

  • how to say Higher instead of Answer::Higher in a pattern,

  • defining new named constants,

  • binding via ident @ pattern, or

  • the potentially subtle difference between { let id = expr; ... } versus match expr { id => { ... } },

consult the Rust documentation, or quiz our awesome community (in #rust on IRC, or in the user group).

(Many thanks to those who helped review this post, especially Aaron Turon and Niko Matsakis, as well as Mutabah, proc, libfud, asQuirrel, and annodomini from #rust.)

Planet Mozillamerit badge

Are you jealous of the badges that npm modules or rubygems have, showing off their latest published version? Of course you are. I was too. When looking at a repository, the version may change as code is merged in, before being published. Now, instead of looking at a repo and wondering what version is latest, that repo can add a badge.

Here’s hyper’s:

Besides wanting to be able to use badges like these for my own repositories, I also wanted a simple example of using hyper to create an app. Bonus points: it uses the hyper Server and Client together.

Go ahead, get your merit badge.

Planet MozillaParticipation at Mozilla

Participation at Mozilla The Participation Forum

Planet MozillaTop 50 DOS Problems Solved: Whoops, I Deleted Everything

Q: I accidentally deleted all the files in the root directory of my hard disk for the second time this month. I managed to reinstall everything, but is there a way of avoiding the problem?

A: There are two approaches you could try, both of which have applications for other things too:

  • Modify the files so that they cannot be deleted without first explicitly making them deletable. You can do this with the DOS utility Attrib which was supplied with your system. … To protect the file use the command:

    ATTRIB +R filename

    The +R switch means “make this file read-only”.

  • Stop using the DEL command to delete files. Use a batch file instead which will prompt you before taking action.

    … <batch file code is given> …

    This batch file has a useful enhancement beyond the precautionary message. You can use it to specify multiple files, for example:


    With one command this would delete all .BAK files, FRED.BAS, and all .DOC files whose names begin with a single letter.

A delete command which takes multiple arguments – wow…

Planet MozillaKids' Vision - Mentorship Series

Kids' Vision - Mentorship Series Mozilla hosts Kids Vision Bay Area Mentor Series

Planet MozillaQuality Team (QA) Public Meeting

Quality Team (QA) Public Meeting This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...

Planet MozillaProduct Coordination Meeting

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Planet MozillaThe Joy of Coding (mconley livehacks on Firefox) - Episode 10

The Joy of Coding (mconley livehacks on Firefox) - Episode 10 Watch mconley livehack on Firefox Desktop bugs!

Planet MozillaMozilla pushes - March 2015

Here's March 2015's monthly analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a json file.

The number of pushes increased from those recorded in the previous month, with a total of 10943.

  • 10943 pushes
  • 353 pushes/day (average)
  • Highest number of pushes/day: 579 pushes on Mar 11, 2015
  • 23.18 pushes/hour (highest average)

General Remarks
  • Try keeps on having around 49% of all the pushes
  • The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 26% of all the pushes.

  • August 2014 was the month with the most pushes (13090 pushes)
  • August 2014 had the highest pushes/day average with 422 pushes/day
  • July 2014 had the highest average of "pushes-per-hour" with 23.51 pushes/hour
  • October 8, 2014 had the highest number of pushes in one day with 715 pushes 

Planet MozillaRunning a web server on the front-end

The introduction of TCP sockets support in Firefox OS made it possible to run a web server from the front-end, written entirely in JavaScript. Think of having something similar to express.js… but running on a browser (because, after all, Firefox OS is a superturbocharged browser).

Again, JS server superstar Justin d’Archangelo wrote an implementation of a web server that works on Firefox OS. It’s called fxos-web-server and it includes a few examples you can run.

None of the examples particularly fit my use case–I want to serve static content from a phone to other phones, but the examples were a bit more contrived. So I decided to build a simpler proof-of-concept example: catserver, a web server that served a simple page with full screen Animated GIFs of cats:


The first thing I wanted to do was to use Browserify “proper”, to write my app in a more modular way. For this I had to fork Justin’s original project and modify its package.json so it would let me require() the server instead of tucking its variable in the window globals :-) – sadly I have yet to send him a PR, so you will need to be aware of this difference. The dependency in package.json points to my fork for now:

"fxos-web-server": "git+"

Building (with gulp)

My example is composed of two “websites”. The first one is the web server app itself which is what is executed in the “server” device. The sources for this are in src.

The second website is what the server will transfer to devices that connect to it, so this actually gets executed in the client devices. This is where the cats are! Its contents are in the www folder.

Both websites are packaged in just one ZIP file and then installed onto the server device.

This build process also involves running Browserify first so I can get from node-style code that uses require() to a JavaScript bundle that runs on Firefox OS. So I’m using gulp to do all that. The tasks are in gulpfile.js.

Web server app

Thanks to Justin’s work, this is fairly simple. We create an HTTP server on port 80:

var HTTPServer = require('fxos-web-server');
var server = new HTTPServer(80);

And then we add a listener for when a request is made, so we can respond to it (and serve content to the connected client):

server.addEventListener('request', function(evt) {
      var request = evt.request;
      var response = evt.response;

      //... Decide what to send
});

The “decide what to send” part is a bit like writing nginx or express config files. In this case I wanted to:

  • serve an index.html file if we request a “directory” (i.e. a path that ends with “/”)
  • serve static content if we request a file (i.e. the path doesn’t end with “/”)
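The directory-vs-file decision itself boils down to a tiny path-mapping function. Here is a minimal sketch (resolvePath is a hypothetical helper for illustration, not part of fxos-web-server):

```javascript
// Map a requested path to the file that should actually be sent.
// Paths ending in "/" are treated as directories and get index.html appended.
function resolvePath(path) {
  if (path.charAt(path.length - 1) === '/') {
    return path + 'index.html';
  }
  return path;
}

console.log(resolvePath('/'));            // "/index.html"
console.log(resolvePath('/cats/'));       // "/cats/index.html"
console.log(resolvePath('/img/cat.gif')); // "/img/cat.gif"
```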

Before serving a file we need to tell the client what kind of content we’re sending to it. I’m using a very simple “extension to content type” function that determines MIME types based on the extension in the path. E.g. ‘html’ returns text/html.
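As a sketch, such a lookup might look like this (the table below is illustrative; the actual function in catserver may know about different types):

```javascript
// Illustrative extension-to-MIME-type table.
var MIME_TYPES = {
  html: 'text/html',
  css: 'text/css',
  js: 'application/javascript',
  gif: 'image/gif',
  png: 'image/png',
  jpg: 'image/jpeg'
};

function getContentType(path) {
  var extension = path.split('.').pop().toLowerCase();
  // Unknown extensions fall back to a generic binary type.
  return MIME_TYPES[extension] || 'application/octet-stream';
}

console.log(getContentType('/index.html'));  // "text/html"
console.log(getContentType('/img/cat.gif')); // "image/gif"
```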

Once we know the content type, we set it on the response headers:

response.headers['Content-Type'] = getContentType(fileToSend);

and use Justin’s shortcut function sendFile to send a file from our app to the client:
response.sendFile(fileToSend);


With the request handler set up, we can finally start the server!
server.start();


But… how does it even work?!

Welcome to the Hack of The Week!

When you call sendFile, here is what the server app does: it creates an XMLHttpRequest with responseType set to arraybuffer to load the raw contents of the resource. When the XMLHttpRequest fires its load event, the server takes the loaded data and sends it to the client. That’s it!

Naive! Simple! It works! (for simple cases)

Ways that this could be improved – AKA “wanna make this better?! this is your chance!”

As I mentioned above, right now this is very naive and assumes that the files will exist. If they don’t, well, horrible things will happen. Or you will get a 404 error. I haven’t tried it myself. I’m not sure. I’d say there is no error handling (yet).

The extension to content type function could be made into a module, and probably extended to know about more file types. It probably exists already, somewhere in npmlandia.

Another idea I had: instead of loading the entire resource into memory and then flushing it down the pipe as sendFile does, we could use progress events on the XMLHttpRequest and feed the data to the client as we load it, so that the server device won’t run out of memory. I don’t know how we could find the length of the resource first; perhaps a HEAD request could work!

And finally we could try to serve each request in a worker, so that the server doesn’t block when responding to a large request. Am I going too far? I don’t even know if the TCP sockets work with workers, or if there are issues with that, or who knows!? This is unexplored territory, welcome to Uncertainty Land! :-D

Even more extreme ways to get very… “creative”

What if you wrote a PHP parser that runs in JavaScript and then parsed .php files instead of just returning their contents as text/html?

I’m only half kidding, but you could maybe execute JS templates on the server. Forget security! You trust the content, right? ;-)

Another thing you could do is take advantage of the fact that the server is also a browser. So you can do browsersy things such as using Canvas to generate images, instead of having to load libraries that simulate canvas in node. Or you could synthesise web audio stuff on demand–maybe you could use an OfflineAudioWorker for extra non-blocking goodness! Or, if you want to go towards a relatively more boring direction, you could do DOM / text handling on a server that can deal with that kind of stuff natively.

With all the new platforms on which we can run Firefox OS, there are so many things we can do! Phone servers might be limited by battery, but a Raspberry Pi running Firefox OS and connected to a power source can be an interesting platform for experimenting with this kind of easy-to-write web server.

But I saw NFC mentioned in the video!

Indeed! But I will cover that in another post, so we keep this one focused on the HTTP server :-)

Happy www serving!

flattr this!

Planet MozillaFirefox mobile 37.0.1 to 37.0.2

37.0.2 mobile fixes a critical issue for our Japanese users.

A 37.0.2 desktop release is planned to fix some issues (especially graphics ones). It should be published in the next few days.

  • 1 changeset
  • 4 files changed
  • 4 insertions
  • 5 deletions



List of changesets:

Mark FinkleBug 1151469 - Tweak the package manifest to avoid packaging the wrong file. r=rnewman, a=lmandel - c8866e34cbf3

Planet Mozilla(re)Introducing eSpeak.js


Look! A flashy demo with buttons!


A long time ago, we were investigating a way to expose text-to-speech functionality on the web. This was long before the Web Speech API was drafted, and it wasn’t yet clear what this kind of feature would look like. Alon Zakai stepped up and proposed porting eSpeak to JavaScript with Emscripten. This was a provocative idea: was our platform powerful enough to support speech synthesis purely in JS? Alon got back a few days later with a working demo; the answer was “yes”.

While the speak.js port was very impressive, it didn’t answer many of our practical needs. For example, the latency was not good enough for a responsive UI: you could wait more than a couple of seconds to hear a short phrase. In addition, the longer the text you wanted to synthesize, the longer you needed to wait.

It proved a concept, but there were missing pieces we didn’t have four years ago. Today, we live in the future of 2011, and things that were theoretical then are possible now (in the future).


asm.js

Today, Emscripten will compile C/C++ code into a subset of JavaScript called asm.js. This subset is optimized on all current browsers, and allows performance within about 2x of native. That is really good. eSpeak is a pretty lightweight library already, and the extra performance boost of asm.js makes speech instantaneous.

Transferable Objects

Passing data between a web worker and a parent process used to mean a lot of copying, since the worker doesn’t share memory with the parent process. But today, you can transfer ownership of ArrayBuffers with zero copying. When the web worker is ready to send audio data back to the calling process, it could do so while maintaining a single copy of the audio buffer.
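The zero-copy semantics are easy to see: once a buffer is transferred, the sender’s copy is detached. In a worker you would write postMessage(buffer, [buffer]); the sketch below uses structuredClone (a later API that accepts the same kind of transfer list) purely to illustrate the detaching behavior:

```javascript
// An ArrayBuffer holding, say, synthesized audio samples.
const audioData = new ArrayBuffer(1024);

// Transfer ownership instead of copying. In a worker this would be:
//   postMessage(audioData, [audioData]);
const transferred = structuredClone(audioData, { transfer: [audioData] });

console.log(transferred.byteLength); // 1024
console.log(audioData.byteLength);   // 0: the original is detached, nothing was copied
```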

Web Audio API

We have a slick, full featured Audio API today on the web. When speak.js came out in 2011, it used a prefixed method on an <audio> element to write PCM data to. Today, we have a proper API that enables us to take the audio data and send it through an elaborate pipeline of filters and mixers, or even send it into the ether with WebRTC.
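One bit of glue such a pipeline needs: speech synthesizers typically emit 16-bit integer PCM, while Web Audio buffers hold 32-bit floats in [-1, 1]. A conversion sketch (int16ToFloat32 is a hypothetical helper, not part of the eSpeak.js API):

```javascript
// Convert signed 16-bit PCM samples to Float32 samples in [-1, 1],
// ready to copy into an AudioBuffer channel.
function int16ToFloat32(samples) {
  const out = new Float32Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    out[i] = samples[i] / 32768;
  }
  return out;
}

// In a page you would then do something like:
//   const buffer = audioCtx.createBuffer(1, floats.length, 22050);
//   buffer.getChannelData(0).set(floats);

const floats = int16ToFloat32(new Int16Array([0, 16384, -32768]));
console.log(floats); // contents: [0, 0.5, -1]
```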

Emscripten Got Fancy

This was my first time playing with it, so I am not sure what was available in 2011. But, if I have to guess, it was not as powerful and fun to work with. Emscripten’s new WebIDL support makes adding bindings extremely easy. You still get a chance to do some pointer arithmetic, but that’s supposed to be fun. Right?

So here is eSpeak.js!

I wanted to do a real API port, as opposed to simply porting a command line program that takes input and writes a WAV file. Why? Two main reasons:

  1. eSpeak can progressively synthesize speech. If you provide a callback to espeak_Synth(), it will be called repeatedly with as many samples as you defined in the buffer size. It doesn’t matter how long the text is that you want synthesized, it will fill the buffer and return it to you immediately. This allows for a consistent low latency from the moment you call espeak_Synth(), until you could start playing audio.
  2. eSpeak supports events. If you use a callback, you get access to a list of events that provide a timestamp in the audio, and the type of event that occurs there, such as word or sentence boundaries.

And, of course, with all the recent-ish platform improvements above, it was really time for a fresh attempt.

Future Work

  • Break up the data files. Right now, eSpeak.js is over a 2MB download. That’s because I packaged all the eSpeak data files indiscriminately. There may be a few bits that are redundant. On the flip side, you get all 99 voice/language combinations (that’s a good deal for 2MB, eh?). It would be cool to break it up into a few data files and allow the developer to choose which voices to bundle or, even better, just grab them on demand.
  • Make a demo of the speech events. It makes my head hurt to think about how to do something compelling. But it is a neat feature that should somehow be shown.
  • ScriptProcessorNode is apparently deprecated. This is going to need to be ported to an AudioWorker once that is widely implemented.

I’m done apologizing, here is the demo.

Planet MozillaAnnouncing git-cinnabar 0.2.1

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

What’s new since 0.2.0?

Not much, but this felt important enough to warrant a release, even though the issue has been there since before 0.1.0:

Mercurial can be slower when cloning or pulling a list of “heads” that contains non-topological heads. On repositories like the mercurial repository, it’s not that big a deal, taking 7s instead of 4s. But on big repositories like mozilla-central, it takes 23 minutes instead of 2 minutes and 20s (on my machine). And that’s with 100% CPU use on the server side.

The problem is that mozilla-central recently merged some old closed heads, such that it now has branch heads that aren’t topological heads. Git-cinnabar, until this release, would request those branch heads, leading the server to use the slow path mentioned above. This release works around the issue.

It also fixes an issue pushing to a remote empty mercurial repository.

Planet MozillaGetting ready for GCC 5.1

Confusingly, GCC has a new, weird version scheme. The first release of GCC 5 will be 5.1. It is due soon (next week). For that reason, and because it’s better to find compiler bugs before it’s released, I started looking into building Firefox with it.

The first round of builds I did was with 5-20150405. That got me to find a small bunch of issues:

So that got me to do a second round with the first 5.1 RC, which had the fix for that ICE.

With all the above fixed, I could finally get builds out of try, and tests running, which revealed two more issues:

  • Another (quickly fixed) Internal Compiler Error on 32-bit PGO builds (but only for a nightly setup, with --enable-profiling, not for a release setup, which doesn’t have it).
  • JS engine assertions during some JIT tests on 64-bit builds (with or without PGO), which Dan Gohman kindly tracked down and reduced to a small test case, allowing us to file a GCC bug and bisect to pinpoint the GCC upstream commit that broke it (yay git bisect run on a 36-CPU EC2 instance).

Preliminary results are promising, with benchmarks improving by up to 16%, but the comparison wasn’t entirely fair, because it compared GCC 4.8 builds with frame pointers and JS engine diagnostics to GCC 5.1 builds without them.

I’ll also give a spin to LTO, possibly finding more GCC bugs in the process.

Planet MozillaReport on Recognition

Earlier this year I embarked on a journey to investigate how we could improve participation in the Mozilla QA community. I was growing concerned that we were failing in one key component of a vibrant community: recognition.

Before I could begin, I needed to better understand Mozilla QA’s recognition story. To this end I published a survey to get anonymous feedback, and today I’d like to share some of that feedback.

Profile of Participants

The first question I asked was intended to profile the respondents in terms of how long they’d been involved with Mozilla and whether they were still contributing.

This revealed that we have a larger proportion of contributors who’ve been involved for more than a couple of years. I think this indicates we need to do a better job of developing long-term relationships with new contributors.

When asked which projects contributors identified with, 100% of respondents identified as being volunteers with the Firefox QA team. The remaining teams break down fairly evenly between 11% and 33%. I think this indicates most people are contributing to more than one team, and that teams at the lower end of the scale have an excellent opportunity for growth.

Recognizing Recognition

The rest of the questions were focused more on evaluating the forms of recognition we’ve employed in the past.


When looking at how we’ve recognized contributors it’s good to see that everyone is being recognized in some form or another, in many cases receiving multiple forms of recognition. However I suspect the results are somewhat skewed (ie. people who haven’t been recognized are probably long gone and did not respond to the survey). In spite of that, it appears that seemingly simple things, like being thanked in a meeting, are well below what I’d expect to see.

When looking at the impact of being recognized, it seems that more people found recognition to be nice but not necessarily a motivation for continuing to contribute. 44% found recognition to be either ineffective or very ineffective, while 33% found it to be either effective or very effective. This could point to a couple of different factors: either our forms of recognition are not compelling, or people are motivated by the work itself. I don’t have a good answer here, so it’s probably worth following up.

What did we learn?

After all is said and done, here is what I learned from doing this survey.

1. We need to be focused on building long-term relationships. Helping people through their first year and making sure people don’t get lost long-term.

2. Most people are contributing to multiple projects. We should have a framework in place that facilitates contribution (and recognition of contribution) across QA. Teams with less participation can then scale more quickly.

4. We need to be more proactive in our recognition, especially in its simplest form. There is literally no excuse for not thanking someone for work done.

5. People like to be thanked for their work but it isn’t necessarily a definitive motivator for participation. We need to learn more about what drives individuals and make sure we provide them whatever they need to stay motivated.

6. Recognition is not as well “baked-in” to QA as it is with other teams — we should work with these teams to improve recognition within QA and across Mozilla.

7. Contributors find testing to be difficult due to inadequate description of how to test. In some cases, people spend considerable amounts of time and energy figuring out what and how to test, presenting a huge hurdle to newcomers in particular. We should make sure contribution opportunities are clearly documented so that anyone can get involved.

8. We should be engaging with Mozilla Reps to build a better, more regional network of QA contributors, beginning with giving local leaders the opportunity to lead.

Next Steps

In closing, I’d like to thank everyone who took the time to share their feedback. The survey remains open if you missed the opportunity. I’m hoping this blog post will help kickstart a conversation about improving recognition of contributions to Mozilla QA; in particular, making progress on some of the lessons learned.

As always, I welcome comments and questions. Feel free to leave a comment below.


Internet Explorer blogApril 2015 security updates for Internet Explorer

Microsoft Security Bulletin MS15-032 (3038314)

This critical security update resolves vulnerabilities in Internet Explorer. For more information, please see Microsoft Security Bulletin MS15-032.

Security Update for Flash Player (3049508)

This security update for Adobe Flash Player in Internet Explorer 10 and 11 on supported editions of Windows 8, Windows 8.1, Windows Server 2012, and Windows Server 2012 R2 is also available. The details of the vulnerabilities are documented in Adobe security bulletin APSB15-06. This update addresses the vulnerabilities in Adobe Flash Player by updating the affected Adobe Flash binaries contained within Internet Explorer 10 and Internet Explorer 11. For more information, please see Microsoft Security Advisory 2755801.

Disabling SSL 3.0

As communicated in our February 2015 security updates blog, today we’re releasing an update that disables SSL 3.0 by default in Internet Explorer 11. Enterprise customers can choose to enable SSL 3.0 for compatibility with their web applications; however, we strongly recommend instead that they update their web servers and web applications to use the latest security protocols, such as TLS 1.2. For more information, please see Microsoft Security Advisory 3009008.

Staying up-to-date

Most customers have automatic updates enabled and will not need to take any action because these updates will be downloaded and installed automatically. Customers who have automatic updates disabled need to check for updates and install this update manually.

– Maliha Qureshi, Senior Program Manager, Customer Experience Quality

[Updated on 4/15/2015 at 11:00 AM to include Security Update for Flash Player (3049508). - ed]

Planet MozillaFirefox will prevent the collection of our information

When it comes to privacy protection, we have to acknowledge the efforts Mozilla makes to keep us safe in a world where everyone wants to spy on us and steal our information at any cost. For that reason, Mozilla received the award for the most trusted company on the Internet.

Firefox already includes the Do Not Track feature, which tells websites not to monitor your web activity, although companies are not obligated to comply. Mozilla is not stopping there, however: the next version of Firefox will introduce a new security filter that we can already enable manually.

Before going on, it is important to understand what online tracking is. Online tracking is the collection of a person's browsing data across different websites; this collection generally happens just by viewing the content of a website. Tracking domains try to identify a person through the use of cookies or other technologies, such as fingerprinting.

So what solution does Mozilla offer? Firefox's Tracking Protection puts you back in control of your privacy by actively blocking domains and websites that are known to track their visitors.

The initial blocklist used by the Tracking Protection feature is based on Disconnect's blocklist.
If you want to try this feature, you need to install the Nightly build for your system, available in our Downloads area.

How do you enable Tracking Protection?

  1. In the address bar, type about:config and press Enter.
    • The about:config warning “This might void your warranty!” may appear. Click I'll be careful, I promise! to continue.
  2. Search for privacy.trackingprotection.enabled.
  3. Double-click privacy.trackingprotection.enabled to change its value to true.

This enables Tracking Protection. If you later want to turn it off, repeat the steps above to change the value back to false.

How to use Tracking Protection

Once Tracking Protection is enabled, you will see a shield in the address bar whenever Firefox is blocking tracking from that domain or from mixed content.


To disable Tracking Protection for a particular website, just click the shield icon and select “Disable protection for this site”. Once Tracking Protection is disabled for a website, you will see a shield with a red strikethrough. You can re-enable Tracking Protection on that site by clicking the shield again and selecting “Enable protection”.


To see which resources are being blocked, open the web console and look through the messages in the Security tab.


Source: Firefox Help

Planet MozillaInput: 2015q1 quarter in review

We got a lot of stuff done in 2015q1--it was a busy quarter.

Things to know:

  • Input is Mozilla's product feedback site.
  • Fjord is the code that runs Input.
  • We maintain project details and plans at
  • I am Will Kahn-Greene and I'm the tech lead, architect, QA and primary developer on Input.

Bugzilla and git stats

Quarter 2015q1 (2015-01-01 -> 2015-03-31)


Bugs created: 73
Creators: 7

         Will Kahn-Greene [:willkg] : 67
     Gregg Lind (User Advocacy - He : 1
                  Shashishekhar H M : 1
              Matt Grimes [:Matt_G] : 1
           Mark Banner (:standard8) : 1
                         deshrajdry : 1
                      L. Guruprasad : 1

Bugs resolved: 97

                            WONTFIX : 13
                              FIXED : 82
                          DUPLICATE : 1
                         INCOMPLETE : 1

                         Tracebacks : 4
                           Research : 2
                            Tracker : 6

Research bugs: 2

    1124412: [research] evaluate SUMO search APIs for best results
        given a piece of feedback

Tracker bugs: 6

    907871: [tracker] add analytics infrastructure and reports to
    967037: [tracker] add classifier page to Firefox OS feedback form
    968230: [tracker] capture carrier in Firefox OS form
    1092280: [tracker] heartbeat v2 (Input-specific changes)
    1104932: [tracker] about:support support tracker
    1130599: [tracker] Alerts phase 1

Resolvers: 6

         Will Kahn-Greene [:willkg] : 65
                      L. Guruprasad : 16
     Ricky Rosario [:rrosario, :r1c : 7
                             aokoye : 5
                            mgrimes : 2
                         deshrajdry : 2

Commenters: 22

                             willkg : 579
                           rrosario : 25
                         deshrajdry : 10
                            mcooper : 10
                          lgp171188 : 10
                            mgrimes : 9
                                cww : 8
                              glind : 8
                         adnan.ayon : 6
                             aokoye : 4
                               robb : 4
                            brnet00 : 3
                          mozaakash : 2
                        John99-bugs : 2
                           chofmann : 2
                            bwalker : 1
                              laura : 1
                          standard8 : 1
                    shashishekharhm : 1
                          christian : 1
                            fwenzel : 1
                          dlucian93 : 1


Total commits: 276

      Will Kahn-Greene :   189  (+9707, -3904, files 558)
         L. Guruprasad :    38  (+916, -402, files 160)
         Ricky Rosario :    36  (+1756, -10764, files 314)
            Adam Okoye :     6  (+210, -12, files 12)
               deshraj :     3  (+19, -19, files 5)
         Michael Kelly :     2  (+2, -2, files 4)
      Adrian Gaudebert :     2  (+20, -6, files 4)

Total lines added:   12630
Total lines deleted: 15109
Total files changed: 1057


    Adam Okoye
    Adrian Gaudebert
    Gregg Lind (User Advocacy - Heartbeat - Test Pilot)
    L. Guruprasad
    Mark Banner (:standard8)
    Matt Grimes [:Matt_G]
    Michael Kelly
    Ricky Rosario
    Shashishekhar H M
    Will Kahn-Greene

Code line counts:

2014q1: April 1st, 2014:        15195 total  6953 Python
2014q2: July 1st, 2014:         20456 total  9247 Python
2014q3: October 7th. 2014:      23466 total  11614 Python
2014q4: December 31st, 2014:    30158 total  13615 Python
2015q1: April 1st, 2015:        28977 total  12623 Python

Input finally shrank, though this is probably due to switching from the South migration system to the Django 1.7 migration system and, in the process, ditching most of our old migration code.

Contributor stats

L Guruprasad worked through 16 bugs this quarter--that's awesome!

Adam worked on the Thank You page overhaul. It's not quite done, but it's in a good place--I'll be finishing up that work in 2015q2.

Ricky finished up the Django 1.7 update just in time for Django 1.8 to be released. In doing that work, we cleaned up a lot of code, shed a bunch of dependencies, and are in a much better place with regard to technical debt. Yay!

Thank you to everyone who contributed!


Django 1.7 upgrade: We upgraded to Django 1.7. That's a big deal: Django 1.8 was just released, so Django 1.6 isn't supported anymore. Django 1.7 has a new migration system, so there was a lot of work required to upgrade Input.

Heartbeat v2: We did most of Heartbeat v2 in 2014q4, however it didn't really launch until 2015q1. We did a bunch of work to tweak things for the release.

Alerts v1: We added an Alerts API. Input collects a variety of feedback-type data. After several discussions, we decided that it was a better idea to have alert systems live outside of Input, but push alert events to Input. This allows us to develop alert-emitting systems faster because they're outside of the Input development process. Further, it relaxes implementation details. The Alerts API has GET and POST abilities and lets us capture and report on arbitrary alert events.

Alerts API.

Remote troubleshooting data capture: We finished this work in 2015q1. It's now rolled out for specific products and in all locales.

Remote troubleshooting data capture project plan.

12 Factor App: At some point, we're going to move Input to AWS. In the process of doing that, we're going to change how Input is configured and deployed and switch to a 12-factor-app-friendly model. I spent a good portion of this quarter cleaning things up and redoing configuration so it's more 12-factor-app-compliant.

There's still some work to do, but it'll be easier to do as we're in the process of switching to AWS and know more about how the infrastructure is going to be structured.

12 Factor App.

Snow removal: I live next town over from Lowell, MA, USA. We got 118 inches of snow this winter the bulk of which came in a 6-week period where it pretty much snowed every three days. It was exhausting.

I did a lot of shoveling, but never really solved the problem. However, it did subside after a while and now it's gone.

Snow removal.


2015q1 went by really fast and we got a lot of stuff done and we worked through a lot of technical debt, too. It was a good quarter.

Planet MozillaMartes mozilleros

Martes mozilleros: Bi-weekly meeting to talk (in Spanish) about the state of Mozilla, the community, and its projects.

Planet MozillaFirefox 38 beta3 to beta4

This beta release is going back to a normal level in terms of the number of patches and their size.

We took some important fixes for graphics and also uplifted some polish patches for the new features which will ship with 38 or 38.1 (Reading list, in-tab preferences, etc).

  • 33 changesets
  • 68 files changed
  • 1204 insertions
  • 986 deletions



List of changesets:

Ryan VanderMeulenBug 1153060 - Bump the in-tree mozharness revision. a=test-only - 7f7236a5b6dd
Patrick BrossetBug 1139925 - Make the BoxModelHighlighter highlight all quads and draw guides around the outer-most rect. r=miker, a=sledru - e2f81a3ca1e5
Brian HackettBug 1151401 - Watch for non-object unboxes while optimizing object-or-null operations. r=jandem, a=sledru - d2bade84e15e
Gijs KruitboschBug 1147337 - Stop checking article URL as AboutReader.jsm gets created separately every time anyway. r=margaret, a=sledru - 948241aa9d1a
Gijs KruitboschBug 1152104 - Use command event and delegate clicks to it. r=jaws, a=sledru - f5f2adb88968
Matt WoodrowBug 1116812 - Blacklist two intel GPUs that are trigger driver crashes frequently. r=jrmuizel, a=sledru - 6d9fdd280e65
Jean-Yves AvenardBug 1133633: Part2. Enable async decoding on mac. r=mattmoodrow, a=sledru - 10f75583d21a
Jean-Yves AvenardBug 1153469: Ensure IOSurface isn't released before being composited. r=mattwoodrow, a=sledru - 7efd806788be
Kannan VijayanBug 1134515 - Ensure SPSBaselineOSRMarker checks pseudostack size properly. r=shu, a=sledru - 10c3198eb453
Aryeh GregorBug 1134545 - Insufficient null check. r=ehsan, a=sledru - 5f042fe29707
Chris PearceBug 1148286 - Ensure we don't nullpointer deref if the CDM crashes in MediaKeys and Reader::SetCDMProxy implementations. r=edwin, a=sledru - 999636e73165
Jean-Yves AvenardBug 1147744 - Part 1: Round down display size. r=k17e, a=sledru - 8f8ebd186863
Jean-Yves AvenardBug 1147744 - Part 2: Properly calculate cropping value. r=k17e, a=sledru - c4a01c159cb6
Margaret LeibovicBug 1150695 - Use isProbablyReaderable function from Readability.js. r=Gijs, a=sledru - ab0337907115
Drew WillcoxonBug 1151077 - Make the desktop reading list sync module batch its POST /batch requests. r=markh, a=sledru - 1b6ba1cb52f6
John SchoenickBug 1139560 - Reject non-standard parses of integers in srcset descriptors. r=jst, a=sledru - dffb5c867f47
John SchoenickBug 1139560 - Fix srcset parser 'After descriptor' state mishandling spaces. r=jst, a=sledru - 07666fc071be
John SchoenickBug 1139560 - <img>.currentSrc should be not be nullable. r=jst, a=sledru - 7285a02cd883
Mike TaylorBug 1139560 - Update srcset web-platform expectations. r=jst, a=sledru - cd5b5709b2e4
Ben TurnerBug 1135344 - Don't let IPDL use names that are reserved for compilers. r=froydnj, a=sledru - 8340415e7b27
Jared WeinBug 1149230 - In-content preferences: missing padding between labels and learn more links in Advanced -> Data Choices panel. rs=Gijs, a=sledru - 002faed66e96
Mark HammondBug 1152703 - Prevent desktop reading list sync errors from preventing sync from starting again. r=adw, a=sledru - 8191b45753c7
Blake WintonBug 1149136 - Specify a min-width and min-height to avoid flex making things too small. ui-r=mmaslaney, r=florian, a=sledru - 0a9b3f3e962d
Michael ComellaBug 1153193 - Add EXTRA_DEVICES_ONLY flag to share intents. r=rnewman, a=sledru - 7081bbd2b331
Margaret LeibovicBug 1153262 - Remove length comparison from testReadingListCache. r=gijs, a=sledru - 2ffc047abd90
Michael ComellaBug 1150430 - Set quickshare !visible and !enabled by default. r=mfinkle, a=sledru - b54c44cfa07e
Jared WeinBug 1043612 - Persist the size of resizable in-content subdialogs. r=gijs, a=sledru - 9740c1d817f1
Michael ComellaBug 1152489 - Prevent getMostRecentHomePanel() from being called on null selectedTab. r=mfinkle, a=sledru - afe57494b44d
Mats PalmgrenBug 1143299 - Make frame insertion methods deal with aPrevFrame being on an overflow list. r=roc, a=sledru - 2659ba26dcf2
Gijs KruitboschBug 1152022 - Update Readability to github tip. r=gijs, r=margaret, a=sledru - 333017ad43a9
Andrew McCreightBug 1144649 - Make CCGraph::AddNodeToMap fallible again. r=smaug, a=sledru - 7717f3aa4cf6
L. David BaronBug 1148829 - Backport a safer version of part of Bug 1061364 to make transitions stop running the refresh driver after they've finished. r=bbirtles, a=sledru - 4359c16b7f44
Jeff MuizelaarBug 1153381 - Add a D3D11 ANGLE blacklist. r=mstange, ba=sledru - 05508ccf3ae8

Planet MozillaWillie Cheong: Academics Complete

I’ve written many last exams before. But today I finish writing my last, last exam; graduation awaits. Life is pretty exciting right now. School has been an amazing experience, but being done feels much better.

Goodbye Waterloo; Hello World!

Planet Mozillahappy bmo push day!

the following changes have been pushed to

  • [1152160] “take” button doesn’t update the ui, so it looks like nothing happened
  • [1152163] passing an invalid bug id to the multiple bug format triggers: Can’t call method “name” on an undefined value
  • [1152167] “powered by” logo requests fails: it sets the assignee to dboswell
  • [1150448] Replace the newline with ” – ” when the bug’s id and summary are copied
  • [1152368] BUGZILLA_VERSION in Bugzill::Constants causes error when installing Perl deps for new BMO installation
  • [1090493] Allow ComponentWatching extension to work on either bmo/4.2 or upstream 5.0+
  • [1152662] user story text should wrap
  • [1152818] changing an assignee to or any .bugs address should automatically reset the status from ASSIGNED to NEW
  • [1149406] “project flags” label is visible even if there aren’t any project flags
  • [1031035] xmlrpc can be DoS’d with billion laughs attack
  • [1152118] Shortcut for editing gets triggered even when “ctrl” and “e” are not pressed at the same time
  • [1152360] Add parameter to that generates a cpanfile usable by utilities such as cpanm for installing Perl dependencies
  • [1148490] Custom Budget Request form for FSA program
  • [1154098] Unable to add mentors to bugs
  • [1146767] update relative dates without refreshing the page
  • [1153103] add hooks for legal product disclaimer

discuss these changes on

Filed under: bmo, mozilla

Planet MozillaElasticsearch Meetup

Join us as Mozilla hosts an Elasticsearch meetup, featuring a duo inviting you along on a whirlwind dive into the Elasticsearch .NET client

Planet MozillaGaia Tips and Tricks for Gecko Hackers

I’m often assigned Firefox Rendering bugs in Bugzilla. By the time a bug gets assigned to me, the reporter has usually exhausted other options and assumed (correctly) that I’m ultimately responsible for fixing Firefox rendering bugs. Of course, I often have to reassign most bugs to more capable individuals.

Some of the hardest bugs to assign are the ones reported by our own Gaia team: the team responsible for building the user experience in Firefox OS. The Gaia engineers take CSS and JavaScript and build powerful mobile apps like the phone dialer and SMS client. When they report bugs, the problem is often buried within lots of CSS and JS code. I wanted to learn how to effectively reduce the time it takes to resolve rendering issues reported by the Gaia team. It takes a long time to go from a Gaia bug like “scrolling in the gallery app is slow” to the underlying Gecko bug, for example “rounding issue creates an invalidation rectangle that is too large.”

To do that, I became a Gaia developer for a few days at our Paris office. I reasoned that if I could learn how they work, then I could help my team boil down issues faster and become more responsive to their needs. We already recognize the value of having expert web application developers on staff, but we could do a better job with a better understanding of how they work. With that in mind, I spent the week without any C++ code to look at, and dived into the world of mobile web app development.

I wrote down the steps I took to set up a FirefoxOS build and test environment in an earlier post. This time, I’ll list a few of the tips and tricks I learned while I was working with the Gaia developers.

The first and most important tip: You will brick the phone when working on the OS. In fact, you’re probably not trying hard enough if you don’t brick it :) Fastboot lets you reach the phone when it becomes unresponsive to ADB, so you can flash the device with a known good system (like the base image). Learn how to manually force fastboot on your phone.

Julien showed me how to maintain a Gaia developer profile on your desktop development environment. This set of commands will configure your B2G build to produce the desktop B2G runtime that’s a bit easier to debug than a device build:

# Change the value of FIREFOX to point to the full path of the B2G desktop build
export FIREFOX=/Volumes/firefoxos/B2G/build/dist/
export PROFILE_FOLDER=gaia-profile DEBUG=1 DESKTOP=0

With a Gaia developer profile, you can switch between B2G desktop and a regular Firefox browser build for testing:

export FIREFOX=/full/path/to/desktop/browser
$FIREFOX -profile gaia-profile --no-remote app://

The Gaia profile lets you use URLs like app:// to run the Gaia apps on the desktop browser. This trick alone was a huge time saver! Try it with other URLs like app://

For a first Gaia development project, I picked up the implementation of the new card view for Gaia that is based on asynchronous panning and zooming (APZC). Etienne did the initial proof-of-concept and my goal is to rebase/finish/polish it and add some CSS Scroll Snapping features. My initial tests for this feature are very promising. CSS Scroll Snapping is much more responsive than the previous JavaScript-based implementation. I’m still working out some bugs but hope to land my first Gaia pull request soon.

I’ve already been able to apply what I’ve learned to triage bugs like this one. The bug started out described as a problem with how we launch GMail on B2G in Arabic language. Based on the testing tricks I learned from Gaia team, I was able to distill it to a root cause with scrollbar rendering on right-to-left (RTL) languages. I added a simplified test case to the bug that should greatly reduce debugging time, and assigned it to one of our RTL experts. That’s quite a bit better than assigning tough bugs to random developers with the entire OS as the test case!

Thanks to Julien and Etienne for helping me get up to speed. I highly recommend that any Gecko engineer spend a few days as a Gaia hacker. I’m humbled by the ingenuity these developers have for building the entire OS user experience with only the capabilities offered by the Web. We could all learn a lot in the trenches with these hackers!

Planet Mozilla“why work doesn’t happen at work” by Jason Fried on TEDx

While reading “Remote”, I accidentally found this TEDx talk by one of the authors, Jason Fried. Somehow I’d missed this when it first came out in 2010, so stopped to watch it. I’ve now watched this a few times in a row, found it just as relevant today as it was 4-5 years ago, so am writing this blogpost.

The main highlights for me were:

1) work, like sleep, needs solid uninterrupted time. However, most offices are designed to enable interrupts. Open plan layouts. Phones. Casual walk-by interrupts from managers asking for status. Unneeded meetings. They are not designed for uninterrupted focus time. No-one would intentionally plan to have frequently-interrupted-sleep every night and consider it “good”, so why set up our work environments like this?

2) Many people go into the office for the day, attempting to get a few hours of uninterrupted work done, only to spend time reacting to interrupts all day, and then lament at the end of the day that “they didn’t get anything done”! Been there, lived through that. As a manager, he exhorts people to try things like “no-talking-Thursdays”, just to see if people can actually be more productive.

3) The “where do you go when you really want to get work done” part of his presentation nailed it for me. He’s been asking people this question for years, and the answers tend to fall into three categories:

  • place: “the kitchen”, “the spare room”, “the coffee shop”, …
  • moving object: plane, train, car… the commute
  • time: “somewhere really early or really late at night or on the weekend”

… and he noted that no-one said “the office during office hours”!! The common theme is that people use locations where they can focus, knowing they will not get interrupted. When I need to focus, I know this is true for me also.

All of which leads to his premise that organizing how people work together, with most communication done in a less interruptive way is really important for productivity. Anyone who has been at one of my remoties sessions knows I strongly believe this is true – especially for remoties! He also asked why businesses spend so much money on these counter-productive offices.

Aside: I found his “Facebook and twitter are the modern day smoke breaks” comment quite funny! Maybe that’s just my sense of humor. Overall, it’s a short 15-minute talk, so instead of your next “facebook/twitter/smokebreak”, grab a coffee and watch this. You’ll be glad you did.

Planet MozillaContributing to Rust

I wrote a few things about contributing to Rust. What with the imminent 1.0 release, now is a great time to learn more about Rust and contribute code, tests, or docs to Rust itself or a bunch of other exciting projects.

The main thing I wanted to do was make it easy to find issues to work on. I also stuck in a few links to various things that new contributors should find useful.

I hope it is useful, and feel free to ping me (nrc in #rust-internals) if you want more info.

Planet MozillaMozilla Learning Networks: what’s next?

What have the Mozilla Learning Networks accomplished so far this year? What’s coming next in Q2? This post includes a slide presentation, analysis and interview with Mozilla’s Chris Lawrence, Michelle Thorne and Lainie DeCoursy. It’s a summary of a more detailed report on the quarter here. Join the discussion on #teachtheweb.

What’s the goal?

Establish Mozilla as the best place to teach and learn the web.

Not only the technical aspects of the open web — but also its culture, citizenship and collaborative ethos.

How will we measure that? Through relationships and reach.

2015 goal: ongoing learning activity in 500 cities

In 2015, our key performance indicator (KPI) is to establish ongoing, on-the-ground activity in 500 cities around the world. The key word is ongoing — we’ve had big success in one-off events through programs like Maker Party. This year, we want to grow those tiny sparks into ongoing, year-round activity through clubs and lasting networks.

From one-off events to lasting Clubs and Networks

Maker Party events help activate and on-board local contributors. Clubs give them something more lasting to do. Hive Networks grow further into city-wide impact.

What are we working on?

These key initiatives:

  2. Web Clubs
  3. Hive Networks
  4. Maker Party
  5. MozFest
  6. Badges

The new site will provide a new home for all our teaching offerings — including Maker Party.

What we did: developed the site, which will soft launch in late April.

What’s next: adding dynamic content like blogs, curriculum and community features. Then make it easier for our community to find and connect with each other.


Web Clubs

We shipped the model and tested it in 24 cities. Next up: train 10 Regional Coordinators. And grow to 100 clubs.

This is a new initiative, evolved from the success of Maker Party. The goal: take the sparks of activation created through Maker Party and sustain them year-round, with local groups teaching the web on an ongoing basis — in their homes, schools, libraries, everywhere.

What we did:

  • Established pilot Clubs in 24 cities. With 40 community volunteers.
  • Shipped new Clubs curriculum, “Web Literacy Basics.”
  • Field-tested it. With 40 educators and learners from 24 cities, including Helsinki, Pune, Baltimore, Wellington and Cape Town.
  • Developed a community leadership model. With three specific roles: Club Leader, Regional Coordinator, and Organizer. (Learning from volunteer organizing models like Obama for America, Free the Children and Coder Dojo.)

What’s next:

  • Train 10 Regional Coordinators. Each of whom will work to seed 10 clubs in their respective regions.
  • Develop new curriculum. For Privacy, Mobile and “Teach like Mozilla.”

Hive Networks

We added four new cities in Q1, bringing our total to 11. Next up: grow to 15.

What we did:

  • We welcomed 4 new cities into the Hive family: Hive Vancouver, Mombasa, Denver and Bangalore.
  • Made it easier for new cities to join. Clarified how interested cities can become official Hive Learning Communities and shipped new “Hive Cookbook” documentation.

What’s next:

  • Strengthen links between Clubs and new potential Hives. With shared community leadership roles.
  • Document best practices. For building sustainable networks and incubating innovative projects.
  • Ship a fundraising toolkit. To help new Hives raise their own local funding.

Maker Party

A global kick-off from July 15 – 31, seeding local activity that runs year-round.

What we did: created a plan for Maker Party 2015, building off our previous success to create sustained local activity around teaching web literacy.

What’s next: this year Maker Party will start with a big two-week global kick-off campaign, July 15-31. We’ll encourage people to try out activities from the new Clubs curriculum.

Mozilla Festival

This year’s MozFest will focus on leadership development and training

Mark your calendars: MozFest 2015 will take place November 6 – 8 in London.
A key focus this year is on leadership development; we’ll offer training to our Regional Co-ordinators and build skill development for all attendees. Plus run another Hive Global meet-up, following on last year’s success.

What’s next: refine the narrative arc leading up to MozFest. Communicate this year’s focus and outcomes.


Badges

What we did: In Q1 our focus was on planning and decision making.

What’s next: improve the user experience for badge issuers and earners.

Community voices

  • “I run two tech programmes in Argentina. I do it outside of my job, and it can be tricky to find other committed volunteers with skills and staying power. I’d love help, resources and community to do it with.” –Alvar Maciel, school teacher, Buenos Aires, Argentina
  • “I always thought I’d visit websites. Not make them! But now I can.” – middle school student from PASE Explorers, NYC afterschool program
  • “Our partnership with Hive makes us fresh, keeps us moving forward rather than doing the same old thing all the time.” –Dr. Michelle Larson, President and CEO, Adler Planetarium, Hive Chicago
  • “We had constant demand from our community members for web literacy classes, and we were finally able to create a great recipe with Web Clubs and curriculum.” –Elio Qoshi, Super Mentor/Mozilla Rep, Albania


Partnerships

The focus this year is on building partnerships that help us: 1) activate more mentors and 2) reach more cities. This builds on the success of partnerships like National Writing Project (NWP) and CoderDojo, and has sparked conversations with new potential partners like the Peace Corps.

Key challenges

  • It’s hard to track sustained engagement offline. We often rely on contributors to self-report their activity — as much of it happens offline, and can’t be tracked in an automated way. How can we incentivize updates and report-backs from community members? How do other organizations tackle this?
  • Establishing new brand relationships. We’ve changed our branding. Our current community of educators grew in deep connection with Webmaker. But in 2015 we made a decision to more closely align learning network efforts directly with the Mozilla brand. How can we best transition the community through this, and simplify our overall branding?
  • Quantifying impact. We’re getting better at demonstrating quantity, as in the numbers of events we host or cities we reach. But those measurements don’t help us measure the net end result or overall impact. How do we get better at that?

Planet MozillaMozilla Weekly Project Meeting

The Monday Project Meeting

Internet Explorer blogAnnouncing the 2015 “Project Spartan” Web Summit, May 5th and 6th in Mountain View, CA

In March, the Project Spartan team hosted a small, by-invitation workshop at our Silicon Valley campus with developers attending from popular sites and frameworks based around the Silicon Valley area. This was an invaluable opportunity for us to share our plans and meet face to face with developers that build with the Web platform.

We heard great feedback from attendees and we’re excited to announce that we are coming back very soon! On May 5th and 6th, we’ll be hosting the Project Spartan Web Summit at Microsoft’s Silicon Valley campus in Mountain View, CA. This time we’re turning it up to 10 with two days of content packed with deep technical sessions from the Project Spartan team and fantastic guest speakers from the Web development community, including Kimberly Blessing, Kyle Simpson, and Sara Soueidan.

Register for the Project Spartan Web Summit

This is a great chance to come meet the engineers building our rendering and JavaScript engines and managing our Web properties and developer tools like Modern.IE, RemoteIE, and IEDevChat. In our technical sessions, you’ll get into the nuts and bolts of the latest interoperable Web technologies; learn how to make your site or Web app shine across browsers and platforms with tools like ECMAScript 6, WebAudio, WebGL, Cordova, SVG elements, and much more. We’ll take you under the hood of Project Spartan and its new rendering engine, show you the latest updates to our roadmap, and host open Q&A panels with the Project Spartan leadership.

You’ll also have the opportunity to test-drive Project Spartan on our latest builds alongside engineers from the platform team, who can guide your testing, answer questions, and take your feedback on any problems you encounter.

Good news for those of you not able to make it to our Silicon Valley location: We’ll be live streaming most of our sessions around the world on Channel 9 and recording the remainder. Tune in during the event to watch live, or check in afterwards for on-demand viewing.

We’re excited to see you there! Head on over to the Project Spartan Web Summit event page to learn more and sign up. Registration starts at 10:00am PDT on Tuesday, April 14th.

Kyle Pflug, Program Manager, Project Spartan

Planet MozillaMozilla Science Lab Week in Review, April 6-12

The Week in Review is our weekly roundup of what’s new in open science from the past week. If you have news or announcements you’d like passed on to the community, be sure to share on Twitter with @mozillascience and @billdoesphysics, or join our mailing list and get in touch there.

Blogs & Articles

  • Erin McKiernan put the call out on Twitter last week for examples of collaborations arising from open science & open data, and got a great spectrum of responses (from worm simulations to text mining Philip K. Dick); see her summary here.
  • Hackaday interviewed Charles Fracchia of the MIT Media Lab on the need for and impact of open hardware in open science. Fracchia makes the observation that reproducibility is well-served by distributing standardized data collection hardware that can be deployed in many labs & conditions.
  • Figshare blogged recently about decisions taken by the US Health & Human Services department obliging its operating divisions to make government funded research data available to the public.
  • Jonathan Rochkind blogged on the general unusability of institutional library paywall & login systems, and discusses potential solutions in the form of LibX, bookmarklets and Zotero & co.
  • Nature Biotechnology is engaging in more proactive editorial oversight to ensure the reproducibility of the computational studies it publishes, by way of ensuring the availability of relevant research objects.
  • Shoaib Sufi blogged for the Software Sustainability Institute on their recent Collaborations Workshop 2015. In it, Sufi highlights some of the trends emerging in the conversation around developing research software, including the cultural battle in research with imposter phenomenon (see also our recent article on this matter), and the rising profile of containerization as a fundamental tool for reproducible research.

Conferences & Meetings

  • OpenCon 2015 has been announced for 14-16 November, in Brussels, Belgium. From the conference’s website, ‘the event will bring together students and early career academic professionals from across the world to learn about the issues, develop critical skills, and return home ready to catalyze action toward a more open system for sharing the world’s information — from scholarly and scientific research, to educational materials, to digital data.’ Applications for OpenCon open on 1 June; updates are available from their mailing list. Also, here’s Erin McKiernan’s thoughts on OpenCon 2014.
  • Jake VanderPlas gave a great talk on Fast Numerical Computing with NumPy at PyCon 2015 on Friday.
  • The European Space Agency is organizing a conference entitled Earth Observation Science 2.0 at ESRIN, Frascati, Italy, on 12-14 October.  Topics include open science & data, citizen science, data visualization and data science as they pertain to earth observation; submissions are open until 15 May.
  • The French National Natural History Museum is planning three open forums on biodiversity, designed to collect broad-based input to inform the theme and goals of a forthcoming observatory. The project extends the principles of citizen science to include the public in the discussion surrounding not just data collection, but scientific program design.

Tools & Services

  • Harvard’s project has made CC0 the default license for all deposited data as of version 4.0, citing the license’s familiarity to the open data community.
  • The US Federal government’s open data portal has created a new theme section highlighting climate & human health data. From their website, ‘The Human Health Theme section allows users to access data, information, and decision tools describing and analyzing climate change impacts on public health. Extreme heat and precipitation, air pollution, diseases carried by vectors, and food and water-borne illnesses are just some of the topics addressed in these resources.’
  • GitHub is inviting users to participate in a test of their forthcoming support for the new Git large file storage extension to the popular version control system.
  • The Ocean Observatories Initiative, a multi-site array of heavily instrumented underwater observatories, is set to come on-line in June. Data from the OOI is slated for open access distribution.

Planet MozillaMy Installed Add-ons – Keyword Search

I love finding new extensions that do things I never even thought to search for. One of the best ways to find them is through word of mouth. In this case, I guess you can call it “word of blog”. I’m doing a series of blog posts about the extensions I use, and maybe you’ll see one that you want to use.

My previous posts have been about:

For this blog post, I’ll talk about Keyword Search.
In Firefox, whenever you do a web search from the location bar, it will use the same search engine as in the search bar. Keyword Search allows you to use a separate search engine for location bar web searches. This is really helpful to people like me who mainly use one search engine (for basic web searches) and others for content-specific use cases.

To set your location bar search engine, go to the add-ons manager.

  1. Beside “Keyword Search“, click Preferences.
  2. Beside “Keyword Search Engine“, select the search engine you want to use.

You can install it via the Mozilla Add-ons site.

Planet MozillaThis Week in Rust 77

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

What's cooking on master?

104 pull requests were merged in the last week, and 6 RFC PRs.

Now you can follow breaking changes as they happen!

Breaking Changes

Other Changes

New Contributors

  • Ben Ashford
  • Christopher Chambers
  • Dominick Allen
  • Hajime Morrita
  • Igor Strebezhev
  • Josh Triplett
  • Luke Gallagher
  • Michael Alexander
  • Michael Macias
  • Oak
  • Remi Rampin
  • Sean Bowe
  • Tibor Benke
  • Will Hipschman
  • Xue Fuqiao

Approved RFCs

New RFCs

Notable Links

Project Updates

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Quote of the Week

<frankmcsherry> rust is like a big bucket of solder and wire, with the promise that you can't electrocute yourself.

From #rust.

Thanks to BurntSushi for the tip. Submit your quotes for next week!

Planet MozillaDarwin Nuke the Refrigerator, Wet Sprocket, etc.

Two more security notes.

First, as a followup, a couple of you pointed out that there is a writeconfig on 10.4 through 10.6 (and actually earlier) in /System/Library/PrivateFrameworks/Admin.framework/Versions/A/Resources. Yes, there is, and it's even setuid root (I wish Apple wouldn't do that). However, it is not exploitable, at least not by systemsetupusthebomb or a similar notion, because it appears to lack the functionality required for that sort of attack. I should have mentioned this in my prior posting.

Second, Darwin Nuke is now making the rounds. It is similar to the old WinNuke, which plagued early versions of Windows until it was corrected in the Windows 95 days, in that you can send a specially crafted packet to an OS X machine and kernel panic it. It's not as easy as WinNuke was, though -- that was as simple as opening a TCP connection to port 139 on the victim machine and sending it nonsense data with the Urgent Pointer flag set in the TCP header. Anyone could do that with a modified Telnet client, for example, and there were many fire-and-forget tools that were even easier. Unless you specifically blocked such connections on ingress, and many home users and quite a few business networks didn't at the time, WinNuke was a great means to ruin someone's day. (I may or may not have done this from my Power Mac 7300 a couple times to kick annoying people off IRC. Maybe.)

Darwin Nuke, on the other hand, requires you to send a specially crafted invalid ICMP packet. This is somewhat harder to trigger remotely as many firewalls and routers will drop this sort of malformed network traffic, so it's more of a threat on an unprotected LAN. Nevertheless, an attacker with a raw socket interface can engineer and transmit such packets, and the technical knowledge required is relatively commonplace.

That said, even on my test network I'm having great difficulty triggering this against the Power Macs; I have not yet been able to do so. It is also not clear if the built-in firewall protects against this attack, though the level at which the attack exists suggests to me it does not. However, the faulty code is indeed in the 10.4 kernel source, so if it's there and in 10.10, it is undoubtedly in 10.5 and 10.6 as well. For that reason, I must conclude that Power Macs are vulnerable. If your hardware (or non-OS X) firewall or router supports it, blocking incoming ICMP will protect you from the very small risk of being hit at the cost of preventing pings and traceroutes into your network (but this is probably what you want anyway).

Even if you do get nailed, the good news (sort of) is that your computer can't be hacked by this method that anyone is aware of; it's a Denial of Service attack, you'll lose your work, you may need to repair the filesystem if it does so at a bad time and that sucks, but it doesn't compromise the machine otherwise. And, because this is in open source kernel code, it should be possible to design a fix and build a new kernel if the problem turns out to be easier to exploit than it appears currently. (Please note I'm not volunteering, at least, not yet.)

So, you can all get out of your fridges now, mmkay?

10.4Fx 38 and IonPower update: 50% of V8 passes and I'm about 20% into the test suite. Right now wrestling with a strange bug with return values in nested calls, but while IonPower progress is slow, it's progress!

Planet MozillaMathml April Meeting

This is a report about the Mozilla MathML April IRC Meeting (see the announcement here). The topics of the meeting can be found in this PAD (local copy of the PAD) and the IRC log (local copy of the IRC log) is also available.

The next meeting will be on May 13th at 8pm UTC (check the time at your location here). Please add topics in the PAD.

Read more...

Planet MozillaThings I’ve Learned This Week (April 6 – April 10, 2015)

It’s possible to synthesize native Cocoa events and dispatch them to your own app

For example, here is where we synthesize native mouse events for OS X. I think this is mostly used for testing when we want to simulate mouse activity.

Note that if you attempt to replay a queue of synthesized (or cached) native Cocoa events to trackSwipeEventWithOptions, those events might get coalesced and not behave the way you want. mstange and I ran into this while working on this bug to get some basic gesture support working with Nightly+e10s (specifically, the history swiping gesture on OS X).

We were able to determine that OS X was coalescing the events because we grabbed the section of code that implements trackSwipeEventWithOptions, and used the Hopper Disassembler to decompile the assembly into some pseudocode. After reading it through, we found some logging messages in there referring to coalescing. We noticed that those log messages were only sent when NSDebugSwipeTrackingLogic was set to true, so we executed this:

defaults write org.mozilla.nightlydebug NSDebugSwipeTrackingLogic -bool YES

in the console, and then re-ran our swiping test in a debug build of Nightly to see what messages came out. Sure enough, this is what we saw:

2015-04-09 15:11:55.395 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 coalescing scrollevents
2015-04-09 15:11:55.395 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 cumulativeDelta:-2.000 progress:-0.002
2015-04-09 15:11:55.395 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 cumulativeDelta:-2.000 progress:-0.002 adjusted:-0.002
2015-04-09 15:11:55.396 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 call trackingHandler(NSEventPhaseChanged, gestureAmount:-0.002)

This coalescing means that trackSwipeEventWithOptions is only getting a subset of the events that we’re sending, which is not what we had intended. It’s still not clear what triggers the coalescing – I suspect it might have to do with how rapidly we flush our native event queue, but mstange suspects it might be more sophisticated than that. Unfortunately, the pseudocode doesn’t make it too clear.

String templates and toSource might run the risk of higher memory use?

I’m not sure I “learned” this so much, but I saw it in passing this week in this bug. Apparently, some section of the Marionette testing framework was doing request/response logging with toSource and some string templates, and this caused a 20MB regression on AWSY. Doing away with those in favour of old-school string concatenation and JSON.stringify seems to have addressed the issue.
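As a rough sketch of the cheaper style, here is a hypothetical logging helper (not Marionette’s actual code); JSON.stringify stands in for toSource, which only exists in Firefox’s JS engine:

```javascript
// Hypothetical logging helper using plain concatenation + JSON.stringify,
// the style the fix switched to. (The removed code used string templates
// and toSource(), which is Firefox-only.)
function formatLog(msgType, data) {
  return msgType + " " + JSON.stringify(data);
}

console.log(formatLog("command", { name: "get", url: "/session/1" }));
// → command {"name":"get","url":"/session/1"}
```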

When you change the remote attribute on a <xul:browser> you need to re-add the <xul:browser> to the DOM tree

I think I knew this a while back, but I’d forgotten it. I actually re-figured it out during the last episode of The Joy of Coding. When you change the remoteness of a <xul:browser>, you can’t just flip the remote attribute and call it a day. You actually have to remove it from the DOM and re-add it in order for the change to manifest properly.

You also have to re-add any frame scripts you had specially loaded into the previous incarnation of the browser before you flipped the remoteness attribute.1
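Putting those steps together, a rough chrome-privileged pseudocode sketch (this only runs inside Firefox’s browser chrome, and the frame script URL is a placeholder):

```
// Sketch only: flip a <xul:browser>'s remoteness by detaching and
// re-attaching it, then re-load any frame scripts it needs.
function setRemoteness(browser, isRemote) {
  let parent = browser.parentNode;
  parent.removeChild(browser);            // detach first
  browser.setAttribute("remote", isRemote ? "true" : "false");
  parent.appendChild(browser);            // re-add so the flip takes effect
  // Frame scripts don't survive the swap; load them again.
  // (The URL below is a placeholder for whatever you had loaded before.)
  browser.messageManager.loadFrameScript("chrome://example/content/frame-script.js", true);
}
```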

Using Mercurial, and want to re-land a patch that got backed out? hg graft is your friend!

Suppose you got backed out, and want to reland your patch(es) with some small changes. Try this:

hg update -r tip
hg graft --force BASEREV:ENDREV

This will re-land your changes on top of tip. Note that you need --force, otherwise Mercurial will skip over changes it notices have already landed in the commit ancestry.

These re-landed changes are in the draft stage, so you can update to them and, assuming you are using the evolve extension2, commit --amend them before pushing. Voila!

Here’s the documentation for hg graft.

  1. We sidestep this with browser tabs by putting those browsers into “groups”, and having any new browsers, remote or otherwise, immediately load a particular set of framescripts. 

  2. And if you’re using Mercurial, you probably should be. 

Planet Mozilla: Firefox 38 beta2 to beta3

Yet another busy beta release!

We took many changes for the reading list feature but also landed some improvements for the sharing actions on mobile (this is why we did a beta 3 release on mobile).

We also took a few changes for Thunderbird and SeaMonkey, as they base their major releases on ESR releases.

  • 108 changesets
  • 227 files changed
  • 2501 insertions
  • 1138 deletions



List of changesets:

Ralph Giles: Bug 1080995 - Don't use the h264parser gstreamer element. r=kinetik, a=lizzard - 5b6180fc4286
Mike Hommey: Bug 1147217 - Improve l10n repack error message when locale doesn't contain necessary files. r=mshal, a=NPOTB - 5c4cacd09c9c
Tooru Fujisawa: Bug 1150297 - Move source property to RegExp instance again. r=till, a=sylvestre - f208b7bb88ae
Tom Tromey: Bug 1150646 - Ensure that memory stats show up in treeherder logs. r=chmanchester, a=test-only - a1d7b2cdd950
Jean-Yves Avenard: Bug 1100210 - Mark MPEG2 Layer 1,2,3 audio as MP3. r=k17e, a=sledru - a72d76b284ea
Mike Hommey: Bug 1147283 - Replace mozpack.path with mozpath. r=mshal, a=sledru - d59b572e546f
Mike Hommey: Bug 1147207 - Add a ComposedFinder class that acts like a FileFinder proxy over multiple FileFinders. r=gps, a=sledru - fc1e894eec2f
Mike Hommey: Bug 1147207 - Improve SimplePackager manifest consistency check. r=gps, a=sledru - 46262c24ca5b
Mike Hommey: Bug 1147207 - Allow to give extra l10n directories to r=gps, a=sledru - 7f2d41560360
Mike Hommey: Bug 1147207 - Use SimplePackager code to find manifest entries and base directories during l10n repack. r=gps, a=sledru - 0c29ab096b90
Jon Coppeard: Bug 1146696 - Don't assume there are no arenas available after last ditch GC r=terrence a=sylvestre - 484a6aef6a4f
Patrick Brosset: Bug 1134500 - Fix multiple browser/devtools/animationinspector intermittent tests. r=bgrins, a=test-only - 589aafc2bb13
Matt Woodrow: Bug 1102612 - Don't attempt to read data from a resource if we've evicted the start position. r=jya, a=sledru - 98ac0c020205
Jan Beich: Bug 1147845 - Drop redundant check to keep blocked download data on Tier3 platforms as well. r=jaws, a=sledru - 8ff6cc64abe8
Mike de Boer: Bug 1146921 - Disable the window sharing dropdown item in Loop conversation windows on unsupported platforms. r=Standard8, a=sledru - d384bdaed2fd
Gavin Sharp: Bug 1148562 - Right clicking the reader mode button shouldn't trigger reader mode. r=jaws, a=sledru - 4406ce9ace92
Richard Newman: Bug 1151484 - Account for null result when polling on a latch during Reading List sync. r=nalexander, a=sledru - 8a734418a22e
Mark Finkle: Bug 1149094. r=blassey, a=sledru - d2987ec0e0e7
Margaret Leibovic: Bug 1150872 - Update toast notification when removing a page from reading list from reader view toolbar. r=mcomella, a=sledru - 3f5d7f277471
JW Wang: Bug 1150277 - Match hostname when removing GMP data. r=cpearce, a=sledru, ba=sledru - 8f0271f2c153
Ryan VanderMeulen: Backed out changeset 589aafc2bb13 (Bug 1134500) for a rebase error. - 88bda8094530
Patrick Brosset: Bug 1134500 - Fix multiple browser/devtools/animationinspector intermittent tests. r=bgrins, a=test-only - 16c909280059
Shih-Chiang Chien: Bug 1080130 - Force GC to close all used socket immediately. r=jmaher, a=test-only - eb178aedaaad
Jeff Muizelaar: Bug 1146034 - Cherry pick "Fix struct uniform packing." a=sledru - 9adbbf9a8784
Christoph Kerschbaumer: Bug 1147026 - CSP should ignore query string when checking a resource load. r=dveditz, a=sledru - c2f29d6648e8
Christoph Kerschbaumer: Bug 1147026 - CSP should ignore query string when checking a resource load - tests. r=dveditz, a=sledru - 6d1efbb2c76c
Michael Comella: Bug 1130203 - Clean up OverlayDialogButton's initialization. r=mhaigh a=sylvestre - 4f2f00d1331c
Michael Comella: Bug 1130203 - Remove header container in share overlay & roughly style text. r=mhaigh a=sylvestre - d6200a67e007
Michael Comella: Bug 1130203 - Remove Firefox logo from share overlay. r=mhaigh a=sylvestre - 57a21c5e1100
Michael Comella: Bug 1130203 - Move share overlay title styles into styles.xml and revise to match mocks. r=mhaigh a=sylvestre - 8002be97de82
Michael Comella: Bug 1130203 - Remove dividers in share overlay. r=mhaigh a=sylvestre - 9a5a28809525
Michael Comella: Bug 1130203 - Add @dimen/button_corner_radius and replace corner radius use in code. r=mhaigh a=sylvestre - feb7a6808bfb
Michael Comella: Bug 1130203 - Round the corners of the first item in the share overlay. r=mhaigh a=sylvestre - 5dd03a21c376
Michael Comella: Bug 1130203 - Set width for share overlay. r=mhaigh a=sylvestre - c3ec8ada4705
Michael Comella: Bug 1130203 - Update share overlay text colors to match mocks. r=mhaigh a=sylvestre - ca3650a73fdf
Michael Comella: Bug 1130203 - Rename TextAppearance.ShareOverlay to ShareOverlayTextAppearance. r=mhaigh a=sylvestre - 41eba60614f8
Michael Comella: Bug 1130302 - Move ShareOverlayButton.Text to ShareOverlayTextAppearance.Button. r=mhaigh a=sylvestre - 8691a7ac4c95
Michael Comella: Bug 1130203 - Remove excess LinearLayout from ShareOverlay. r=mhaigh a=sylvestre - b731c0df23aa
Michael Comella: Bug 1130203 - Remove unused share overlay layout. r=mhaigh a=sylvestre - 1c2ce96f9359
Michael Comella: Bug 1130203 - Clean up style inheritance in share overlay. r=mhaigh a=sylvestre - 49441819b75a
Michael Comella: Bug 1130203 - Update share overlay row pressed color & color names. r=mhaigh a=sylvestre - f2cbe1ec6d5a
Michael Comella: Bug 1130203 - Update ShareOverlay icon padding & assets. r=mhaigh a=sylvestre - abff0e240078
Michael Comella: Bug 1130203 - Clean up share overlay toast styles. r=mhaigh a=sylvestre - 8a05ce8c5ff7
Michael Comella: Bug 1130203 - Reset the first item background drawable state onResume. r=mhaigh a=sylvestre - dd5f8068b392
Michael Comella: Bug 1130203 - Review: Remove single use styles in share overlay. r=trivial a=sylvestre - ce7199dbb0af
Michael Comella: Bug 1130203 - Review: Finish off share overlay nits. r=trivial a=sylvestre - dcadb3572692
Michael Comella: Bug 1130203 - Add drop shadow to overlay share dialog result toast. r=margaret a=sylvestre - 994526939c21
Michael Comella: Bug 1130203 - Remove retry button in share overlay retry toast. r=margaret a=sylvestre - d49aaecd32a3
Michael Comella: Bug 1134484 - Add Fennec color palette to colors.xml. r=liuche a=sylvestre - 3eb3b25437dd
Michael Comella: Bug 1130203 - uplift: Add dropshadow assets from Bug 1137921. r=trivial a=sylvestre - e399294c9df3
Michael Comella: Bug 1148041 - Have the ShareOverlay text styles inherit from the default TextAppearance. r=liuche a=sylvestre - 0442cb68ed69
Michael Comella: Bug 1148041 - Inherit from Gecko theme in share overlay. r=liuche a=sylvestre - 0db186d2534c
Michael Comella: Bug 1148197 - Move share overlay margins to child to properly align. r=liuche a=sylvestre - 4db575e80883
Michael Comella: Bug 1151089 - Move slide up animations to onResume. r=liuche a=sylvestre - 358448358c21
Michael Comella: Bug 1148677 - Use larger shareplane icon. r=liuche a=sylvestre - 5b70a93a7f10
Michael Comella: Bug 1132747 - Set the padding for share in the context menu on Lollipop. r=mhaigh a=sylvestre - cbe44fd0d2fc
Ryan VanderMeulen: Bug 984821 - Disable browser_CTP_iframe.js on Linux and OSX for ongoing intermittent failures. - a1c4c4d43776
Anish: Bug 1135091 - Convert remaining SpecialPowers.setBoolPref to pushPrefEnv. r=jmaher, r=mwargers, a=test-only - 8d23b1e2cc0f
Sami Jaktholm: Bug 1148770 - Rewrite browser_styleeditor_bug_870339.js to fix intermittent leaks. r=ejpbruel, a=test-only - a7535132fe8e
Mike de Boer: Bug 1150052 - Report exceptions that occur in MozLoop object APIs directly to the console, so we'll be able to recognize errors better. r=Standard8, a=sledru - 0299772271a8
Michael Comella: Bug 1147661 - Add new device assets. r=liuche, a=sledru - 32e9c40ea3f9
Michael Comella: Bug 1147661 - Use new device icons in share overlay. r=liuche, a=sledru - 2d8d16d8c2ad
John Schoenick: Bug 1139554 - Fix srcset parser mishandling bare URLs followed by a comma. r=jst, a=sledru - a9d1df7af6fc
Allison Naaktgeboren: Bug 1124895 - Add password manager usage data to FHR. r=dolske, r=gfritzsche, a=sledru - ac9862939f3e
Robert Longson: Bug 1149516 - Draw continuous stroke if stroke-dasharray = 0. r=jwatt, a=sledru - 1dc6d70e9022
David Major: Bug 1137614 - Align the mvsadcost array to work around a possible compiler issue. r=rillian, a=sledru - 58b20f079d4f
Matt Woodrow: Bug 1151721 - Disable hardware accelerated video decoding for older intel drivers since it gives black frames on youtube. r=ajones, a=sledru - d4e6fe0b0eb5
Jonathan Kew: Bug 1012640 - Part 1: Add checks for IsOriginalCharSkipped() to the gfxSkipChar unit tests. r=roc, a=sledru - 083361a65349
Jonathan Kew: Bug 1012640 - Part 2: Ensure mCurrentRangeIndex is initialized correctly when creating iterator for a gfxSkipChars that begins with a skipped run. r=roc, a=sledru - 3c64e9fdc3d7
Jonathan Kew: Bug 1012640 - Part 3: Reftest for line break after inline element with white-space:nowrap and whitespace inside the element. r=roc, a=sledru - 4c9214ed82b8
Jean-Yves Avenard: Bug 1149278 - Limit box reads to resource length. r=k17e, a=sledru - de1e5351aad2
Jean-Yves Avenard: Bug 1151299 - Part 1: Only attempt to decode first frame when available. r=mattwoodrow, a=sledru - 68f61e9c41d2
Jean-Yves Avenard: Bug 1151299 - Part 2: Clear EOS flag when new data is received. r=mattwoodrow, a=sledru - 01cf08a90d44
Mark Finkle: Bug 1151469 - Tweak the package manifest to avoid packaging the wrong file. r=rnewman, a=sledru - 2a6a2f558ec2
Richard Newman: Bug 1123389 - Allow Android-side reading list service work to ride the trains. r=rnewman a=sledru - e55db32c5ef6
Ryan VanderMeulen: Backed out changeset a7535132fe8e (Bug 1148770) for test bustage. - 0f0c47f90ab6
Mark Hammond: Bug 1149880 - Avoid readinglist item races logging unhandled promise exceptions. r=dolske, a=sledru - 115865f14324
Patrick Brosset: Bug 1139937 - Don't try accessing the computedStyle of pseudo elements on reflow. r=miker, a=sledru - 9a763ea8d781
Blake Winton: Bug 1149261 - Replace the close icon and adjust the borders. ui-r=mmaslaney, r=jaws, a=sledru - a3c18ef98317
Blake Winton: Bug 1149649 - Design Polish Updates for the Reader View Footer. ui-r=mmaslaney, r=jaws, a=sledru - f7dc5b7781e2
Blake Winton: Bug 1148762 - Tweak the css on the reading list sidebar to prevent unecessary scrollbars. r=mstange, a=sledru - de78faf679e7
Gijs Kruitbosch: Bug 1148024 - Fix wrapping of privacy pane. r=jaws, a=sledru - 1f70f2dba807
Gijs Kruitbosch: Bug 1151252 - Back out content part of the restyle of about:preferences. r=jaws, a=sledru - fa50c9c02b3c
Stephen Pohl: Bug 1151544 - Update Adobe EME's homepage URL in addons manager. r=gfritzsche, a=sledru - 89de3c04af8b
Timothy Nikkel: Bug 1150021 - Backout the patch for Bug 1077085 on beta and aurora. a=sledru - 188117472132
Aaron Klotz: Bug 1141081 - Add weak reference support to HTMLObjectElement and use it in nsPluginInstanceOwner. r=jimm, a=sledru - bfff2ca94766
Jean-Yves Avenard: Bug 1151360: Allow playback of extended AAC profile audio track. r=k17e, a=sledru - a24bdacce4cc
Ryan VanderMeulen: Bug 1129538 - Skip various tests that hit the mProgressTracker abort. a=test-only - 51c5166a338b
Andrea Marchesini: Bug 1134224 - test_bug1132395.html must wait until the port is actually available before sending messages. r=ehsan, a=test-only - 982dba6be01c
Dragana Damjanovic: Bug 1135405 - Use different multicast addresses for each test. r=michal, a=sledru - 8bb13d7a5d2a
Edwin Flores: Bug 1146192 - Whitelist sched_yield syscall in GMP sandbox on Linux. r=jld, a=sledru - e06c5a9ce450
Honza Bambas: Bug 1124880 - Call PR_Close of UDP sockets on new threads. r=mcmanus, a=sledru - b04842ef36ca
Edwin Flores: Bug 1142835 - Null check mPlugin on GMPAudioDecoderParent shutdown. r=cpearce, a=sledru - 0ff855a44d9c
Mark Hammond: Bug 1149869 - Prevent duplicate readinglist items from appearing in the sidebar in some cases. r=Unfocused, a=sledru - 881a59941b04
Seth Fowler: Bug 1148832 - Return early from nsAlertsIconListener::OnLoadComplete if the image has an error. r=baku, a=sledru - bf83a8535bf4
Mark Hammond: Bug 1149896 - Avoid warnings when using sendAsyncMessage on a ReadingListItem object. r=adw, a=sledru - bbbb9f84cf98
Blake Winton: Bug 1149520 - Move the font-size change to the container, so as not to repaint the toolbar. r=jaws, r=margaret, a=sledru - 6ab02e48d0c2
Andrea Marchesini: Bug 1151609 - WebSocket::CloseConnection must be thread-safe. r=smaug, a=sledru - 07f2a01649a4
Mark Banner: Bug 1152245 - Receiving a call whilst in private browsing or not browser windows open can stop any calls to contacts being made or received. r=mikedeboer, a=sledru - 367745bbac8a
Valentin Gosu: Bug 1099209 - Only track leaked URLs on the main thread. r=honzab, a=sledru - 58dca3f7560a
Mark Banner: Fix beta specific xpcshell bustage from Bug 1152245. r+a=bustage-fix - d13016a31d6f
Robert O'Callahan: Bug 1149494 - Part 1: Add a listener directly to the unblocked input stream that reports the size of the first non-empty frame seen. r=pehrsons, a=sledru - d46cb3b3ebb3
Andreas Pehrson: Bug 1149494 - Part 2: Add mochitest. r=jesup, a=sledru - c821f76bf302
Byron Campen [:bwc]: Bug 1151139 - Simplify how we choose which streams to gather stats from. r=mt, a=abillings - e62ca3da49e1
Florian Quèze: Bug 1137603 - WebRTC sharing notifications fail to open from the global indicator when the Hello window has been detached. r=mixedpuppy, a=sledru - d4ee3499fe0d
Florian Quèze: Bug 1144774 - Add to reading list button is blurry. ui-r=mmaslaney, r=jaws, a=sledru - f377c6831282
Mike de Boer: Bug 1152391 - appVersionInfo should use UpdateChannel.jsm to fetch update channel information. r=Standard8, a=sledru - 3f5e298cb641
Margaret Leibovic: backout 7d883361e554 for causing Bug 1150251 - ff91cb79a7c8

Planet Mozilla: New tutorial - arrays and vectors in Rust

I've just put up a new tutorial on Rust for C++ programmers: arrays and vectors. This covers everything you might need to know about array-like sequences in Rust (well, not everything, but at least some of the things).

As well as the basics on arrays, slices, and vectors (Vec), I dive into the differences in representing arrays in Rust compared with C/C++, describe how to use Rust's indexing syntax with your own collection types, and touch on some aspects of dynamically sized types (DSTs) and fat pointers in Rust.

Planet Mozilla: Using JPM Watchpost

jpm watchpost is a feature of jpm which can be used to automatically post changes for an add-on to an instance of Firefox or Fennec running the Extension Auto-Installer extension.

This method can be used to quickly test an add-on on the same PC on which it is being developed, on another PC, or on an Android device over Wi-Fi. It can also be used to rapidly test a lot of new changes.

Basically, jpm watchpost --post-url <url> watches the folder it is executed in for changes. When a file is updated, added, or removed, a new .xpi file is created for the add-on and sent to the --post-url with an HTTP POST request, which is what the Extension Auto-Installer extension receives. Extension Auto-Installer then installs the new version, which causes Firefox to disable and remove the old version of the add-on.

For more information on using jpm watchpost please see the documentation.

Planet Mozilla: JPM Mobile Beta

I announced the release of jpm beta nine months ago, and since then we’ve made most of the SDK modules compatible with jpm and figured out how to run jpm on Travis. Now I’d like to announce the beta release of jpm-mobile!

jpm-mobile provides a means for running Jetpacks (aka Add-on SDK add-ons) on Android devices. This is achieved by communicating through adb, which is required by jpm-mobile.

Installing is easy with npm install jpm-mobile -g, then testing an add-on on Firefox for Android (aka Fennec) is as simple as plugging the device into your computer and running:

jpm-mobile run --adb /path/to/adb -b fennec

Note that the add-on’s package.json or install.rdf should be marked as supporting Fennec. For the former, the package.json should include something like:

  "engines": {
    "fennec": "*"
  }

Planet Mozilla: User Testing the Teach Site

We are soooooo close to releasing the new Teach site.

People seem to dig the bright colors and quirky illustrations throughout the site.


In advance of the release, I wanted to conduct some user tests to make sure we’re still on the right track. This week I conducted two user tests with members of the community (yay!). As is always the case with user testing, I learned a lot from observing users interact with the site.

You can see detailed notes here and read my recommendations below.

These recommendations are based on formal user tests with two users as well as feedback from people who’ve been involved or observing the process throughout. Also, please note that I wasn’t able to test the primary functionality on the site (adding a Club to the map), so these recommendations are more about information architecture (IA) and other content issues.

Findings and Recommendations

Screen Shot 2015-04-10 at 2.48.07 PM

Findings:
  • People want to see more activities and resources.
  • People expect to be able to sort and filter.
  • Our internal distinction between the Clubs Curriculum (the official curriculum for Clubs; with a strong recommendation for following the prescribed path) and Teaching Activities (more “grab and go”-style) is not intuitive to users.
  • The Teach Like Mozilla content needs to be more integrated into common user flows.

Recommendations:
  • Continue with current plan for developing and publishing more approved curriculum and activities.
  • Continue brainstorming work around scalable presentation of curriculum begun in this heartbeat. The ideas discussed so far address sorting and filtering, and make good use of the Web Literacy Map as an organizing tool.
  • As part of that design work, we should also allow users to access all teaching materials from the same page, and provide specific views for “official Clubs curriculum.” I recommend we keep the Teaching Activities page, and remove the Clubs Curriculum sub-page. This content is one of our primary offerings so it belongs at the top level. /cc @iamjessklein
  • We need to offer a solution for sharing resources—e.g. maker tools, other curricula, programs. (Hello, Web Lit Mapper!)
  • We need to design a stronger connection between teaching activities and the Teach Like Mozilla content. A short-term solution might be to link to the TLM page from every individual activity page, but we should also be working towards a better longer-term solution. /cc @laurahilliger

Screen Shot 2015-04-10 at 2.50.46 PM

Findings:
  • The Clubs Toolkit is not findable, and needs to be supplemented with content targeted towards helping people “get started.”
  • We are not providing enough information for the use case of a person who is deciding in the moment whether to start a Club.

Recommendations:
  • Make the Clubs Toolkit more visible on the page.
  • Consider renaming the Clubs Toolkit something like “Getting Started Guide” or “A Club’s First Month” – and editing content to match. /cc @thornet
  • Based on my understanding of the expected pathways to starting Clubs, I do not think we need to make any significant changes to the Clubs page to address the use case of someone coming to the site and deciding in the moment whether or not to start a Club. As I understand it, our plan for growing Clubs makes use of the following scenarios:
    • 1) Someone is “groomed” by staff member, Regional Coordinator, or other community member. By the time they arrive at the site, they have the specific intent of adding their Club.
    • 2) Someone finds out about us through Maker Party, and through a series of communications learns about Clubs and decides to start one. They are coming to the site with the specific intent to add their Club.
    • 3) Someone with an existing program or group wants to be listed in the database. Again, they are coming to the site with the intent to add their Club.

In short, I don’t think we’ve yet seen a reason to have the site serve a “selling” or persuasive function. I *do* think the Clubs page is a natural first stop for someone who is looking to understand how to start a Club. I think the changes recommended in the bullet points above address that.

Screen Shot 2015-04-10 at 2.51.21 PM

Findings:
  • The copy describing the Events page in the main navigation is misleading, since the content on the Events page is about Maker Party.
  • People may understand throwing a Maker Party as a “first step” to starting a Club, rather than a lower-bar option for people who do not want to start a Club (and perhaps never will).

Recommendations:
  • I think we should re-brand what is currently the Events landing page as “Maker Party.” We’ve already sort of done this in that, while the page is called “Events” in the nav, the h1 copy in the hero image is “Host a Maker Party.” I suggest we change the copy in the nav to “MAKER PARTY” and the teaser copy to “Our annual global campaign”. /cc @amirad

Screen Shot 2015-04-10 at 2.52.06 PM

Findings:
  • Users tend to ignore, not see, or misinterpret the CTAs at the bottom of every page
  • Users do not notice links to the sub-pages in the main navigation

Recommendations:
  • We need to design better, more intuitive pathways for viewing secondary pages


I’m going to keep banging this drum: We need to clarify our audience! I think we’ve made good progress in terms of clarifying that our “first line” audience includes educators and activists. But I think we have to take it a step further and clarify who those educators and activists are working with. There are at least two axes that I think are important to be clear about: first, the global nature of our work, and second, the specific age groups of what I’m calling the “end learners,” for lack of a better term.

I think we do a pretty decent job of conveying the global nature of the program through copy and imagery, though obviously implementing our l10n strategy is absolutely fundamental to this.

I think we are less clear when it comes to the age groups we’re targeting with our programs and materials. For example, I think we ought to specify the appropriate age level for each activity. (And the images, activity titles, and copy should reflect the target audience.)

Questions, comments, disagreements wholeheartedly welcomed!

Planet Mozilla: Q1 Participation update

I asked two questions about participation back in January: 1) what is radical participation? and 2) what practical steps can we take right now to bring more of it to Mozilla? It’s been great to see people across Mozilla digging into these questions. I’m writing to offer an update on what I’ve seen happening.

First, we set ourselves a high bar when we started talking about radical participation at Mozilla late last year. I still believe it is the right bar. The Mozilla community needs more scale and impact than it has today if we want to confront the goliaths who would take the internet down a path of monopoly and control.

However, I don’t think we can invent ‘radical’ in the abstract, even if I sometimes say things that make it sound like I do :). We need to build it as we go, checking along the way to see if we’re getting better at aligning with core Mozilla principles like transparency, distributed leadership, interoperability and generativity. In particular, we need to be building new foundations and systems and ways of thinking that make more radical participation possible. Mitchell has laid out how we are thinking about this exploration in three areas (link).

When I look back at this past quarter, that’s what I see that we’ve done.

As context: we laid out a 2015 plan that included a number of first steps toward more radical participation at Mozilla. The immediate objectives in this plan were to a) invest more deeply in ReMo and our regional communities and b) better connect our volunteer communities to the work of product teams. At the same time, we committed to a longer term objective: c) create a Participation Lab (originally called a task force…more on that name change below) charged with looking for and testing new models of participation.

Progress on our first two objectives

As a way to move the first part of this plan forward, the ReMo Council met in Paris a month or so back. There was a big theme on how to unleash the leadership potential of the Reps program in order to move Mozilla’s core goals forward in ways that take advantage of our community presence around the world. For example, combining the meteoric smartphone growth in India with the local insights of our Indian community to come up with fresh ideas on how to move Firefox for Android towards its growth goal.

We haven’t been as good as we need to be in recent years in encouraging and then actually integrating this sort of ‘well aligned and ambitious thinking from the edge’. Based on reports I’ve heard back, the Paris meeting set us up for more of this kind of thinking. Rosana Ardila and the Council, along with William Quiviger and Brian King, are working on a “ReMo2.0” plan that builds on this kind of approach, that seeks a deeper integration between our ReMo and Regional Community strategies, and that also adds a strong leadership development element to ReMo.

reps council

Reps Council and Peers at the 2015 Paris meet-up

On the second part of our plan, the Participation Team has talked to over 100 people in Mozilla product and functional groups in the past few months. The purpose of these conversations was to find immediate term initiatives that create the sort of ‘help us meet product goals’ and ’empower people to learn and do’ virtuous circle that we’ve been talking about in these discussions about radical participation.

Over 40 possible experiments came out of these conversations. They included everything from leveraging Firefox Hello to provide a new kind of support and mentoring; to taking a holistic, Mozilla-wide approach to community building in our African Firefox OS launch markets; to turning into a hub that lets millions of people play small but active roles in moving our mission forward. I’m interested in these experiments, and how they will feed into our work over the coming quarters—many of them have real potential IMHO.

I’m even more excited about the fact that these conversations have started around very practical ideas about how volunteers and product teams can work more closely together again. It’s just a start, but I think the right questions are being asked by the right people.

Mozilla Participation Lab

The third part of our plan was to set up a ‘Task Force’ to help us unlock bold new thinking. The bold thinking part is still the right thing to aim for. However, as we thought about it, the phrase ‘task force’ seemed too talky. What we need is thoughtful and forceful action that gets us towards new models that we can expand. With that in mind we’ve replaced the task force idea with the concept of a Participation Lab. We’ve hired former Engineers Without Borders CEO George Roter to define and lead the Lab over the next six months. In George’s words:

“The lab is Mozilla, and participation is the topic.”

With this ethos in mind, we have just introduced the Lab as both a way to initiate focused experiments to test specific hypotheses about how participation brings value to Mozilla and Mozillians, and to support Mozillians who have already initiated similar experiments. The Lab will be an engine for learning about what works and what will get us leverage, via the experiments and relationships with people outside Mozilla. I believe this approach will move us more quickly towards our bold new plan—and will get more people participating more effectively along the way. You can learn more about this approach by reading George’s blog post.

A new team and a new approach

There is a lot going on. More than I’ve summarized above. And, more importantly, hundreds of people from across the Mozilla community are involved in these efforts: each of them is taking a fresh look at how participation fits into their work. That’s a good sign of progress.

However, there is only a very small Participation Team staff contingent at the heart of these efforts. George has joined David Tenser (50% of his time on loan from User Success for six months) to help lead the team. Rosana Ardila is supporting the transformation of ReMo along with Rubén and Konstantina. Emma Irwin is figuring out how we help volunteers learn the things they need to know to work effectively on Mozilla projects. Pierros Papadeas and a small team of developers (Nikos, Tasos and Nemo) are building pieces of tech under the hood. Brian King along with Gen and Guillermo are supporting our regional communities, while Francisco Picolini is helping develop a new approach to community events. William Quiviger is helping drive some of the experiments and invest across the teams in ensuring our communities are strong. As Mitchell and I worked out a plan to rebuild from the old community teams, these people stepped forward and said ‘yes, I want to help everyone across Mozilla be great at participation’. I’m glad they did.

The progress this Participation Team is making is evident not just in the activities I outlined above, but also in how they are working: they are taking a collaborative and holistic approach to connecting our products with our people.

One concrete example is the work they did over the last few months on Mozilla MarketPulse, an effort to get volunteers gathering information about on-the-street smartphone prices in FirefoxOS markets. The team not only worked closely with FirefoxOS product marketing team to identify what information was needed, they also worked incredibly well together to recruit volunteers, train them up with the info they needed on FirefoxOS, and build an app that they could use to collect data locally. This may not sound like a big deal, but it is: we often fail to do the kind of end to end business process design, education and technology deployment necessary to set volunteers up for success. We need to get better at this if we’re serious about participation as a form of leverage and impact. The new Participation Team is starting to show the way.

Looking at all of this, I’m hoping you’re thinking: this sounds like progress. Or: these things sound useful. I’m also hoping you’re saying: but this doesn’t sound radical yet!!! If you are, I agree. As I said above, I don’t think we can invent ‘radical’ in the abstract; we need to build it as we go.

It’s good to look back at the past quarter with this in mind. We could see the meeting in Paris as just another ReMo Council gathering. Or, we could think of it—and follow up on it—as the first step towards a systematic way for Mozilla to empower people, pursue goals and create leaders on the ground in every part of the world. Similarly, we could look at MarketPulse as a basic app for collecting phone prices. Or, we could see it as a first step towards building a community-driven market insights strategy that lets us outsee—and outsmart—our competitors. It all depends on how we see what we’re doing and what we do next. I prefer to see this as the development of powerful levers for participation. What we need to do next is press on these levers and see what happens. That’s when we’ll get the chance to find out what ‘radical’ looks like.

PS. I still owe the world (and the people who responded to me) a post synthesizing people’s suggestions on radical participation. It’s still coming, I promise. :/

Filed under: mozilla

Planet Mozilla: Introducing the Mozilla Participation Lab

I’m excited to introduce the Mozilla Participation Lab, an initiative across Mozilla to architect a strategy and new approaches to participation.

As Mitchell articulated, people around Mozilla are deeply invested in the question: how can participation add even more value to the products and communities we build that are advancing the open web?

Across Mozilla there’s a flurry of activity aimed at answering this question and increasing participation. Mitchell framed the scope of this exploration as including three broad areas: First, strengthening the efforts of those who devote the most energy to Mozilla. Second, connecting people more closely to Mozilla’s mission and to each other. And third, thinking about organizational structure and practices that support participation.

The Mozilla Participation Lab is designed to strengthen and augment the efforts and energies that Mozillians are devoting to this exploration in the months ahead. If you count yourself as one of those Mozillians who is working on this problem, my hope is that you’ll see how the Mozilla Participation Lab can be relevant for you.

First, let’s back up for some context…

In January, Mitchell and Mark along with the Participation Team laid out a Participation Plan for Mozilla that articulated an ambitious vision for participation in 2017:

  • Many more people working on Mozilla activities in ways that make Mozilla more effective than we can imagine today.
  • An updated approach to how people around the world are helping to build, improve and promote our products and programs.
  • A steady flow of ideas and execution for programs, products, and initiatives around the world—new and diverse activities that move the mission forward in concrete ways.
  • Ways for people to participate in our mission directly through our products—there is integration of participation into the use and value proposition.
  • Ultimately: more Mozilla activities than employees can track, let alone control.

While this vision describes where Mozilla wants to be, how we’re going to get there still needs to be figured out. The how is an important and explicit goal in the participation plan for 2015: Develop a bold long-term plan for radical participation at Mozilla.

This is the goal you’ve heard Mitchell and Mark talking about, and they’ve hired me to get this work going over the next 6 months.

Initially, they talked about this goal being pursued by a task force—a group of people who could go away and “figure this out”. But as we started to build this out, a task force didn’t feel right.

Mozilla Participation Lab

What is the Mozilla Participation Lab? Concretely, the Lab will have three related sets of activities.

1) Focused experiments.

The Participation Team will initiate experiments, after consulting and coordinating with product/functional teams and volunteers, around particular hypotheses about where participation can bring value and impact in Mozilla. All of these experiments will be designed to move a top-line goal of Mozilla (the product side of the virtuous circle), and give volunteers/participants a chance to learn something, have impact or get some other benefit (the people side of the virtuous circle). If the experiments work, we’ll start to see an impact on our product goals and increased volunteer engagement.


These experiments will be built in a way that will assess whether the hypotheses are true, what’s required for participation to have impact, and what the return on investment is for our key products and programs, and for Mozillians.

For example, many in Mozilla have articulated a belief that participation can enable local content to make our products better and more relevant, and so we are working on a series of experiments in West Africa alongside the launches of the Orange Klif. If these are successful, they will have had an impact on Firefox OS adoption while building vital, sustainable communities of volunteers.

In order to identify these experiments, our team has already talked with Mozilla staff and volunteers from all over the organization, plus Mozilla’s leadership (staff and volunteers). Here’s a long list of rough ideas that came out of these conversations; we obviously need to make some choices! Our aim is to settle on and launch a first set of focused experiments over the next couple of weeks.

2) Distributed experiments.

I’ve had conversations with roughly 100 Mozillians over the past couple of months and realized that, in true Mozilla distributed style, we’re already trying out new approaches to participation all over the world. Buddy Up, TechSpeakers, Mozilla Hispano, Clubs, and Marketpulse are just a few of many, many examples. I’m also confident that there will be many more initiatives in the coming months.

My hope is that many of these initiatives will be part of the Participation Lab. This will be different from the focused experiments above in two ways. First, the Participation Team won’t be accountable for results; the individual initiative leaders will be. Second, they can probably be lighter-weight experiments, whereas the focused experiments are likely to be resource-intensive.

How does an initiative fit? If it meets two simple criteria: (1) it is testing out a set of hypotheses about how participation can bring value and impact to our mission and to Mozillians, and (2) we can work together to apply a systematic methodology for learning and evaluation.

Of course, it’s the leaders of these initiatives who can choose to be part of the Lab—I hope you do! To be upfront, this could mean a bit of extra work, but you can also access some resources and have an influence on our participation strategy. I think it’s worthwhile:

  1. We will work together to apply a systematic learning and experimenting methodology (documented here).
  2. You can unlock support from the Participation Team. This could be in the form of strategic or design advice; specific expertise (for example, volunteer engagement, building metrics or web development); helping you gather best practices from other organizations; or small amounts of money. We do have limited staff and volunteer time, so may need to make some choices depending on the number of initiatives that are part of the Lab.
  3. Your initiative will make a significant contribution to Mozilla’s overall participation strategy moving forward.

3) Outside ideas.

We will bring together experts and capture world-leading ideas about participation from outside of Mozilla. This is a preliminary list of people we are aiming to reach out to.

Who’s involved?

In short, a broad set of Mozillians will be supported by a smaller team of staff and volunteers from the Participation Team. This team will coordinate various experiments in the Lab, curate the learning, build processes to ensure that all of this is working in the open in a way that any Mozillian can engage with, and make recommendations to Mozilla leaders and community members.

What’s the result, and by when?

The primary outputs of the Lab are:

  1. A series of participation initiatives that result in more impactful and fulfilling participation toward reaching Mozilla’s goals. (Read more below about how what you’re working on right now can fit into this.)
  2. An evidence-based analysis of the effectiveness of specific participatory activities.
  3. Recommendations on how we might expand or generalize the activities that provided the most value to Mozilla and Mozillians.
  4. A preliminary assessment of the organizational changes we might consider in order to gain an even greater strategic advantage from participation.
  5. A set of learning resources and best practices packaged in a way that teams across Mozilla will be able to use to strengthen our collective participation efforts.
  6. Possibly, a series of strategic choices and opportunities for Mozilla leaders and community members to consider.

The first set of activities will take place primarily in Q2, wrapping up by early July, at which point we will assess what’s next for the Lab.

How is this relevant for you?

You have the opportunity to participate in the Lab and to shape the way forward for participation in Mozilla. Here’s how:

1) Be part of the team. Do you want to have a big hand in shaping how Mozilla moves ahead on participation?

In the coming couple of weeks we’ll be starting some focused experiments. If these are problems you’re also excited about (or are already tackling), please get in touch. We’re certain that coders, marketers, project managers, designers, educators, facilitators, writers, evaluators, and more can make a big difference.

Also, if you’re interested in being part of the learning team that is tracking and synthesizing lessons from inside and outside Mozilla, please get in touch.

2) Are you already running or planning a new participation initiative, or have an idea you’d like to get off the ground? Could you use some help from the Lab (and hopefully volunteers or other resources)? I’d love to have a conversation about whether your initiative can be part of the Participation Lab and how we can help.

3) Can you think of someone we should be talking to, a book or article to read, or a community to engage? Pass it along. Or better yet, help us to get in touch with people outside of Mozilla or summarize the key lessons for participation.

4) Follow along. We’d like many Mozillians to share their feedback and ideas. We’ll be working out in the open with a home base on this wiki page.

Please get in touch! Reply to this post or send me an email: groter <at>

Let’s together use this Lab as a way to architect an approach to participation that will have a massive positive impact on the web and on people’s lives!

Planet Mozilla: Mobile Minded

Imagining the future of New Tab for Firefox Android.

New Tab on Firefox for Android - CONCEPT

For over a year, the Content Services team has been busy evolving New Tab beyond a simple directory of recent, frequently visited sites. Once Firefox 39 lands on desktops later this summer, New Tab will include an updated interface, better page controls, and suggested content from our partners. With any luck, these and future product releases for the desktop browser will facilitate more direct, deeper relationships between brands and users. Most importantly (to me, anyway), richer controls on New Tab will also offer users more customization and better utility.

While this ongoing project work has certainly kept me busy, I can’t help but think about “the next big thing” whenever I have the chance. Lately, my mind has been preoccupied with a question that’s easy to ask, but much more difficult to answer:

How could Suggested Sites and more advanced controls work on mobile?

Providing Firefox Desktop users with more control over the sites they see on New Tab is relatively straightforward. The user is likely seated, focused entirely on the large screen in front of them, and is using a mouse pointer to activate hover states. These conditions are appropriate for linear, deliberate interactions. Therefore, New Tab on desktop can take advantage of the inherent screen real estate and mouse precision to support advanced actions like editing or adding sites. And since New Tab is literally one page, users can’t really get “lost”.

Mobile is altogether different. The user may be standing, sitting, or on the move. Their attention is divided. Screens are physically smaller, yet still support resolutions comparable to larger desktop displays. More importantly, there aren’t any hover states, and mobile interactions are imprecise (which is maybe why we call them “gestures”). Because of this imprecision on handheld screens, a tap often launches another view or state that may take the user to another destination – and after a few taps, the user may find themselves down a navigational rabbit hole that’s cumbersome to climb out of. Combined, these factors sometimes make it hard to perform complex actions on a mobile device. Therefore, any action made by the user should be minimal, simple to perform, and always contextual.

Taking all of the above into consideration, the following is an early peek at my vision for the New Tab experience on Firefox Android, with user control in mind.


New layout

New Tab on Android: Default

New Tab on either desktop or mobile devices has always been about one thing: Helping users navigate the Web more efficiently.

Today, New Tab shows a two-column grid of rectangles depicting Web sites the user recently visited. While it may make the destination page easier to see, this is an inefficient use of space.

By shrinking the rectangles, more of them can fit onto the page; and by showing a logo instead of a Web page (when possible), identifying individual sites becomes easier too. These smaller “tiles” could even be grouped, just as the user would group apps on their device home screen.

Some folks may also be interested in discovering something entirely new on the Web. The future New Tab could serve suggested content for these users, based on their browsing history (and with permission, of course). But instead of commandeering a tile, suggestions could be delivered natively, and in line with the user’s history list.

Quick and painless suggestions

New Tab on Android: Suggested content

Viewing suggested content in other applications typically launches a new app or another tab in the user’s browser. Yet it only takes a second or two for the user to decide if the content is actually interesting to them. Personally, I think it would be better to give users a preview of the content, and then give them the option of dismissing it or continuing on without leaving the page they’re on.

Shown above, I imagine that after tapping a suggested item, New Tab could slide away to the left, revealing a preview of the suggested content beneath. If the user scrolls to view more content, a button then slides into view at the bottom of the screen, taking them to the suggested destination page on tap. If they aren’t interested in reading further, they would simply tap the navigation bar (below the search bar) to return to New Tab. Meanwhile, they never actually “left” the original screen.

Drag-and-drop Web addresses

New Tab on Android: Drag a site onto page

However, if the user does find the suggested content interesting, then they should be able to add the destination site directly to New Tab. One solution may be allowing users to drag-and-drop a Web address from the search bar and into New Tab. Perhaps by dragging the address onto another tile, users could even create a new group of related sites.

New Tab on Android: Adding a group

If a user doesn’t care for a particular suggestion, however, then deleting it – or any item on New Tab, for that matter – should be as easy as dragging it off either edge of the screen. Borrowing from another popular email application, swiping an item would reveal the word “delete” beneath, further reinforcing the action being performed. Naturally, this may sometimes happen by accident. As such, a temporary button could appear that allows the user to retrieve the item previously listed, then disappear after a few seconds.

DIY tiles

New Tab for Android: Edit site appearance

Alternatively, a user could add a new site directly from New Tab. Tapping the “+” button would launch a native keyboard and other controls, allowing them to search for a URL, define the tile’s appearance, or opt-out of related content suggestions. For extra clarification – and a little fun – the user would literally “build” their tile in real-time. Selecting any URL from the search bar dropdown would update the example tile shown, displaying a logo by default. Or, the user may choose instead to show an image of the destination homepage, or the last page they visited.

Next steps?

What I’ve proposed should be taken with a few grains of salt. For one, I believe that limiting the need for new, fancy gestures encourages adoption and usage. Likewise, many of these interactions aren’t especially novel. In fact, most of them are intended to mimic native functions a user may find elsewhere on his or her Android device. My ultimate goal here was to introduce new features available on Firefox that won’t require a steep learning curve.

For another, the possibilities for New Tab on mobile devices are numerous, and exciting to think about – but any big changes are a long way off. By the time a new big update for Firefox on Android lands, this post will probably be totally irrelevant. But in the meantime, I hope to plant a few seeds that will take root and develop further as my team, and many others at Mozilla, contemplate the future of Firefox for the mobile Web.

Internet Explorer blog: Project Spartan now available in the Windows 10 Technical Preview for phones

As Gabe just announced on the Windows Blog, the latest Windows 10 Technical Preview build for phones is now available. With it, we are delighted to announce that Project Spartan for Windows phones is available to preview in this build.

It's been just over a week since Project Spartan was available in the last desktop flight, and the feedback we've received has been tremendous—both the kudos and many ways we can improve. We are humbled that so many people have taken the time to send thousands of pieces of feedback, and to add thousands of votes on features at our desktop Project Spartan Feature Suggestion Box. Thank you!

This build, like last week's desktop build, is a very early look at software that we’re actively developing. You'll still find Internet Explorer 11 as the default browser in this preview – in a later update, Project Spartan will be the only browser included on Windows phones. Project Spartan can be found in the All Apps list, and you can pin it to the start screen yourself.

Project Spartan for phones has the same new rendering engine available on the desktop, with the same support for the latest standards and the same strong focus on interoperability with the modern Web.

You'll find that this build has our new design for our Windows phone version, with the address bar on top, and a small actions bar at the bottom (a bug in this build means that the actions menu is larger than it's intended to be). We’ve heard your early feedback about the position of the address bar, and we are looking closely at the design. We encourage you to try it out, see what you think, and continue to share your feedback with us.

You'll also find early versions of the mobile reading view and reading list. Other features will light up in future flights.

Please try out the new phone build and send us feedback. We are listening. In addition to the feedback option in the app itself and the Insider Hub, you can also add suggestions and vote in our new Project Spartan for Windows phone page on uservoice.

Kyle Pflug, Program Manager, Project Spartan

Planet Mozilla: The Joy of Coding (Ep. 9): More View Source Hacking!

In this episode1, I continued the work we had started in Episode 8, by trying to make it so that we don’t hit the network when viewing the source of a page in multi-process Firefox.

It was a little bit of a slog – after some thinking, I decided to undo some of the work we had done in the previous episode, and then I set up the messaging infrastructure for talking to the remote browser in the view source window.

I also rebased and landed a patch that we had written in the previous episode, after fixing up some nits2.

Then, I (re)-learned that flipping the “remote” attribute of a browser is not enough in order for it to run out-of-process; I have to remove it from the DOM, and then re-add it. And once it’s been re-added, I have to reload any frame scripts that I had loaded in the previous incarnation of the browser.

Anyhow, by the end of the episode, we were able to view the source from a remote browser inside a remote view source browser!3 That’s a pretty big deal!

Episode Agenda


Bug 1025146 – [e10s] Never load the source off of the network when viewing source

Notes

  1. A note that I also tried an experiment where I keep my camera running during the entire session, and place the feed into the bottom right-hand corner of the recording. It looks like there were some synchronization issues between audio and video, which are a bit irritating. Sorry about that! I’ll see what I can do about that. 

  2. and dropping a nit having conversed with :gabor about it 

  3. We were still loading it off the network though, so I need to figure out what’s going on there in the next episode. 

Planet Mozilla: Webmaker Demos April 10 2015

Webmaker Demos April 10 2015

Planet Mozilla: A Compact Representation Of Captured Stack Frames For Spidermonkey

Last year, I implemented a new, compact representation for captured JavaScript stacks in SpiderMonkey1. This was done to support the allocation tracking I built into SpiderMonkey's Debugger API2 3, which saves the stack for potentially every Object allocation in the observed globals. That's a lot of stacks, and so it was important to make sure that we incurred as little overhead to memory usage as we could. Even more so, because we'd like to affect memory usage as little as possible while simultaneously instrumenting and measuring it.

The key observation is that while there may be many allocations, there are many fewer allocation sites. Most allocations happen at a few places. Thus, much information is duplicated across stacks. For example, if we run the esprima JavaScript parser on its own JavaScript source, there are approximately 54,700 total Object allocations, but just ~1,400 unique JS stacks at allocation time. There are only ~200 allocation sites if you consider only the youngest stack frame.

Consider the example below:

function a() {
  b();
}

function b() {
  c();
  d();
  e();
}

function c() { new Object; }
function d() { new Object; }
function e() { new Object; }
Disregarding compiler optimizations removing allocations, arguments objects, as well as engine internal allocations, calling the a function allocates three Objects. With arrows representing the "called by" relationship from a younger frame to its older, parent frame, this is the set of stacks that existed during an Object allocation:

c -> b -> a
d -> b -> a
e -> b -> a

Instead of duplicating all these saved stack frames, we use a technique called hash consing. Hash consing lets us share older tail stack frames between many unique stacks, similar to the way a trie shares string prefixes. With hash consing, SpiderMonkey internally represents those stacks like this:

c -> b -> a
d ---|
e ---'

Each frame is stored in a hash table. When saving new stacks, we use this table to find pre-existing saved frames. If such an object is already extant, it is reused; otherwise a new saved frame is allocated and inserted into the table. During the garbage collector's sweep phase, we remove old entries from this hash table that are no longer used.
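As a rough sketch, the interning step might look like the following. This is hypothetical JavaScript for illustration only; SpiderMonkey's real implementation is in C++, and the names here are invented.

```javascript
// Hypothetical sketch of hash consing saved frames. Each frame records a
// function name and a pointer to its older parent frame; the table interns
// frames so that identical (name, parent) pairs are shared.
class FrameTable {
  constructor() {
    this.table = new Map();
    this.nextId = 0;
  }

  // Return the canonical frame for (functionName, parent), creating one
  // only if no identical frame has been interned before.
  intern(functionName, parent) {
    const key = functionName + ":" + (parent ? parent.id : -1);
    let frame = this.table.get(key);
    if (!frame) {
      frame = { id: this.nextId++, functionName, parent };
      this.table.set(key, frame);
    }
    return frame;
  }

  // Capture a stack given youngest-to-oldest frame names, interning from
  // the oldest frame down so that shared tails collapse into one chain.
  capture(names) {
    let frame = null;
    for (let i = names.length - 1; i >= 0; i--) {
      frame = this.intern(names[i], frame);
    }
    return frame; // the youngest frame
  }
}

const frames = new FrameTable();
const s1 = frames.capture(["c", "b", "a"]);
const s2 = frames.capture(["d", "b", "a"]);
const s3 = frames.capture(["e", "b", "a"]);
console.log(s1.parent === s2.parent); // true: the b -> a tail is shared
console.log(frames.table.size); // 5 unique frames (a, b, c, d, e), not 9
```

The three captured stacks share a single b -> a tail, mirroring the diagram above; removing dead entries from the table corresponds to the sweep-phase cleanup.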

I just landed a series of patches to make Error objects use these hash cons'd stacks internally, and lazily stringify them when the stack property is accessed4. Previously, every time an Error object was created, the stack string was eagerly computed. This change can result in significant memory savings for JS programs that allocate many Error objects. In one Real World™ test case5, we dropped memory usage from 1.2GB down to 167MB. Additionally, we use almost half of the memory that V8 uses on this same test case.
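The lazy-stringification idea can be sketched in plain JavaScript with a self-replacing accessor. This is purely illustrative; the real work happens inside the engine, not in script, and the names below are invented.

```javascript
// Hypothetical sketch: keep the cheap, hash-cons'd frame chain at creation
// time and build the expensive stack string only on first access.
function makeErrorLike(message, youngestFrame) {
  const err = { message };
  Object.defineProperty(err, "stack", {
    configurable: true,
    get() {
      // Walk the frame chain and stringify it, once.
      const parts = [];
      for (let f = youngestFrame; f !== null; f = f.parent) {
        parts.push(f.functionName);
      }
      const text = parts.join("\n");
      // Replace the accessor with a plain data property so later reads
      // return the cached string without re-walking the frames.
      Object.defineProperty(err, "stack", { value: text });
      return text;
    },
  });
  return err;
}

const chain = { functionName: "c",
  parent: { functionName: "b",
    parent: { functionName: "a", parent: null } } };
const e = makeErrorLike("boom", chain);
console.log(e.stack); // "c\nb\na", computed on this first access only
```

Errors that are created but whose stack property is never read pay only for the (shared) frame chain, which is where the memory savings come from.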

Many thanks to Jim Blandy for guidance and reviews. Thanks also to Boris Zbarsky, Bobby Holley, and Jason Orendorff for insight into implementing the security wrappers needed for passing stack frame objects between privileged and un-privileged callers. Thanks to Paolo Amadini for adding support for tracing stacks across asynchronous operations (more about this in the future). Thanks again to Boris Zbarsky for making DOMExceptions use these hash cons'd stacks. Another thanks to Jason Orendorff for reviewing my patches to make the builtin JavaScript Error objects use hash cons'd stacks behind the scenes.

In the future, I would like to make capturing stacks even faster by avoiding re-walking stack frames that we already walked while capturing an earlier stack6. The cheaper capturing a stack is, the more ubiquitous it can be throughout the platform, and that puts more context and debugging information into developers' hands.

Planet Mozilla: systemsetupusthebomb

This article has been superseded: Power Macs are vulnerable after all.

Oh, Apple. Ohhh, Apple. Today's rookie mistake is a system process called writeconfig that, through a case of the infamous confused deputy problem (it exists to allow certain operations by System Preferences and its command line equivalent systemsetup to be performed by admin users that are not root), can be coerced to allow any user to create arbitrary files with arbitrary permissions -- including setuid -- as root. That's, to use the technical term, bad.

This problem exists in 10.10, and is fixed in 10.10.3, but Apple will not fix it for 10.9 (or 10.8, or 10.7; the reporters confirmed it as far back as 10.7.2), citing technical limitations. Thanks, Apple!

The key is a privileged process called writeconfig which can be tricked into writing files anywhere using a cross-process attack. You would ask, reasonably, why such a process would exist in the first place, and the apparent reason is to allow these later versions of systemsetup et al to create user-specific Apache webserver configurations for guest users. If systemsetup doesn't have this functionality in your version of Mac OS X, then this specific vulnerability, at least, does not exist.

Fortunately, 10.6 and earlier do not support this feature; for that matter, there's no ToolLiaison or WriteConfigClient Objective-C class to exploit either. In fact, systemsetup isn't even in /usr/sbin in non-Server versions of OS X prior to 10.5: it's actually in /System/Library/CoreServices/RemoteManagement/, as a component of Apple Remote Desktop. I confirmed all this on my local 10.4 and 10.6 systems and was not able to replicate the issue with the given proof of concept or any reasonable variation thereof, so I am relieved to conclude that Power Macs and Snow Leopard do not appear to be vulnerable to this exploit. All your PowerPC-base systems are still belong to you.

Meanwhile, on the TenFourFox 38 front, IonPower is almost passing the first part of V8. Once I get Richards, DeltaBlue and Crypto working the rest of it should bust wide open. Speed numbers are right in line with what I'd expect based on comparison tests on my 2014 i7 MacBook Air. It's gonna be nice.

Planet Mozilla: Bay Area Rust Meetup April 2015

Rust meetup for April.


Updated: .  Michael(tm) Smith <>