Planet Mozilla: 2014: Engineering Operations Year in Review

On the first day of Mozlandia, Johnny Stenback and Doug Turner presented a list of key accomplishments in Platform Engineering/Engineering Operations in 2014.

I have been told a few times recently that people don’t know what my teams do, so in the interest of addressing that, I thought I’d share our part of the list. It was a pretty damn good year for us, all things considered, and especially given the level of organizational churn and other distractions.

We had a bit of organizational churn ourselves. I started the year managing Web Engineering, and between March and September ended up also managing the Release Engineering teams, Release Operations, SUMO and Input Development, and Developer Services. It’s been a challenging but very productive year.

Here’s the list of what we got done.

Web Engineering

  • Migrate crash-stats storage off HBase and into S3
  • Launch Crash-stats “hacker” API (access to search, raw data, reports)
  • Ship fully-localized Firefox Health Report on Android
  • Many new crash-stats reports including GC-related crashes, JS crashes, graphics adapter summary, and modern correlation reports
  • Crash-stats reporting for B2G
  • Pluggable processing architecture for crash-stats, and alternate crash classifiers
  • Symbol upload system for partners
  • Migrate to modern, flexible backend
  • Prototype services for checking health of the browser and a support API
  • Solve scaling problems in Moztrap to reduce pain for QA
  • New admin UI for Balrog (new update server)
  • Bouncer: correctness testing, continuous integration, a staging environment, and multi-homing for high availability
  • Grew Air Mozilla community contributions from 0 to 6 non-staff committers
  • Many new features for Air Mozilla including: direct download for offline viewing of public events, tear out video player, WebRTC self publishing prototype, Roku Channel, multi-rate HLS streams for auto switching to optimal bitrate, search over transcripts, integration with Mozilla Popcorn functionality, and access control based on Mozillians groups (e.g. “nda”)

DXR

  • Modeless, explorable UI with all-new JS
  • Case-insensitive searching
  • Proof-of-concept Rust analysis
  • Improved C++ analysis, with lots of new search types
  • Multi-tree support
  • Multi-line selection (linkable!)
  • HTTP API for search
  • Line-based searching
  • Multi-language support (Python already implemented, Rust and JS in progress)
  • Elasticsearch backend, bringing speed and features
  • Completely new plugin API, enabling binary file support and request-time analysis

SUMO

  • Offline SUMO app in Marketplace
  • SUMO Community Hub
  • Improved SUMO search with Synonyms
  • Instant search for SUMO
  • Redesigned and improved SUMO support forums
  • Improved support for more products in SUMO (Thunderbird, Webmaker, Open Badges, etc.)
  • BuddyUP app (live support for FirefoxOS) (in progress, TBC Q1 2015)

Input

  • Dashboards for everyone infrastructure: allowing anyone to build charts/dashboards using Input data
  • Backend for heartbeat v1 and v2
  • Overhauled the feedback form to support multiple products, streamline user experience and prepare for future changes
  • Support for Loop/Hello, Firefox Developer Edition, Firefox 64-bit for Windows
  • Infrastructure for automated machine and human translations
  • Massive infrastructure overhaul to improve overall quality

Release Engineering

  • Cut AWS costs by over 70% during 2014 by switching builds to spot instances and using intelligent bidding algorithms
  • Migrated all hardware out of SCL1 and closed datacenter to save $1 million per year (with Relops)
  • Optimized network transfers for build/test automation between datacenters, decreasing bandwidth usage by 50%
  • Halved build time on b2g-inbound
  • Parallelized verification steps in release automation, saving over an hour off the end-to-end time required for each release
  • Decommissioned legacy systems (e.g. tegras, tinderbox) (with Relops)
  • Enabled build slave reboots via API
  • Self-serve arbitrary builds via API
  • b2g FOTA updates
  • Builds for open H.264
  • Built flexible new update service (Balrog) to replace legacy system (will ship first week of January)
  • Support for Windows 64 as a first class platform
  • Supported FX10 builds and releases
  • Release support for switch to Yahoo! search
  • Update server support for OpenH264 plugins and Adobe’s CDM
  • Implement signing of EME sandbox
  • Per-checkin and nightly Flame builds
  • Moved desktop firefox builds to mach+mozharness, improving reproducibility and hackability for devs.
  • Helped mobile team ship different APKs targeted by device capabilities rather than a single, monolithic APK.

Release Operations

  • Decreased operating costs by $1 million per year by consolidating infrastructure from one datacenter into another (with Releng)
  • Decreased operating costs and improved reliability by decommissioning legacy systems (kvm, redis, r3 mac minis, tegras) (with Releng)
  • Decreased operating costs for physical Android test infrastructure by 30% reduction in hardware
  • Decreased MTTR by developing a simplified releng self-serve reimaging process for each supported build and test hardware platform
  • Increased security for all releng infrastructure
  • Increased stability and reliability by consolidating single point of failure releng web tools onto a highly available cluster
  • Increased network reliability by developing a tool for continuous validation of firewall flows
  • Increased developer productivity by updating windows platform developer tools
  • Increased fault and anomaly detection by auditing and augmenting releng monitoring and metrics gathering
  • Simplified the build/test architecture by creating a unified releng API service for new tools
  • Developed a disaster recovery and business continuation plan for 2015 (with RelEng)
  • Researched bare-metal private cloud deployment and produced a POC

Developer Services

  • Ship Mozreview, a new review architecture integrated with Bugzilla (with A-team)
  • Massive improvements in hg stability and performance
  • Analytics and dashboards for version control systems
  • New architecture for try to make it stable and fast
  • Deployed treeherder (tbpl replacement) to production
  • Assisted A-team with Bugzilla performance improvements

I’d like to thank the team for their hard work. You are amazing, and I look forward to working with you next year.

At the start of 2015, I’ll share our vision for the coming year. Watch this space!

Planet Mozilla: Thanks to Our Amazing Supporters: A New Goal

We set a goal for ourselves of $1.75 million to raise during our year-end fundraising campaign this year. I’m excited to report that today—thanks to 213,605 donors representing more than 174 countries who gave an average gift of $8 to …

Planet Mozilla: Firefox video playback's skip-to-next-keyframe behavior

One of the quirks of Firefox's video playback stack is our skip-to-next-keyframe behavior. The purpose of this blog post is to document the tradeoffs skip-to-next-keyframe makes.

The fundamental question that skip-to-next-keyframe answers is, "What do we do when the video stream decode can't keep up with the playback speed?"

Video playback is a classic producer/consumer problem. You need to ensure that your audio and video stream decoders produce decoded samples at a rate no less than the rate at which the audio/video streams need to be rendered. You also don't want to produce decoded samples at a rate too much greater than the consumption rate, or else you'll waste memory.

For example, if we're running on a low end PC, playing a 30 frames per second video, and the CPU is so slow that it can only decode an average of 10 frames per second, we're not going to be able to display all video frames.
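To make that mismatch concrete, here is a minimal sketch (illustrative only, not Firefox code; the function and its numbers are hypothetical) of why a 10 frames-per-second decoder can never keep up with a 30 frames-per-second stream:

```python
# Hypothetical illustration: if the decoder produces frames slower than
# playback consumes them, some frames can never be shown on time.
def frames_displayable(duration_s, playback_fps, decode_fps):
    """Return (frames decodable in time, frames playback needs)."""
    needed = duration_s * playback_fps    # frames playback wants to render
    decodable = duration_s * decode_fps   # frames the CPU can actually produce
    return min(needed, decodable), needed

shown, needed = frames_displayable(duration_s=10, playback_fps=30, decode_fps=10)
print(f"{shown} of {needed} frames decodable: {needed - shown} must be dropped")
# → 100 of 300 frames decodable: 200 must be dropped
```

Two out of every three frames have no chance of being decoded in time, so the only question is which frames to sacrifice.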

This is also complicated by our video stack's legacy threading model. Our first video decoding implementation did the decoding of video and audio streams in the same thread. We assumed that we were using software decoding, because we were supporting Ogg/Theora/Vorbis, and later WebM/VP8/Vorbis, which are only commonly available in software.

The pseudo code for our "decode thread" used to go something like this:
while (!AudioDecodeFinished() || !VideoDecodeFinished()) {
  if (!HaveEnoughAudioDecoded()) {
    // Decode some audio.
  }
  if (!HaveEnoughVideoDecoded()) {
    // Decode some video.
  }
  if (HaveLotsOfAudioDecoded() && HaveLotsOfVideoDecoded()) {
    // Sleep until we're running low on decoded data.
  }
}

This was an unfortunate design, but it certainly made some parts of our code much simpler and easier to write.

We've recently refactored our code, so it no longer looks like this, but for some of the older backends that we support (Ogg, WebM, and MP4 using GStreamer on Linux), the pseudocode is still effectively (but not explicitly or obviously) this. MP4 on Windows, MacOSX, and Android in Firefox 36 and later now decode asynchronously, so we are not limited to decoding only on one thread.

The consequence of decoding audio and video on the same thread only really bites on low end hardware. I have an old Lenovo x131e netbook, which on some videos can take 400ms to decode a Theora keyframe. Since we use the same thread to decode audio as video, if we don't have at least 400ms of audio already decoded while we're decoding such a frame, we'll get an "audio underrun". This is where we don't have enough audio decoded to keep up with playback, and so we end up glitching the audio stream. This is very jarring to the listener.

Humans are very sensitive to sound; the audio stream glitching is much more jarring to a human observer than dropping a few video frames. The tradeoff we made was to sacrifice the video stream playback in order to not glitch the audio stream playback. This is where skip-to-next-keyframe comes in.

With skip-to-next-keyframe, our pseudo code becomes:

while (!AudioDecodeFinished() || !VideoDecodeFinished()) {
  if (!HaveEnoughAudioDecoded()) {
    // Decode some audio.
  }
  if (!HaveEnoughVideoDecoded()) {
    bool skipToNextKeyframe =
      (AmountOfDecodedAudio < LowAudioThreshold()) ||
      HaveRunOutOfDecodedVideoFrames();
    // Decode some video, skipping ahead to the next keyframe if
    // skipToNextKeyframe is true.
  }
  if (HaveLotsOfAudioDecoded() && HaveLotsOfVideoDecoded()) {
    // Sleep until we're running low on decoded data.
  }
}

We also monitor how long a video frame decode takes, and if a decode takes longer than the low-audio-threshold, we increase the low-audio-threshold.
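The adaptation rule can be sketched like this (the class and method names are illustrative, not Firefox's actual identifiers): if a single video frame takes longer to decode than the current low-audio threshold, the threshold grows so skipping engages earlier next time.

```python
# Sketch of the threshold adaptation described above (names are
# hypothetical): keep at least one worst-case frame-decode's worth of
# audio buffered before we risk a long video decode.
class LowAudioThreshold:
    def __init__(self, initial_ms=300):
        self.value_ms = initial_ms

    def observe_frame_decode(self, decode_ms):
        # A decode slower than the threshold means an underrun was possible;
        # raise the threshold to the observed decode time.
        if decode_ms > self.value_ms:
            self.value_ms = decode_ms

threshold = LowAudioThreshold(initial_ms=300)
threshold.observe_frame_decode(400)  # e.g. a slow Theora keyframe on a netbook
print(threshold.value_ms)  # → 400
```

This matches the behavior the post describes at startup: a couple of early glitches until the threshold adapts to a large enough value, after which playback is smooth.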

If we pass a true value for skipToNextKeyframe to the decoder, it is supposed to give up and skip its decode up to the next keyframe. That is, don't try to decode anything between now and the next keyframe.

Video frames are typically encoded as a sequence of full images (called "key frames", "reference frames", or I-frames in H.264) and then some number of frames which are "diffs" from the key frame (P-frames in H.264 speak). (H.264 also has B-frames, which are a combination of diffs of frames both before and after the current frame, which can lead the encoded stream to be muxed out-of-order.)

The idea here is that we deliberately drop video frames in the hope that we give time back to the audio decode, so we are less likely to get audio glitches.
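In terms of the frame types above, "skip to the next keyframe" just means discarding everything in the stream up to the next I-frame, since nothing before it can be decoded without its predecessors anyway. A toy sketch (not Firefox code):

```python
# Illustrative sketch: skipping means jumping over P/B frames until the
# next "I" (key) frame, which can be decoded without any earlier frames.
def skip_to_next_keyframe(frames, pos):
    """Return the index of the next keyframe after pos."""
    for i in range(pos + 1, len(frames)):
        if frames[i] == "I":
            return i
    return len(frames)  # no more keyframes: skip to end of stream

stream = ["I", "P", "P", "B", "P", "I", "P", "P"]
print(skip_to_next_keyframe(stream, 0))  # → 5: the P/B frames in between are dropped
```

Every frame between the current position and that index is decode time handed back to the audio stream.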

Our implementation of this idea is not particularly good.

Often on low end Windows machines playing HD videos without hardware accelerated video decoding, you'll get a run of say half a second of video decoded, and then we'll skip everything up to the next keyframe (a couple of seconds), before playing another half a second, and then skipping again, ad nauseam, giving a slightly weird experience. Or in the extreme, you can end up with only the keyframes decoded, or even no frames if we can't get the keyframes decoded in time. Or if it works well enough, you can still get a couple of audio glitches at the start of playback until the low-audio-threshold adapts to a large enough value, and then playback is smooth.

The FirefoxOS MediaOmxReader also never implemented skip-to-next-keyframe correctly, so our behavior there is particularly bad. This is compounded by the fact that FirefoxOS typically runs on lower end hardware anyway. The MediaOmxReader doesn't actually skip decode to the next keyframe, it decodes to the next keyframe. This causes the video decode to hog the decode thread for even longer, which gives the audio decode even less time; the exact opposite of what you want to do. What it should do is skip the demux of video up to the next keyframe, but if I recall correctly there were bugs in the Android platform's video decoder library that FirefoxOS is based on that made this unreliable.

All these issues occur because we share the same thread for audio and video decoding. This year we invested some time refactoring our video playback stack to be asynchronous. This enables backends that support it to do their decoding asynchronously, on their own threads. Since audio now decodes on a separate thread from video, we should have glitch-free audio even when the video decode can't keep up, even without engaging skip-to-next-keyframe. We still need to do something like skipping the video decode when it falls behind, but it can probably engage less aggressively.

I did a quick test the other day on a low end Windows 8.0 tablet with an Atom Z2760 CPU with skip-to-next-keyframe disabled and async decoding enabled, and although the video decode falls behind and gets out of sync with audio (frames are rendered late) we never glitched audio.

So I think it's time to revisit our skip-to-next-keyframe logic, since we don't need to sacrifice video decode to ensure that audio playback doesn't glitch.

When using async decoding we still need some mechanism like skip-to-next-keyframe to ensure that when the video decode falls behind it can catch up. The existing logic to engage skip-to-next-keyframe also performs that role, but often we enter skip-to-next-keyframe and start dropping frames when video decode actually could keep up if we just gave it a chance. This often happens when switching streams during MSE playback.

Now that we have async decoding, we should experiment with modifying the HaveRunOutOfDecodedVideoFrames() logic to be more lenient, to avoid unnecessary frame drops during MSE playback. One idea would be to only engage skip-to-next-keyframe if we've missed several frames. We need to experiment on low end hardware.

Planet Mozilla: Global Posting Privileges on the Mozilla Discussion Forums

Have you ever tried to post a message to a Mozilla discussion forum, particularly one you haven’t posted to before, and received back a “your message is held in a queue for the moderator” message?

Turns out, if you are subscribed to at least one forum in its mailing list form, you get global posting privileges to all forums via all mechanisms (mail, news or Google Groups). If you aren’t so subscribed, you have to be whitelisted by the moderator on a per-forum basis.

If this sounds good, and you are looking for a nice low-traffic list to use to get this privilege, try mozilla.announce.

Planet Mozilla: Can Mozilla be trusted with privacy?

A year ago I would have certainly answered the question in the title with “yes.” After all, who else if not Mozilla? Mozilla has been living the privacy principles which we took for the Adblock Plus project and called our own. “Limited data” is particularly something that is very hard to implement and defend against the argument of making informed decisions.

But maybe I’ve simply been a Mozilla contributor way too long and don’t see the obvious signs any more. My colleague Felix Dahlke brought my attention to the fact that Mozilla is using Google Analytics and Optimizely (trusted third parties?) on most of their web properties. I cannot really find a good argument why Mozilla couldn’t process this data in-house; insufficient resources certainly isn’t it.

And then there is Firefox Health Report and Telemetry. Maybe I should have been following the discussions, but I simply accepted the prompt when Firefox asked me — my assumption was that it’s anonymous data collection and cannot be used to track the behavior of individual users. I was all the more surprised to read this blog post explaining how useful unique client IDs are for analyzing data. Mind you, not the slightest sign of concern about the privacy invasion here.

Maybe somebody else actually cared? I opened the bug but the only statement on privacy is far from being conclusive — yes, you can opt out and the user ID will be removed then. However, if you don’t opt out (e.g. because you trust Mozilla) then you will continue sending data that can be connected to a single user (and ultimately you). And then there is this old discussion about the privacy aspects of Firefox Health Reporting, a long and fruitless one it seems.

Am I missing something? Should I be disabling all feedback functionality in Firefox and recommend that everybody else do the same?

Side-note: Am I the only one who is annoyed by the many Mozilla bugs lately which are created without a description and provide zero context information? Are there so many decisions being made behind closed doors or are people simply too lazy to add a link?

Planet Mozilla: Web Literacy Lensing: Identity


Ever since version 1 of the Web Literacy Map came out, I’ve been waiting to see people take it and adjust it or interpret it for specific educational endeavors that are outside the wheelhouse of “teach the web”. As I’ve said before, I think the web can be embedded into anything, and I want to see the anything embedded into the web. I’ve been wanting to see how people put a lens on top of the web literacy map and combine teaching the web with educating a person around Cognitive Skill X. I’ve had ideas, but never put them out into the world. I was kind of waiting for someone to do it for me (ahem, Web Literacy community :P). Lately I’ve been realizing that I work to develop socio-emotional skills while I teach the web, and I wanted to see if I could look at the Web Literacy Map from a personal, but social (e.g. psychosocial) angle. What, exactly, does web literacy mean in the context of Identity?


First things first - there’s a media education theory (in this book) suggesting that technology has complicated our “identity”. It’s worth mentioning because it’s interesting, and I think it’s worth noting that I didn’t consider all the nuances of these various identities in thinking about how the Web Literacy Map becomes the Web Literacy Map for Identity. We as human beings have multiple, distinct identities we have to deal with in life. We have to deal with who we are with family vs with friends vs alone vs professionally regardless of whether or not we are online, but with the development of the virtual space, the theory suggests that identity has become even more complicated. Additionally, we now have to deal with:
  • The Real Virtual: an anonymous online identity that you try on. Pretending to be a particular identity online because you are curious as to how people react to it? That’s not pretending, really, it’s part of your identity that you need answers to curiosities.
  • The Real IN Virtual: an online identity that is affiliated with an offline identity. My name is Laura offline as well. Certain aspects of my offline personality are mirrored in the online space. My everyday identity is (partially) manifested online.
  • The Virtual IN Real: a kind of hybrid identity that you adopt when you interact first in an online environment and then in the physical world. People make assumptions about you when they meet you for the first time. Technology partially strips us of certain communication mannerisms (e.g. Body language, tone, etc), so those assumptions are quite different if you met through technology and then in real life.
  • The Virtual Real: an offline identity from a compilation of data about a particular individual. Shortly: Identity theft.
So, back to the Web Literacy Map: Identity - As you can gather from a single theory about the human understanding of “self”, Identity is a complicated topic anyway. But I like thinking about complicated problems. So here’s my first thinking about how Identity can be seen as a lens on top of the Web Literacy Map.

Exploring Identity (and the web)

  • Navigation – Identity is personal, so maybe part of web literacy is about personalizing your experience. Perhaps skills become more granular when we talk about putting a lens on the Map? Example granularity: common features of the browser skill might break down into “setting your own homepage” and “pinning apps and bookmarks”.
  • Web Mechanics – I didn’t find a way to lens this competency. It’s the only one I couldn’t. Very frustrating to have ONE that doesn’t fit. What does that say about Web Mechanics or the Web Literacy Map writ large?
  • Search – Identity is manifested, so your tone and mood might dictate what you search for and how you share it. Are you a satirist? Are you funny? Are you serious or terse? Search is a connective competency under this lens because it connects your mood/tone to your manifestation of identity. Example skill modification/addition: Locating or finding desired information within search results ——> using specialized search machines to find desired emotional expression (e.g. GIPHY!)
  • Credibility – Identity is formed through beliefs and faith, and I wouldn’t have a hard time arguing that those things influence your understanding of credible information. If you believe something and someone confirms your belief, you’ll likely find that person more credible than someone who rejects your belief. Example skill modification/addition: Comparing information from a number of sources to judge the trustworthiness of content ——> Comparing information from a number of sources to judge the trustworthiness of people
  • Security – Identity is influenced heavily by relationships. Keeping other people’s data secure seems like part of the puzzle, and there’s something about the innate need to keep people who have influenced your identity positively secure. I don’t have an example for this one off the top of my head, but it’s percolating.

Building Identity (and the web)

  • Composing for the Web, Remixing, and Coding/Scripting – These allow us to be expressive about our identities. The expression is the WHY of any of this, so directly connected to your own identity. It connects into your personality, motivations, and a mess of thinking skills we need to function in our world. Skills underneath these competencies could be modified to incorporate those emotional and psychological traits of that expression.
  • Design and Accessibility – Values are inseparable from our identities. I think design and accessibility is a competency that radiates a person’s values. It’s ok to back burner this if you’re being expressive for the sake of being expressive, but if you have a message, if you are being expressive in an effort to connect with other people (which, let’s face it, is part of the human condition), design and accessibility is a value. Not sure how I would modify the skills…
  • Infrastructure – I was thinking that this one pulled in remembrance as a part of identity. Exporting data, moving data, understanding the internet stack and how to adequately use it so that you can keep a record of your or someone else’s online identity has lots of implications for remembrance, which I think influences who we are as much as anything else. Example skill modification/addition: “Exporting and backing up your data from web services” might lead to “Analyzing historical data to determine identity shifts”

That's all for now. I've thought a little about the final strand, but I'm going to save it for next year. I would like to hear what you all think. Is this a useful experiment for the Web Literacy Map? Does this kind of thinking help home in on ways to structure learning activities that use the web? Can you help me figure out what my brain is doing? Happy holidays everyone ;)

Planet Mozilla: Why the Mercurial Server is Slow

At Mozilla, I often hear statements like "Mercurial is slow." That's a very general statement. Depending on the context, it can mean one or more of several things:

  • My Mercurial workflow is not very efficient
  • hg commands I execute are slow to run
  • hg commands I execute appear to stall
  • The Mercurial server I'm interfacing with is slow

I want to spend time talking about a specific problem: why the server is slow.

What Isn't Slow

If you are talking to the server over HTTP or HTTPS, there should not currently be any server performance issues. Our Mercurial HTTP servers are pretty beefy and are able to absorb a lot of load.

If the server is slow, chances are:

  • You are doing something like cloning a 1+ GB repository.
  • You are asking the server to do something really expensive (like generate JSON for 100,000 changesets via the pushlog query interface).
  • You don't have a high bandwidth connection to the server.
  • There is a network event.

Previous Network Events

There have historically been network capacity issues in the datacenter where the server is hosted (SCL3).

During Mozlandia, excessive traffic essentially saturated the SCL3 network. During this time, requests to the Mercurial servers were timing out: Mercurial traffic just couldn't traverse the network. Fortunately, events like this are quite rare.

Up until recently, Firefox release automation was effectively overwhelming the network by doing some clownshoesy things.

For example, gaia-central was being cloned all the time. We had a ~1.6 GB repository being cloned over a thousand times per day. We were transferring close to 2 TB of gaia-central data out of the Mercurial servers per day.
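A quick back-of-the-envelope check of those numbers (the clone count of 1200 is an illustrative figure for "over a thousand"):

```python
# Sanity-checking the bandwidth math: a ~1.6 GB repository cloned over
# a thousand times per day lands in the ~2 TB/day range the post reports.
repo_gb = 1.6
clones_per_day = 1200  # hypothetical figure for "over a thousand times per day"
tb_per_day = repo_gb * clones_per_day / 1024
print(round(tb_per_day, 2))  # → 1.88
```

Almost 2 TB per day from a single repository, before counting build/tools, mozharness, or pushlog traffic.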

We also found issues with pushlogs sending 100+ MB responses.

And the build/tools repo was getting cloned for every job. Ditto for mozharness.

In all, we identified a few terabytes of excessive Mercurial traffic that didn't need to exist. This excessive traffic was saturating the SCL3 network and slowing down not only Mercurial traffic, but other traffic in SCL3 as well.

Fortunately, people from Release Engineering were quick to respond to and fix the problems once they were identified. The problem is now firmly in control. Although, given the scale of Firefox's release automation, any new system that comes online that talks to version control is susceptible to causing server outages. I've already raised this concern when reviewing some TaskCluster code. The thundering herd of automation will be an ongoing concern. But I have plans to further mitigate risk in 2015. Stay tuned.

Looking back at our historical data, it appears that we hit these network saturation limits a few times before we reached a tipping point in early November 2014. Unfortunately, we didn't realize this because up until recently, we didn't have a good source of data coming from the servers. We lacked the tooling to analyze what we had. We lacked the experience to know what to look for. Outages are effective flashlights. We learned a lot and know what we need to do with the data moving forward.

Available Network Bandwidth

One person pinged me on IRC with the comment "Git is cloning much faster than Mercurial." I asked for timings, and the Mercurial clone wall time for Firefox was much higher than I expected.

The reason was network bandwidth. This person was performing a Git clone between 2 hosts in EC2 but was performing the Mercurial clone between the Mercurial server and a host in EC2. In other words, they were partially comparing the performance of a 1 Gbps network against a link over the public internet! When they did a fair comparison by removing the network connection as a variable, the clone times rebounded to what I expected.

The single-homed nature of the service in a single datacenter in northern California is not only bad for disaster recovery reasons, it also means that machines far away from SCL3 or connecting to SCL3 over a slow network aren't getting optimal performance.

In 2015, expect us to build out a geo-distributed service so that connections are hitting a server that is closer and thus faster. This will probably be targeted at Firefox release automation in AWS first. We want those machines to have a fast connection to the server and we want their traffic isolated from the servers developers use so that hiccups in automation don't impact the ability for humans to access and interface with source code.

NFS on SSH Master Server

If you connect to the server over HTTP or HTTPS, you are hitting a pool of servers behind a load balancer. These servers have repository data stored on local disk, where I/O is fast. In reality, most I/O is serviced by the page cache, so local disks don't come into play.

If you connect over SSH, you are hitting a single master server. Its repository data is hosted on an NFS mount. I/O on the NFS mount is horribly slow. Any I/O intensive operation performed on the master is much, much slower than it should be. Such is the nature of NFS.

We'll be exploring ways to mitigate this performance issue in 2015. But it isn't the biggest source of performance pain, so don't expect anything immediately.

Synchronous Replication During Pushes

When you hg push to the server, the changes are first made on the SSH/NFS master server. They are subsequently mirrored out to the HTTP read-only slaves.

As is currently implemented, the mirroring process is performed synchronously during the push operation. The server waits for the mirrors to complete (to a reasonable state) before it tells the client the push has completed.

Depending on the repository, the size of the push, and server and network load, mirroring commonly adds 1 to 7 seconds to push times. This is time when a developer is sitting at a terminal, waiting for hg push to complete. The time for Try pushes can be larger: 10 to 20 seconds is not uncommon (but fortunately not the norm).

The current mirroring mechanism is overly simple and prone to many failures and sub-optimal behavior. I plan to work on fixing mirroring in 2015. When I'm done, there should be no user-visible mirroring delay.

Pushlog Replication Inefficiency

Up until yesterday (when we deployed a rewritten pushlog extension), the replication of pushlog data from the master to the slaves was very inefficient. Instead of transferring a delta of pushes since the last pull, we were literally copying the underlying SQLite file across the network!

Try's pushlog is ~30 MB. mozilla-central and mozilla-inbound are in the same ballpark. 30 MB x 10 slaves is a lot of data to transfer. These operations were capable of periodically saturating the network, slowing everyone down.

The rewritten pushlog extension performs a delta transfer automatically as part of hg pull. Pushlog synchronization now completes in milliseconds while commonly only consuming a few kilobytes of network traffic.
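The delta-transfer idea can be sketched like this (the data shapes and function are hypothetical, not the extension's actual code): each slave remembers the highest push ID it has seen and asks the master only for pushes after it, instead of copying the whole database.

```python
# Sketch of delta transfer for pushlog replication: only pushes newer
# than the slave's last-seen push ID cross the wire.
def pushlog_delta(master_pushes, last_seen_id):
    """Return only the pushes the slave is missing."""
    return [p for p in master_pushes if p["id"] > last_seen_id]

master = [{"id": 1, "user": "a"}, {"id": 2, "user": "b"}, {"id": 3, "user": "c"}]
print(pushlog_delta(master, last_seen_id=2))  # → [{'id': 3, 'user': 'c'}]
```

A few new pushes amount to kilobytes, versus the ~30 MB-per-slave SQLite copies described above.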

Early indications reveal that deploying this change yesterday decreased the push times to repositories with long push history by 1-3s.

Try

Pretty much any interaction with the Try repository is guaranteed to have poor performance. The Try repository is doing things that distributed versions control systems weren't designed to do. This includes Git.

If you are using Try, all bets are off. Performance will be problematic until we roll out the headless try repository.

That being said, we've made changes recently to make Try perform better. The median time for pushing to Try has decreased significantly in the past few weeks. The first dip in mid-November was due to upgrading the server from Mercurial 2.5 to Mercurial 3.1 and from converting Try to use generaldelta encoding. The dip this week has been from merging all heads and from deploying the aforementioned pushlog changes. Pushing to Try is now significantly faster than 3 months ago.

Conclusion

Many of the reasons for slowness are known. More often than not, they are due to clownshoes or inefficiencies on Mozilla's part rather than fundamental issues with Mercurial.

We have made significant progress at making the service faster. But we are not done. We are continuing to invest in fixing the sub-optimal parts and making it faster yet. I'm confident that within a few months, nobody will be able to say that the servers are a source of pain like they have been for years.

Furthermore, Mercurial is investing in features to make the wire protocol faster, more efficient, and more powerful. When deployed, these should make pushes faster on any server. They will also enable workflow enhancements, such as Facebook's experimental extension to perform rebases as part of push (eliminating push races and having to manually rebase when you lose the push race).

Planet MozillaClientID in Telemetry submissions

A new piece of functionality landed recently that allows grouping Telemetry sessions by profile ID. Being able to group sessions by profile turns out to be extremely useful for several reasons. For instance, as some users tend to generate an enormous number of sessions daily, analyses tend to be skewed towards those users.

Take uptime duration: if we just consider the uptime distribution of all sessions collected in a certain timeframe on Nightly, we get a distribution with a median duration of about 15 minutes. But that number isn’t really representative of the median uptime for our users. If we group the submissions by Client ID and compute the median uptime duration for each group, we can build a new distribution that is more representative of the general population:
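The per-client aggregation described above can be sketched like this (the field names and numbers are hypothetical; real Telemetry pings differ):

```python
from collections import defaultdict
from statistics import median

# Each session submission carries a client ID and an uptime in minutes
# (hypothetical field names, for illustration).
sessions = [
    {"clientID": "a", "uptime": 5}, {"clientID": "a", "uptime": 7},
    {"clientID": "a", "uptime": 6},  # heavy submitter: many short sessions
    {"clientID": "b", "uptime": 120},
]

# Naive approach: the median over all sessions is dominated by client "a".
overall = median(s["uptime"] for s in sessions)  # 6.5

# Group by client, take a per-client median, then build the distribution
# over those per-client values instead.
by_client = defaultdict(list)
for s in sessions:
    by_client[s["clientID"]].append(s["uptime"])

per_client = [median(v) for v in by_client.values()]  # [6, 120]
representative = median(per_client)  # 63 -- no longer skewed toward "a"
```

The heavy submitter now counts once, like everyone else, so the resulting distribution reflects users rather than sessions.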


And we can repeat the exercise for the startup duration, which is expressed in ms:

Our dashboards are still based on the session distributions, but it’s likely that we will provide both session-based and user-based distributions in our next-gen Telemetry dashboard.


Please keep in mind that:

  • Telemetry does not collect privacy-sensitive data.
  • You do not have to trust us, you can verify what data Telemetry is collecting in the about:telemetry page in your browser and in aggregate form in our Telemetry dashboards.
  • Telemetry is an opt-in feature on release channels and a feature you can easily disable on other channels.
  • The new Telemetry Client ID does not track users, it tracks Telemetry performance & feature-usage metrics across sessions.

Planet MozillaInput status: December 18th, 2014


It's been 3 months since the last status report. Crimey! That's not great, but it's even worse because it makes this report crazy long.

First off, lots of great work done by Adam Okoye, L. Guruprasad, Bhargav Kowshik, and Deshraj Yadav! w00t!

Second, we did a ton of stuff and broke 1,000 commits! w00t!

Third, I've promised to do more frequent status reports. I'll go back to one every two weeks.



High-level summary:

  • Lots of code quality work.
  • Updated ElasticUtils to 0.10.1 so we can upgrade our Elasticsearch cluster.
  • Heartbeat v2.
  • Overhauled the generic feedback form.
  • remote-troubleshooting data capture.
  • contribute.json file.
  • Upgrade to Django 1.6.
  • Upgrade to Python 2.7!!!!!
  • Improved and added to pre-commit and commit-msg linters.

Landed and deployed:

  • ce95161 Clarify source and campaign parameters in API
  • 286869e bug 788281 Add update_product_details to deploy
  • bbedc86 bug 788281 Implement basic events API
  • 1c0ff9f bug 1071567 Update ElasticUtils to 0.10.1
  • 7fd52cd bug 1072575 Rework smart_timedelta
  • ce80c56 bug 1074276 Remove abuse classification prototype
  • 7540394 bug 1075563 Fix missing flake8 issue
  • 11e4855 bug 1025925 Change file names (Adam Okoye)
  • 23af92a bug 1025925 Change all instances of to (Adam Okoye)
  • ae28c60 bug 1025925 Change instances of of util relating to fjord/base/ (Adam Okoye)
  • bc77280 Add Adam Okoye to CONTRIBUTORS
  • 545dc52 bug 1025925 Change test module file names
  • fc24371 bug 1041703 Drop prodchan column
  • 9097b8f bug 1079376 Add error-email -> response admin view
  • d3cfdfe bug 1066618 Tweak Gengo account balance warning
  • a49f1eb bug 1020303 Add rating column
  • 55fede0 bug 1061798 Reset page number Resets page number when filter checkbox is checked (Adam Okoye)
  • 1d4fd00 bug 854479 Fix ui-lightness 404 problems
  • a9bf3b1 bug 940361 Change size on facet calls
  • c2b2c2b bug 1081413 Move url validation code into fjord_utils.js Rewrote url validation code that was in generic_feedback.js and added it to fjord_utils.js (Adam Okoye)
  • 4181b5e bug 1081413 Change code for url validation (Adam Okoye)
  • 2cd62ad bug 1081413 Add test for url validation (Adam Okoye)
  • f72652a bug 1081413 Correct operator in test_fjord_utils.js (aokoye)
  • c9b83df bug 1081997 Fix unicode in smoketest
  • cba9a2d bug 1086643 bug 1086650 Redo infrastructure for product picker version
  • e8a9cc7 bug 1084387 Add on_picker field to Product
  • 2af4fca bug 1084387 Add on_picker to management forms
  • 1ced64a bug 1081411 Create format test (Adam Okoye)
  • 00f8a72 Add template for mentored bugs
  • e95d0f1 Cosmetic: Move footnote
  • d0cb705 Tweak triaging docs
  • d5b35a2 bug 1080816 Add A/B for ditching chart
  • fa1a47f Add notes about running tests with additional warnings
  • ddde83c Fix mimetype -> content_type and int division issue
  • 2edb3b3 bug 1089650 Add a contribute.json file (Bhargav Kowshik)
  • d341977 bug 1089650 Add test to verify that the JSON is valid (Bhargav Kowshik)
  • dcb9380 Add Bhargav Kowshik to CONTRIBUTORS
  • 7442513 Fix throttle test
  • f27e31c bug 1072285 Update Django, django-nose and django-cache-machine
  • dd74a3c bug 1072285 Update django-adminplus
  • ececdf7 bug 1072285 Update requirements.txt file
  • 6669479 bug 1093341 Tweak Gengo account balance warning
  • f233aab bug 1094197 Fix JSONObjectField default default
  • 11193d7 Tweak chart display
  • 9d311ca Make journal.Record not derive from ModelBase
  • f778c9d Remove all Heartbeat v1 stuff
  • e5f6f4d Switch to
  • cab7050 bug 1092296 Implement heartbeat v2 data model
  • 5480c42 bug 1097352 Response view is viewable by all
  • 46b5897 bug 1077423 Overhaul generic feedback form dev
  • da31b47 bug 1077423 Update smoke tests for generic feedback form dev
  • e84094b Fix l10n email template
  • d6c8ea9 Remove gettext calls for product dashboards
  • e1a0f74 bug 1098486 Remove under construction page
  • 032a660 Fix script history table
  • 19cec37 Fix JSONObjectField
  • 430c462 Improve display_history for l10n_status
  • d6c18c6 Windows NT 6.4 -> Windows 10
  • 73a4225 bug 1097847 Update django-grappelli to 2.5.6
  • 4f3b9c7 bug 1097847 Fix custom views in admin
  • 3218ea3 Fix JSONObjectfield.value_to_string
  • 67c6bf9 Fix RecordManager.log regarding instances
  • a5e8610 bug 1092299 Implement Heartbeat v2 API
  • 17226db bug 1092300 Add Heartbeat v2 debugging views
  • 11681c4 Rework env view to show python and django version
  • 9153802 bug 1087387 Add feedback_response.browser_platform
  • f5d8b56 bug 1087387 bug 1096541 Clean up feedback view code
  • c9c7a81 bug 1087391 Fix POST API user-agent inference code
  • 4e93fc7 bug 1103024 Gengo kill switch
  • de9d1c7 Capture the user-agent
  • 4796e4e bug 1096541 Backfill browser_platform for Firefox dev
  • f5fe5cf bug 1103141 Add experiment_version to HB db and api
  • 98c40f6 bug 1103141 Add experiment_version to views
  • 0996148 bug 1103045 Create a menial hb survey view
  • 965b3ee bug 1097204 Rework product picker
  • 6907e6f bug 1097203 Add link to SUMO
  • e8f3075 bug 1093843 Increase length of product fields
  • 2c6d24b bug 1103167 Raise GET API throttle
  • d527071 bug 1093832 Move feedback url documentation
  • 6f4eb86 Abstract out python2.6 in deploy script
  • f843286 Fix to take pythonbin as arg
  • 966da77 Add celery health check
  • 1422263 Add space before subject of celery health email
  • 5e07dbd [heartbeat] Add experiment1 static page placeholders
  • 615ccf1 [heartbeat] Add experiment1 static files
  • d8822df [heartbeat] Add SUMO links to sad page
  • 3ee924c [heartbeat] Add twitter thing to happy page
  • d87a815 [heartbeat] Change thank you text
  • 06e73e6 [heartbeat] Remove cruft, fix links
  • 8208a72 [heartbeat] Fix "addons"
  • 2eca74c [heartbeat] Show profile_age always because 0 is valid
  • 4c4598b bug 1099138 Fix "back to picker" url
  • b2e9445 Add note about "Commit to VCS" in l10n docs
  • 9c22705 Heartbeat: instrument email signup, feedback, NOT Twitter (Gregg Lind)
  • 340adf9 [heartbeat] Fix DOCTYPE and ispell pass
  • 486bf65 [heartbeat] Change Thank you text
  • d52c739 [heartbeat] Switch to use Input GA account
  • f07716b [heartbeat] Fix favicons
  • eff9d0b [heartbeat] Fixed page titles
  • 969c4a0 [heartbeat] Nix newsletter form for a link
  • dce6f86 [heartbeat] Reindent code to make it legible
  • dad6d82 bug 1099138 Remove [DEV] from title
  • 4204b43 fixed typo in getting_started.rst (Deshraj Yadav)
  • 7042ead bug 1107161 Fix hb answers view
  • a024758 bug 1107809 Fix Gengo language guesser
  • 808fa83 bug 1107803 Rewrite Response inference code
  • d9e8ffd bug 1108604 Tweak paging links in hb data view
  • 00e8628 bug 1108604 Add sort and display ts better in hb data view
  • 17b908a bug 1108604 Change paging to 100 in hb data view
  • 39dc943 bug 1107083 Backfill versions
  • fee0653 bug 1105512 Rip out old generic form
  • b5bb54c Update grappelli in requirements.txt file
  • f984935 bug 1104934 Add ResponseTroubleshootingInfo model
  • c2e7fd3 bug 950913 Move 'TRUNCATE_LENGTH' and make accessable to other files (Adam Okoye)
  • b6f30e1 bug 1074315 Ignore deleted files for linting in pre-commit hook (L. Guruprasad)
  • 4009a59 Get list of .py files to lint using just git diff (L. Guruprasad)
  • c81da0b bug 950913 Access TRUNCATE_LENGTH from generic_feedback template (Adam Okoye)
  • 9e3cec6 bug 1111026 Fix hb error page paging
  • b89daa6 Dennis update to master tip
  • 61e3e18 Add django-sslserver
  • 93d317b bug 1104935 Add remote.js
  • ad3a5cb bug 1104935 Add browser data section to generic form
  • cc54daf bug 1104935 Add browserdata metrics
  • 31c2f74 Add jshint to pre-commit hook (L. Guruprasad)
  • 68eae85 Pretty-print JSON blobs in hb errorlog view
  • 8588b42 bug 1111265 Restrict remote-troubleshooting to Firefox
  • b0af9f5 Fix sorby error in hb data view
  • 8f622cf bug 1087394 Add browser, browser_version, and browser_platform to response view (Adam Okoye)
  • c4b6f85 bug 1087394 Change Browser Platform label (Adam Okoye)
  • eb1d5c2 Disable expansion of $PATH in the provisioning script (L. Guruprasad)
  • 59eebda Cosmetic test changes
  • aac733b bug 1112210 Tweak remote-troubleshooting capture
  • 6f24ce7 bug 1112210 Hide browser-ask by default
  • 278095d bug 1112210 Note if we have browser data in response view
  • 869a37c bug 1087395 Add fields to CSV output (Adam Okoye)

Landed, but not deployed:

  • 4ee7fd6 Update the name of the pre-commit hook script in docs (L. Guruprasad)
  • d4c5a09 bug 1112084 create requirements/dev.txt (L. Guruprasad)
  • 4f03c48 bug 1112084 Update provisioning script to install dev requirements (L. Guruprasad)
  • 03c5710 Remove instructions for manual installation of flake8 (L. Guruprasad)
  • a36a231 bug 1108755 Add a git commit message linter (L. Guruprasad)

Current head: f0ec99d

Rough plan for the next two weeks

  1. PTO. It's been a really intense quarter (as you can see) and I need some rest. Maybe a nap. Plus we have a deploy freeze through to January, so we can't push anything out anyhow. I hope everyone else gets some rest, too.

That's it!

Planet MozillaGetting “Cannot read property ‘contents’ of undefined” with grunt-contrib-less? Read this…

The last day or so was spent pulling my hair from my head and other regions from my facial region. Why, you ask? Because of this line, spewed back at me in the Terminal: Warning: Cannot read property ‘contents’ of undefined. The warning showed itself after I set up a new project with much the same set … Continue reading Getting “Cannot read property ‘contents’ of undefined” with grunt-contrib-less? Read this…

Planet MozillaThird Wave and Telecommuting

I have been reading Tim Bray’s blogpost on how he started to work in Amazon, and I got ignited by the comment by len and particularly by this (he starts quoting Tim):

“First, I am to­tally sick of working remotely. I want to go and work in rooms with other people working on the same things that I am.”

And that says a lot. Whatever the web has enabled in terms of access, it has proven to be isolating where human emotions matter and exposing where business affairs matter. I can’t articulate that succinctly yet, but there are lessons to be learned worthy of articulation. A virtual glass of wine doesn’t afford the pleasure of wine. Lessons learned.

Although I generally agree with your sentiment (these are really not your Friends, except if they already are), I believe the situation with telecommuting is more complex. I have been telecommuting for the past eight years (or so; yikes, time flies!) and I do like it most of the time. However, it really requires a special type of personality, a special type of environment, a special type of family, and a special type of work to be able to do it well. I know plenty of people who do well working from home (with an occasional stay in a coworking office) and some who just don’t. It has nothing to do with IQ or anything like that; for some people it just works. I have some colleagues who left Red Hat simply because they could not work from home and the nearest Red Hat office was just too far from them.

However, this trivial statement makes me think again about stuff which is much more profound in my opinion. I am a firm believer in the coming of what Alvin and Heidi Toffler called “The Third Wave”. That after the mainly agricultural and mainly industrial societies the world is changing, so that “much that once was is lost” and we don’t know exactly what is coming. One part of this change is substantial change in the way we organize our work. It really sounds weird but there were times when there were no factories, no offices, and most people were working from their homes. I am not saying that the future will be like the distant past, it never is, but the difference makes it clear to me that what is now is not the only possible world we could live in.

I believe that the standard pattern of people leaving their homes in the morning to go to work will be very much diminished in the future. Probably some parts of the industrial world will remain around us (after all, there are still big parts of the agricultural world around us), but I think it might have the same small impact as the agricultural world has on the current one. If the trend of office dissolution continues (and I don’t see a reason why it wouldn’t; in the end, all those office buildings and all that commuting are a terrible waste of money), we can expect a really massive change in almost everything: the way we build homes (suddenly your home is not just a bedroom where you survive the night between two work shifts), transportation, the way we organize our communities (suddenly it does matter who your neighbor is), and of course a lot of social rules will have to change. I think we are absolutely unprepared for this, and we are not talking about it enough. But we should.

Planet MozillaReps Weekly Call – December 18th 2014

Last Thursday we had our regular weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.



  • Privacy Day & Hello Plan.
  • End of the year! What should we do?
  • Mozlandia videos.

Note: Due to the holidays, the next weekly call will be on January 8.

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next year!

Planet MozillaFirefox Automation report – week 45/46 2014

In this post you can find an overview about the work happened in the Firefox Automation team during week 45 and 46.


In our Mozmill-CI environment we had a couple of frozen Windows machines, which were running with 100% CPU load and 0MB of memory used. Those values came from the vSphere client and didn’t give us much information. Henrik checked the affected machines after a reboot, and none of them had any suspicious entries in the event viewer either. But he noticed that most of our VMs were running a very outdated version of the VMware tools, so he upgraded all of them and activated the automatic install during a reboot. Since then the problem is gone. If you see something similar for your virtual machines, make sure to check which version of the VMware tools they are running!

Further work has been done on Mozmill CI. We were finally able to get rid of all traces of Firefox 24.0ESR, since it is no longer supported. We also set up our new Ubuntu 14.04 (LTS) machines in staging and production, which will soon replace the old Ubuntu 12.04 (LTS) machines. A list of the changes can be found here.

Besides all that, Henrik has started to work on the version bump to Jenkins v1.580.1 (LTS), the new and more stable release of Jenkins. Lots of work might be necessary here.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 45 and week 46.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 45 and week 46.

Planet MozillaOPW Week Two – Coming to a Close

Ok so technically it’s the first full week because OPW started last Tuesday, December 9th, but either way – my first almost two weeks of OPW are coming to a close. My experience has been really good so far and I think I’m starting to find my groove in terms of how and where I work best, location-wise. I also think I’ve been pretty productive considering this is the first internship or job I’ve had (not counting the Ascend Project) in over 5 years.

So far I’ve resolved three bugs (some of which were actually feature requests) and have a fourth pull request waiting. I’ve been working entirely in Django so far, primarily editing model and template files. One of the really nice things I’ve gotten out of my project is learning while working on bugs/features.

Four weeks ago I hadn’t done any work with Django and realized that it would likely behoove me to dive into some tutorials (originally my OPW project with Input was going to be primarily in JavaScript, but in reality doing things in Python and Django makes a lot more sense – thus the switch, despite the fact that I’m still working with Input). I started three or four basic tutorials and completed one and a half of them – in short, I didn’t have a whole lot of experience with Django ten days ago. Despite that, all of the looking through and editing of files that I’ve done has really improved my skills, both in terms of syntax and in terms of being able to find information – both where to take information from and where to put it. I look forward to all of the new things that I will learn and put into practice.

Planet MozillaA/B Testing: ‘Sequential form’ vs ‘Simple PayPal’

This post is one of a series where we’re sharing things we’ve learned while running A/B tests during our End of Year fundraising campaign. At Mozilla we strive to ‘work open’, to make ourselves more accountable, and to encourage others … Continue reading

Planet MozillaRemixable Quilts for the Web

Atul Varma set up a quilt for MozCamp Asia 2012 which I thought was a fantastic tool for that type of event. It provided an engaging visualization, was collaboratively created, and allowed a quick and easy way to dive further into the details about the participating groups.

I wanted to use it for a couple of projects, but the code was tied pretty closely to that specific content and layout.

This week I finally got around to moving the code over to Mozilla Webmaker, so it could be easily copied and remixed. I made a couple of changes:

  • Update font to Open Sans
  • Make it easy and clear how to re-theme the colors
  • Allow arbitrary content in squares

The JS code is still a bit too complex for what’s needed, but it works on Webmaker now!


View my demo quilt. Hit the “remix” button to clone it and make your own.

The source for the core JS and CSS is at

Planet MozillaSelf Iteration

How the ultimate insider/outsider found his heart at Mozilla.

Change can be either scary or inspiring. It’s scary when it’s all happening to “you”; when you’re on the inside. Every new inundation of the unexpected can feel like an outright, personal attack. But when you’re an outsider, it’s much easier to see the stage, the actors, and your part in the overall story.

The difference is perspective.


I joined Mozilla in the summer of this year mostly as an outsider. Professionally, my career was birthed, built and cultivated within the “agency” world. In other words, I worked for clients who paid me to sell their stuff to other people. Sometimes businesses were trying to sell services to other businesses. Other times, they were trying to sell things to consumers. Either way, I was the designer responsible for creating the tools that would help them do so. With few exceptions, all this work happened within a team of other agency people, all held accountable to the same bottom line. And while we may have fought and argued over “user value” or the integrity of the “experience”, each of us was paid to further our clients’ goals – not our personal ones. This is the very definition of an insider.

Yet even though I worked on the agency side, I was also very much an outsider. For one thing, agencies are never cause-driven, they’re revenue-driven. Being the kind of guy who asks “why” too much, the blind pursuit of profit isn’t exactly enticing. Up until I joined The Project, the best I could do was defend the interests of users to clients who wanted to make their bosses happy. Their bosses would all want to know if they would profit. And, to be perfectly honest, I just didn’t care much about making yet another nameless executive even MORE money. Eventually, the entire agency/advertising world had totally gutted my sense of purpose in life.

But what else was I supposed to do that still allowed me to practice the craft I loved so dearly?


Like most other folks, I joined Mozilla because I believed in the mission, the values, and the products. The fact that I now have a real opportunity to do good — for the sake of doing good — absolutely helps with the “purpose” part. Of course, this motivation alone doesn’t qualify me to be an official “Mozillian”. Why? Because, like at any other office, I wasn’t on the inside just because I showed up to work. Perhaps a few had overheard that I was the guy working with the Content Services (CS) Team to design ad tiles. Otherwise, I was utterly unknown; an outsider.

So, when the CS Team asked me to explore how advertising could work on New Tab, I immediately thought of two things. First were the hundreds of millions of Firefox users it would affect. Second was my swift death if I screwed it up. Naturally, therefore, the proposition sounded like Mission Impossible. It’s one thing to make big decisions that fundamentally change a popular product; it’s another thing entirely to implement something that truly makes users happier than they were before. And when you start talking about advertising, corporate partnerships, and revenue generation, people tend to react strongly to the very idea of change.


I had a lot of time to think since those first conversations. I also learned a lot more about the industry. In short, advertising on the Web is generally a terrible experience, and yet it pays for the majority of the free content and services we all enjoy. It’s a game rigged to reward those with the deepest pockets and access to troves of user data, and yet it’s all completely invisible to the users themselves. It’s not a system everyone particularly likes (or agrees with), but it’s the one we’ve currently got, and it’s not going anywhere anytime soon.

To change the game, however, you have to get into it. Mozilla can’t just sit on the sidelines and say “you’re doing it wrong.” From the beginning, we all knew that advertising on New Tab would stir emotions and invite criticism. But the more I thought about things, the more I saw it all as an extraordinary opportunity, and less of a problem to solve.

After all, the organization does have a mission to fulfill and users around the world who count on Mozilla to make meaningful change on the Web. It takes time, people, and a lot of money to achieve any measurable success. Although money itself is never our bottom line, the more of it Mozilla can generate, the more we can invest directly in our Firefox products and the Web community as a whole. More importantly, if we can do that through an advertising model that respects users’ privacy, allows for more user control, and rewards content providers for the actual value they provide… then why not try? In doing so, we might even be able to make the Web a more equal place for everybody – ordinary users, content providers, and business or technology partners alike.


I understood enough about the organization and what it stood for to plant a flag, but that’s about it. It was very unclear, especially in the beginning, how this experiment would be received by Firefox users, my peers, or the community. As it turned out, my fears were unfounded. Nobody at Mozilla kicked me out of the clubhouse. Members of the community filed a few bugs. Most crucially, users seemed to be okay with the first release of Enhanced Tiles on New Tab. While there was certainly some blood left on the mat, nobody lost anything important… like an eye, or our values.

In fact, it’s a shared obsession for a Web that’s open to all – one that respects user control and sovereignty – that binds so many different people together at Mozilla. Six months ago, I felt like an awkward middle-schooler trying to blend into the wallpaper. Today, I’m actually a part of something much, much bigger than “me”. Now it’s “us”. And when everyone is in the game for the same reasons, big challenges quickly become new opportunities.

Of course I want to win. I want Firefox, Mozilla, and the mission to succeed. The competitive instinct in me will never die. Only now, there’s so much more at stake, and there’s so much more to be done. The path forward is going to be a serious battle (our competitors and the ad industry writ large aren’t exactly rooting for us), and anyone with skin in the game is going to bleed a little more. Personally, I might even get my ass kicked and my teeth knocked out trying to create a New Tab experience that redefines the relationship users have with advertisers. But that’s okay. I finally have a purpose worthy of the scars.

5.0 – Release

The real test is yet to come. As more of the experiences we design make their way onto the New Tab page in Firefox, hundreds of millions of users around the world will judge for themselves whether or not Mozilla has real vision, or has their back. With the power of their fingertips, ordinary people we’ll never meet will determine what’s valuable to them – not a client.

Honestly, I wouldn’t want it any other way. Because that’s what being a Mozillian is all about: Ensuring the Web is an open platform for opportunity and choice.

More change is coming. Only this time, users will have a more direct role in shaping the future of Firefox, and perhaps even how the Web itself works.

For once, it feels good to be on the inside.

Planet MozillaUnclear Intentions

Starting something is always the hardest part.

Anybody who’s built a technology product intimately understands the law of inertia: getting something started is always more difficult than sustaining momentum.

In the beginning of any new endeavor, who you are and what you’re trying to accomplish is entirely unknown. Until you’ve declared your intentions – or can point to something of value that you’ve produced – it’s difficult to generate much interest. People within your own organization are reluctant to cooperate. Business partners or vendors have all the bargaining power. And the average user has no reason to care. Meanwhile, you need human and financial capital, supplies, services and whatever else required to build something truly awesome.

Personal stuff can be equally challenging. Starting out a career, a business, or even recovery from a major injury, takes a lot of time and determination to build enough momentum before it “feels” like any progress has been made. (Sometimes the effort only seems worth it in retrospect.)

Then there’s the mundane stuff; things that are important, but not critical. We may know that something needs to happen — at some point — but just thinking about it summons immediate fatigue and a twinge of depression. They tend to be the kinds of things we put off for a long as possible, like working out and eating healthier.

Or starting a blog.


As a designer, the biggest challenges I face often start with a blank canvas – literally, albeit a digital one. By this point in my career, designing is the easy part; thinking through all the dependencies is the real challenge. Long gone are the days of simple websites. Today, creative folks are working with engineers, data scientists and business development consultants to build entire technology platforms. Things get complicated quickly.

But for various reasons, I’m infinitely more confident launching a new project than I am starting a personal blog.

For many years, I resisted writing for general consumption on the Web. Does the world need yet another voice shouting into the mob? What would I have to say that’s any different than what everyone else has already said? What if I don’t want to write about just one topic, or write very often? How would I even promote a blog since my online life is decidedly unsocial? More importantly, what would be my raison d’être should people actually start reading posts?

It’s never been clear to me why my voice would matter, or to whom.


Last month, my boss encouraged me to write a post for our team’s blog.

After spending several days trying to write something technical, I gave up on the first draft. Any given feature had a back story just as important as the feature itself. It wasn’t possible to write something about the specifics without getting into the “why” behind the “what”.

As it turned out, the Why was far more important to me, personally, than the particulars of anything I happened to be working on. So I wrote something much more personal instead: my journey at Mozilla, and how I’ve changed as a person and as a designer.

Only now, there wasn’t anywhere to post it because official websites and related blogs are for official announcements – not for introspective posts about things that are entirely unofficial. This is, of course, very understandable. However, my quandary remains: where the hell do I post something that’s work related, but not about actual work?


My job affords me the opportunity to work with some of the most talented, intelligent people on the planet. One such individual recently asked me how I felt about blogging. He shares many of my hangups about writing for an audience on the Web, but still manages to be the most effective communicator I’ve met. So I asked him why he writes, how often, and what about.

Surprisingly, he only writes on occasion, and about diverse topics that range from Web technology to sports. Sometimes it’s a shout out to hard working bus drivers. But he doesn’t write because there’s some underlying expectation or pressure to do so. He writes because he wants to. (This is crazy talk to somebody like me, who’s been trained to think about the commercial value of everything.)

All feelings of acute narcissism aside, he discovered for himself that expressing a unique, personal perspective – even if only on occasion – makes him that much more knowable. It’s not about “personal branding”, social influence scoring, or networking with industry insiders in the hopes of securing a new job one day. It’s far more pragmatic. As Communications Director, his success is dependent on his ability to work with others. Simply by putting himself “out there”, folks are often more willing to share and listen to new ideas because there’s something to relate to. Meanwhile, he has a platform that allows him to be himself without the limitations inherent in writing for official channels. It seemed reasonable.

Regarding the story I’d written, there still wasn’t a place to post it or an audience to address. Nevertheless, he gave me the only advice he could:

Start a personal blog.


Hello world. Dammit.

Planet Mozillahyper

Rust is a shiny new systems language that the lovely folks at Mozilla are building. It focuses on complete memory-safety, and being very fast. Its speed is equivalent to that of C++ code, but you don’t have to manage pointers and the like; the language does that for you. It also catches a lot of irritating runtime errors at compile time, thanks to its fantastic type system. That should mean fewer crashes.

All of this sounds fantastic, let’s use it to make server software! It will be faster, and crash less. One speed bump: there’s no real Rust HTTP library.

rust-http and Teepee

There were two prior attempts at HTTP libraries, but the former (rust-http) has been ditched by its creator, and isn’t very “rust-like”. The latter, Teepee, started in an excellent direction, but life has gotten in the way for the author.1

For the client side only, there exists curl-rust, which is just bindings to libcurl. Ideally, we’d like to have all of the code written in Rust, so we don’t have to trust that the curl developers have written perfectly memory-safe code.

So I started a new one. I called it hyper, cause, y’know, hyper-text transfer protocol.

embracing types

The type system in Rust is quite phenomenal. Wait, what? Did I just say that? Huh, I guess I did. I know, I know, we hate wrestling with type systems. I can’t touch any Java code without cursing the type system. Thanks to Rust’s type inference, though, it’s not irritating at all.

In contrast, I’ve gotten tired of stringly-typed languages; chief among them is JavaScript. Everything is a string. Even property lookups. document.onlood = onload; is perfectly valid, since it just treats onlood as a string. You know a big problem with strings? Typos. If you write JavaScript, you will write typos that aren’t caught until your code is in production, and you see that an event handler is never triggered, or undefined is not a function.

I’m done with that. But if you still want to be able to use strings in your rust code, you certainly can. Just use something else besides hyper.

Now then, how about some examples. It’s most noticeable when using headers. In JavaScript, you’d likely do something like:

req.headers['content-type'] = 'application/json';

Here’s how to do the same using hyper:

req.headers.set(ContentType(Mime(Application, Json, vec![])));

Huh, interesting. Looks like more code. Yes, yes it is. But it’s also code that has been checked by the compiler. It has made sure there are no typos. It also has made sure you didn’t try to set the wrong format for a header. To get the header back out:

match req.headers.get() {
    Some(&ContentType(Mime(Application, Json, _))) => "its json!",
    Some(&ContentType(Mime(top, sub, _))) => "we can handle top and sub",
    None => "le sad"
};

Here’s an example that makes sure the format is correct:

// ...
match req.headers.get() {
    Some(&Date(ref tm)) => {
        // tm is a Tm instance, without you dealing with
        // the various allowed formats in the HTTP spec.
    }
    // ...
}

Yea, yea, there is a stringly-typed API, for those rare cases you might need it, but it’s purposefully not easy to use. You shouldn’t use it. Maybe you think you have a good reason; no you don’t. Don’t use it. Let the compiler check for errors before you hit production.

Let’s look at status codes. Can you tell me what exactly this response means, without looking it up?

res.status = 307;

How about this instead:

res.status = StatusCode::TemporaryRedirect;

Clearly better. You’ve seen code like this:

if res.status / 100 == 4 {}

What if we could make it better:

if res.status.is_client_error() {}

Message WriteStatus

I’ve been bitten by this before, and I can only bet you have been too: trying to write headers after they’ve already been sent. Hyper makes this a compile-time check. If you have a Request<Fresh>, there exists a headers_mut() method to get a mutable reference to the headers, so you can add some. You can’t accidentally write to a Request<Fresh>, since it doesn’t implement Writer. When you are ready to start writing the body, you must specifically convert to a Request<Streaming> using req.start().

Likewise, a Request<Streaming> does not contain a headers_mut() accessor. You cannot change the headers once streaming has started. You can still inspect them, if that’s needed, but no setting! The compiler will make sure you don’t have that mistake in your code.


Both the Server and the Client are generic over NetworkStreams. The default is to use HttpStream, which can handle HTTP over TCP, and HTTPS using openssl. This design also allows something like Servo to implement its own ServoStream, which could handle HTTPS using NSS instead.


These are some high level goals for the library, so you can see the direction:

  • Be fast!
    • The benchmarks preach that we’re already faster than both rust-http and libcurl. And we all know science doesn’t lie.
  • Embrace types.
    • See the above post for how we’re doing this.
  • Provide an excellent http server library for rust webdev.
    • Currently used by Iron, Rustless, Sserve, and others
  • Provide an excellent http client that can be used in place of curl.

The first step for hyper was to get the streams and types working correctly and quickly. With that foundation working underneath, it allows others to write specific implementations without re-doing all of HTTP, such as implementing the XHR2 spec in Servo. Work now has been on providing ergonomic Client and Server implementations.

It looks increasingly likely that hyper will be available to use on Rust-1.0-day.3 There will be an HTTP library for Rust 1.0!

  1. Teepee provided excellent inspiration in some of the design, and all that credit should go to its creator, Chris Morgan. He’s continued to provide insight in the development of hyper, so <3! 

  2. Yes, it differs. It’s been a delight to see that developers are never content with an existing spec.

  3. Rust 1.0 will ship with only stable APIs and features. Some features will only be accessible by the nightlies, and not likely to be stabilized for 1.0. Hyper doesn’t depend on these, and so should be compilable using rustc v1.0.0

Planet MozillaGoogle Concedes Google Code Not Good Enough?

Google recently released an update to End-to-End, their communications security tool. As part of the announcement, they said:

We’re migrating End-To-End to GitHub. We’ve always believed strongly that End-To-End must be an open source project, and we think that using GitHub will allow us to work together even better with the community.

They didn’t specifically say how it was hosted before, but a look at the original announcement tells us it was here – on Google Code. And indeed, when you visit that link now, it says “Project “end-to-end” has moved to another location on the Internet”, and offers a link to the Github repo.

Is Google admitting that Google Code just doesn’t cut it any more? It certainly doesn’t have anything like the feature set of Github. Will we see it in the next round of Google spring-cleaning in 2015?

Planet MozillaThe Benefits of Fellowship

In just a few weeks, the application window to be a 2015 Ford-Mozilla Open Web Fellow will close. In its first year, the Fellows program will place emerging tech leaders at five of the world’s leading nonprofits fighting to keep the Internet as a shared, open and global resource.

We’ve already seen hundreds of applicants from more than 70 countries apply, and we wanted to answer one of the primary questions we’ve heard: why should I be a Fellow?

Fellowships offer unique opportunities to learn, innovate and gain credentials.

Fellowships offer unique opportunities to learn. Representing the notion that ‘the community is the classroom’, Ford-Mozilla Open Web Fellows will have a set of experiences in which they can learn and have an impact while working in the field. They will be at the epicenter of informing how public policy shapes the Internet. They will be working and collaborating together with a collection of people with diverse skills and experiences. They will be learning from other fellows, from the host organizations, and from the broader policy and advocacy ecosystem.

Fellowships offer the ability to innovate in policy and technology. The Fellowship offers the ability to innovate, using technology and policy as your toolset. We believe that the phrase ‘Move fast. Break things.’ is not reserved for technology companies – it is a way of being that Fellows will get to experience first-hand at our host organizations and working with Mozilla.

The Ford-Mozilla Fellowship offers a unique and differentiating credential. Our Fellows will be able to reference this experience as they continue in their career. As they advance in their chosen fields, alums of the program will be able to draw upon their experience leading in the community and working in the open. This experience will also enable them to expand their professional network as they continue to practice at the intersection of technology and policy.

We’ve also structured the program to remove barriers and assemble a Fellowship class that reflects the diversity of the entire community.

This is a paid fellowship with benefits to allow Fellows to focus on the challenging work of protecting the open Web through policy and technology work. Fellows will receive a $60,000 stipend for the 10-month program. In addition, we’ve created a series of supplements including support for housing, relocation, childcare, healthcare, continuing education and technology. We’re also offering visa assistance in order to ensure global diversity in participants.

In short, the Ford-Mozilla Open Web Fellowship is a unique opportunity to learn, innovate and gain credentials. It’s designed to enable Fellows to focus on the hard job of protecting the Internet.

More information on the Fellowship benefits can be found at Good luck to the applicants of the 2015 Fellowship class.

The Ford-Mozilla Open Web Fellows application deadline is December 31, 2014. Apply at

Planet MozillaBulgaria Web Summit

I will be speaking at the Bulgaria Web Summit 2015 in Sofia, Bulgaria, on the 18th of April.

Planet MozillaFirefox Automation report – week 43/44 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 43 and 44.


In preparation for the QA-wide demonstration of Mozmill-CI, Henrik reorganized our documentation to allow everyone a simple local setup of the tool. Along that we did the remaining deployment of latest code to our production instance.

Henrik also worked on the upgrade of Jenkins to the latest LTS version 1.565.3, and we were able to push this upgrade to our staging instance for observation. Furthermore, he got Pulse Guardian support implemented.

Mozmill 2.0.9 and Mozmill-Automation 2.0.9 have been released; if you are curious what is included, you may want to check this post.

One of our major goals over the next 2 quarters is to replace Mozmill as test framework for our functional tests for Firefox with Marionette. Together with the A-Team Henrik got started on the initial work, which is currently covered in the firefox-greenlight-tests repository. More to come later…

Besides all that work, we have to say goodbye to one of our SoftVision team members. October 29th was Daniel’s last day on the project. Thanks for all your work!

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 43 and week 44.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 43 and week 44.

Planet MozillaInitial support for git pushes to mercurial, early testers needed

This push to try was not created by mercurial.

I just landed initial support for pushing to mercurial from git. Considering the scary fact that it’s possible to screw up a repository with bundles with missing content (and, guess what, I found that out the hard way), I have restricted it to local mercurial repositories until I am more confident.

As such, I would need volunteers to use and test it on local mercurial repositories. On top of being limited to local mercurial repositories, it doesn’t support pushing merges that would have been created by git, nor does it support pushing a root commit (one with no parent).

Here’s how you can use it:

$ git clone
$ export PATH=$PATH:$(pwd)/git-remote-hg
$ git clone hg::/path/to/mercurial-repository
$ # work work, commit, commit
$ git push

[ Note: you can still pull from remote mercurial repositories ]

This will push to your local repository, where it would be useful if you could check the push didn’t fuck things up.

$ cd /path/to/mercurial-repository
$ hg verify

That’s the long, thorough version. You may just want to simply do this:

$ cd /path/to/mercurial-repository
$ hg log --stat

Hopefully, you won’t see messages like:

abort: data/build/mozconfig.common.override.i@56d6fdb13666: no match found!

Update: You can also add the following to /path/to/mercurial-repository/.hg/hgrc, which should prevent corruptions from entering the mercurial repository at all:

[server]
validate = True

Update 2: The above setting is now unnecessary, git-remote-hg will set it itself for its push session.

Then you can push with mercurial.

$ hg push

Please note that this is integrated in git in such a way that it’s possible to pass refspecs to git push and do other fancy stuff. Be aware that there are still rough edges on that part, but that your commits will be pushed, even if the resulting state under refs/remotes/ is not very consistent.

I’m planning a replay of several repositories to fully validate pushes don’t send broken bundles, but it’s going to take some time before I can set things up. I figured I’d rather crowdsource until then.

Planet Mozillamach sub-commands

mach - the generic command line dispatching tool that powers the mach command to aid Firefox development - now has support for sub-commands.

You can now create simple and intuitive user interfaces involving sub-actions. e.g.

mach device sync
mach device run
mach device delete

Before, to do something like this would require a universal argument parser or separate mach commands. Both constitute a poor user experience (confusing array of available arguments or proliferation of top-level commands). Both result in mach help being difficult to comprehend. And that's not good for usability and approachability.
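mach implements its own dispatcher, so the snippet below is not mach’s API. It is just a stdlib argparse sketch of the same sub-command pattern, using the hypothetical `device` actions from the example above, to show why sub-parsers beat one universal argument parser:

```python
import argparse

def build_parser():
    # One parser for the "device" command, with each sub-action getting its
    # own sub-parser (and thus its own help text and argument set), instead
    # of a single parser juggling a confusing array of flags.
    parser = argparse.ArgumentParser(prog="device")
    sub = parser.add_subparsers(dest="action", required=True)
    sub.add_parser("sync", help="sync files to the device")
    sub.add_parser("run", help="run the app on the device")
    sub.add_parser("delete", help="delete the app from the device")
    return parser

def dispatch(argv):
    # Parse e.g. ["sync"] and report which sub-action was selected.
    args = build_parser().parse_args(argv)
    return args.action
```

Each sub-parser keeps its arguments and help isolated, which is the usability win the post describes for `mach help`.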

Nothing in Firefox currently uses this feature. Although there is an in-progress patch in bug 1108293 for providing a mach command to analyze C/C++ build dependencies. It is my hope that others write useful commands and functionality on top of this feature.

The documentation for mach has also been rewritten. It is now exposed as part of the in-tree Sphinx documentation.

Everyone should thank Andrew Halberstadt for promptly reviewing the changes!

Planet MozillaGetting Tiles Data From Firefox

Following the launch of Tiles in November, I wanted to provide more information on how data is transmitted into and from Firefox.  Last week, I described how we get Tiles data into Firefox differently from the usual cookie-identified requests.  In this post, I will describe how we report on users’ interactions with Tiles.

As a reminder, we have three kinds of Tiles: the History Tiles, which were implemented in Firefox in 2012, Enhanced Tiles, where we have a custom creative design for a Tile for a site that a user has an existing relationship with, and Directory Tiles, where we place a Tile in a new tab page for users with no browsing history in their profile.  Enhanced and Directory Tiles may both be sponsored, involving a commercial relationship, or they may be Mozilla projects or causes, such as our Webmaker initiative.


We need to be able to report data on users’ interactions with Tiles for two main reasons:

  • to determine if the experience is a good one
  • to report to our commercial partners on volumes of interactions by Firefox users

And we do these things in accordance with our data principles both to set the standards we would like the industry to follow and, crucially, to maintain the trust of our users.


Unless a user has opted out by switching to Classic or Blank, Firefox currently sends a list of the Tiles on a user’s new tab page to Mozilla’s servers, along with data about the user’s interaction with the Tiles, e.g., view, click, or pin.

Directory and Enhanced Tiles are identified by a Tile id (e.g., the “Firefox for Android” Tile has an id of 499 for American English-speaking users, while “Firefox pour Android” has an id of 510 for French-speaking users).  History Tiles do not have an id, so we can only know that the user saw a history screenshot but not what page — except for early release channel Telemetry related experiments, we do not currently send URL information for Tiles, although of course we are able to infer it for the Directory and Enhanced Tiles that we have sent to Firefox.


Our implementation of Tiles uses the minimal actionable dataset, and we protect that data with multiple layers of security.  This means:

  • cookie-less requests
  • encrypted transmission
  • aggressive cleaning of data

We also break up the data into smaller pieces that cannot be reconstructed to the original data.  When our server receives a list of seen Tiles from an IP address, we record that the specific individual Tiles were seen and not the whole list.
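The aggregation step described above can be sketched as follows. This is an illustration of the idea, not Mozilla’s actual server code; the tile ids reuse the 499/510 examples mentioned earlier in the post, and the function name is hypothetical:

```python
from collections import Counter

# Per-(tile, event) counters; the submitted list itself is never stored,
# so the original combination of tiles on a user's page cannot be
# reconstructed from what we keep.
counts = Counter()

def record_impressions(tile_ids, event="view"):
    # Count each tile in the submitted list individually,
    # then discard the list.
    for tile_id in tile_ids:
        counts[(tile_id, event)] += 1

# One new-tab page showing tiles 499 and 510, followed by a click on 499:
record_impressions([499, 510], "view")
record_impressions([499], "click")
```

After these two requests, the server only knows that tile 499 was seen once and clicked once, and tile 510 was seen once, not that they appeared together.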

Sample POST graphic

Sample POST from opening a new tab


With the data aggregated across many users, we can now calculate how many total times a given Tile has been seen and visited.  By dividing the number of clicks by the number of views, we get a click-through-rate (CTR) that represents how valuable users find a particular tile, as well as a pin-rate and a block-rate.  This is sufficient for us to determine both if we think a Tile is useful for a user and also for us to report to a commercial partner.


Calculating the CTR for each tile and comparing them helps us decide if a Tile is useful to many users.  We can already see that the most popular tiles are “Customize Firefox” and “Firefox for Android” (Tile 499, remember) both in terms of clicks and pins.

For an advertiser, we create reports from our aggregated data, and they in turn can see the traffic for their URLs and are able to measure goal conversions on their back end.  Since the Firefox 10th anniversary announcement, which included Tiles and the Firefox Developer Edition, we ran a Directory Tile for the Webmaker initiative.  After 25 days, it had generated nearly 1 billion views, 183 thousand clicks, and 14 thousand pins.
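Plugging the approximate Webmaker figures quoted above into the rate definitions from earlier gives a sense of scale (a sketch; the numbers are the rounded ones from the post, not exact report data):

```python
def click_through_rate(clicks, views):
    # CTR as defined in the post: clicks divided by views.
    return clicks / views

# Approximate Webmaker Directory Tile numbers after 25 days:
views = 1_000_000_000   # "nearly 1 billion views"
clicks = 183_000        # "183 thousand clicks"
pins = 14_000           # "14 thousand pins"

ctr = click_through_rate(clicks, views)      # 0.000183, i.e. about 0.018%
pin_rate = click_through_rate(pins, views)   # 0.000014
```

The same division works for block-rate, which is why aggregated per-tile counters are all the server needs to produce these reports.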

Webmaker Tile

The Webmaker Tile (static and rollover states)

The Webmaker team, meanwhile, are able to see the traffic coming in (as the Tile directs traffic to a distinct URL), and they are able to give attribution to the Tile and track conversions from there:

Webmaker Dashboard’s Analytics dashboard: 182,488 sessions and 3,551 new Webmaker users!


We started with a relatively straightforward implementation to be able to measure how users are interacting with Tiles.  But we’ve already gotten some good ideas on how to make things even better for improved accuracy with less data.  For example, we currently cannot accurately measure how many unique users have seen a given Tile, and traditionally unique identifiers are used to measure that, but HyperLogLog has been suggested as a privacy-protecting technique to get us that data.  A separate idea is that we can use statistical random sampling that doesn’t require all Firefox users to send data while still getting the numbers we need. We’ll test sampling through Telemetry experiments to measure site popularity, and we’ll share more when we get those results.
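The sampling idea mentioned above can be illustrated with a quick simulation. This is just the statistical concept, not the Telemetry experiment’s implementation: if each client reports with probability p, scaling the observed count by 1/p estimates the true total without most clients ever sending data.

```python
import random

def sampled_estimate(total_events, p, seed=1234):
    # Simulate `total_events` clients, each independently deciding to
    # report with probability p, then scale the observed count back up.
    rng = random.Random(seed)
    reported = sum(1 for _ in range(total_events) if rng.random() < p)
    return reported / p

# With a 10% sample of 100,000 events, the estimate lands close to the
# true total even though ~90% of clients sent nothing.
estimate = sampled_estimate(100_000, 0.1)
```

The trade-off is variance: smaller sampling rates mean less data collected but noisier estimates, which is exactly what a Telemetry experiment would measure.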

We would love to hear your thoughts on how we treat users’ data to find the Tiles that users want.  And if you have ideas on how we can improve our data collection, please send them over as well!

Ed Lee on behalf of the Tiles team.

Planet MozillaDavid, Goliath and empires of the web

People in Mozilla have been talking a lot about radical participation recently. As Mitchell said recently, participation will be key to our success as we move into ’the third era of Mozilla’ — the era where we find ways to be successful beyond the desktop browser.


This whole conversation has prompted me to reflect on how I think about radical participation today. And about what drew me to Mozilla in the first place more than five years ago.

For me, a big part of that draw was an image in my mind of Mozilla as the David who had knocked over Microsoft’s Goliath. Mozilla was the successful underdog in a fight I really cared about. Against all odds, Mozilla shook the foundation of a huge empire and changed what was possible with the web. This was magnetic. I wanted to be a part of that.

I started to think about this more the other day: what does it really mean for Mozilla to be David? And how do we win against future Goliaths?

Malcolm Gladwell wrote a book last year that provides an interesting angle on this. He said we often take the wrong lesson from the David and Goliath story, thinking that it’s surprising that such a small challenger could fell such a large opponent.

Gladwell argues that Goliath was much more vulnerable than we think. He was large. But he was also slow, lumbering and had bad eyesight. Moreover, he used the most traditional fighting techniques of his time: the armour and brute force of infantry.

David, on the other hand, actually had a significant set of strategic advantages. He was nimble and good with a sling. A sling used properly, by the way, is a real weapon: it can project a rock at the speed of a .45 caliber pistol. Instead of confronting Goliath with brute force, he used a different and surprising technique to knock over his opponent. He wasn’t just courageous and lucky, he was smart.

Most other warriors would have seen Goliath as invincible. Not David: he was playing the game by his own rules.

In many ways, the same thing happened when we took on Microsoft and Internet Explorer. They didn’t expect the citizens of the web to rally against them: to build — and then choose by the millions — an unknown browser. Microsoft didn’t expect the citizens of the web to sling a rock at their weak spot, right between their eyes.


As a community, radical participation was our sling and our rock. It was our strategic advantage and our element of surprise. And it is what shook the web loose from Microsoft’s imperial grip on the web.

Of course, participation still is our sling. It is still part of who we are as an organization and a global community. And, as the chart above shows, it is still what makes us different.

But, as we know, the setting has changed dramatically since Mozilla first released Firefox. It’s not just — or even primarily — the browser that shapes the web today. It’s not just the three companies in this chart that are vying for territorial claim. With the internet growing at breakneck speed, there are many Goliaths on many fronts. And these Goliaths are expanding their scope around the world. They are building empires.


This has me thinking a lot about empire recently: about how the places that were once the subjects of the great European empires are by and large the same places we call “emerging markets”. These are the places where billions of people will be coming online for the first time in coming years. They are also the places where the new economic empires of the digital age are most aggressively consolidating their power.

Consider this: In North America, Android has about 68% of smartphone market share. In most parts of Asia and Africa, Android market share is in the 90% range – give or take a few points by country. That means Google has a near monopoly not only on the operating system on these markets, but also on the distribution of apps and how they are paid for. Android is becoming the Windows 98 of emerging economies, the monopoly and the control point; the arbiter of what is possible.

Also consider that Facebook and WhatsApp together control 80% of the messaging market globally, and are owned by one company. More scary: we do market research with new smartphone users in countries like Bangladesh and Kenya. We usually ask people: do you use the internet? Do you use the internet on your phone? The response is often: “what’s the Internet?” “What do you use your phone for?”, we ask. The response: “Oh, Facebook and WhatsApp.” Facebook’s internet is the only internet these people know of or can imagine.

It’s not the Facebooks and Googles of the world that concern me, per se. I use their products and in many cases, I love them. And I also believe they have done good in the world.

What concerns me is that, like the European powers in the 18th and 19th centuries, these companies are becoming empires that control both what is possible and what is imaginable. They are becoming monopolies that exert immense control over what people can do and experience on the web. And over what the web – and human society as a whole – may become.

One thing is clear to me: I don’t want this sort of future for the web. I want a future where anything is possible. I want a future where anything is imaginable. The web can be about these kinds of unlimited possibilities. That’s the web that I want everyone to be able to experience, including the billions of people coming online for the first time.

This is the future we want as Mozilla. And, as a community, we are going to need to take on some of these Goliaths. We are going to need to reach down into our pocket and pull out that rock. And we are going to need to get some practice with our sling.

The truth is: Mozilla has become a bit rusty with it. Yes, participation is still a key part of who we are. But, if we’re honest, we haven’t relied on it as much of late.

If we want to shake the foundations of today’s digital empires, we need to regain that practice and proficiency. And find new and surprising ways to use that power. We need to aim at new weak spots in the giant.

We may not know what those new and surprising tactics are yet. But there is an increasing consensus that we need them. Chris Beard has talked recently about thinking differently about participation and product, building participation into the actual features and experience of our software. And we have been talking for the last couple of years about the importance of web literacy — and the power of community and participation to get people teaching each other how to wield the web. These are the kinds of directions we need to take, and the strategies we need to figure out.

It’s not only about strategy, of course. Standing up to Goliaths and using participation to win are also about how we show up in the world. The attitude each of us embodies every day.

Think about this. Think about the image of David. The image of the underdog. Think about the idea of independence. And, then think of the task at hand: for all of us to bring more people into the Mozilla community and activate them.

If we as individuals and as an organization show up again as a challenger — like David — we will naturally draw people into what we’re doing. It’s a part of who we are as Mozillians, and it’s magnetic when we get it right.

Filed under: mozilla, poetry, webmakers

Planet MozillaFirefox and Thunderbird Channels Updated

Updates are available for Firefox and Thunderbird. This includes version 15 of the Adobe Flash Player plugin and the Android versions of Firefox.

Release: Firefox 34.0.5, Thunderbird 31.3.0, Firefox Mobile 34.0

Beta: Firefox 35.0b4, Firefox Mobile 35.0b4

Aurora/Developer Edition: Firefox 36.0a2, Firefox Mobile 36.0a2 (located on the Nightly channel)

Nightly: Firefox 37 (with separate processes thanks to Electrolysis) and Thunderbird 36

This is an ideal time to update before the end of the year and bring the latest Firefox and Thunderbird to our friends where we live.

Go to Downloads

Planet MozillaHow to Consume Structured Test Results

You may not know that most of our test harnesses are now outputting structured logs (thanks in large part to :chmanchester's tireless work). Saying a log is structured simply means that it is in a machine readable format, in our case each log line is a JSON object. When streamed to a terminal or treeherder log, these JSON objects are first formatted into something that is human readable, aka the same log format you're already familiar with (which is why you may not have noticed this).
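Concretely, each line of a structured log is a standalone JSON object, so consuming it is a parse plus a dictionary lookup. A minimal sketch follows; the "action"/"status" field names follow mozlog’s conventions, but the sample records here are invented for illustration, not taken from a real log:

```python
import json

# A hypothetical fragment of a structured log: one JSON object per line.
log = [
    '{"action": "suite_start", "tests": ["test_a", "test_b"]}',
    '{"action": "test_status", "test": "test_a", "subtest": "s1", "status": "PASS"}',
    '{"action": "test_status", "test": "test_b", "subtest": "s1", "status": "FAIL"}',
    '{"action": "suite_end"}',
]

def failing_tests(lines):
    # Filter for non-passing test_status records -- no fragile regexes
    # over human-formatted output required.
    failures = []
    for line in lines:
        record = json.loads(line)
        if record.get("action") == "test_status" and record.get("status") != "PASS":
            failures.append((record["test"], record["status"]))
    return failures
```

Running `failing_tests(log)` on the sample above picks out only the failing subtest, which is exactly the kind of tooling that regex-based log parsing made painful.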

While this might not seem all that exciting, it lets us do many things, such as change the human readable formats and add metadata, without needing to worry about breaking any fragile regex based log parsers. We are now in the process of updating much of our internal tooling to consume these structured logs. This will let us move faster and provide a foundation on top of which we can build all sorts of new and exciting tools that weren't previously possible.

But the benefits of structured logs don't need to be constrained to the Tools and Automation team. As of today, anyone can consume structured logs for use in whatever crazy tools they can think of. This post is a brief guide on how to consume structured test results.

A High Level Overview

Before diving into code, I want to briefly explain the process at a high level.

  1. The test harness is invoked in such a way that it streams a human formatted log to stdout, and a structured log to a file.
  2. After the run is finished, mozharness uploads the structured log to a server on AWS using a tool called blobber. Mozharness stores a map of uploaded file names to blobber urls as a buildbot property. The structured logs are just one of several files uploaded via blobber.
  3. The pulse build exchange publishes buildbot properties, though the messages are based on buildbot events and can be difficult to consume directly.
  4. A tool called pulsetranslator consumes messages from the build exchange, cleans them up a bit and re-publishes them on the build/normalized exchange.
  5. Anyone creates a NormalizedBuildConsumer in pulse, finds the url to the structured log and downloads it.

Sound complicated? Don't worry, the only step you're on the hook for is step 5.

Creating a Pulse Consumer

For anyone not aware, pulse is a system at Mozilla for publishing and subscribing to arbitrary events. Pulse has all sorts of different applications, one of which is receiving notifications whenever a build or test job has finished.

The Setup

First, head on over to and create an account. You can sign in with Persona, and then create one or more pulse users. Next you'll need to install the mozillapulse python package. First make sure you have pip installed, then:

$ pip install mozillapulse

As usual, I recommend doing this in a virtualenv. That's it, no more setup required!

The Execution

Creating a pulse consumer is pretty simple. In this example we'll download all logs pertaining to mochitests on mozilla-inbound and mozilla-central. This example depends on the requests package, you'll need to pip install it if you want to run it locally:

import json
import sys
import traceback

import requests

from mozillapulse.consumers import NormalizedBuildConsumer

def run(args=sys.argv[1:]):
    pulse_args = {
        # a string to identify this consumer
        'applabel': 'mochitest-log-consumer',

        # each message contains a topic. Only messages that match the topic specified here will
        # be delivered. '#' is a wildcard, so this topic matches all messages that start with
        # 'unittest'.
        'topic': 'unittest.#',

        # durable queues will store messages inside pulse even if your consumer goes offline for
        # a bit. Otherwise, any messages published while the consumer is not explicitly
        # listening will be lost forever. Keep it set to False for testing purposes.
        'durable': False,

        # the user you created
        'user': 'ahal',

        # the password you created for the user
        'password': 'hunter1',

        # a callback that will get invoked on each build event
        'callback': on_build_event,
    }

    pulse = NormalizedBuildConsumer(**pulse_args)

    while True:
        try:
            pulse.listen()
        except KeyboardInterrupt:
            # without this ctrl-c won't work!
            break
        except IOError:
            # sometimes you'll get a socket timeout. Just call listen again and all will be
            # well. This was fairly common and probably not worth logging.
            pass
        except Exception:
            # it is possible for rabbitmq to throw other exceptions. You likely
            # want to log them and move on.
            traceback.print_exc()

def on_build_event(data, message):
    # each message needs to be acknowledged. This tells the pulse queue that the message has been
    # processed and that it is safe to discard. Normally you'd want to ack the message when you know
    # for sure that nothing went wrong, but this is a simple example so I'll just ack it right away.
    message.ack()

    # pulse data has two main properties, a payload and metadata. Normally you'll only care about
    # the payload.
    payload = data['payload']
    print('Got a {} job on {}'.format(payload['test'], payload['tree']))

    # ignore anything not from mozilla-central or mozilla-inbound
    if payload['tree'] not in ('mozilla-central', 'mozilla-inbound'):
        return

    # ignore anything that's not mochitests
    if not payload['test'].startswith('mochitest'):
        return

    # ignore jobs that don't have the blobber_files property
    if 'blobber_files' not in payload:
        return

    # this is a message we care about, download the structured log!
    for filename, url in payload['blobber_files'].iteritems():
        if filename == 'raw_structured_logs.log':
            print('Downloading a {} log from revision {}'.format(
                   payload['test'], payload['revision']))
            r = requests.get(url, stream=True)

            # save the log
            with open('mochitest.log', 'wb') as f:
                for chunk in r.iter_content(1024):
                    f.write(chunk)
            break

    # now time to do something with the log! See the next section.

if __name__ == '__main__':
    sys.exit(run())

A Note on Pulse Formats

Each pulse publisher can have its own custom topics and data formats. The best way to discover these formats is via a tool called pulse-inspector. To use it, type in the exchange and routing key, click Add binding then Start Listening. You'll see messages come in which you can then inspect to get an idea of what format to expect. In this case, use the following:

Pulse Exchange: exchange/build/normalized
Routing Key Pattern: unittest.#
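For illustration, here is roughly the shape of a message you might see for a test job on the build/normalized exchange. The field names below are assumptions drawn from the consumer example above, not an authoritative schema; pulse-inspector is the source of truth:

```python
# A sketch of a normalized build message, as seen by the consumer callback.
# Field names are assumptions based on the example above; inspect real
# messages with pulse-inspector before relying on them.
data = {
    'payload': {
        'tree': 'mozilla-inbound',
        'test': 'mochitest-2',
        'revision': 'deadbeef0123',
        'blobber_files': {
            'raw_structured_logs.log': 'https://example.com/raw_structured_logs.log',
        },
    },
    '_meta': {},  # routing metadata; usually not interesting
}

payload = data['payload']
print('Got a {} job on {}'.format(payload['test'], payload['tree']))
```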

Consuming Log Data

In the last section we learned how to obtain a structured log. Now we learn how to use it. All structured test logs follow the same structure, which you can see in the mozlog documentation. A structured log is a series of line-delimited JSON objects, so the first step is to decode each line:

lines = [json.loads(l) for l in log.splitlines()]
for line in lines:
    # do something

If you have a large number of log lines, you'll want to use a generator. Another common use case is registering callbacks on specific actions. Luckily, mozlog provides several built-in functions for dealing with these common cases. There are two main approaches, registering callbacks or creating log handlers.
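For the large-log case, the list comprehension above can be swapped for a generator; a minimal sketch (the file name is just an example):

```python
import json

def read_log_lines(path):
    # lazily decode a line-delimited JSON log, yielding one item at a time
    # instead of materializing the whole list in memory
    with open(path, 'r') as log:
        for line in log:
            line = line.strip()
            if line:  # skip blank lines
                yield json.loads(line)
```

Each yielded item is a plain dict, so it can be fed to the same per-line processing as before.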


The rest depends on what you're trying to accomplish. It now becomes a matter of reading the docs and figuring out how to do it. Below are several examples to help get you started.

List all failed tests by registering callbacks:

from mozlog.structured import reader

failed_tests = []
def append_if_failed(log_item):
    # a test_end item carries an 'expected' key only when the result was unexpected
    if 'expected' in log_item:
        failed_tests.append(log_item['test'])

with open('mochitest.log', 'r') as log:
    iterator = reader.read(log)
    action_map = { 'test_end': append_if_failed }
    reader.each_log(iterator, action_map)

print(failed_tests)


List the time it took to run each test using a log handler:

import json

from mozlog.structured import reader

class TestDurationHandler(reader.LogHandler):
    test_duration = {}
    start_time = None

    def test_start(self, item):
        self.start_time = item['timestamp']

    def test_end(self, item):
        duration = item['timestamp'] - self.start_time
        self.test_duration[item['test']] = duration

handler = TestDurationHandler()
with open('mochitest.log', 'r') as log:
    iterator = reader.read(log)
    reader.handle_log(iterator, handler)

print(json.dumps(handler.test_duration, indent=2))

How to consume the log is really up to you. The built-in methods can be helpful, but are by no means required. Here is a more complicated example that receives structured logs over a socket, and spawns an arbitrary number of threads to process and execute callbacks on them.
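That more complicated example isn't reproduced here, but the fan-out idea can be sketched roughly as follows (Python 3; a simplified stand-in for the real code, not the actual implementation, and the callback must be thread-safe):

```python
import json
import threading
from queue import Queue

def process_lines(lines, callback, num_workers=4):
    # decode structured log lines on a pool of worker threads and invoke
    # a (thread-safe) callback for each decoded item
    q = Queue()

    def worker():
        while True:
            raw = q.get()
            if raw is None:  # sentinel: shut this worker down
                q.task_done()
                break
            callback(json.loads(raw))
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for raw in lines:
        q.put(raw)
    for _ in threads:
        q.put(None)  # one sentinel per worker
    q.join()         # wait until every line (and sentinel) is processed
    for t in threads:
        t.join()
```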

If you have questions, comments or suggestions, don't hesitate to speak up!

Finally, I'd also like to credit Ahmed Kachkach an intern who not only worked on structured logging in mochitest over the summer, but also created the system that manages pulse users and queues.

Planet MozillaRemoving “Legacy” vouches on

We announced changes to our vouching system on on July 29. These changes require you to receive a new vouch by December 18 to keep your vouched status. On that day we will remove “Legacy” vouches, which are vouches that do not have a description and were made before July 29. This is the last step in having the site fully transition to the improved vouching system that gives a shared understanding of vouching and describes each vouch.

Being “vouched” means you have made a meaningful contribution to the Project and, because of that, have access to special content like all profiles on, certain content on Air Mozilla and Mozilla Moderator, and you receive messages that are sent to vouched Mozillians. Having to get re-vouched means our community directory, and vouching overall, is more meaningful.

Since we first announced this change, 3,600 out of the 6,000 Mozillians have been re-vouched. Cheers! That also means about 2,400 will not remain vouched unless they get vouched by a Mozillian who has vouching permissions by December 18.

Here’s what you need to do:

– Check your profile to see if you have a new vouch (anything other than a “Legacy vouch”). All Summit 2013 participants and paid staff have already received a new vouch. If you don’t have a new vouch, ask someone who knows your contributions to vouch for you.

– Help those who have made meaningful contributions, get a new vouch (if they need one). You can vouch for others if you have three vouches or more on your profile.

All “Legacy vouches” (those before July 29) will be removed on December 18, and only contributors with a new (non-Legacy) vouch will remain vouched. Losing your vouched status means you will not be able to access vouched Mozillians content or get Mozillians email communications until someone vouches for you.

You can learn more on the Vouching FAQ wiki page.

Planet MozillaLet's make a Firefox Extension, the painless way

Ever had a thing you really wanted to customise about Firefox, but you couldn't because it wasn't in any regular menu, advanced menu, or about:config?

For instance, you want to be able to delete elements on a page for peace of mind from the context menu. How the heck do you do that? Well, with the publication of the new node-based jpm, the answer to that question is "pretty dang simply"...

Let's make our own Firefox extension with a "Delete element" option added to the context menu:

a screenshot of the Firefox page context menu with a 'delete element' option

We're going to make that happen in five steps.

  1. Install jpm -- in your terminal simply run: npm install -g jpm (make sure you have node.js installed) and done (this is mostly prerequisite to developing an extension, so you only have to do this once, and then never again. For future extensions, you start at step 2!)
  2. Create a dir for working on your extension wherever you like, navigate to it in the terminal and run: jpm init to set up the standard files necessary to build your extension. Good news: it's very few files!
  3. Edit the index.js file that command generated, writing whatever code you need to do what you want to get done,
  4. Turn your code into an .xpi extension by running : jpm xpi,
  5. Install the extension by opening the generated .xpi file with Firefox

Of course, step (3) is the part that requires some effort, but let's run through this together. We're going to pretty much copy/paste the code straight from the context menu API documentation:

      // we need to make sure we have a hook into "things" we click on:
  1:  var self = require("sdk/self");

      // and we'll be using the context menu, so let's make sure we can:
  2:  var contextMenu = require("sdk/context-menu");

      // let's add a menu item!
  3:  var menuItem = contextMenu.Item({
        // the label is pretty obvious...
  4:    label: "Delete Element",

        // the context tells Firefox which things should have this in their context
        // menu, as there are quite a few elements that get "their own" menu,
        // like "the page" vs "an image" vs "a link". .. We pretty much want
        // everything on a page, so we make that happen:
  5:    context: contextMenu.PredicateContext(function(data) { return true; }),

        // and finally the script that runs when we select the option. Delete!
  6:    contentScript: 'self.on("click", function (node, data) { node.outerHTML = ""; });'
      });

The only changes here are that we want "delete" for everything, so the context is simply "for anything that the context menu opens up on, consider that a valid context for our custom script" (which we do by using the widest context possible on line 5), and of course the script itself is different because we want to delete nodes (line 6).

The contentScript property is a string, so we're a little restricted in what we can do without all manner of fancy postMessages, but thankfully we don't need them: the addon mechanism will always call the contentScript function with two arguments, "node" and "data", and the "node" argument is simply the HTML element you clicked on, which is what we want to delete. So we do! We don't even try to be clever here: we simply set the element's .outerHTML property to an empty string, and that makes it vanish from the page.

If you expected more work, then good news: there isn't any, we're already done! Seriously: run jpm run yourself to test your extension, and after verifying that it indeed gives you the new "Delete element" option in the context menu and deletes nodes when used, move on to steps (4) and (5) for the ultimate control of your browser.

Because here's the most important part: the freedom to control your online experience, and Firefox, go hand in hand.

Planet MozillaSearching Bugzilla

BMO currently supports five—count ‘em, five—ways to search for bugs. Whenever you have five different ways to perform a similar function, you can be pretty sure the core problem is not well understood. Search has been rated, for good reason, one of the least compelling features of Bugzilla, so the BMO team want to dig in there and make some serious improvements.

At our Portland get-together a couple weeks ago, we talked about putting together a vision for BMO. It’s a tough problem, since BMO is used for so many different things. We did, however, manage to get some clarity around search. Gerv, who has been involved in the Bugzilla project for quite some time, neatly summarized the use cases. People search Bugzilla for only two reasons:

  • to find a set of bugs, or
  • to find a specific bug.

That’s it. The fact that BMO has five different searches, though, means either we didn’t know that, or we just couldn’t find a good way to do one, or the other, or both.

We’ve got the functionality of the first use case down pretty well, via Advanced Search: it helps you assemble a set of criteria of almost limitless specificity that will result in a list of bugs. It can be used to determine what bugs are blocking a particular release, what bugs a particular person has assigned to them, or what bugs in a particular Product have been fixed recently. Its interface is, admittedly, not great. Quick Search was developed as a different, text-based approach to Advanced Search; it can be quicker to use but definitely isn’t any more intuitive. Regardless, Advanced Search fulfills its role fairly well.

The second use of Search is how you’d answer the question, “what was that bug I was looking at a couple weeks ago?” You have some hazy recollection of a bug. You have a good idea of a few words in the summary, although you might be slightly off, and you might know the Product or the Assignee, but probably not much else. Advanced Search will give you a huge, useless result set, but you really just want one specific bug.

This kind of search isn’t easy; it needs some intelligence, like natural-language processing, in order to give useful results. Bugzilla’s solutions are the Instant and Simple searches, which eschew the standard Bugzilla::Search module that powers Advanced and Quick searches. Instead, they do full-text searches on the Summary field (and optionally in Comments as well, which is super slow). The results still aren’t very good, so BMO developers tried outsourcing the feature by adding a Google Search option. But despite Google being a great search engine for the web, it doesn’t know enough about BMO data to be much more useful, and it doesn’t know about new or confidential bugs at all.

Since Bugzilla’s search engines were originally written, however, there have been many advances in the field, especially in FLOSS. This is another place where we need to bring Bugzilla into the modern world; MySQL full-text searches are just not good enough. In the upcoming year, we’re going to look into new approaches to search, such as running different databases in tandem to exploit their particular abilities. We plan to start with experiments using Elasticsearch, which, as the name implies, is very good at searching. By standing up an instance beside the main MySQL db and mirroring bug data over, we can refer specific-bug searches to it; even though we’ll then have to filter based on standard bug-visibility rules, we should have a net win in search times, especially when searching comments.
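That post-search visibility filtering might look something like the sketch below; the hit shape and the group-based rule are assumptions for illustration, not BMO's actual logic:

```python
def filter_visible(hits, user_groups):
    # drop search hits the requesting user is not allowed to see: a hit is
    # visible if it is not restricted to any group, or if the user belongs
    # to at least one of its restriction groups (assumed rule, for
    # illustration only)
    visible = []
    for hit in hits:
        groups = set(hit.get('groups', []))
        if not groups or groups & set(user_groups):
            visible.append(hit)
    return visible
```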

In sum, Mozilla developers, we understand your tribulations with Bugzilla search, and we’re on it. After all, we all have a reputation to maintain as the Godzilla of Search Engines!

Planet MozillaFirefox Automation report – week 41/42 2014

In this post you can find an overview about the work happened in the Firefox Automation team during week 41 and 42.

With the beginning of October we also have some minor changes in responsibilities of tasks. While our team members from SoftVision mainly care about any kind of Mozmill test-related requests and related CI failures, Henrik is doing all the rest, including the framework and the maintenance of Mozmill CI.


With the support for testing all locales in Mozmill-CI for any Firefox beta and final release, Andreea finished her blacklist patch. With that we can easily mark locales not to be tested, and get rid of the long white-list entries.

We spun up our first OS X 10.10 machine in our staging environment of Mozmill CI for testing the new OS version. We hit a couple of issues, especially some incompatibilities with mozrunner, which need to be fixed first before we can get started in running our tests on 10.10.

In the second week of October Teodor Druta joined the Softvision team, and he will assist all the others with working on Mozmill tests.

But we also had to fight a lot with Flash crashes on our testing machines: about 23 crashes per day on Windows machines alone. And all of that with the regular release version of Flash, which we had re-installed because a crash we had seen before was fixed. But the healthy period didn't last long, and we had to revert to the debug version without protected mode. Let's see how long we have to keep the debug version active.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 41 and week 42.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 41 and week 42.

Planet MozillaDevelopers First

A while back we developed Marketplace Payments. The first version of those was for Firefox OS and it was tough. There were lots of things happening at once: building out a custom API with a payment provider, a backend to talk to our payment provider through multiple security hoops, integrating the relatively new Persona, working on the Trusted UI and mozPay, and so on.

At the moment we are prototyping and shipping desktop payments as part of our final steps in Marketplace Payments. One thing that became clear a while ago was that desktop payments are much, much, much easier to use, test and debug.

Desktop payments are easier for the developers who work on payments. That means they are easier to get team members working on, easier to demo, easier to record, easier to debug, easier to test and so on. That dramatically decreases the development time.

In the meantime we've also built out things that make this much easier: a Docker development environment that sets things up correctly and a fake backend so you don't need to process money to test things out.

Hindsight is a wonderful thing, but at the time we were actively discouraged from doing desktop development: "Mobile first" and "Don't slow down mobile development".

But inadvertently we slowed down mobile development by not being developer first.

Planet MozillaUsing Gmail filters to identify important Bugzilla mail in 2014

Many email filtering systems are designed to siphon each email into a single destination folder. Usually you have a list of rules which get applied in order, and as soon as one matches an email the matching process ends.

Gmail’s filtering system is different; it’s designed to add any number of labels to each email, and the rules don’t get applied in any particular order. Sometimes it’s really useful to be able to apply multiple labels to an email, but if you just want to apply one in a fashion that emulates folders, it can be tricky.

So here’s a non-trivial example of how I filter bugmail into two “folders”. The first “folder” contains high-priority bugmail.

  • Review/feedback/needinfo notifications.
  • Comments in bugs that I filed or am assigned to or am CC’d to.
  • Comments in secure bugs.
  • Comments in bugs in the DMD and about:memory components.

For the high priority bugmail, on Gmail’s “Create a Filter” screen, in the “From:” field I put:

and in the “Has the words:” field I put:

“you are the assignee” OR “you reported” OR “you are on the CC list” OR subject:”granted:” OR subject:”requested:” OR subject:”canceled:” OR subject:”Secure bug” OR “Product/Component: Core :: DMD” OR “Product/Component: Toolkit :: about:memory” OR “Your Outstanding Requests”

For the low priority bugmail, on Gmail’s “Create a Filter” screen, in the “From:” field put:

and in the “Doesn’t have:” field put:

(“you are the assignee” OR “you reported” OR “you are on the CC list” OR subject:”granted:” OR subject:”requested:” OR subject:”canceled:” OR subject:”Secure bug” OR “Product/Component: Core :: DMD” OR “Product/Component: Toolkit :: about:memory” OR “Your Outstanding Requests”)

(I’m not certain if the parentheses are needed here. It’s otherwise identical to the contents in the previous case.)

I’ve modified them a few times and they work very well for me. Everyone else will have different needs, but this might be a useful starting point.

This is just one way to do it. See here for an alternative way. (Update: Byron Jones pointed out that my approach assumes that the wording used in email bodies won’t change, and so the alternative is more robust.)

Finally, if you’re wondering about the “in 2014” in the title of this post, it’s because I wrote a very similar post four years ago, and my filters have evolved slightly since then.

Dev.OperaResponsive web design presentations at Fluent and State of the Browser

Earlier this year, I had the chance to speak at Fluent in San Francisco, and at State of the Browser in London. At both events, I talked about various new CSS features that have become available in mobile browsers and allow you to create compelling responsive designs.

The topics I covered in these talks included:

In the introduction to both presentations, I also went for a short trip down memory lane, and talked about the fascinating work of Karl Gerstner, a Swiss graphic designer and type pioneer who published an influential book called “Programme entwerfen” (1964), exploring the role of design in the early days of the computer era. The photo below shows you some of his experiments with programmatic design, and, to me at least, they are an early precursor of the whole “responsive design” concept. Visionary stuff!

<figure class="figure"> Boîte à musique example <figcaption class="figure__caption">Karl Gerstner’s “Boîte à musique” example. From “Programme entwerfen”, 1964.</figcaption> </figure>

You can find the full videos and slides of these talks below.

<figure class="figure"> <iframe allowfullscreen="" class="figure__media" height="388" src="" width="690"></iframe> <figcaption class="figure__caption">“Responsive Web Design with @viewport and Other New CSS Features” video (Fluent 2014)</figcaption> </figure> <figure class="figure"> <iframe allowfullscreen="" class="figure__media" height="388" src="" width="690"></iframe> <figcaption class="figure__caption"> “Intro to @viewport & other new Responsive Web Design CSS features” video (State of the Browser 2014)</figcaption> </figure> <figure class="figure"> <iframe allowfullscreen="" class="figure__media" height="436" src="" width="690"></iframe> <figcaption class="figure__caption"> “Intro to @viewport & other new Responsive Web Design CSS features” slides (State of the Browser 2014)</figcaption> </figure>


Planet MozillaDennis v0.6 released! Line numbers, double vowels, better cli-fu, and better output!

What is it?

Dennis is a Python command line utility (and library) for working with localization. It includes:

  • a linter for finding problems in strings in .po files like invalid Python variable syntax which leads to exceptions
  • a template linter for finding problems in strings in .pot files that make translator's lives difficult
  • a statuser for seeing the high-level translation/error status of your .po files
  • a translator for strings in your .po files to make development easier

v0.6 released!

Since v0.5, I've done the following:

  • Rewrote the command line handling using click and added an exception handler.
  • Merged the lint and linttemplate commands. Why should you care which file you're linting when the linter can figure it out for you?
  • Added the whimsical double vowel transform.
  • Added line numbers in the lint output. This will make it possible to find those pesky problematic strings in your .po/.pot files.
  • Add a line reporter to the linter.

Getting pretty close to what I want for a 1.0, so I'm pretty excited about this version.

Denise update

I've updated Denise with the latest Dennis and moved it to a better url. Lint your .po/.pot files via web service using

Where to go for more

For more specifics on this release, see here:

Documentation and quickstart here:

Source code and issue tracker here:

Source code and issue tracker for Denise (Dennis-as-a-service):

6 out of 8 employees said Dennis helps them complete 1.5 more deliverables per quarter.

Planet Mozillawhat’s new in xpcom

I was talking to somebody at Mozilla’s recent all-hands meeting in Portland, and in the course of attempting to provide a reasonable answer for “What have you been doing lately?”, I said that I had been doing a lot of reviews, mostly because of my newfound duties as XPCOM module owner. My conversational partner responded with some surprise that people were still modifying code in XPCOM at such a clip that reviews would be a burden. I responded that while the code was not rapidly changing, people were still finding reasons to do significant modifications to XPCOM code, and I mentioned a few recent examples.

But in light of that conversation, it’s good to broadcast some of the work I’ve had the privilege of reviewing this year.  I apologize in advance for not citing everybody; in particular, my reviews email folder only goes back to August, so I have a very incomplete record of what I’ve reviewed in the past year.  In no particular order:

Planet MozillaManaging Firefox with Group Policy and PolicyPak

A lot of people ask me how to manage Firefox using Windows Group Policy. To that end, I have been working with a company called PolicyPak to help enhance their product to have more of the features that people are asking for (not just controlling preferences.) It's taken about a year, but the results are available for download now.

You can now manage the following things (and more) using PolicyPak, Group Policy and Firefox:

  • Set and lock almost all preference settings (homepage, security, etc) plus most settings in about:config
  • Set site specific permissions for pop-ups, cookies, camera and microphone
  • Add or remove bookmarks on the toolbar or in the bookmarks folder
  • Blacklist or whitelist any type of add-on
  • Add or remove certificates
  • Disable private browsing
  • Turn off crash reporting
  • Prevent access to local files
  • Always clear saved passwords
  • Disable safe mode
  • Remove Firefox Sync
  • Remove various buttons from Options

If you want to see it in action, you can check out these videos.

And if you've never heard of PolicyPak, you might have heard of the guy who runs it - Jeremy Moskowitz. He's a Group Policy MVP and literally wrote the book on Group Policy.

On a final note, if you decide to purchase, please let them know you heard about it from me.

Planet MozillaLeaving Mozilla as staff

December 31 will be my last day as paid staff on the Community Building Team at Mozilla.

One year ago, I settled into a non-stop flight from Raleigh, NC to San Francisco and immediately fell asleep. I was exhausted; it was the end of my semester and I had spent the week finishing a difficult databases final, which I emailed to my professor as soon as I reached the hotel, marking the completion of my coursework in Library Science and the beginning of my commitment to Mozilla.

The next week was one of the best of my life. While working, hacking, and having fun, I started on the journey that has carried me through the past exhilarating months. I met more friendly faces than I could count and felt myself becoming part of the Mozilla community, which has embraced me. I’ve been proud to call myself a Mozillian this year, and I will continue to work for the free and open Web, though currently in a different capacity as a Rep and contributor.

I’ve met many people through my work and have been universally impressed with your intelligence, drive, and talent. To David, Pierros, William, and particularly Larissa, Christie, Michelle, and Emma, you have been my champions and mentors. Getting to know you all has been a blessing.

I’m not sure what’s next, but I am happy to start on the next step of my career as a Mozillian, a community mentor, and an open Web advocate. Thank you again for this magical time, and I hope to see you all again soon. Let me know if you find yourself in Boston! I will be happy to hear from you and pleased to show you around my hometown.

If you want to reach out, find me on IRC: jennierose. All the best wishes for a happy, restful, and healthy holiday season.

Planet MozillaOne step closer to git push to mercurial

In case you missed it, I’m working on a new tool to use mercurial remotes in git. Since my previous post, I landed several fixes making clone and pull more reliable:

  • Of 247316 unique changesets in the various mozilla-* repositories, now only two (but both in fact come from the same patch, one of the changesets being a backport to aurora of the other) are “corrupted” because their mercurial dates have a timezone offset that includes seconds.
  • Of 23542 unique changesets in the canonical mercurial repository, only three are “corrupted” because their raw mercurial data contains, for an unknown reason, a whitespace after the timezone.

By corrupted, here, I mean that the round-trip hg->git->hg doesn’t lead to matching their sha1. They will be fixed eventually, but I haven’t decided how yet, because they’re really edge cases. They’re old enough that they don’t really matter for push anyways.
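For context on what matching the sha1 involves: a mercurial node id is a sha1 over the two parent node ids in sorted order followed by the raw revision text, so verifying the round-trip amounts to recomputing that hash for the recreated changeset. A minimal sketch:

```python
import hashlib

NULL_ID = b'\x00' * 20  # mercurial's null node (no parent)

def hg_node_hash(text, p1=NULL_ID, p2=NULL_ID):
    # a mercurial node id: sha1 over the two parent node ids in sorted
    # order, followed by the raw revision text
    s = hashlib.sha1()
    s.update(min(p1, p2))
    s.update(max(p1, p2))
    s.update(text)
    return s.digest()
```

A recreated changeset "round-trips" if feeding its raw data and parents to this function reproduces the original node id.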

Pushing to mercurial, however, is still not there, but it’s getting closer. It involves several operations:

  • Negotiating with the mercurial server what it doesn’t have that we do.
  • Creating mercurial changesets, manifests and files for local git commits that were not imported from mercurial.
  • Creating a bundle of the mercurial changesets, manifests and files that we have that the server doesn’t.
  • Pushing that bundle to the server.

The first step is mostly covered by the pull code, that does a similar negotiation. I now have the third step covered (although I cheated around the “corruptions” mentioned above):

$ git clone hg::
Cloning into 'hg'...
Checking connectivity... done.
$ cd hg
$ git hgbundle > ../hg.hg
$ mkdir ../hg2
$ cd ../hg2
$ hg init
$ hg unbundle ../hg.hg
adding changesets
adding manifests
adding file changes
added 23542 changesets with 44305 changes to 2272 files
(run 'hg update' to get a working copy)
$ hg verify
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
2272 files, 23542 changesets, 44305 total revisions

Note: that hgbundle command won’t actually exist. It’s just an intermediate step allowing me to work incrementally.

In case you wonder what happens when the bundle contains bad data, mercurial fortunately rejects it:

$ cd ../hg
$ git hgbundle-corrupt > ../hg.hg
$ mkdir ../hg3
$ cd ../hg3
$ hg unbundle ../hg.hg
adding changesets
transaction abort!
rollback completed
abort: integrity check failed on 00changelog.i:3180!

Planet MozillaPriv8 is out!

Download page: click here

What is priv8? This is a Firefox addon that uses part of the security model of Firefox OS to create sandboxed tabs. Each sandbox is a completely separated world: it doesn’t share cookies, storage, and a lots of other stuff with the rest of Firefox, but just with other tabs from the same sandbox.

Each sandbox has a name and a color, therefore it will be always easy to identify which tab is sandboxed.

Also, these sandboxes are permanent! So, when you open one of them the second time, maybe after a restart, that sandbox will still have the same cookies, same storage, etc - as you left the previous time.

You can also switch between sandboxes using the context menu for the tab.

Here's an example: with priv8 you can read your gmail webmail in one tab, and another gmail webmail in another tab at the same time. Still, you can be logged in on Facebook in one tab and not in the others. This is nice!

Moreover, if you are a web developer and you want to test a website using multiple accounts, priv8 gives you the opportunity to have each account in a sandboxed tab. Much easier than having multiple profiles or logging in and out manually every time!

Is it stable? I don’t know :) It works, but more testing must be done. Help needed!

Known issues?

  • doesn’t work from a sandbox
  • e10s is not supported yet.
  • The UI must be improved.


The manager

This is the manager, where you can “manage” your sandboxes.

The panel

The panel is always accessible from the Firefox toolbar.

Context menu

The context menu allows you to switch between sandboxes for the current tab. This will reload the tab after the switch.

3 gmail tabs

3 separate instances of Gmail at the same time.

License: Priv8 is released under the Mozilla Public License.
Source code: bakulf :: priv8

Planet Mozilla: happy bmo push day!

the following changes have been pushed to

  • [1063818] Updates to
  • [1111954] Updates to Spreadsheet Data in
  • [1092578] Decide if an email needs to be encrypted at the time it is generated, not at the time it is sent
  • [1107275] Include Build.PL file for bmo/4.2 to install Perl dependencies (useful for Travis CI, etc.)
  • [829358] Changing the name of a private attachment in an unhidden bug results in the name change being sent unencrypted
  • [1104291] The form.web.bounty page does not say it’s a bounty form
  • [1105585] Fix bug bounty form to validate its input more and relax the restriction on the paid field to include -+? suffix
  • [1105155] Indicate that an existing comment has been modified for tracking flags with prefill text
  • [1105745] changes made via the bounty form are not emailed immediately
  • [1111862] HTML code injection in review history page

discuss these changes on

Filed under: bmo, mozilla

Planet Mozilla: Submission

254 pages, eleven and a half years of my life.

Me submitting my thesis at the Monash Institute of Graduate Research office.

My thesis front cover: Authoring and Publishing Adaptive Diagrams.

Now for the months-long wait for the examiners to review it.

Planet Mozilla: Firefox.html screencast

Firefox.html screencast. Contribute:

Youtube video:

Planet WebKit: Web Engines Hackfest 2014

For the 6th year in a row, Igalia has organized a hackfest focused on web engines. The 5 years before this one were actually focused on the GTK+ port of WebKit, but the number of web engines that matter to us as Free Software developers and consultancies has grown, and so has the scope of the hackfest.

It was a very productive and exciting event. It has already been covered by Manuel Rego, Philippe Normand, Sebastian Dröge and Andy Wingo! I am sure more blog posts will pop up. We had Martin Robinson telling us about the new Servo engine that Mozilla has been developing as a proof of concept for both Rust as a language for building big, complex products and for doing layout in parallel. Andy gave us a very good summary of where JS engines are in terms of performance and features. We had talks about CSS grid layouts, TyGL – a GL-powered implementation of the 2D painting backend in WebKit, the new Wayland port, announced by Zan Dobersek, and a lot more.

With help from my colleague ChangSeok OH, I presented a description of how a team at Collabora led by Marco Barisione made the combination of WebKitGTK+ and GNOME’s web browser a pretty good experience for the Raspberry Pi. It took a not so small amount of both pragmatic limitations and hacks to get to a multi-tab browser that can play youtube videos and be quite responsive, but we were very happy with how well WebKitGTK+ worked as a base for that.

One of my main goals for the hackfest was to help drive features that were lingering in the bug tracker for WebKitGTK+. I picked up a patch that had gone through a number of iterations and rewrites: the HTML5 notifications support, and with help from Carlos Garcia, managed to finish it and land it on the last day of the hackfest! It provides new signals that can be used to authorize notifications, show and close them.

To make notifications work in the best case scenario, the only thing that the API user needs to do is handle the permission request, since we provide a default implementation for the show and close signals that uses libnotify if it is available when building WebKitGTK+. Originally our intention was to use GNotification for the default implementation of those signals in WebKitGTK+, but it turned out to be a pain to use for our purposes.

GNotification is tied to GApplication. This allows for some interesting features, like notifications being persistent and able to reactivate the application, but those make no sense in our current use case, although that may change once service workers become a thing. It can also be a bit problematic given we are a library and thus have no GApplication of our own. That was easily overcome by using the default GApplication of the process for notifications, though.

The show stopper for us using GNotification was the way GNOME Shell currently deals with notifications sent using this mechanism. It will look for a .desktop file named after the application ID used to initialize the GApplication instance and reject the notification if it cannot find that. Besides making this a pain to test – our test browser would need a .desktop file to be installed, that would not work for our main API user! The application ID used for all Web instances is org.gnome.Epiphany at the moment, and that is not the same as any of the desktop files used either by the main browser or by the web apps created with it.

For the future we will probably move Epiphany towards this new era, and all users of the WebKitGTK+ API as well, but the strictness of GNOME Shell would hurt the usefulness of our default implementation right now, so we decided to stick to libnotify for the time being.

Other than that, I managed to review a bunch of patches during the hackfest, and took part in many interesting discussions regarding the next steps for GNOME Web and the GTK+ and Wayland ports of WebKit, such as the potential introduction of a threaded compositor, which is pretty exciting. We also tried to have Bastien Nocera as a guest participant for one of our sessions, but it turns out that requires more than a notebook on top of a bench hooked up to a TV to work well. We could think of something next time ;D.

I’d like to thank Igalia for organizing and sponsoring the event, Collabora for sponsoring and sending ChangSeok and myself over to Spain from far away Brazil and South Korea, and Adobe for also sponsoring the event! Hope to see you all next year!

Web Engines Hackfest 2014 sponsors: Adobe, Collabora and Igalia


Dev.Opera: Opera goes to BlinkOn 3

Recently, the Opera Web Technology team attended BlinkOn 3, the twice-yearly conference for Blink contributors. (Videos are available on YouTube.) As you may know, Opera is a prolific committer to Blink, and has the only non-Google API owner and Chromium Security Group members.

Here are a few notes and observations from our team.

Mobile first?

A year ago, a “mobile first” policy was formed. Now that has been replaced by a policy about enabling web developers to do whatever they want. Part of that is to keep landing new APIs that give more and more access to hardware/OS features and to avoid having slow code that prevents web apps from being smooth.

A main Google Blink focus for the next year will be never slipping on 60fps and its per-frame timing budget.

<figure class="figure"> <iframe allowfullscreen="" class="figure__media" height="315" src="" width="560"></iframe> <figcaption class="figure__caption">BlinkOn 3: State of Blink (Keynote)</figcaption> </figure>

60 frames per second

Ian Vollick held an interesting talk about adding javascript hooks on the compositor thread (UIWorker) to implement snap points, pull-to-refresh, etc. in a smooth way without having to go via the main thread.

Slimming Paint seems to be moving along nicely. It looks similar to our “Eliminate-paint-traversal” project in Presto.

From the animations talk: inspector support, plus new features from Web Animations that have been and will be implemented. They stressed that there are still only two properties (opacity and transforms) being animated on the compositor thread, so we need to continue to improve performance in the style → render tree → layout → composite → paint pipeline for main-thread animations. They specifically highlighted our optimization for skipping style matching during animations as an example of such an improvement.

Project Warden meeting: it’s an ongoing project to improve hackability of Blink and it spans various unrelated tasks. They’re looking for volunteers to take on a re-design of line layout, in particular for security (tends to crash), but also for performance.

We had interesting chats with Nat Duca about style performance, and we discussed the state of the style part.

The Blink scheduler has promising results. That was something we talked about at BlinkOn 2 (based on ideas from Presto). The idea is to have some code that actively decides what to do next (e.g. handle input events) instead of doing everything in the order it was queued, or in 100 parallel threads. The main problem will still be long-running tasks that hog the main thread or the active CPU core, which prevent input from being handled within the 100 ms we need. But the Android team in London has added metrics and some code, and already there are measurable improvements which indicate that this is the right idea.
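Blink's scheduler is, of course, native Chromium code; purely as an illustration of the idea, and not of the actual implementation, a toy scheduler that prefers input-handling work over plain FIFO order might look like this (all names are hypothetical):

```javascript
// Toy prioritizing scheduler (illustrative only, not Blink's code):
// input tasks are always chosen before ordinary queued work,
// instead of running everything in strict arrival order.
class Scheduler {
  constructor() {
    this.inputTasks = [];
    this.defaultTasks = [];
  }
  postInputTask(fn) {
    this.inputTasks.push(fn);
  }
  postTask(fn) {
    this.defaultTasks.push(fn);
  }
  // Actively decide what to do next; returns false when idle.
  runNext() {
    const task = this.inputTasks.shift() || this.defaultTasks.shift();
    if (task) task();
    return !!task;
  }
}

const s = new Scheduler();
const order = [];
s.postTask(() => order.push('render'));      // queued first...
s.postInputTask(() => order.push('input'));  // ...but input runs first
while (s.runNext());
console.log(order); // ['input', 'render']
```

The point of the sketch is only the decision step in runNext(): the scheduler, not the queue order, chooses what runs next.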

“UI workers” is currently an interesting experiment on allowing some programmability/scripting to the compositor thread.


Oilpan (replacing a messy ref-counted DOM environment with a garbage collected one) is also proceeding, but it’s a big knife in the middle of the Chromium guts, so lots of nervousness and attempts to figure out hard requirements before it can be enabled for performance-sensitive code. Oilpan presented preliminary performance numbers and plans for shipping. It was given a fair hearing, but has been given fairly hard targets and a deadline (end of 2015Q1) to meet in order to ship.

We met with the Oilpan team to set out how to go about making that happen, and I think we now have a workable plan in place. Oilpan is something we want to see Blink move to and adopt fully. It is just a more solid foundation to build a quality engine on top of. The project is now run by developers from the Google Tokyo office, with help from Opera. Optimizing overall performance and bounding/controlling garbage collector pause times (i.e., you don’t want your UI and animations to stutter due to Oilpan) are the remaining work items. During BlinkOn3, it was decided that Oilpan will only be fully adopted by Blink (“shipped”) when there are zero regressions on performance metrics, which cover pause times and overall fps / runs per sec. This needs to happen by the end of 2015Q1.


Work/effort on V8 is picking up. One aspect is bringing up its language support to ES6, which has been talked about as a goal for a while but seems to be happening now. (ES6 will be published/finalized in June 2015.)

Crankshaft is being replaced by TurboFan. An initial goal for TurboFan is to address the “asm.js use case”, meaning it wants to derive just as good type information and generated code as an asm.js validator and backend would when given asm.js input. But not without recognizing the asm.js type rules specifically, just by being a better all-round backend for JS/ES6.


  • The ServiceWorkers shipping efforts are going to plan.
  • The merge of Chromium and Blink repos is moving forwards again (having them divided costs a lot of time). It’s gated on even more bot capacity.
  • Work on Skia proceeds: more high level optimizations and more work on DisplayLists.
  • Reducing the number of code paths for shaping and rendering text. There are now ~2 left. Goal is to remove the Latin-only code path as soon as the generic code path is fast enough. Optimizations in this area have made for instance Russian text rendering twice as fast (apparently it was very slow for non-ASCII before).
  • Yandex wants to improve search in text so that it can find words better, and so that alternative word forms are found (e.g. search for “glanced”, get a hit on “glancing”).
  • Out-of-process iframes work proceeds. It’s a security feature, but it complicates the code. They have been working on it for a long time, but there still seems to be a lot more to do.

Planet Mozilla: Video killed the radio star

One of the personal experiments I'm considering in 2015 is a conscious movement away from video-based participation in open source communities. There are a number of reasons, but the main one is that I have found the preference for "realtime," video-based communication media inevitably leads to ever narrowing circles of interaction, and eventually, exclusion.

I'll speak about Mozilla, since that's the community I know best, but I suspect a version of this is happening in other places as well. At some point in the past few years, Mozilla (the company) introduced a video conferencing system called Vidyo. It's pretty amazing. Vidyo makes it trivial to setup a virtual meeting with many people simultaneously or do a 1:1 call with just one person. I've spent hundreds of hours on Vidyo calls with Mozilla, and other than the usual complaints one could level against meetings in general, I've found them very productive and useful, especially being able to see and hear colleagues on the other side of the country or planet.

Vidyo is so effective that for many parts of the project, it has become the default way people interact. If I need to talk to you about a piece of code, for example, it would be faster if we both just hopped into Vidyo and spent 10 minutes hashing things out. And so we do. I'm guilty of this.

I'm talking about Vidyo above, but substitute Skype or Google Hangouts or some cool WebRTC thing your friend is building on GitHub. Video conferencing isn't a negative technology, and provides some incredible benefits. I believe it's part of what allows Mozilla to be such a successful remote-friendly workplace (vs. project). I don't believe, however, that it strengthens open source communities in the same way.

It's possible on Vidyo to send an invitation URL to someone without an account (you need an account to use it, by the way). You have to be invited, though. Unlike irc, for example, there is no potential for lurking (I spent years learning about Mozilla code by lurking on irc in #developers). You're in or you're out, and people need to decide which it will be. Some people work around this by recording the calls and posting them online. The difficulty here is that doing so converts what was participation into performance--one can watch what happened, but not engage it, not join the conversation and therefore the decision making. And the more we use video, the more likely we are to have that be where we make decisions, further making it difficult for those not in the meeting to be part of the discussion.

Even knowing that decisions have been made becomes difficult in a world where those decisions aren't sticky, and go un-indexed. If we decided in a mailing list, bug, irc discussion, Github issue, etc. we could at least hope to go back and search for it. So too could interested members of the community, who may wish to follow along with what's happening, or look back later when the details around how the decision came to be become important.

I'll go further and suggest that in global, open projects, the idea that we can schedule a "call" with interested and affected parties is necessarily flawed. There is no time we can pick that has us all, in all timezones, able to participate. We shouldn't fool ourselves: such a communication paradigm is necessarily geographically rooted; it includes people here, even though it gives the impression that everyone and anyone could be here. They aren't. They can't be. The internet has already solved this problem by privileging asynchronous communication. Video is synchronous.

Not everything can or should be open and public. I've found that certain types of communication work really well over video, and we get into problems when we do too much over email, mailing lists, or bugs. For example, a conversation with a person that requires some degree of personal nuance. We waste a lot of time, and cause unnecessary hurt, when we always choose open, asynchronous, public communication media. Often scheduling an in person meeting, getting on the phone, or using video chat would allow us to break through a difficult impasse with another person.

But when all we're doing is meeting as a group to discuss something public, I think it's worth asking the question: why aren't we engaging in a more open way? Why aren't we making it possible for new and unexpected people to observe, join, and challenge us? It turns out it's a lot easier and faster to make decisions in a small group of people you've pre-chosen and invited; but we should consider what we give up in the name of efficiency, especially in terms of diversity and the possibility of community engagement.

When I first started bringing students into open source communities like Mozilla, I liked to tell them that what we were doing would be impossible with other large products and companies. Imagine showing up at the offices of Corp X and asking to be allowed to sit quietly in the back of the conference room while the engineers all met. Being able to take them right into the heart of a global project, uninvited, and armed only with a web browser, was a powerful statement; it says: "You don't need permission to be one of us."

I don't think that's as true as it used to be. You do need permission to be involved with video-only communities, where you literally have to be invited before taking part. Where most companies need to guard against leaks and breaches of many kinds, an open project/company needs to regularly audit to ensure that its process is porous enough for new things to get in from the outside, and for those on the inside to regularly encounter the public.

I don't know what the right balance is exactly, and as with most aspects of my life where I become unbalanced, the solution is to try swinging back in the other direction until I can find equilibrium. In 2015 I'm going to prefer modes of participation in Mozilla that aren't video-based. Maybe it will mean that those who want to work with me will be encouraged to consider doing the same, or maybe it will mean that I increasingly find myself on the outside. Knowing what I do of Mozilla, and its expressed commitment to working open, I'm hopeful that it will be the former. We'll see.

Planet Mozilla: Can curl avoid being in a future funnily named exploit that shakes the world?

During this year we’ve seen heartbleed and shellshock strike (and a few more big flaws that I’ll skip for now). Two really eye-opening recent vulnerabilities in projects with many similarities:

  1. Popular corner stones of open source stacks and internet servers
  2. Mostly run and maintained by volunteers
  3. Mature projects that have been around since “forever”
  4. Projects believed to be fairly stable and relatively trustworthy by now
  5. A myriad of features, switches and code that build on many platforms, with some parts of code only running on a rare few
  6. Written in C in a portable style

Does it sound like the curl project to you too? It does to me. Sure, this description also matches a slew of other projects but I lead the curl development so let me stay here and focus on this project.

Are we in jeopardy? I honestly don’t know, but I want to explain what we do in our project in order to minimize the risk and maximize our ability to find problems on our own before they become serious attack vectors somewhere!

previous flaws

There’s no secret that we have let security problems slip through at times. We’re right now working toward our 143rd release, during our roughly 16 years of lifetime. We have found and announced 28 security problems over the years. Looking at these found problems, it is clear that very few security problems are discovered quickly after introduction. Most of them linger around for several years until found and fixed. So, realistically speaking based on history: there are security bugs still in the code, and they have probably been present for a while already.

code reviews and code standards

We try to review all patches from people without push rights in the project. It would probably be a good idea to review all patches before they go in for real, but that just wouldn’t work with the (lack of) man power we have in the project while we at the same time want to develop curl, move it forward and introduce new things and features.

We maintain code standards and formatting to keep code easy to understand and follow. We keep individual commits smallish for easier review now or in the future.

test cases

As simple as it is, we test that the basic stuff works. We don’t and can’t test everything, but having test cases for most things gives us the confidence to change code when we see problems, as we then remain fairly sure things keep working the same way as long as the tests pass. In projects with much less test coverage, you become much more conservative about what you dare to change, and that also makes you more vulnerable.

We always want more test cases. We want to keep adding test cases whenever we add new features, and ideally we should also add new test cases when we fix bugs, so that we know we don’t reintroduce the same bug in the future.

static code analysis

We regularly scan our code base using static code analyzers. Both clang-analyzer and coverity are good tools, and they help us by pointing out code that looks wrong or suspicious. By making sure we have very few or no such flaws left in the code, we minimize the risk. A static code analyzer is better than run-time tools for cases where it can check code flows that are hard to reproduce in my local environment.


Valgrind is an awesome tool to detect memory problems at run-time: leaks, or just stupid uses of memory or related functions. We have our test suite automatically use valgrind when it runs tests, in case it is present, and it helps us make sure that all situations we test for are also error-free from valgrind’s point of view.


Building and testing curl on a plethora of platforms non-stop is also useful to make sure we don’t depend on behaviors of particular library implementations or non-standard features and more. Testing it all is basically the only way to make sure everything keeps working over the years while we continue to develop and fix bugs. We would of course be even better off with more platforms testing automatically, and with more developers keeping an eye on problems that show up there…

code complexity

Arguably, one of the best ways to avoid security flaws, and bugs in general, is to keep the source code as simple as possible. Complex functions need to be broken down into smaller functions that are possible to read and understand. A good tool for identifying functions that need this treatment is pmccabe.

essential third parties

curl and libcurl are usually built to use a whole bunch of third party libraries in order to provide all their functionality. To keep any of those uses from turning into a source of trouble, we must of course also participate in those projects, help them stay strong, and make sure that we use them in the proper way that doesn’t lead to any bad side effects.

You can help!

All this takes time, energy and system resources. Your contributions and help will be appreciated wherever among these tasks you can pitch in. We could do more of all this, more often and more thoroughly, if only more people were involved!

Internet Explorer blog: Classes in JavaScript: Exploring the Implementation in Chakra

The Windows 10 Technical Preview November build and RemoteIE include many changes and additions to Internet Explorer from the next edition of the JavaScript standard, ECMA-262 6th Edition (ES6). ECMAScript 6 will be a substantial advance for the JavaScript language, and currently the Technical Preview is the most compliant implementation. This post explores where ECMAScript is going and gives a peek into the implementation of a new language feature: ES6 classes.

ECMAScript Evolution to Meet the Advancing Web

The web has evolved substantially in the 5 years since ES5 was ratified. Apps today are becoming more complex and demanding. Microsoft, working with the ECMA working group - TC39, has been hard at work on the next substantial update to the language. The 6th edition of ECMA-262 (ES6) is perhaps the biggest update to the language in its history.

The order in which we implement ECMAScript proposals depends on a few principles, including whether a proposal is necessary to ensure interoperability with other engines, enables new types of apps, provides developers with compelling value beyond saving keystrokes, has a stable specification, has multiple reference implementations, and has a comprehensive test suite (called Test262) to ensure interoperability. Overall we do our best to make data-driven decisions and deliver features that will give the most value to developers. Let us know what you find most valuable on User Voice or Connect!

ES6 Classes

Many features in ES6 aim to reduce time writing code and increase expressiveness of common patterns in JavaScript. Because they do not add new fundamental capabilities to the runtime and essentially simplify the boilerplate code, these kinds of features are collectively referred to as ‘syntactic sugar’.

ES6 Classes in JavaScript is one such feature and remains one of the most commonly requested features on

To those of you familiar with object-oriented languages, classes should be a familiar concept. However classes in JavaScript are a little different from other languages. With ES5, class functionality is commonly implemented using functions, prototypes, and instances. For example:

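The code sample embedded in the original post no longer renders here. As a stand-in, this is a hedged sketch of the ES5 pattern being described, built from a constructor function, a prototype method, and an instance (the names are illustrative, not from the original gist):

```javascript
// ES5 "class": a constructor function plus prototype methods.
function Animal(name) {
  this.name = name;
}

// Methods shared by all instances live on the prototype.
Animal.prototype.speak = function () {
  return this.name + ' makes a noise.';
};

var dog = new Animal('Rex'); // an instance
console.log(dog.speak());    // "Rex makes a noise."
```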

By contrast, the ES6 version below is much more readable and concise. For programmers from a classical inheritance background, the functionality is easier to understand without in depth knowledge of how JavaScript's prototypal inheritance model works.

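This embedded sample is also missing; a plausible ES6 equivalent of the same kind of type (again with illustrative names, not the original gist's) might be:

```javascript
// The same type written with ES6 class syntax: the constructor
// and the prototype method are declared in one construct.
class Animal {
  constructor(name) {
    this.name = name;
  }
  speak() {
    return this.name + ' makes a noise.';
  }
}

const dog = new Animal('Rex');
console.log(dog.speak()); // "Rex makes a noise."
```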

Now that we’ve seen a simple example of what the syntax looks like, let’s have a look at the other syntactic features of ES6 classes.

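The original embedded example is not recoverable, but based on the description that follows, a sketch covering instance, static, getter and setter methods, plus extends and super (all names hypothetical), could look like:

```javascript
class Shape {
  constructor(width, height) {
    this.width = width;
    this.height = height;
  }
  area() {                 // instance method
    return this.width * this.height;
  }
  static describe() {      // static method, called on the class itself
    return 'A shape has a width and a height.';
  }
  get size() {             // getter
    return this.width + 'x' + this.height;
  }
  set size(value) {        // setter
    const parts = value.split('x');
    this.width = Number(parts[0]);
    this.height = Number(parts[1]);
  }
}

// Omitting the constructor generates a default one that merely
// forwards its arguments to the superclass.
class Square extends Shape {
  area() {
    return super.area();   // super reaches the superclass method
  }
}

const s = new Square(4, 4);
console.log(s.area());         // 16
console.log(s.size);           // "4x4"
s.size = '3x5';                // setter in action
console.log(s.area());         // 15
console.log(Shape.describe()); // "A shape has a width and a height."
```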

The example includes a basic class construct with instance, static, getter, and setter methods. A class constructor is a special method which can be customized or omitted. Omitting the constructor generates a default constructor which merely forwards its arguments to the superclass, as shown above. Classes provide inheritance through the extends keyword. A subclass can use the super keyword to access the superclass's properties and methods.

Implementation challenges

In a previous post, we explored the architecture of the Chakra engine in detail (see ‘Announcing key advances to JavaScript performance in Windows 10 Technical Preview’) using the diagram below. The parts requiring the most modification to support classes are highlighted in green.

Parts of the Chakra engine requiring the most modification to support ES6 Classes

When implementing new language features, syntactic sugar features often map directly to older language constructs, as shown in the first code example. Classes are no exception. Underneath, classes use the same function, prototype, and instance constructs that have always been available, and Chakra uses these constructs to implement classes.

Not all of the sugar maps to pre-existing language features, however. The super keyword is one example. super ended up taking longer than the rest of classes due to its complex usage scenarios. These uses are not always apparent to the end user, but part of the fun of being a compiler developer is to come up with the corner cases that break things. For example, the use of eval() in a class makes the implementation more complex:

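That embedded example is also gone. The discussion that follows refers to a Child class with a method() that uses super directly and a methodEval() that reaches super only through eval(); a hedged reconstruction (using a super.method() property access, which raises the same compile-time versus eval-time visibility issue the post describes, though the original gist may have used a super() call) might be:

```javascript
class Parent {
  method() {
    return 'Parent.method';
  }
}

class Child extends Parent {
  method() {
    // The parser can see this super use at compile time.
    return super.method();
  }
  methodEval() {
    // The super use is hidden inside eval(), so the engine
    // cannot see it until the eval actually executes.
    return eval('super.method()');
  }
}

const c = new Child();
console.log(c.method());     // "Parent.method"
console.log(c.methodEval()); // "Parent.method"
```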

In the example above, both Child methods require a reference to Parent to allow execution of super(). In the case of method(), the parser is able to detect the use of super() at compile time. Chakra marks method() as having a super keyword reference during parsing, so when it reaches bytecode generation it knows to generate special bytecode to make a reference to Parent available. At runtime, this reference becomes available as Chakra consumes the bytecode at the beginning of the function, performing checks on whether it is valid to use super at that location and point in time.

In the case of methodEval(), there is a call to eval() which drastically changes the implementation. Chakra knows this is a class method, but until it executes the eval(), it has no idea if there is a super reference, or if there will ever be a super reference. This poses a design problem that we felt had two options:

  1. Should Chakra maintain a reference to Parent that is available in case it is needed by super somewhere inside the eval(), or
  2. Should Chakra wait until it is somewhere in the eval() to fetch the reference?

The first option, pre-emptively loading super, adds unnecessary bytecode to the method which could adversely affect performance.

The second option, fetching the Parent reference when Chakra needs it, sounds simple at first. However, Chakra can’t predict the complexity of the eval(). For example, it could be several levels deep in function calls within the eval(), or perhaps several levels deep in nested eval()s. Finding the Parent reference at that point could be more costly than if it had been stashed away previously.

There isn’t necessarily a correct answer here, and the answer may change as we learn more about how programmers use super. In the current Technical Preview, we implemented super by pre-emptively loading the reference in situations where the trade-off in performance is justifiable. For example, using super in a subclass method is more likely to happen than in a top-level class method (regardless of whether the use is valid or invalid).

In the time since the release of the technical preview, the ES6 specification has been updated to allow super in more places. Being on the bleeding edge of implementation as standards evolve often means revisiting implementations, and the ES6 specification is no different.

Classes are a core part of the ES6 language feature set. We anticipate they will help new JavaScript programmers learn the language more rapidly when they have experience with other object oriented languages. We also anticipate that many seasoned JavaScript programmers will welcome the terseness.

Moving forward with ECMAScript

While ECMAScript 6 makes great progress, both the runtime capabilities and expressivity will continue to evolve. ECMAScript 7 should arrive more rapidly than any previous ECMAScript version because it and future editions of the standard will be moving to a more regular “train model”. In this model, proposals are built in isolation from one another and move through stages of ratification as they mature. Eventually, a proposal may make it on to a “train”, where the proposal is integrated with the other proposals on the train to form the final document that will be ratified. This model is useful for language standards because we get cohesive, complete specifications developed in an agile fashion that we don’t have to wait years for. It also gives some flexibility to make sure we get a design right. It’s a lot easier to take a bit more time when it means shipping in the specification next year rather than five years from now. For example, when the committee got substantive negative feedback on the subclassing built-ins machinery, it was relatively easy to pull it out of ES6 and get it ready for making the ES7 train. (You may also be interested to know that we were analyzing a subclassable built-ins implementation to complement classes, but as a result of the change we could not include it in this release.)

This model should also be much friendlier to JavaScript developers because new features come to the language yearly rather than every half decade. Also, browsers can pick up smaller chunks and begin implementing them sooner in the process, which should help get interoperable implementations more rapidly. The end result of this new process should be substantially increased pace of improvements to JavaScript.


Classes are available today in the current Windows 10 Technical Preview. We can’t wait to hear about how you will be using the feature in your ES6 code. We’re planning to deliver more articles in the future about other ES6 features. In the meantime, feel free to join the comment discussion, reach out on Twitter @IEDevChat, or on Connect.

Tom Care (@tcare_), Brian Terlson (@bterlson) & Suwei Chen

Chakra Team

Planet Mozilla: Stripe's AWS-Go and uploading to S3

Yesterday, I discovered Stripe's AWS-Go library, and the magic of auto-generated API clients (a fascinating topic that I'll have to investigate for MIG).

I took on the exercise of writing a simple file upload tool using aws-go. It was fairly easy to achieve, considering the complexity of AWS's APIs. I would have to evaluate aws-go further before recommending it as a comprehensive AWS interface, but so far it seems complete. Check out the project's documentation for details.

The source code is below. It reads credentials from ~/.awsgo:

$ cat ~/.awsgo
    [credentials]
    accesskey = "AKI...."
    secretkey = "mw0...."

It takes a file to upload as the only argument, and returns the URL where it is posted.

AWS-Go is not revolutionary compared to python & boto, but it benefits from Go's very clean approach to programming. And getting rid of install dependencies, pip, and python 2.6/2.7/3 hell is kinda nice!

package main

import (
	"fmt"
	"os"

	"code.google.com/p/gcfg"
	"github.com/stripe/aws-go/aws"
	"github.com/stripe/aws-go/gen/s3"
)

// conf takes an AWS configuration from a file in ~/.awsgo
// example:
// [credentials]
//    accesskey = "AKI...."
//    secretkey = "mw0...."
type conf struct {
	Credentials struct {
		AccessKey string
		SecretKey string
	}
}

func main() {
	var (
		err         error
		conf        conf
		bucket      string = "testawsgo" // change to your convenience
		fd          *os.File
		contenttype string = "binary/octet-stream"
	)

	// obtain credentials from ~/.awsgo
	credfile := os.Getenv("HOME") + "/.awsgo"
	_, err = os.Stat(credfile)
	if err != nil {
		fmt.Println("Error: missing credentials file in ~/.awsgo")
		os.Exit(1)
	}
	err = gcfg.ReadFileInto(&conf, credfile)
	if err != nil {
		fmt.Printf("Error: %v\n", err)
		os.Exit(1)
	}

	// create a new client to S3 api
	creds := aws.Creds(conf.Credentials.AccessKey, conf.Credentials.SecretKey, "")
	cli := s3.New(creds, "us-east-1", nil)

	// open the file to upload
	if len(os.Args) != 2 {
		fmt.Printf("Usage: %s <inputfile>\n", os.Args[0])
		os.Exit(1)
	}
	fi, err := os.Stat(os.Args[1])
	if err != nil {
		fmt.Printf("Error: no input file found in '%s'\n", os.Args[1])
		os.Exit(1)
	}
	fd, err = os.Open(os.Args[1])
	if err != nil {
		fmt.Printf("Error: %v\n", err)
		os.Exit(1)
	}
	defer fd.Close()

	// create a bucket upload request and send
	objectreq := s3.PutObjectRequest{
		ACL:           aws.String("public-read"),
		Bucket:        aws.String(bucket),
		Body:          fd,
		ContentLength: aws.Integer(int(fi.Size())),
		ContentType:   aws.String(contenttype),
		Key:           aws.String(fi.Name()),
	}
	_, err = cli.PutObject(&objectreq)
	if err != nil {
		fmt.Printf("Error: %v\n", err)
	} else {
		fmt.Printf("%s\n", "https://s3.amazonaws.com/"+bucket+"/"+fi.Name())
	}

	// list the content of the bucket
	listreq := s3.ListObjectsRequest{
		Bucket: aws.StringValue(&bucket),
	}
	listresp, err := cli.ListObjects(&listreq)
	if err != nil {
		fmt.Printf("Error: %v\n", err)
	} else {
		fmt.Printf("Content of bucket '%s': %d files\n", bucket, len(listresp.Contents))
		for _, obj := range listresp.Contents {
			fmt.Println("-", *obj.Key)
		}
	}
}

Planet Mozilla: Let's make a Firefox Extension, the painless way

Ever had a thing you really wanted to customise about Firefox, but you couldn't because it wasn't in any regular menu, advanced menu, or about:config?

For instance, you want to be able to delete elements on a page for peace of mind from the context menu. How the heck do you do that? Well, with the publication of the new node-based jpm, the answer to that question is "pretty dang simply"...

Let's make our own Firefox extension with a "Delete element" option added to the context menu:

a screenshot of the Firefox page context menu with a 'delete element' option

We're going to make that happen in five steps.

  1. Install jpm -- in your terminal simply run: npm install -g jpm (make sure you have node.js installed) and done (this is mostly prerequisite to developing an extension, so you only have to do this once, and then never again. For future extensions, you start at step 2!)
  2. Create a dir for working on your extension wherever you like, navigate to it in the terminal and run: jpm init to set up the standard files necessary to build your extension. Good news: it's very few files!
  3. Edit the index.js file that command generated, writing whatever code you need to do what you want to get done,
  4. Turn your code into an .xpi extension by running: jpm xpi,
  5. Install the extension by opening the generated .xpi file with Firefox

Of course, step (3) is the part that requires some effort, but let's run through this together. We're going to pretty much copy/paste the code straight from the context menu API documentation:

      // we need to make sure we have a hook into "things" we click on:
  1:  var self = require("sdk/self");

      // and we'll be using the context menu, so let's make sure we can:
  2:  var contextMenu = require("sdk/context-menu");

      // let's add a menu item!
  3:  var menuItem = contextMenu.Item({
        // the label is pretty obvious...
  4:    label: "Delete Element",

        // the context tells Firefox which things should have this in their context
        // menu, as there are quite a few elements that get "their own" menu,
        // like "the page" vs "an image" vs "a link"... We pretty much want
        // everything on a page, so we make that happen:
  5:    context: contextMenu.PredicateContext(function(data) { return true; }),

        // and finally the script that runs when we select the option. Delete!
  6:    contentScript: 'self.on("click", function (node, data) { node.outerHTML = ""; });'
  7:  });

The only changes here are that we want "delete" for everything, so the context is simply "for anything that the context menu opens up on, consider that a valid context for our custom script" (which we do by using the widest context possible on line 5), and of course the script itself is different because we want to delete nodes (line 6).

The contentScript property is a string, so we're a little restricted in what we can do without all manner of fancy postMessages, but thankfully we don't need them: the add-on mechanism will always call the contentScript function with two arguments, "node" and "data", and the "node" argument is simply the HTML element you clicked on, which is what we want to delete. So we do! We don't even try to be clever here, we simply set the element's .outerHTML property to an empty string, and that makes it vanish from the page.

If you expected more work, then good news: there isn't any, we're already done! Seriously: run jpm run yourself to test your extension, and after verifying that it indeed gives you the new "Delete element" option in the context menu and deletes nodes when used, move on to steps (4) and (5) for the ultimate control of your browser.

Because here's the most important part: the freedom to control your online experience, and Firefox, go hand in hand.

Planet Mozilla: Spotlight on Public Knowledge: A Ford-Mozilla Open Web Fellow Host

(This is the fourth in our series spotlighting host organizations for the 2015 Ford-Mozilla Open Web Fellowship. For years, Public Knowledge has been at the forefront of fighting for citizens and informing complex telecommunications policy to protect people. Working at Public Knowledge, the Fellow will be at the center of emerging policy that will shape the Internet as we know it. Apply to be a Ford-Mozilla Open Web Fellow and use your tech skills at Public Knowledge to protect the Web.)

Spotlight on Public Knowledge: A Ford-Mozilla Open Web Fellow Host
by Shiva Stella, Communications Manager of Public Knowledge

This year has been especially intense for policy advocates passionate about protecting a free and open internet, user protections, and our digital rights. Make no mistake: From net neutrality to the Comcast/Time Warner Cable merger, policy makers will continue to have an outsized influence over the web.

In order to enhance our advocacy efforts, Public Knowledge is hosting a Ford-Mozilla Open Web Fellow. We are looking for a leader with technical skills and drive to defend the internet, focusing on fair-use copyright and consumer protections. There’s a lot of important work to be done, and we know the public could use your help.


Public Knowledge works steadfastly in the telecommunications and digital rights space. Our goal is to inform the public of key policies that impact and limit a wide range of technology and telecom users. Whether you’re the child first responders fail to locate accurately because you dial 911 from a cell phone or the small business owner who can’t afford to “buy into” the internet “fast lane,” these policies affect your digital rights – including the ability to access, use and own communications tools like your set-top box (which you currently lease forever from your cable company, by the way) and your cell phone (which your carrier might argue can’t be used on a competing network due to copyright law).

There is no doubt that public policy impacts people’s lives, and Public Knowledge is advocating for the public interest at a critical time when special interests are attempting to shape policy that benefits them at our cost or that overlooks an issue’s complexity.

Indeed, in this interconnected world, the right policy outcome isn’t always immediately clear. Location tracking, for example, can impact people’s sense of privacy; and yet, when deployed in the right way, can lead to first responders swiftly locating someone calling 911 from a mobile device. Public Knowledge sifts through the research and makes sure consumers have a seat at the table when these issues are decided.

Public policy in this area can also impact the broader economy, and raises larger questions: Should we have an internet with a “fast lane“ for the relatively few companies that can afford it, and a slow lane for the rest of us? What would be the impact on innovation and small business if we erase net neutrality as we know it?

The answers to these questions require a community of leaders to advocate for policies that serve the public interest. We need to state in clear language the impact of ill-informed policies and how they affect people’s digital rights —including the ability to access, use and own communications tools, as well as the ability to create and innovate.

Even as the U.S. Federal Communications Commission reviews millions of net neutrality comments and considers approving huge mergers that put consumers at risk, the cable industry is busy hijacking satellite bills (STAVRA), stealthily slipping “pro-cable” provisions into legislation that must be passed so 1.5 million satellite subscribers may continue receiving their (non-cable!) service. Public Knowledge shines light on these policies to prevent them from harming innovation or jeopardizing our creative and connected future. To this end we advocate for an open internet and public access to affordable technologies and creative works, engaging policy makers and the public in key policy decisions that affect us all.

Let us be clear: private interests are hoping you won’t notice or just don’t care about these issues. We’re betting that’s not the case. Please apply today to join the Public Knowledge team as a Ford-Mozilla Open Web Fellow to defend the internet we love.

Apply to be a Ford-Mozilla Open Web Fellow. Application deadline for the 2015 Fellowship is December 31, 2014.

Planet WebKit: Philippe Normand: Web Engines Hackfest 2014

Last week I attended the Web Engines Hackfest. The event was sponsored by Igalia (also hosting the event), Adobe and Collabora.

As usual I spent most of the time working on the WebKitGTK+ GStreamer backend, and Sebastian Dröge kindly joined and helped out quite a bit; make sure to read his post about the event!

We first worked on the WebAudio GStreamer backend, Sebastian cleaned up various parts of the code, including the playback pipeline and the source element we use to bridge the WebCore AudioBus with the playback pipeline. On my side I finished the AudioSourceProvider patch that was abandoned for a few months (years) in Bugzilla. It’s an interesting feature to have so that web apps can use the WebAudio API with raw audio coming from Media elements.

I also hacked on GstGL support for video rendering. It’s quite interesting to be able to share the GL context of WebKit with GStreamer! The patch is not ready yet for landing, but thanks to the reviews from Sebastian, Matthew Waters and Julien Isorce I’ll improve it and hopefully commit it soon in WebKit ToT.

Sebastian also worked on Media Source Extensions support. We had a very basic, non-working, backend that required… a rewrite, basically :) I hope we will have this reworked backend soon in trunk. Sebastian already has it working on Youtube!

The event was interesting in general, with discussions about rendering engines, rendering and JavaScript.

Planet Mozilla: FirefoxOS 3 Ideas: Hack The Phone Call

People are brainstorming ideas for FirefoxOS 3, and how it can be more user-centred. Here’s one:

There should be ways for apps to transparently be hooked into the voice call creation and reception process. I want to use the standard dialer and address book that I’m used to (and not have to use replacements written by particular companies or services), and still e.g.:

  • My phone company can write a Firefox OS extension (like TU Go on O2) such that when I’m on Wifi, all calls transparently use that
  • SIP or WebRTC contacts appear in the standard contacts app, but when I press “Call”, it uses the right technology to reach them
  • Incoming calls can come over VoIP, the phone network or any other way and they all look the same when ringing
  • When I dial, I can configure rules such that calls to certain prefixes/countries/numbers transparently use a dial-through operator, or VoIP, or a particular SIM
  • If a person has 3 possible contact methods, it tries them in a defined order, or all simultaneously, or best quality first, or whatever I want

These functions don’t have to be there by default; what I’m arguing for is the necessary hooks so that apps can add them – an app from your carrier, an app from your SIP provider, an app from a dial-through provider, or just a generic app someone writes to define call routing rules. But the key point is, you don’t have to use a new dialer or address book to use these features – they can be UI-less (at least when not explicitly configuring them.)

In other words, I want to give control over the phone call back to the user. At the moment, doing SIP on Android requires a new app. TU Go requires a new app. There’s no way to say “for all international calls, when I’m in the UK, use this dial-through operator”. I don’t have a dual-SIM Android phone, so I’m not sure if it’s possible on Android to say “all calls to this person use SIM X” or “all calls to this network (defined by certain number prefixes) use SIM Y”. But anyway, all these things should be possible on FirefoxOS 3. They may not be popular with carriers, because they will all save the user money. But if we are being user-centric, we should do them.
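
To make that concrete, here is a hypothetical sketch, in plain JavaScript, of the kind of ordered rule table a call-routing app could register. Every name in it is invented for illustration; no such Firefox OS API exists today, which is exactly the point of the proposal.

```javascript
// Hypothetical rule table: none of these names are real Firefox OS APIs.
// A routing app could register ordered rules; first match wins.
const routingRules = [
  { note: "UK numbers via dial-through", matches: n => n.startsWith("+44"), route: "dial-through" },
  { note: "other international via VoIP", matches: n => n.startsWith("+"),   route: "voip" },
  { note: "everything else on SIM 1",     matches: () => true,               route: "sim1" },
];

function pickRoute(number) {
  // The catch-all rule at the end guarantees a result.
  return routingRules.find(rule => rule.matches(number)).route;
}

console.log(pickRoute("+44 20 7946 0000")); // dial-through
console.log(pickRoute("+1 555 0100"));      // voip
console.log(pickRoute("555 0100"));         // sim1
```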

Planet Mozilla: Give a little

<figure class="wp-caption alignleft" id="attachment_2927" style="width: 300px;">Give by Time Green (CC-BY-SA)</figure>

The year is coming to an end and I would encourage you all to consider making a tax-deductible donation (If you live in the U.S.) to one of the following great non-profits:


The Mozilla Foundation

The Mozilla Foundation is a non-profit organization that promotes openness, innovation and participation on the Internet. We promote the values of an open Internet to the broader world. Mozilla is best known for the Firefox browser, but we advance our mission through other software projects, grants and engagement and education efforts.


Electronic Frontier Foundation

The Electronic Frontier Foundation is the leading nonprofit organization defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development.


ACLU

The ACLU is our nation’s guardian of liberty, working daily in courts, legislatures and communities to defend and preserve the individual rights and liberties that the Constitution and laws of the United States guarantee everyone in this country.

Wikimedia Foundation

The Wikimedia Foundation, Inc. is a nonprofit charitable organization dedicated to encouraging the growth, development and distribution of free, multilingual, educational content, and to providing the full content of these wiki-based projects to the public free of charge. The Wikimedia Foundation operates some of the largest collaboratively edited reference projects in the world, including Wikipedia, a top-ten internet property.

Feeding America

Feeding America is committed to helping people in need, but we can’t do it without you. If you believe that no one should go hungry in America, take the pledge to help solve hunger.

Action Against Hunger

ACF International, a global humanitarian organization committed to ending world hunger, works to save the lives of malnourished children while providing communities with access to safe water and sustainable solutions to hunger.

These six non-profits are just a few of the many causes you could support, but they in particular play a pivotal role in protecting the internet, defending liberties, educating people around the globe, and helping reduce hunger.

Even if you cannot support one of these causes, consider giving this post a share to add visibility to your friends and family and help support these causes in the new year!


Planet Mozilla: How is the Jetpack/Add-on SDK used?

This is a follow-up post to my What is the Jetpack/Add-on SDK post, in which I wish to discuss the ways Jetpack/Add-on SDK is currently being used (that I know about).

Firefox DevTools

The first obvious place to mention is Firefox DevTools. Not long after the Add-on SDK team was merged with the Firefox DevTools team, Dave Camp began a process of molding the Firefox DevTools code to use the CommonJS module structure supported by the Jetpack SDK loader. Additionally, new DevTools features have been prototyped with cfx/jpm (the SDK’s associated CLIs), such as Valence (aka the Firefox Tools Adapter) (source code), Firebug Next (source code), and the WebIDE (source code).

Firefox OS Simulator

The Firefox OS Simulator (source code) was built using the Jetpack/Add-on SDK. One core feature it utilized was the subprocess module (now called child_process for NodeJS parity); it even used third-party modules to add UI like toolbar buttons and menu items. Finally, it used the SDK test framework.

Click here to find the Firefox OS Simulator on AMO

Firefox Testpilot

Firefox Test Pilot is an opt-in program through which feedback on things like features is collected. With Test Pilot, Firefox can experiment with new features (alone or in an A/B scenario) to see if and how they are used, how much they are used, and to determine whether or not they need more work.

You can explore the experiments here


Australis

Australis was the UI redesign project codename for Firefox. It was prototyped with the Jetpack SDK; the source code is here

New Tab Tiles

The new tab page, which uses tiles (some of which are ads), was also prototyped with the Jetpack/Add-on SDK; the source code is here

Mozilla Labs

Lightbeam (aka Collusion)

Before the Mozilla Labs project ended, parts of the team used the Jetpack/Add-on SDK to develop ideas for Firefox. One highlight was Lightbeam (formerly known as Collusion) (source code), which was the topic of a TED talk that has now been viewed more than 1.5 million times.

Click here to find Lightbeam on AMO


Prospector

There was also a subproject of Mozilla Labs called Prospector, which also used the Jetpack/Add-on SDK to build feature prototypes such as:

Click here for the full AMO list

Old School Add-ons

  • Scriptish, a fork of Greasemonkey, uses the Jetpack/Add-on SDK loader even though it is still technically an old-school add-on.

A few old-school add-ons, such as Firebug 2, use the DevTools loader.

New School Add-ons

Mozilla Add-ons
Community Add-ons

Finally, I’d like to mention some of the add-ons that were developed outside of Mozilla, which was one of the primary goals that led to the project’s conception. These are some of my favorites:

Click here to see the full AMO list

  • Note: user counts were taken on Oct 15th 2014 and the numbers will obviously change over time

Addons.Mozilla.Org (AMO)

It should be no surprise that there is a fast track on AMO for extensions built with the Add-on SDK: there is less code to review, and the reviews are generally easier. This is good news for the Mozilla community in three ways: more people are developing add-ons, review times are shorter than they would otherwise be, and reviewing add-ons is easier, which results in more reviewers.


The Jetpack/Add-on SDK is used in, adds value to, and is an essential part of the Mozilla mission and community in many ways that are not obvious. Furthermore, all of these use cases now depend on the Jetpack/Add-on SDK and must be factored into the team’s decision making, because bugs come from all of these important sources. So the team can no longer focus merely on new-school add-on metrics, imho.

Next I want to describe areas the project could work on in the future.

Related Links

Planet Mozilla: Mozlandia - Arrival

Portland. The three words that come to mind are overwhelmed, cold, and exhilarating. Getting there was a right pain, I’d have to admit. Though, flying around the US the weekend after Black Friday isn’t the best idea anyway. According to my rough calculations, it took about 25 hours from take off in Delhi to wheels down in Portland. That’s a heck a lot of time on planes and at airports. But hey, I’ve been doing this for weeks in a row at this point.

At the airport, I ran into people holding up the Mozilla board. As I waited for the shuttle, I was very happy to run into Luke, from the MDN team. We met at the summit and he was a familiar face. We were chatting all the way to the hotel about civic hacking.

This work week is the most exciting Mozilla event that I’ve attended. I’m finally getting to meet a lot of people I know and renewing friendships from the last few events. I started contributing to Mozilla by contributing to the Webdev team. My secret plan at this work week was to meet all the folks from the old Webdev team in person. I’ve known them for more than 3 years and never quite managed to meet everyone in person.

After a quick shower, I decided to step out to the Mozilla PDX. According to Google Maps, it was a quick walk away and I was trying not to sleep all day despite my body trying to convince me it was a good idea. At the office, I met Fred’s team and we sat around talking for a while. It was good to meet Christie again too! That’s when a wave of exhaustion hit. I didn’t see it coming. Suddenly, I felt sluggish and a warm bed seemed very tempting. After lunch with Jen, Sole, and Matt, I quickly retired to bed.

Sole and the Whale

When I got down after the nap, there was a small group headed to the opening event. This was good, because I got very confused with Google Maps (paper maps were much more helpful).

Whoa, people overload. I walked around a few rounds meeting lots of people. It was fun running into a lot of people from IRC in the flesh. I enjoyed meeting the folks from the Auckland office (I often back them out :P). And I finally met Laura and her team. For change, I’m visiting bkero’s town this time instead of him visiting mine ;)

The crowd

The rest of the evening is a bit of a blur. Eventually, I was exhausted and walked back to the hotel for a good night’s sleep before the fun really started!

Planet Mozilla: Self Examination

A few weeks ago we had the Mozilla Mozlandia meet up in Portland. I had a few things on my agenda going into that meeting. My biggest goal was to critically examine the project my team and I have been working on for almost two years.

That project is Marketplace Payments, which we provide through the Firefox Marketplace for developers. We don't limit what kind of payment system you use in Web Apps, unlike Google or Apple.

In Mozlandia, I was arguing (along with some colleagues) that there really is little point in working on this much anymore. There are many reasons for this, but here's the high level:

  • Providing a payments service that competes against every other web based payment service in existence is outside of our core goals

  • We can't actually compete against every other web based payment service without significant investment

  • Developer uptake doesn't support further investment in the project.

There was mostly agreement on this, so we've agreed to complete our existing work on it and then leave it as it is for a while. We'll watch the metrics, see what happens, and make some decisions based on that.

But really the details of this are not that important. What I believe is really, really important is the ability to critically examine your job and projects and examine their worth.

What normally happens is that you get a group of people and tell them to work on project X. They will iterate through features and complete features. And repeat and keep going. And if you don't stop at some point and critically examine what is going on, it will keep repeating. People will find new features, new enhancements, new areas to add to the project. Just as they have been trained to do so. And the project will keep growing.

That's a perfectly normal thing for a team to do. It's harder to call a project done, the features complete and realize that there might be an end.

Normally that happens externally. Sometimes it's done in a positive way, sometimes negatively. In the latter case, people get upset and recriminations and accusations fly. It's not a fun time.

But being able to step aside and declare the project done internally can be hard for one main reason: people fear for their job.

That's what some people said to me in Mozlandia "Andy you've just talked yourself out of a job" or "You've just thrown yourself under a bus".

Maybe, but so be it. I have no doubt that there's important stuff to be done at Mozilla and that my awesome team will have plenty to do.

Right, next project.

Update: Marketplace Payments are still there and we are completing the last projects we have for them. But we aren't going to be doing development beyond that on them for a while. Let's see what the data shows.

Planet Mozilla: Bittorrent's Project Maelstrom is 'Firecloud' on steroids

Earlier this week, BitTorrent, Inc. announced Project Maelstrom. The idea is to apply the bittorrent technologies and approaches to more of the web.

Project Maelstrom

Note: if you can’t read the text in the image, it says: “This is a webpage powered by 397 people + You. Not a central server.” So. Much. Win.

The blog post announcing the project doesn’t have lots of details, but a follow-up PC World article includes an interview with a couple of the people behind it.

I think the key thing comes in this response from product manager Rob Velasquez:

We support normal web browsing via HTTP/S. We only add the additional support of being able to browse the distributed web via torrents

This excites me for a couple of reasons. First, I’ve thought on-and-off for years about how to build a website that’s untakedownable. I’ve explored DNS based on the technology powering Bitcoin, experimented with the PirateBay’s now-defunct blogging platform Baywords, and explored the dark underbelly of the web with sites available only through Tor.

Second, Vinay Gupta and I almost managed to get a project off the ground called Firecloud. This would have used a combination of interesting technologies such as WebRTC, HTML5 local storage and DHT to provide distributed website hosting through a Firefox add-on.

I really, really hope that BitTorrent turn this into a reality. I’d love to be able to host my website as a torrent. :-D

Update: People pay more attention to products than technologies, but I’d love to see Webtorrent get more love/attention/exposure.

Comments? Questions? Email me:

