Planet Mozilla: RSS description testing

My RSS generator wasn't adding article bodies to the RSS, which caused some problems for certain RSS readers. Let's see if this fixes it.

Planet Mozilla: Proxy Connections over TLS - Firefox 33

There have been a bunch of interesting developments over the past few months in Mozilla Platform Networking that will be news to some folks. I've been remiss in not noting them here. I'll start with the proxying over TLS feature. It landed as part of Firefox 33, which is the current release.

This feature is from bug 378637 and is sometimes known as HTTPS proxying. I find that naming a bit ambiguous - the feature is about connecting to your proxy server over HTTPS, but it supports proxying for both http:// and https:// resources (as well as ftp://, ws://, and wss:// for that matter). https:// transactions are tunneled via end to end TLS through the proxy using the CONNECT method, in addition to the connection to the proxy itself being made over a separate TLS session. For https:// and wss:// that means you actually have end to end TLS wrapped inside a second TLS connection between the client and the proxy.

There are some obvious and non-obvious advantages here - but proxying over TLS is strictly better than traditional plaintext proxying. One obvious reason is that it provides authentication of your proxy choice - if you have defined a proxy then you're placing an extreme amount of trust in that intermediary. It's nice to know via TLS authentication that you're really talking to the right device.

Also, the communication between you and the proxy is kept confidential, which helps your privacy with respect to observers of the link between client and proxy, though this is not end to end if you're not accessing an https:// resource. Proxying over TLS connections also keeps any proxy-specific credentials strictly confidential. There is an advantage even when accessing https:// resources through a proxy tunnel - encrypting the client to proxy hop conceals some information (at least for that hop) that https:// normally leaks, such as the hostname through SNI and the server IP address.

Somewhat less obviously, HTTPS proxying is a prerequisite to proxying via SPDY or HTTP/2. These multiplexed protocols are extremely well suited for connecting to a proxy because a large fraction (often 100%) of a client's transactions are funneled through the same proxy, and therefore only one TCP session is required when using a prioritized multiplexing protocol. When using HTTP/1 a large number of connections are required to avoid head of line blocking, and it is difficult to meaningfully manage them to reflect prioritization. When connecting to remote proxies (i.e. those with a high latency, such as those in the cloud) this becomes an even more important advantage, as the handshakes that are avoided are especially slow in that environment.

This multiplexing can really warp the old noodle to think about after a while - especially if you have multiple spdy/h2 sessions tunneled inside a spdy/h2 connection to the proxy. That can result in the top level multiplexing several streams with http:// transactions served by the proxy as well as connect streams to multiple origins that each contain their own end to end spdy sessions carrying multiple https:// transactions.

To utilize HTTPS proxying just return the HTTPS proxy type from your FindProxyForURL() PAC function (instead of the traditional HTTP type). This is compatible with Google's Chrome, which has a similar feature.

function FindProxyForURL(url, host) {
  // Send plain http:// traffic through the proxy, connecting to the proxy
  // itself over TLS on port 443; everything else goes direct.
  if (url.substring(0, 7) == "http://") {
    return "HTTPS proxy.mydomain.net:443;";
  }
  return "DIRECT;";
}
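
For comparison, here's a hypothetical variant (not from the post) that routes all traffic, including https:// resources, through the TLS-speaking proxy, falling back to a direct connection if the proxy can't be reached:

function FindProxyForURL(url, host) {
  // Everything goes via the proxy over a TLS connection; PAC falls back
  // to DIRECT if the proxy cannot be reached.
  return "HTTPS proxy.mydomain.net:443; DIRECT";
}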


Squid supports HTTP/1 HTTPS proxying. SPDY proxying can be done via Ilya's node.js based spdy-proxy. nghttp can be used for building HTTP/2 proxying solutions (H2 is not yet enabled by default on Firefox release channels - see the about:config preferences network.http.spdy.enabled.http2 and network.http.spdy.enabled.http2draft to enable some version of it early). There are no doubt other proxies with appropriate support too.

If you need to add a TOFU exception for use of your proxy it cannot be done in proxy mode. Disable proxying, connect to the proxy host and port directly from the location bar and add the exception. Then enable proxying and the certificate exception will be honored. Obviously, your authentication guarantee will be better if you use a normal WebPKI validated certificate.

Planet Mozilla: “Invest in the future, build for the web!”, take 2, at OSOM

I am right now in Cluj-Napoca, in Romania, for OSOM.ro, a small, totally non-profit, volunteer-organised conference. I gave an updated, shorter, revised version of the talk I gave in Amsterdam last June. As usual, here are the slides and the source for the slides.

It is more or less the same, but better, and I also omitted some sections and spoke a bit about Firefox Developer Edition.

Also I was wearing this Fox-themed sweater which was imbuing me with special powers for sure:

fox sweater

(I found it at H&M last Saturday; there are more animals if foxes aren’t your thing).

There were some good discussions about open source per se, community building and growing. And no, talks were not recorded.

I feel a sort of strange emptiness now, as this has been my last talk for the year, but it won’t be long until other commitments fill that vacuum. Like MozLandia—by this time next week I’ll be travelling to, or already in, Portland, for our work week. And when I’m back I plan to gradually slide into a downward spiral of idleness. At least until 2015.

Looking forward to meeting some mozillians I haven’t met yet, and also visiting Ground Kontrol again and exploring new coffee shops when we have a break in Portland, though :-)


Planet Mozilla: 'Card Not Formatted' Error on Pentax Cameras with Mac OSX Card Reader

With some 64GB SDHC and SDXC cards on Pentax (and possibly other) cameras, you might get a 'Card Not Formatted' error. It may happen if you take some shots, plug the SD card into your Mac's card reader, upload the shots, and then unplug it. I've seen the error on my K30 and K3. It's not an issue with the camera or the card, though.

The issue is with how you unplug it. With some SD cards on OSX, the card has to be properly ejected rather than straight-up unplugged, or else it'll be left in some sort of weirdly formatted state. That may be obvious, but I never ran into issues unplugging cards before.

If you hit the error, you don't have to reformat the card. Simply plug it back into your machine and eject it properly, and everything will be torn down correctly for the card to be usable again.

Planet Mozilla: Test Drive the New Headless Try Repository

Mercurial and Git both experience scaling pains as the number of heads in a repository approaches infinity. Operations like push and pull slow to a crawl and everyone gets frustrated.

This is the problem Mozilla's Try repository has been dealing with for years. We know the solution doesn't scale. But we've been content kicking the can by resetting the repository (blowing away data) to make the symptoms temporarily go away.

One of my official goals is to ship a scalable Try solution by the end of 2014.

Today, I believe I finally have enough code cobbled together to produce a working concept. And I could use your help testing it.

I would like people to push their Try, code review, and other miscellaneous heads to a special repository. To do this:

$ hg push -r . -f ssh://hg@hg.gregoryszorc.com/gecko-headless

That is:

  • Consider the changeset belonging to the working copy
  • Allow the creation of new heads
  • Send it to the gecko-headless repo on hg.gregoryszorc.com using SSH

Here's what's happening.

I have deployed a special repository to my personal server that I believe will behave very similarly to the final solution.

When you push to this repository, instead of your changesets being applied directly to the repository, it siphons them off to a Mercurial bundle. It then saves this bundle somewhere along with some metadata describing what is inside.

When you run hg pull -r on that repository and ask for a changeset that exists in the bundle, the server does some magic and returns data from the bundle file.

Things this repository doesn't do:

  • This repository will not actually send changesets to Try for you.
  • You cannot hg pull or hg clone the repository and get all of the commits from bundles. This isn't a goal. It will likely never be supported.
  • We do not yet record a pushlog entry for pushes to the repository.
  • The hgweb HTML interface does not yet handle commits that only exist in bundles. People want this to work. It will eventually work.
  • Pulling from the repository over HTTP with a vanilla Mercurial install may not preserve phase data.

The purpose of this experiment is to expose the repository to some actual traffic patterns so I can see what's going on and get a feel for real-world performance, variability, bugs, etc. I plan to do all of this in the testing environment. But I'd like some real-world use on the actual Firefox repository to give me peace of mind.

Please report any issues directly to me. Leave a comment here. Ping me on IRC. Send me an email. etc.

Update 2014-11-21: People discovered a bug with pushed changesets accidentally being advanced to the public phase, despite the repository being non-publishing. I have fixed the issue. But you must now push to the repository over SSH, not HTTP.

Planet Mozilla: Flame Distribution Update

About three weeks ago, I ran out of Flame inventory for Mozilla employees and key volunteer contributors. The new order of Flames is arriving in Mountain View late today (Friday) and I’ll be working some over the weekend, but mostly Monday to deliver on the various orders you all have placed with me through email and other arrangements.

If you contacted me for a Flame or a batch of Flames, expect an email update in the next few days with information about shipping or pick-up locations and times. Thanks for your patience these last few weeks. We should not face any more Flame shortages like this going forward.

Planet Mozilla: Townhall, not Shopping Mall! Community, making, and the future of the Internet

I presented a version of this talk at the 2014 Futurebook Conference in London, England. They also kindly featured me in the program. Thank you to The Bookseller for a wonderful conference filled with innovation and intelligent people!

A few days ago, I was in the Bodleian Library at Oxford University, often considered the most beautiful library in the world. My enthusiastic guide told the following story:

After the Reformation (when all the books in Oxford were burned), Sir Thomas Bodley decided to create a place where people could go and access all the world’s information at their fingertips, for free.

“What does that sound like?” she asked. “…the Internet?”

While this is a lovely conceit, the part of the story that resonated with me for this talk is the other big change that Bodley made, which was to work with publishers, who were largely a monopoly at that point, to fill his library for free by turning the library into a copyright library. While this seemed antithetical to the ways that publishers worked, in giving a copy of their very expensive books away, they left an indelible and permanent mark on the face of human knowledge. It was not only preservation, but self-preservation.

Bodley was what people nowadays would probably call “an innovator” and maybe even in the parlance of my field, a “community manager.”

By thinking outside of the scheme of how publishing works, he joined together with a group of skeptics and created one of the greatest knowledge repositories in the world, one that still exists 700 years later. This speaks to a few issues:

Sharing economies, community, and publishing should and do go hand in hand and have since the birth of libraries. By stepping outside of traditional models, you are creating a world filled with limitless knowledge and crafting it in new and unexpected ways.

The bound manuscript is one of the most enduring technologies. This story remains relevant because books are still books and people are still reading them.

At the same time, things are definitely changing. For the most part, books and manuscripts have been pretty much identifiable as books and manuscripts for the past 1000 years.

But what if I were to give Google Maps to a 16th Century Map Maker? Or what if I were to show Joseph Pulitzer Medium? Or what if I were to hand Gutenberg a Kindle? Or Project Gutenberg for that matter? What if I were to explain to Thomas Bodley how I shared the new Lena Dunham book with a friend by sending her the file instead of actually handing her the physical book? What if I were to try to explain Lena Dunham?

These innovations have all taken place within the last twenty years, and I would argue that we haven’t even scratched the surface in terms of the innovations that are to come.

We need to accept that the future of the printed word may vary from words on paper to an ereader or computer in 500 years, but I want to emphasize that in the 500 years to come, it will more likely vary from the ereader to a giant question mark.

International literacy rates have risen rapidly over the past 100 years and companies are scrambling to be the first to reach what they call “developing markets” in terms of connectivity. In the vein of Mark Surman’s talk at the Mozilla Festival this year, I will instead call these economies post-colonial economies.

Because we (as people of the book) are fundamentally idealists who believe that the printed word can change lives, we need to be engaged with rethinking the printed word in a way that recognizes power structures and does not settle for the limited choices that the corporate Internet provides (think Facebook vs WhatsApp). This is not a panacea to fix the world’s ills.

In the Atlantic last year, Phil Nichols wrote an excellent piece that paralleled Web literacy and early 20th century literacy movements. The dualities between “connected” and “non-connected,” he writes, impose the same kinds of binaries and blind cure-all for social ills that the “literacy” movement imposed in the early 20th century. In equating “connectedness” with opportunity, we are “hiding an ideology that is rooted in social control.”

Surman, who is director of the Mozilla Foundation, claims that the Web, which had so much potential to become a free and open virtual meeting place for communities, has started to resemble a shopping mall. While I can go there and meet with my friends, it’s still controlled by cameras that are watching my every move and its sole motive is to get me to buy things.

85 percent of North America is connected to the Internet and 40 percent of the world is connected. Connectivity increased at a rate of 676% in the past 13 years. Studies show that literacy and connectivity go hand in hand.

How do you envision a fully connected world? How do you envision a fully literate world? How can we empower a new generation of connected communities to become learners rather than consumers?

I’m not one of these technology nuts who’s going to argue that books are going to somehow leave their containers and become networked floating apparatuses, and I’m not going to argue that the ereader is a significantly different vessel than the physical book.

I’m also not going to argue that we’re going to have a world of people who are only Web literate and not reading books in twenty years. To make any kind of future prediction would be a false prophecy, elitist, and perhaps dangerous.

Although I don’t know what the printed word will look like in the next 500 years,

I want to take a moment to think outside the book,

to think outside traditional publishing models, and to embrace the instantaneousness, randomness, and spontaneity of the Internet as it could be, not as it is now.

One way I want you to embrace the wonderful wide Web is to try to at least partially decouple your social media followers from your community.

Twitter and other forms of social media are certainly a delightful and fun way for communities to communicate and get involved, but your viral campaign, if you have it, is not your community.

True communities of practice are groups of people who come together to think beyond traditional models and innovate within a domain. For a touchstone, a community of practice is something like the Penguin Labs internal innovation center that Tom Weldon spoke about this morning and not like Penguin’s 600,000 followers on Twitter. How can we bring people together to allow for innovation, communication, and creation?

The Internet provides new and unlimited opportunities for community and innovation, but we have to start managing communities and embracing the people we touch as makers rather than simply followers or consumers.

The maker economy is here— participatory content creation has become the norm rather than the exception. You have the potential to reach and mobilize 2.1 billion people and let them tell you what they want, but you have to identify leaders and early adopters and you have to empower them.

How do you recognize the people who create content for you? I don’t mean authors, but instead the ambassadors who want to get involved and stay involved with your brand.

I want to ask you, in the spirit of innovation from the edges

What is your next platform for radical participation? How are you enabling your community to bring you to the next level? How can you differentiate your brand and make every single person you touch psyched to read your content, together? How can you create a community of practice?

Community is conversation. Your users are not your community.

Ask yourself the question Rachel Fershleiser asked when building a community on Tumblr: Are you reaching out to the people who want to hear from you and encouraging them or are you just letting your community be unplanned and organic?

There comes a point where we reach the limit of unplanned organic growth. Know when you reach this limit.

Target, plan, be upbeat, and encourage people to talk to one another without your help and stretch the creativity of your work to the upper limit.

Does this model look different from when you started working in publishing? Good.

As the story of the Bodleian Library illustrated, sometimes a totally crazy idea can be the beginning of an enduring institution.

To repeat, the book is one of the most durable technologies and publishing is one of the most durable industries in history. Its durability has been put to the test more than once, and it will surely be put to the test again. Think of your current concerns as a minor stumbling block in a history filled with success, a history that has documented and shaped the world.

Don’t be afraid of the person who calls you up and says, “I have this crazy idea that may just change the way you work…” While the industry may shift, the printed word will always prevail.

Publishing has been around in some shape or form for 1000 years. Here’s hoping that it’s around for another 1000.

Planet Mozilla: Reps Weekly Call – November 20th 2014

Last Thursday we had our regular weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

reps

Summary

  • FOSDEM update.
  • Post-event metrics and receipts (Important reminder)
  • Firefox Tiles Reps FAQ.
  • 10 days of Mozillians.
  • Yahoo agreement.
  • Community newsletter.
  • #fx10 Jakarta
  • Pending Reps applicants.

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Planet Mozilla: Introduction to Exponential Thinking and Technology

A few weeks ago I had the privilege to deliver the closing keynote at GroupM’s What’s Next Illuminate conference in New York City. I gave a short introduction to exponential thinking (the stuff we teach at Singularity University) and then walked the audience through a whole bunch of examples (focussed on media).

The talk was a shortened and more media-related version of my “Technology Trends” talk I give here at SU to groups from all over the world quite often.

Here’s the video:

Planet Mozilla: document.body.scrollTop vs document.documentElement.scrollTop

Here's a track from Web Compatibility's Greatest Hits Album (Volume I) that just doesn't want to go away—with the latest club remix titled "scrolling to sections from the menu in the mobile Google News site doesn't work due to setting scrollTop position on document.body in Firefox for Android".

Here's some background for those with less refined musical tastes.

(Why yes I can do this bad metaphor stuff all day long, why do you ask?)

If you want to get or set the vertical scroll position of a document, you can use element.scrollTop. According to the CSSOM View Module spec, if you're in standards mode you need to operate on the document's root element (the <html> element—or document.documentElement in DOM land). In quirks mode you would use the <body> element, via document.body.
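
A quick way to see which element to operate on (a sketch of mine, not code from the post) is to check the document's rendering mode, since document.compatMode returns "CSS1Compat" in standards mode and "BackCompat" in quirks mode:

function getScrollTop(doc) {
  // Per CSSOM View: the root element in standards mode, the body in quirks mode.
  var scrollingElement = doc.compatMode == "CSS1Compat" ?
                         doc.documentElement : doc.body;
  return scrollingElement.scrollTop;
}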

That spec-mandated behavior is what IE, Firefox, and the late Presto-based Opera implement.

In Blink and WebKit browsers, it's the exact opposite. Both have attempted to implement the standard (safari, chrome), but both have had to back out their patches due to sites breaking (some Google properties and webkit.org among them, as luck would have it).

The bug that was filed against WebKit for Facebook breaking as a result of changing to the standard is especially interesting because it shows the tension between following standards (and other browsers) and breaking sites for their own users.

It's also a good example of how user-agent-string-based development can sometimes make it hard, if not impossible, to remove some of the crappier stuff from the web platform.

Here's some excerpts, but the whole bug is a good read.

Comment 15:

It really doesn't matter how faithfully you implemented the spec. If it causes a major backward compatibility with the Web, we can't have it.

Comment 31:

Yes, the regression doesn't reproduce if we fake the UA string as I mentioned in the comment #31.

Maybe sites will update one day and let other browsers do the right thing™. (Not that I'm holding my breath over here.)

Until then I guess we get to have fun writing stuff like this (found on apple.com a few weeks back):

(document.documentElement ||
 document.body.parentNode ||
 document.body).scrollTop;

Planet Mozilla: New search strategy for Firefox promotes choice and innovation

Starting in December, Google will no longer be the default search engine in Firefox in the United States, according to the official announcement published by Chris Beard on the Mozilla blog. In other regions of the world, Google will also be replaced by other “competitors” in order to promote choice on the Web.
Search is an essential part of the Internet experience for everyone; Firefox users alone perform more than 100 billion searches per year.

With Firefox, Mozilla popularized integrating search into the browser, partnering with Internet companies such as Google, Yahoo and others to generate revenue and advance its mission. Google has been Firefox's global default search engine since 2004, and with the contract expiring this year, Mozilla has taken this as an opportunity to review its strategy and explore other options.

According to Beard, when evaluating partners, Mozilla's first consideration was ensuring a strategy aligned with its values of choice and independence, one capable of positioning it to advance its mission and better serve users and the Web. In the end, every option available from the partners was strong, improving the economic terms and reflecting the value that Firefox brings to the ecosystem. But one option stood out above the rest.

Mozilla has ended its practice of having a single global default search engine in Firefox and has instead adopted a more flexible approach that allows per-country defaults:

United States

  • Yahoo will be the default search engine for the next five years.
  • Starting in December, Firefox users will be introduced to a new and improved Yahoo search experience that features a modern interface.
  • As part of this partnership, Yahoo will support Do Not Track (DNT) in Firefox.
  • Google, Bing, DuckDuckGo, eBay, Amazon, Twitter and Wikipedia will continue to be available as alternative search options.

Russia

  • Yandex will be the default search engine.
  • Google, DuckDuckGo, OZON.ru, Price.ru, Mail.ru and Wikipedia will continue to be available as alternative search options.

China

  • Baidu will continue to be the default search engine.
  • Google, Bing, Youdao, Taobao and other local options will continue to be available as alternative search options.

Rest of the world

  • Firefox is a browser for the whole world, regardless of search preferences.
  • Firefox now offers more choice in search providers than any other browser, with 61 providers pre-installed across 88 language versions.
  • Although Mozilla decided not to renew the contract, Google will continue to be a pre-installed search option.
  • Google will continue to provide Geolocation and Safe Browsing services in Firefox.
  • Mozilla will focus on expanding its work with partners motivated to explore innovative new search interfaces, content experiences, and privacy enhancements across desktop and mobile.


This is why independence matters. Not chasing profit allows us to create different kinds of choices - choices that keep the Web open, everywhere, and independent. We think a big step in that direction is being taken today.

It is worth noting that Google will remain the default search engine in the remaining countries, but the offer is open for other interested partners to join this strategy.

Source: The Mozilla Blog

Source: Google System

Planet Mozilla: Mozilla Now Accepts Bitcoin

For some time, Mozilla supporters have asked for the ability to donate using bitcoin. We are finally able to fulfill that request. Beginning today, we accept bitcoin as one of the many ways people can choose to support Mozilla. Read … Continue reading

Planet Mozilla: Community Building (Lessons from Mozilla)

Today I had the great honor and pleasure to teach a class on Building (Online) Communities at Electronic Arts’ internal leadership development program. For my presentation I took a trip down memory lane and pulled out the key insights and learnings from my time at Mozilla.

Here’s the deck:

Planet Mozilla: Artisanal Contributors

Part 1: Start In Person

Ascend had very few ‘rules’ but there was one which was non-negotiable: it’s an in-person program. We didn’t do distance learning, online coursework, or video-based classes. We did bring in a couple of speakers virtually to speak to the room of 20 participants but the opposite was never true.

This was super important to how we were going to build a strong cohort. Don’t get me wrong, I’m a fan of remote work and global contribution, as well as of people working from wherever they are. This was a 6-week intensive program, though, and in order to build the inter-dependent cohort I was hoping for[1], it had to be in person at first: those crucial early stages are when someone is most likely to ‘disappear’ if things are hard or confusing, or if they can’t get someone’s attention to ask a question.

It’s been over 5 years since I graduated from my software development program and over 8 years since I started lurking in IRC channels[2] and getting to know Mozillians in digital space first. I wouldn’t have stuck with it, or gotten so deeply involved, without my coursework with Dave Humphrey though. That was a once-a-week class, but it meant the world to be in the same room as other people who were learning and struggling with the same or similar problems. It was an all-important thread connecting what I was trying to do in my self-directed time with actual people who cared about me and my ability to participate.

Even as an experienced open source contributor I can jump into IRC channels for projects I’m trying to work on – most recently dd-wrt for my home server setup – and when I ask a question (with lots of evidence for what I’ve already tried and an awareness of what the manual has to say) I get no response, aka: Crickets. There are a host of reasons, and I know more than a beginner might about what those could be: timezones, family commitments, no one with the expertise currently in the channel, and more. None of that matters when you’re new to this type of environment. Silence is interpreted as a big “GO AWAY YOU DON’T BELONG HERE” despite the best intentions of any community.

In-person learning is the best way to counter that. Being able to turn to a colleague or a mentor and say what’s happening gives you both reassurance that it’s not you and someone who can help you get unstuck on what to do next. While you wait for a response, check out this other topic we’re studying. Perhaps you can try other methods of communication too, like in a bug or an email.

Over the course of our first pilot I also discovered that removing myself from the primary workroom the Ascend participants were in helped the cohort rapidly build up strength in helping each other first[3]. The workflow looked more like: have a question/problem, ask a cohort member (or several), if you still can’t figure it out ask on IRC, and then if you’re still stuck find your course leader. This put me at the end of the escalation path[4] and meant that people were learning to rely both on in-person communications as well as IRC, but more importantly were building up the muscle of “don’t stop asking for help until you get it”, which is really where open source becomes such a great space to work in.

Back to my recent dd-wrt experience, I didn’t hear anything back in IRC and I felt I had exhausted the forums & wikis their community provided. I started asking in other IRC channels where tech-minded people hung out (thanks womenwhohack!) and then I tried yet another search with slightly different terms. In the end I found what I needed in a YouTube tutorial. I hope that sufficiently demonstrates that a combination of tactics are what culminate in an ability to be persistent when learning in open source projects.

Never underestimate the importance of removing isolation for new contributors to a project. In person help, even just at first, can be huge.


  1. Because the ultimate goal of Ascend was to give people skills for long-term contribution and participation and a local cohort of support and fellow learners seemed like a good bet for that to be possible once the barrier-removing help of the 6 week intensive was no longer in place. 
  2. By the way, I’m such a huge fan of IRC that I wrote the tutorial for it at Mozilla in order to help get more non-engineering folks using it, in my perfect world everyone is in IRC all the time with scrollback options and logging. 
  3. Only after the first three weeks when we moved to the more independent work, working on bugs, stage. 
  4. Which is awesome because I was always struggling to keep up with the course creation as we were running it, I didn’t realize that teaching 9-5 was asking for disaster and next time we’ll do 10-4 for the participants to give the mentors pre and post prep time. 

Planet Mozilla: Release Management Tooling: Past, Present, and Future


As I was interviewing a potential intern for the summer of 2015, I realized I had outlined all our major tools and what the next enhancement for each could be, but that this wasn’t well documented anywhere else yet.

Coming to Release Management from my beginnings as a Release Engineer, I’ve been a part of seeing our overall release automation improve across the whole spectrum of what it takes to put out packaged software for multiple platforms. We’ve come a long way, so this post is also intended to capture how the main tools we use have gotten to their current state, as well as share where they are heading.

Ship-It

Past: The Release Manager on point for a release sent an email to the Release-Drivers mailing list with an hg changeset, a version, and a build number; this was the “go” to build for Release Engineering to take over and execute a combination of automated/manual steps (there was even a time when it was only said in IRC; email became the constant when Joduinn pushed for consistency and a traceable trail of events). Release Engineers would update config files & locale changes, get them attached to a bug, approved, uplifted, then go reconfigure the build machines so they could kick off the release build automation.

Present: Ship-It is an app developed by Release Engineering (bhearsum) that allows a Release Manager to input the configurations needed (changeset, version, build number, partials to be created, l10n changesets) all in one place, and on submit the build automation picks up this change from a db, reconfigures the build machine, and triggers builds. When all goes well, there are zero human hands between the “go” and the availability of builds to QA.

Future: In two parts:
1. To have a simple app that can take a list of bug numbers and check them for landing on {branch} (where branch is Beta, Release, or ESR); once all the bug numbers listed have landed, check Treeherder for green status on that last changeset and submit to Ship-It if the builds are successful. Benefits: hands off even sooner, knowing that all the important fixes are on the branch in question, and that the tree is totally green prior to build (sometimes we “go” without all the results because of human timing needs).
2. Complete End-To-End Release Checklist, dynamically updated to show what stage a release job is at and who’s got the ball in their court. This should track from buglist added (for the final landings a RM is waiting on) all the way until the release notes are live and QA signs off on updates for the general release being in the wild.

Nucleus (aka Release Note App)

Past: Oh dear, you probably don’t even want to know how our release notes used to be made. It’s worse than sausage. There was a sqlite db file and a script that pulled from that db and generated html based on templates; the Release Manager then had to manually re-order the html to get the desired appearance on the final pages, and all of this was then committed to SVN - and with that comes the power to completely break mozilla.org properties. Fun stuff. Really. Also, once Release Management was more than just one person, we shared this sqlite db over Dropbox, which had some fun quirks, like clobbering your changes if two people had the file open at the same time. Nowhere to go but up from here!

Present: Thanks to the web production team (jgmize, hoosteeno, craigcook, jbertsch) we got a new Django app in place that gives us a proper database that’s redundant, production quality, and not in our hands. We add in release notes as well as releases and can publish notes to both staging and production without any more commits to SVN. There’s also an API that can be scripted against.

Future: The future’s so bright in this area, let me get my shades. We have a flag in Bugzilla for relnote-firefox where it can get set to ? when something is nominated and then when we decide to take on that bug as a release note we can set it to {versionNum}+. With a little tweaking on the Bugzilla side of things we could either have a dedicated field for “release-note text” or we could parse it out of a syntax in a comment (though that’s gonna be more prone to user error, so I prefer the former) and then automatically grab all the release notes for a version, create the release in Nucleus, add the notes, publish to staging, and email the link around for feedback without any manual interference. This also means we can dynamically adjust release notes using Bugzilla (and yes, this will need to be really cautiously done), and it makes sure that our recent convention of having every release note connect to a bug persists and becomes the standard.
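
To make the comment-parsing idea concrete, here’s a purely hypothetical sketch (the marker syntax and helper name are illustrative, nothing we’ve agreed on) of pulling release-note text out of a Bugzilla comment:

// Hypothetical: extract the note from a comment that uses a
// "release-note:" marker, e.g. "release-note: Fixed startup crash on OS X".
function extractReleaseNote(commentText) {
  var match = /^release-note:\s*(.+)$/m.exec(commentText);
  return match ? match[1].trim() : null;
}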

Release Dash

Past: Our only way to visualize the work we were doing was a spreadsheet, and graphs generated from it, of how many crasher bugs were tracked for a version, how many bugs tracked/fixed over the course of 18 weeks for a version, and not much else. We also pay attention to the crash rate at ship time, whether we had to do a dot release or chemspill, and any other release-version-specific issues are sort of lost in the fray after we’re a couple of weeks out from a release. This means we don’t have a great sense of our own history, what we’re doing that works in generating a more stable/successful release, and whether a release is in fact ready to go out the door. It’s a gamble, and we take it every 6 weeks.

Present: We have in place a dashboard that is supposed to allow us to view the current crash data, select Talos (performance) data, custom bug queries, and be able to compare a current release coming down the pipe to previous releases. We do not use this dashboard yet because it’s been a side project for the past year and a half, primarily being created and improved upon by fabulous – yet short-term – interns at Mozilla. The dashboard relies on Elastic Search for Bugzilla data and the cluster it points to is not always up. The dash is written in php and that’s no one’s strong suit on our current team; our last intern did his work by creating a Python Flask app that would plug into the current dash. The present situation is basically: we need to work on this.

Future: In the future, this dashboard will be robust, reliable, production-quality (and supported), and it will be able to go up on Mozilla office screens in the dashboard rotation where it will make clear to any viewer:
* Where we are in the current release cycle
* What blockers remain for release
* How our stability is (over/under acceptable rates)
* If we’re meeting performance expectations
And hopefully more. We have to find more ways to get visibility into issues a release might hit once it’s with the larger population. I’d love to see us get more of our Beta user’s feedback by asking for it on specific features/fixes, get a broader Beta audience that is more reflective of our overall release population (by hardware, location, language, user types) and then grow their ability to report issues well. Then we can find ways to get that front and center too – including to developers because they are great at confirming if something unusual is happening.

What Else?

Well, we used to have an automated script that reminded teams of their open & tracked bugs on Beta/Aurora/Nightly in order to provide a priority order that was visible to devs & their managers. It’s a finicky script that breaks often. I’d like to see that replaced with something that’s not just a cronjob on my personal VPS. We’re also this close to not needing to update product-details (still in SVN) on every release. The fact that the Release Management team has the ability to accidentally take down all mozilla.org properties when a mistake is made submitting svn propedits is not desirable or necessary. We should get the heck away from that asap.

We’ll have more discussions of this in Portland, especially with the teams we work closely with. Sylvestre and I will be talking up our process & future goals at FOSDEM in 2015, as well as following it with a work week in Paris where we can put our heads down and code. Next summer we get an intern again, so we’ll have another set of skilled hands to put on tooling & web service improvements.

Always improving. Always automating. These are the things that make me excited for the next year of Release Management.

Planet Mozilla: Spotlight on the Open Technology Institute: A Ford-Mozilla Open Web Fellow Host

{This is the third installment in our series highlighting the 2015 Host Organizations for the Ford-Mozilla Open Web Fellows program. We are now accepting applications to be a 2015 fellow. We are thrilled to feature the New America Foundation’s Open Technology Institute as a host. Over the years, OTI has been a meaningful change agent, helping to protect the free and open Web. Working at OTI, the Open Web Fellow will be developing tools that lead to greater transparency, enabling all stakeholders to better understand how public policy and business practices impact the Web experience.}

Spotlight on the Open Technology Institute: A Ford-Mozilla Open Web Fellow Host Organization
By Kevin Bankston, Policy Director, and Georgia Bullen, Senior Data Analyst; Open Technology Institute

Last month’s MozFest 2014 provided us a welcome opportunity to think about what we at New America’s Open Technology Institute hope to do over the next year as one of the few organizations lucky enough to host a Ford-Mozilla Open Web Fellow during that fellowship program’s inaugural year. At OTI, we are committed to freedom and social justice in the digital age. To achieve these goals, we engage in policy debates, build technology, and work with communities to understand needs, test tools and build alternative models of infrastructure. And we are looking for a passionate maker to help us with our work in 2015. In particular, to help make more transparent the workings of the Internet and the companies that offer services over it.


So much of what impacts our online experience happens without us seeing it, making it easy to overlook.

For example, look at the Net Neutrality debate, where decisions made at interconnection points deep in the network have both business and policy implications. At OTI, we have tools that allow us to dig into the technical depths of the issue through our Measurement Lab platform, and we recently published a major report laying out much of that data.  But we need help figuring out how to make this information more available and more clear so that policy experts, advocates, industry professionals and everyday Internet users can understand what interconnection is, how it works, and how it affects the online experience. We’ve started on one of these efforts by working on a visualization tool that we’re calling the Measurement Lab Observatory, but there’s so much more we can do with the Measurement Lab data, as well as the platform and tools to make it more accessible to everyone–if only we can find the right fellow.

With the help of the participants at our MozFest usability workshop, we thought about other ways to get people involved in Internet measurement, such as building a network troubleshooting tool that could generate new M-Lab data while also testing your connection.  We also talked about developing out our Firefox Browser extension to have different themes depending on a user’s needs, such as a journalist or advocate dashboard which includes recent news about Internet policy issues, or a “notebook” app with which Internet citizen scientists can run and annotate tests as part of the M-Lab research team.

These are just the types of ideas that we’re hoping our incoming Ford-Mozilla Fellow can run with.

On the policy and governance side, there’s also a lot more that we could be doing to reveal what happens behind the scenes between governments and Internet companies. Many companies now publish “Transparency Reports” that include information about how and when governments ask for user’s data. However, there’s no standardization in how companies report, making it hard to meaningfully combine or compare the data from different companies — and hard for new companies to get into the reporting game. Building on some of our previous research and education efforts around transparency reporting, in 2015 we will be launching a project called the Transparency Reporting Toolkit.  We’re going to build a Web portal filled with best practices information and tools to help companies create and upload reports in a standardized way, and tools for others to mash up and visualize the data from multiple companies’ reports. OTI’s technologists and data visualization experts are gearing up to build those tools, but it’s a big project and we could use some help — possibly yours.

Ultimately, we can only make good policy with good information, and we can only get good information – and, crucially, understand that information – with good tools.  We’re ready to move forward on all of these projects in 2015, full steam ahead. All we need now is the right technologist to help us make those tools. If that sounds exciting to you, apply to be a 2015 Ford-Mozilla Open Web Fellow and work with us and the Mozilla community to help build new windows into the technical and political depths of the Internet.


Apply to be a 2015 Ford-Mozilla Open Web Fellow. Visit www.mozilla.org/advocacy.

Planet Mozilla: Using the Firefox Developer Edition dark theme with Nightly

With a recent version of Nightly, go to about:config and set browser.devedition.theme.enabled to true.

Open DevTools (I use alt + cmd + i, or you can also go to the Tools → Web Developer → Toggle tools menu). Then open DevTools preferences by clicking on the gear icon, and select “Dark Theme” on the top right, underneath the Themes.

Screenshot for clarification:

nightly with dev edition theme

Note: you might not get the full effect if there is “legacy stuff” in your profile. If it doesn’t look as you expect… your best option might be to just create a new profile when you start the browser.

Note 2: for some reason the tabs weren’t rendering correctly on my normal nightly profile because the about:config browser.tabs.drawInTitlebar entry was set to false instead of true—I set it to true and now everything looks fine for me.

Or just use the standard Firefox Developer Edition if you’re not an impatient person like me :-P


Planet Mozilla: RFC: We deserve better than runtime warnings

Consider the following scenario:

  1. Module A prints warnings when it’s used incorrectly;
  2. Module B uses module A correctly;
  3. Some future refactoring of module B starts using module A incorrectly, hence displaying the warnings;
  4. Nobody realises for months, because we have too many warnings;
  5. Eventually, something breaks.

How often has this happened to every one of us?

This scenario has many variants (e.g. module A changed and nobody realized that module B now misuses it), but they all boil down to the same thing: runtime warnings are designed to be lost, not fixed. To make things worse, many of our warnings are not actionable, simply because we have no way of knowing where they come from – I’m looking at you, Cu.reportError.

So how do we fix this?

We would certainly save considerable amounts of time if warnings caused immediate assertion failures, or alternatively test failures (i.e. fail, but only when running the unit tests). Unfortunately, we can do neither, as we have a number of tests that trigger the warnings either

  • by design (e.g. to check that we can recover from such misuses of A, or because we still need a now-considered-incorrect use of an API to keep working until we have ported all the clients to the better API);
  • or for historical reasons (e.g. the now incorrect use of A used to be correct, but we haven’t fixed all tests that depend on it yet).

However, I believe that causing test failures is still the solution. We just need a mechanism that supports a form of whitelisting to cope with the aforementioned cases.

Introducing RuntimeAssert

RuntimeAssert is an experiment at providing a standard mechanism to replace warnings. I have a prototype implemented as part of bug 1080457. Feedback would be appreciated.

The key features are the following:

  • when a test suite is running, a call to `RuntimeAssert` causes the test suite to fail;
  • when a test suite is running, a call to `RuntimeAssert` contains at least the filename/line number of where it was triggered, preferably a stack wherever available;
  • individual tests can whitelist families of calls to `RuntimeAssert` and mark them as expected;
  • individual tests can whitelist families of calls to `RuntimeAssert` and mark them as pending fix;
  • when a test suite is not running, a call to `RuntimeAssert` does nothing costly (it may default to PR_LOG or Cu.reportError).

Possible API:

  • in JS, we trigger a test failure by calling RuntimeAssert.fail(keyword, string or Error) from production code;
  • in C++, we likewise trigger a test failure by calling MOZ_RUNTIME_ASSERT(keyword, string);
  • in the testsuite, we may whitelist errors by calling Assert.whitelist.expected(keyword, regexp)  or Assert.whitelist.FIXME(keyword, regexp).

Examples:

//
// Module
//
let MyModule = {
  oldAPI: function(foo) {
    RuntimeAssert.fail("Deprecation", "Please use MyModule.newAPI instead of MyModule.oldAPI");
    // ...
  },
  newAPI: function(foo) {
    // ...
  },
};

let MyModule2 = {
  api: function() {
    return somePromise().then(null, error => {
      RuntimeAssert.fail("MyModule2.api", error);
      // Rather than leaving this error uncaught, let's make it actionable.
    });
  },

  api2: function(date) {
    if (typeof date == "number") {
      RuntimeAssert.fail("MyModule2.api2", "Passing a number has been deprecated, please pass a Date");
      date = new Date(date);
    }
    // ...
  },
};


//
// Whitelisting a RuntimeAssert in a test.
//

// This entire test is about MyModule.oldAPI, warnings are normal.
Assert.whitelist.expected("Deprecation", /Please use MyModule.newAPI/);

// We haven't fixed all calls to MyModule2.api2, so they should still warn, but not cause an orange.
Assert.whitelist.FIXME("MyModule2.api2", /please pass a Date/);

Assert.whitelist.expected("MyModule2.api", /TypeError/, function() {
  // In this test, we will trigger a TypeError in MyModule2.api, that's entirely expected.
  // Ignore such errors within the (async) scope of this function.
});

Applications

In the long-term, I believe that RuntimeAssert (or some other mechanism) should replace almost all our calls to Cu.reportError.

In the short-term, I plan to use this for reporting

  • uncaught Promise rejections, which currently require a bit too much hacking for my tastes;
  • errors in XPCOM.lazyModuleGetter & co;
  • failures during AsyncShutdown;
  • deprecation warnings as part of Deprecated.jsm.


Planet Mozilla: Firefox Interest Dashboard: privacy-respecting analytics for your web browsing history

On a recent Mozilla project call I heard about the new Firefox Interest Dashboard. As someone who loves self-tracking but stopped using my Fitbit due to privacy concerns, I think this is awesome.

My Firefox Interest Dashboard

Some of the numbers may be a bit off, and the categorisation certainly is in some cases, but it’s a promising start! The great thing is that if you use Firefox Sync, it pulls in data from your other installations, too!

From the Content Services team:

This is an early version of interest categorization we’re working on. We invite you to test out this experimental beta add-on and help us out with the misclassified results. We would love to hear from you on suggestions on improvement or any feedback through the flag icon on the interest timeline.

Unlike other analytics services, the FAQ assures users that “all of the interest analysis and categorization is done on the client-side of your browser. No personal data is stored on Mozilla’s servers.”

Download the add-on (Firefox only)


Questions? Comments? Direct them to doug@mozillafoundation.org or discuss in the #TeachTheWeb discussion forum.

Planet Mozilla: SSL/TLS for the Pragmatic

Tonight I had the pleasure to present "SSL/TLS for the Pragmatic" to the fine folks of Bucks County Devops. It was a fun evening, and I want to thank the organizers, Mike Smalley & Ben Krein, for the invitation.

It was a great opportunity to summarize 18 months of work at Mozilla on building the Server Side TLS Guidelines. By the feedback I received tonight, and on several other occasions, I think we've achieved the goal of building a document that is useful to operations people, and made TLS just a little easier to understand.

We are not, however, anywhere near done with the process of teaching TLS to the Internet. Stats speak for themselves, with 70% of sites still supporting SSLv3, 86% enabling RC4, and about 32% still not preferring PFS over RSA handshakes. But things are getting better every day, and ongoing efforts may bring safe defaults to Linux servers as soon as Fedora 21. We live in exciting times!

The slides from my talk are below, and on github as well. I hope you enjoy them. Feel free to share your comments at julien[at]linuxwall.info.

Planet Mozilla: s/http(:\/\/(?:noscript|flashgot|hackademix)\.net)/https\1/

I'm glad to announce that noscript.net, flashgot.net and hackademix.net have finally been switched to full, permanent TLS with HSTS.

Please do expect a smörgåsbord of bugs and bunny funny stuff :)

Planet Mozilla: Yahoo and Mozilla Form Strategic Partnership

SUNNYVALE, Calif. and MOUNTAIN VIEW, Calif., Wednesday, November 19, 2014 – Yahoo Inc. (NASDAQ: YHOO) and Mozilla Corporation today announced a strategic five-year partnership that makes Yahoo the default search experience for Firefox in the United States on mobile and desktop. The agreement also provides a framework for exploring future product integrations and distribution opportunities to other markets.

The deal represents the most significant partnership for Yahoo in five years. As part of this partnership, Yahoo will introduce an enhanced search experience for U.S. Firefox users which is scheduled to launch in December 2014. It features a clean, modern and immersive design that reflects input from the Mozilla team.

“We’re thrilled to partner with Mozilla. Mozilla is an inspirational industry leader who puts users first and focuses on building forward-leaning, compelling experiences. We’re so proud that they’ve chosen us as their long-term partner in search, and I can’t wait to see what innovations we build together,” said Marissa Mayer, Yahoo CEO. “At Yahoo, we believe deeply in search – it’s an area of investment, opportunity and growth for us. This partnership helps to expand our reach in search and also gives us an opportunity to work closely with Mozilla to find ways to innovate more broadly in search, communications, and digital content.”

“Search is a core part of the online experience for everyone, with Firefox users alone searching the Web more than 100 billion times per year globally,” said Chris Beard, Mozilla CEO. “Our new search strategy doubles down on our commitment to make Firefox a browser for everyone, with more choice and opportunity for innovation. We are excited to partner with Yahoo to bring a new, re-imagined Yahoo search experience to Firefox users in the U.S. featuring the best of the Web, and to explore new innovative search and content experiences together.”

To learn more about this, please visit the Yahoo Corporate Tumblr and the Mozilla blog.

About Yahoo

Yahoo is focused on making the world’s daily habits inspiring and entertaining. By creating highly personalized experiences for our users, we keep people connected to what matters most to them, across devices and around the world. In turn, we create value for advertisers by connecting them with the audiences that build their businesses. Yahoo is headquartered in Sunnyvale, California, and has offices located throughout the Americas, Asia Pacific (APAC) and the Europe, Middle East and Africa (EMEA) regions. For more information, visit the pressroom (pressroom.yahoo.net) or the Company’s blog (yahoo.tumblr.com).

About Mozilla

Mozilla has been a pioneer and advocate for the Web for more than a decade. We create and promote open standards that enable innovation and advance the Web as a platform for all. Today, hundreds of millions of people worldwide use Mozilla Firefox to discover, experience and connect to the Web on computers, tablets and mobile phones. For more information please visit https://www.mozilla.com/press

Yahoo is a registered trademark of Yahoo! Inc. All other names are trademarks and/or registered trademarks of their respective owners.



Planet Mozilla: Daala Demo 6: Perceptual Vector Quantization (by J.M. Valin)

Jean-Marc has finished the sixth Daala demo page, this one about PVQ, the foundation of our encoding scheme in both Daala and Opus.

(I suppose this also means we've finally settled on what the acronym 'PVQ' stands for: Perceptual Vector Quantization. It's based on, and expanded from, an older technique called Pyramid Vector Quantization, and we'd kept using 'PVQ' for years even though our encoding space was actually spherical. I'd suggested we call it 'Pspherical Vector Quantization' with a silent P so that we could keep the acronym, and that name appears in some of my slide decks. Don't get confused, it's all the same thing!)

Planet MozillaEncryptr: ‘zero knowledge’ essential information storage

Encryptr is one of the first “in production” applications built on top of Crypton. Encryptr can store short pieces of text like passwords, credit card numbers and other random pieces of information privately, in the cloud. Since it uses Crypton, all data that is saved to the server is encrypted first, making even a server compromise an exercise in futility for the attacker.

A key feature is that you can run Encryptr on your phone as well as your desktop and all data is available in each place immediately. Have a look:


The author of Encryptr, my colleague Tommy @therealdevgeeks, has recently blogged about building Encryptr. I hope you give it a try and send him feedback through the Github project page.


Planet MozillaSimple things: styling ordered lists

This blog started as a scratch pad of simple solutions to problems I encountered. So why not go back to basics?

It is pretty easy to get an ordered list into a document. All you have to do is add an OL element with LI child elements:

<ol>
  <li>Collect underpants</li>
  <li>???</li>
  <li>Profit</li>
</ol>

But what if you want to style the text differently from the numbers? What if you don’t like that they end with a full stop? The generated numbers of the OL are part of that dark magic browsers do for us (something we’re working on dragging into the sunlight with Shadow DOM).

In order to make those more style-able in the past you had to add another element to get a hook:

<ol class="oldschool">
  <li><span>Collect underpants</span></li>
  <li><span>???</span></li>
  <li><span>Profit</span></li>
</ol>

.oldschool li {
  color: green;
}
.oldschool span {
  color: lime;
}

Which is kind of a terrible hack and doesn’t quite scale as you may never know who edits your list. With newer browsers we have a better way of doing that using CSS counters. The browser support is ridiculously good, so there should be no excuse for us not to use them:

counters

Using counter, you keep the HTML structure:

<ol class="counter">
  <li>Collect underpants</li>
  <li>???</li>
  <li>Profit</li>
</ol>

You then reset the counter for each of the lists with this class:

.counter { 
  counter-reset: list;
}

This means each list will start at 1 and not go on through the document tree. You then get rid of the list style and style the list item like you want to. In this case we give it a colour and we position it relative. This allows us to position other, new content in there and contain it to the list item:

.counter li {
  list-style: none;
  position: relative;
  color: lime;
}

Once you’ve hidden the normal numbering with list-style: none; you can create your own numbers using counter and generated CSS content:

.counter li::before {
  counter-increment: list;
  content: counter(list) '.';
  position: absolute;
  top: 0px;
  left: -1.2em;
  color: green;
}

If you want to remove the full stop, all you need to do is remove it in the CSS. You now have full styling control over these numbers; for example, you can animate them slightly, moving from one colour to another and zooming a bit:

demo animation of the effect

.animated li::before {
  transition: 0.5s;
  color: green;
}
.animated li:hover::before {
  color: white;
  transform: scale(1.5);
}

Counters allow for a lot of different types of numbering. You can, for example, add a leading zero by using counter(list, decimal-leading-zero), use Roman numerals with counter(list, lower-roman) or even Greek ones with counter(list, lower-greek).
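
As a quick illustration, here is a minimal sketch of those variants (the class names are made up for this example, and each would be combined with the same counter-reset rule shown earlier; the only thing that changes is the style keyword passed to counter()):

.roman li::before {
  counter-increment: list;
  content: counter(list, lower-roman) '.'; /* i. ii. iii. */
}

.leading-zero li::before {
  counter-increment: list;
  content: counter(list, decimal-leading-zero) '.'; /* 01. 02. 03. */
}

.greek li::before {
  counter-increment: list;
  content: counter(list, lower-greek) '.'; /* α. β. γ. */
}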

If you want to see all of that in action, check out this Fiddle:

Pretty simple, and quite powerful. Here are some more places to read up on this:

Planet MozillaConverting Mozilla's SVG implementation to Moz2D - part 2

This is part 2 of a pair of posts describing my work to convert Mozilla's SVG implementation to directly use Moz2D. Part 1 provided some background information and details about the process. This post will discuss the performance benefits of the conversion of the SVG code and future work.

Benefits

For the most part the performance improvements from the conversion to Moz2D were gradual; as code was incrementally converted, little by little gfxContext overhead was avoided. On doing an audit of our open SVG performance bugs it seems that painting performance is no longer one of the reasons that we perform poorly, except for when we use cairo-backed DrawTargets (Linux, Windows XP and other Windows versions with blacklisted drivers), and with the exception of one bug that needs further investigation. (See below for the issues that still do cause SVG performance problems.)

Besides the incremental improvements, there have been a couple of interesting perf bumps that are worth mentioning.

The biggest perf bump by far came when I converted the code that does the actual filling and stroking of SVG geometry to directly use a DrawTarget. The time taken to render this map then dropped from about 15s to 1-2s on my Mac. On the same machine Chrome Canary shows a blank window for about 5s, and then takes a further 20s to render. Now, to be honest, this improvement will be down to something pathological that has been removed rather than being down to avoiding Thebes overhead. (I haven't got to the bottom of exactly what that was yet.) The DrawTarget object being drawn to is ultimately the same object, and Thebes overhead isn't likely to be more than a few percent of any time spent in this code. Nevertheless, it's still a welcome win.

Another perf bump that came from the Moz2D conversion was that it enabled us to cache path objects. When using Thebes, paths are built up using gfxContext API calls and the consumer never gets to touch the resulting path. This prevents the consumer from keeping hold of the path and reusing it in future. This can be a disadvantage when the path is reused frequently, especially when D2D is being used where path creation is relatively expensive. Converting to Moz2D has allowed the SVG code to hold on to the path objects that it creates and reuse them. (For example, in addition to their obvious use during rasterization, paths might be reused for bounds calculations (think invalidation areas, objectBoundingBox content, getBBox() calls) and hit-testing.) Caching paths made us noticeably more responsive on this cool data visualization (temporarily mirrored here while the site is down) when mousing over the table rows, and gave us a +25% boost on this NYT article, for example.

For those of you that are interested in Talos, I did take a look at the SVG test data, but the unfortunately frequent up-and-down of unrelated regressions and wins makes it impossible to use that to show any overall impact of Moz2D conversion on the Talos tests. (Since the beginning of the year the times on Windows have improved slightly while on Mac they have regressed slightly.) The incremental nature of most of the work also unfortunately meant that the impact of individual patches couldn't usually be distinguished from the noise in Talos' results. One notable exception was the change to make SVG geometry use a path object directly which resulted in an improvement in the region of 6% for the svg_opacity suite on Windows 7 and 8.

Other than the performance benefits, several parts of the SVG implementation that were pretty messy and hard to get into and debug have become a lot more approachable. This has already allowed me to fix various SVG bugs that would otherwise have taken a lot longer to work out, and I hope it makes the code easier to approach for devs who aren't so familiar with it.

One final note on performance for any of you who will do your own testing to compare builds - note that the enabling of e10s and tiled layers has caused significant changes in performance characteristics. You might want to turn those off.

Future SVG work

As I noted above there are still SVG performance issues unrelated to graphics speed. There are three sources of significant SVG performance issues that can make Mozilla perform poorly on SVG relative to other implementations. There is our lack of hardware acceleration of SVG filters; there's the issue of display list overhead dwarfing painting on SVGs that contain huge numbers of elements (display lists being an implementation detail, and one that gave us very large wins in many other cases); and there are a whole bunch of "strange" bugs that I expect are related to our layers infrastructure that are causing us to over invalidate (and thus do work painting when we shouldn't need to).

Currently these three issues are not on a schedule, but as other higher priority Mozilla work gets ticked off I expect we'll add them.

Future Moz2D work

The performance benefits from the Moz2D conversion on the SVG code do seem to have been positive enough that I expect that we will continue converting the rest of layout in the future. As usual, it will all depend on relative priorities though.

One thing that we should do is audit all the code that creates DrawTargets to check for backend type compatibility. Mixing hardware and software backed DrawTargets when we don't need to can cause us to unwittingly be taking big performance hits due to readback from and/or upload to the GPU. I fixed several instances of mismatch that I happened to notice during the conversion work, and in one case accidentally introduced one which fortunately was caught because it caused a 10-25% regression in a specific Talos test. We know that we still have outstanding bugs on this (such as bug 944571) and I'm sure there are a bunch of cases that we're unaware of.

I mentioned above that painting performance is still a significant issue on machines that fall back to using cairo backed DrawTargets. I believe that the Graphics team's plan to solve this is to finish the Skia backend for Moz2D and use that on the platforms that don't support D2D.

There are a few things that need to be added to Moz2D before we can completely get rid of gfxContext. The main thing we're missing is a push-group API on DrawTarget. This is the main reason that gfxContext actually wraps a stack of DrawTargets, which has all sorts of irritating fallout. Most annoyingly, it makes it hazardous to set clip paths or transforms directly on DrawTargets that may be accessed via a wrapping gfxContext before the DrawTarget's clip stack and transform have been restored, and it is why I had to continue passing gfxContexts to a lot of code that now only paints directly via the DrawTarget.

The only Moz2D design decision that I've found myself to be somewhat unhappy with is the decision to make patterns relative to user-space. This is what most other hardware accelerated libraries do, but I don't think it's a good fit for 2D browser rendering. Typically crisp rendering is very important to web content, so we render patterns assuming a specific user-space to device-space transform and device space pixel alignment. To maintain crisp rendering we have to make sure that patterns are used with the device-space transform that they were created for, and having to do this manually can be irksome. Anyway, it's a small detail, but something I'll be discussing with the Graphics guys when I see them face-to-face in a couple of weeks.

Modulo the two issues above (and all the changes that I and others had made to it over the last year) I've found the Moz2D API to be a pleasure to work with and I feel the SVG code is better performing and a lot cleaner for converting to it. Well done Graphics team!

Planet MozillaConverting Mozilla's SVG implementation to Moz2D - part 1

One of my main work items this year was the conversion of the graphics portions of Mozilla's SVG implementation to directly use Moz2D APIs instead of using the old gfxContext/gfxASurface Thebes APIs. This pair of posts will provide some information on that work. This post will give some background and information on the conversion process, while part 2 will provide some discussion about the benefits of the work and what steps we might want to carry out next.

For background on why Mozilla is building Moz2D (formerly called Azure) and how it can improve Mozilla's performance see some of the earlier posts by Joe, Bas and Robert.

Early Moz2D development

When Moz2D was first being put together it was initially developed and tested as an alternative rendering backend for Mozilla's implementation of HTML <canvas>. Canvas was chosen as the initial testbed because its drawing is largely self contained, it requires a relatively small number of features from any rendering backend, and because we knew from profiling that it was being particularly impacted by Thebes/cairo overhead.

As Moz2D started to become more stable, Thebes' gfxContext class was extended to allow it to wrap a Moz2D DrawTarget (prior to that it was backed only by an instance of a Thebes gfxASurface subclass, in turn backed by a cairo_surface_t). This might seem a bit strange since, after all, Moz2D is supposed to replace Thebes, not be wrapped by it, adding yet another layer of abstraction and overhead. However, it was an important step in allowing the Graphics team to start testing Moz2D on Mozilla's more complicated, non-canvas, rendering scenarios. It allowed many classes of Moz2D bugs and missing Moz2D features to be worked out before beginning a larger effort to convert the masses of non-canvas rendering code to Moz2D.

In order to switch any of the large number of instances of gfxContext to be backed by a DrawTarget, any code that might encounter that gfxContext and try to get a gfxASurface from it had to be updated to handle DrawTargets too. For example, lots of forks in the code had to be added to BasicLayerManager, and gfxFont required a new GlyphBufferAzure class to be written. As this work progressed some instances of Thebes gfxContexts were permanently flipped to being backed by a Moz2D DrawTarget, helping keep working Moz2D code paths from regressing.

SVG, the next Guinea pig

Towards the end of 2013 it was felt that Moz2D was sufficiently ready to start thinking about converting Mozilla's layout code to use Moz2D directly and eliminate its use of gfxContext API. (The layout code being the code that decides where and how most things are placed on the screen, and by far the biggest consumer of the graphics code.) Before committing a lot of engineering time and resources to a large scale conversion, Jet wanted to convert a specific part of the layout code to ensure that Moz2D could meet its needs and determine what performance benefits it could provide to layout. The SVG code was chosen for this purpose since it was considered to be the most complicated to convert (if Moz2D could work for SVG, it could work for the rest of layout).

Stage 1 - Converting all gfxContexts to wrap a DrawTarget

After drawing up a rough list of the work to convert the SVG code to Moz2D I got stuck in. The initial plan was to add code paths to the SVG code to check for and extract DrawTargets from gfxContexts that were passed in (if the gfxContext was backed by one) and operate directly on the DrawTarget in that case. (At some future point the Thebes forks could then be removed.) It soon became apparent that these forks were often not how we would want the code to be structured on completion of Moz2D conversion though. To leverage Moz2D more effectively I frequently found myself wanting to refactor the code quite substantially, and in ways that were not compatible with the existing Thebes code paths. Rather than spending months writing suboptimal Moz2D code paths only to have to rewrite things again when we got rid of the Thebes paths I decided to save time in the long run and first make sure that any gfxContexts that were passed into SVG code would be wrapping a DrawTarget. That way maintaining Thebes forks would be unnecessary.

It wasn't trivial to determine which gfxContexts might end up being passed to SVG code. The complexity of the code paths and the virtually limitless permutations in which Web content can be combined meant that I only identified about a dozen gfxContexts that could not end up in SVG code. As a result I ended up working to convert all gfxContexts in the Mozilla code. (The small amount of additional work to convert the instances that couldn't end up in SVG code allowed us to reduce a whole bunch of code complexity (and remove a lot of then dead code) and simplified things for other devs working with Thebes/Moz2D.)

Ensuring that all the gfxContexts that might be passed to SVG code would be backed by a DrawTarget turned out to be quite a task. I started this work when relatively few gfxContexts had been converted to wrap a DrawTarget, so unsurprisingly things were a bit rough. I tripped over several Moz2D bugs at this point. Mostly, though, the headaches were caused by the amount of code that assumed gfxContexts wrapped a gfxASurface/cairo_surface_t/platform library object and could provide them with one, possibly getting or then passing those objects from/to seemingly far corners of the Mozilla code. Particularly challenging was converting the image code, where the sources and destinations of gfxASurfaces turned out to be particularly far reaching, requiring the code to be converted incrementally in 34 separate bugs. Doing this without temporary performance regressions was tricky.

Besides preparing the ground for the SVG conversion, this work resulted in a decent number of performance improvements in its own right.

Stage 2 - Converting the SVG code to Moz2D

Converting the SVG code to Moz2D was a lot more than a simple case of switching calls from one graphics API to another. The stateful context provided by a retained mode API like Thebes or cairo allows consumer code to set context state (for example, fill pattern, or anti-alias mode) in points of the code that can seem far removed from other code that takes an action (for example, filling a path) that relies on that state having been set. The SVG code made use of this a lot since in many cases (for example, when passing things through for callbacks) it simplified the code to only pass a context rather than a context and some state to set.

This wouldn't have been all that bad if it wasn't for another fundamental difference between Thebes/cairo and Moz2D -- in Moz2D paths and patterns are relative to user-space, whereas in Thebes/cairo they are relative to device-space. Whereas with Thebes we could set a path/pattern and then change the transform before drawing (perhaps, say, to apply a clip in a different space) and the position of the path/pattern would be unaffected, with Moz2D such a transform change would change (and thus break) the rendering. This, incidentally, was why the SVG code was expected to be the hardest area to switch to Moz2D. Partly for historic reasons, and partly because some of the features that SVG supports encourage it, the SVG code did a lot of setting state, changing transforms, setting some more state and then drawing. Often the complexity of the code made it difficult to figure out which code could be setting relevant state before a transform change, requiring more involved refactoring. On the plus side, sorting this out has made parts of the code significantly easier to understand, and has been something I've wanted to find the time to do for years.

Benefits and next steps

To continue reading about the performance benefits of the conversion of the SVG code and some possible next steps continue to part 2.

Planet MozillaAvast, you're kidd... killing me - said NoScript >:(

If NoScript keeps disappearing from your Firefox, Avast! Antivirus is likely the culprit.
It's gone berserk and is mass-deleting add-ons without warning.
I'm currently receiving tons of reports by confused and angry users.
If the antivirus is dead (as I've been preaching for 7 years), it looks like it's not dead enough yet.

Planet MozillaWhat I am looking for in a guest writer on this blog

Simple: go try guest writing someplace else. This is my personal blog and if I am interested in something, I come to you and do it interview style in order to point to your work or showcase something amazingly cool that you have done.

anteater-sound-of-music

Please, please, please with cherry on top, stop sending me emails like this one:

Hi,

I’m {NAME}, a freelance writer/education consultant. I found “Christian Heilmann” on a Google search and thought I would contact you to see if you would like to work with me. I own a website on Job Application Service that I’m currently promoting for myself. I thought we could benefit each other somehow? If you are interested, I’d be happy to write a very high-quality article for your site and get a couple permanent links from it? While your website is benefiting from my high-quality article, I’m getting links from your site, making this proposition mutually beneficial.
Shall I write an article that matches your niche and send it across for your review or do you need me to write on a particular topic that interests you and your readers, I’m open to any topic, thoughts please?
If this does not interest you, I am sorry to have bothered you. Have a good day! If this does great I hope we can build a long-term business relationship together! If you wish to have a chat on the phone please let me know your phone number and when a good time to call is :) If you’d like, I can share samples with you.
Regards,
{FIRSTNAME}

I am very happy you know how to enter a name in Google and find the blog of that person. That’s a good start. Nobody got hurt, you didn’t overdo it with the research or spend too much effort before asking for my phone number and pointing out just how much you would get out of this “mutually beneficial relationship”. Seriously, I would love to be a fly on the wall when you try dating.

I’ve worked hard on this blog, that’s why it has some success or is at least found. Go work on yours yourself. That’s how it should be. A blog is you. Just like this one is mine.

Planet MozillaBMO show_bug Load Times 2x Faster Since January

The load time for viewing bugs on bugzilla.mozilla.org has become 2x faster since January. See this tweet for graphical evidence.

If you are looking for a direction in which to send your bouquets, glob is your man.

Planet MozillaThe Future of Promise

If you are writing JavaScript in mozilla-central or in an add-on, or if you are writing WebIDL code, by now, you have probably made use of Promise. You may even have noticed that we now have several implementations of Promise in mozilla-central, and that things are moving fast, and sometimes breaking.
At the moment, we have two active implementations of Promise: Promise.jsm and DOM Promise
(as well as a little code using an older, long deprecated, implementation of Promise).
This is somewhat confusing, but the good news is that we are working hard at making it simpler and moving everything to DOM Promise.

General Overview

Many components of mozilla-central have been using Promise for several years, way before a standard was adopted, or even discussed. So we had to come up with our implementation(s) of Promise. These implementations were progressively folded into Promise.jsm, which is now used pervasively in mozilla-central and add-ons.
In parallel, Promises were specified, submitted for standardisation, implemented in Firefox, and finally standardised. This second implementation is the one we call DOM Promise. It is starting to be used in many places on the web.
Having two implementations of Promise with the same feature set doesn’t make sense. Fortunately, Promise.jsm was designed to match the API of Promise that we believed would be standardised, and was progressively refactored and extended to follow these developments, so both APIs are almost identical.
Our objective is to move entirely to DOM Promise. There are still a few things that need to happen before this is possible, but we are getting close. I hope that we can get there by the end of 2014.

Missing pieces

Debugging and testing

At the moment, Promise.jsm is much better than DOM Promise in two aspects:
  • it is easier to inspect a promise from Promise.jsm for debugging purposes (not anymore, things have been moving fast while I was writing this blog entry);
  • Promise.jsm integrates nicely in the test suite, to make sure that uncaught errors are reported and cause test failures.
In both topics, we are hard at work bringing DOM Promise to feature parity with Promise.jsm and then some (bug 989960, bug 1083361). Most of the patches are in the pipeline already.

API differences

  • Promise.jsm offers an additional function Promise.defer, which didn’t make it to standardization.
This function may easily be written on top of DOM Promise, so this is not a hard blocker (see the sketch below). We are going to add this function to a module `PromiseUtils.jsm`.
  • Also, there is a slight bug in DOM Promise that gives it a slightly unexpected behavior in a few edge cases. This should not hit developers who use DOM Promise as expected, but this might surprise people who know the exact scheduling algorithm and expect it to be consistent between Promise.jsm and DOM Promise.

Oh, wait, that’s fixed already.
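
Coming back to Promise.defer for a moment: a defer-style helper is easy to sketch on top of DOM Promise. This is only an illustration of the idea, not the actual PromiseUtils.jsm code:

function defer() {
  let deferred = {};
  // The executor callback runs synchronously, so resolve/reject are
  // captured before the deferred object is returned.
  deferred.promise = new Promise(function(resolve, reject) {
    deferred.resolve = resolve;
    deferred.reject = reject;
  });
  return deferred;
}

// Usage: hand out the promise now, settle it later.
let d = defer();
d.promise.then(value => console.log(value)); // logs 42 once resolved
d.resolve(42);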

Wrapping it up

Once we have done all of this, we will be able to replace Promise.jsm with an empty shell that defers all implementations to DOM Promise. Eventually, we will deprecate and remove this module.

As a developer, what should I do?

For the moment, you should keep using Promise.jsm, because of the better testing/debugging support. However, please do not use Promise.defer. Rather, use PromiseUtils.defer, which is strictly equivalent but is not going away.
We will inform everyone once DOM Promise becomes the right choice for everything.
If your code doesn’t use Promise.defer, migrating to DOM Promise should be as simple as removing the line that imports Promise.jsm in your module.
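
For instance, in a hypothetical module (shown only to illustrate how small the change is), the migration boils down to deleting the import line and relying on the standard Promise global:

// Before: the module pulls in Promise.jsm explicitly.
Components.utils.import("resource://gre/modules/Promise.jsm");

// After: no import needed; the standard DOM Promise is available directly.
let ready = new Promise(resolve => resolve("ready"));
ready.then(state => console.log(state));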

Planet MozillaNative apps, the open web, and web literacy

In a recent blog post, John Gruber argues that native apps are part of the web. This was in response to a WSJ article in which Christopher Mims stated his belief that the web is dying; apps are killing it. In this post, I want to explore the relationship between native apps and web literacy. This is important as we work towards a new version of Mozilla’s Web Literacy Map. It’s something that I explored preliminarily in a post earlier this year entitled What exactly is ‘the mobile web’? (and what does it mean for web literacy?). This, again, was in response to Gruber.

Native app

This blog focuses on new literacies, so I’ll not be diving too much into technical specifications, etc. I’m defining web literacy in the same way as we do with the Web Literacy Map v1.1: ‘the skills and competencies required to read, write and participate on the web’. If the main question we’re considering is are native apps part of the web? then the follow-up question is and what does this mean for web literacy?

Defining our terms

First of all, let’s make sure we’re clear about what we’re talking about here. It’s worth saying right away that 'app’ is almost always used as a shorthand for 'mobile app’. These apps are usually divided into three categories:

  1. Native app
  2. Hybrid app
  3. Web app

From this list, it’s probably easiest to describe a web app:

A web application or web app is any software that runs in a web browser. It is created in a browser-supported programming language (such as the combination of JavaScript, HTML and CSS) and relies on a web browser to render the application. ( Wikipedia)

It’s trickier to define a native app, but the essence can be seen most concretely through Apple’s ecosystem that include iOS and the App Store. Developers use a specific programming language and work within constraints set by the owner of the ecosystem. By doing so, native apps get privileged access to all of the features of the mobile device.

A hybrid app is a native app that serves as a 'shell’ or 'wrapper’ for a web app. This is usually done for the sake of convenience and, in some cases, speed.

The boundary between a native app and a web app used to be much more clear and distinct. However, the lines are increasingly blurred. For example:

  • APK files (i.e. native apps) can be downloaded from the web and installed on Android devices.
  • Developments as part of Firefox OS mean that web technologies can securely access low-level functionality of mobile devices (e.g. camera, GPS, accelerometer).
  • The specifications for HTML5 and CSS3 allow beautiful and engaging web apps to be used offline.

Web literacy and native apps

As a result of all this, it’s probably easier these days to differentiate between a native app and a web app by talking about ecosystems and silos. Understanding it this way, a native app is one that is built specifically using the technologies and is subject to the constraints of a particular ecosystem. So a developer creating an app for Apple’s App Store would have to go through a different process and use a different programming language than if they were creating one for Google’s Play Store. And so on.

Does this mean that we need to talk of a separate 'literacy’ for each ecosystem? Should we define 'Google literacy’ as the skills and competencies required to read, write and participate in Google’s ecosystem? I don’t think so. While there may be variations in the way things are done within the different ecosystems, these procedural elements do not constitute 'literacy’.

What we’re aiming for with the Web Literacy Map is a holistic overview of the skills and competencies people require when using the web. I think at this juncture we’ve got a couple of options. The first would be to define 'the web’ more loosely to really mean 'the internet’.

This is John Gruber’s preferred option. He thinks we should focus less on web browsers (i.e. HTML) and more on the connections (i.e. HTTP). For example, in a 2010 talk he pointed out a difference between 'web apps’ and 'browser apps’. His argument rested on a technical point, which he illustrated with an example. When a user scrolls through their timeline using the Twitter app for iPhone, they’re not using a web browser, but they are using HTTP technologies. This, said Gruber, means that ecosystems such as Apple’s and the web are not in opposition to one another.

While this is technically correct, it’s a red herring. HTML does matter because the important thing here is the open web. Check out Gruber’s sleight of hand in this closing paragraph:

Arguments about “open” and “closed” often devolve into unresolvable cross-talk where the two sides have different definitions of what open and closed really mean. But the weird thing about a truly open platform is that its openness allows closed things to be built on top of it. In broad strokes, that’s why GNU/GPL software isn’t “open” in the way that BSD software is (and why Richard Stallman outright rejects the term “open source”). If you expand your view of “the web” from merely that which renders inside the confines of a web browser to instead encompass all network traffic sent over HTTP/S, the explosive growth of native mobile apps is just another stage in the growth of the web. Far from killing it, native apps have made the open web even stronger.

I think Gruber needs to read up on enclosure and the Commons. To use a 16th-century English agricultural metaphor, the important thing isn’t that the grass is growing in the field, it’s that it’s been fenced off and people are excluded.

A way forward

A second approach is to double-down on what makes the web different and unique. Mozilla’s mission is to promote openness, innovation & opportunity on the web and the Web Literacy Map is a platform for doing this. Even if we don’t tie development of the Web Literacy Map explicitly to the Mozilla manifesto it’s still a key consideration. Therefore, when we’re talking about 'web literacy’ it’s probably more accurate to define it as 'the skills and competencies required to read, write and participate on the open web’.

What do we mean by the 'open web’? While Tantek Çelik approaches it from a technical standpoint, I like Brad Neuberg’s (2008) focus on the open web as a series of philosophies:

Decentralization - Rather than controlled by one entity or centralized, the web is decentralized – anyone can create a web site or web service. Browsers can work with millions of entities, rather than tying into one location. It’s not the Google or Microsoft Web, but rather simply the web, an open system that anyone can plug into and create information at the end-points.
Transparency - An Open Web should have transparency at all levels. This includes being able to view the source of web pages; having human-readable network identifiers, such as URLs; and having clear network entry points, such as HTTP and REST exposes.
Hackability - It should be easy to lash together and script the different portions of this web. MySpace, for example, allows users to embed components from all over the web; Google’s AdSense, another example, allows ads to be integrated onto arbitrary web pages. What would you like to hack together, using the web as a base?
Openness - Whether the protocols used are de facto or de-jure, they should either be documented with open specifications or open code. Any entity should be able to implement these standards or use this code to hook into the system, without penalty of patents, copyright of standards, etc.
From Gift Economies to Free Markets - The Open Web should support extreme gift economies, such as open source and Wikis, all the way to traditional free market entities, such as Amazon.com and Google. I call this Freedom of Social Forms; the tent is big enough to support many forms of social and economic organization, including ones we haven’t imagined yet.
Third-Party Integration - At all layers of the system third-parties should be able to hook into the system, whether creating web browsers, web servers, web services, etc.
Third-Party Innovation - Parties should be able to innovate and create without asking the powers-that-be for permission.
Civil Society and Discourse - An open web promotes both many-to-many and one-to-many communication, allowing for millions of conversations by millions of people, across a range of conversation modalities.
Two-Way Communication - An Open Web should allow anyone to assume three different roles: Readers, Writers, and Code Hackers. Readers read content, Writers write content, and Code Hackers hack new network services that empower the first two roles.
End-User Usability and Integration - One of the original insights of the web was to bind all of this together with an easy to use web browser that was integrated for ease of use, despite the highly decentralized nature of the web. The Open Web should continue to empower the mainstream rather than the tech elite with easy to use next generation browsers that are highly usable and integrated despite having an open infrastructure. Open should not mean hard to use. Why can’t we have the design brilliance of Steve Jobs coupled with the geek openness of Steve Wozniak? Making them an either/or is a false dichotomy.

Conclusion

The Web Literacy Map describes the skills and competencies required to read, write and participate on the open web. But it’s also prescriptive. It’s a way to develop an open attitude towards the world:

Open is a willingness to share, not only resources, but processes, ideas, thoughts, ways of thinking and operating. Open means working in spaces and places that are transparent and allow others to see what you are doing and how you are doing it, giving rise to opportunities for people who could help you to connect with you, jump in and offer that help. And where you can reciprocate and do the same.

Native apps can militate against the kind of reciprocity required for an open web. In many ways, it’s the 21st century enclosure of the commons. I believe that web literacy, as defined and promoted through the Web Literacy Map, should not consider native apps part of the open web. Such apps may be built on top of web technologies and they may link to the open web, but native apps are something qualitatively different. Those who want to explore what reading, writing and participating means in closed ecosystems have other vocabularies – provided by media literacy, information literacy, and digital literacy – with which to do so.


Comments? Questions? Direct them here: doug@mozillafoundation.org or discuss this post in the #TeachTheWeb discussion forum

Planet Mozilla"Unloading" frame scripts in restartless extensions

The big news is: e10s is coming to desktop Firefox after all, and it has even been enabled in the nightly builds already. And while most of the time add-ons continue working without any changes, this doesn’t always work correctly. Plus, using the compatibility shims faking a single-process environment might not be the most efficient approach. So there is reason enough for add-on authors to look into the dreaded and underdocumented message manager and start working with frame scripts again.

I tried porting a simple add-on to this API. The good news: the API hasn’t changed since Firefox 17, so the changes will be backwards-compatible. And the bad news? Well, there are several.

  • Bug 1051238 means that frame scripts are cached — so when a restartless add-on updates, the old frame script code will still be used. You can work around that by randomizing the URL of your frame script (e.g. add "?" + Math.random() to it).
  • Bug 673569 means that all frame scripts run in the same shared scope prior to Firefox 29, so you should make sure there are no conflicting global variables. This can be worked around by wrapping your frame script in an anonymous function.
  • Duplicating the same script for each tab (originally there was only a single instance of that code) makes me wonder about the memory usage here. Sadly, I don’t see a way to figure that out. I assume that about:memory shows frame scripts under the outOfProcessTabChildGlobal entry. But due to the shared scope there is no way to see individual frame scripts there.
  • Finally, you cannot unload frame scripts if your restartless extension is uninstalled or disabled. messageManager.removeDelayedFrameScript() will merely make sure that the frame script won’t be injected into any new tabs. But what about tabs that are already open?

Interestingly, it seems that Mark Finkle was the only one to ask himself that question so far. The solution is: if you cannot unload the frame script, you should at least make sure it doesn’t have any effect. So when the extension unloads it should send a "myaddon@example.com:disable" message to the frame scripts and the frame scripts should stop doing anything.

So far so good. But isn’t there a race condition? Consider the following scenario:

  • An update is triggered for a restartless extension.
  • The old version is disabled and broadcasts the “disable” message to the frame scripts.
  • The new version is installed and starts its frame scripts.
  • The “disable” message arrives and disables all frame scripts (including the ones belonging to the new extension version).

The feedback I got from Dave Townsend says that this race condition doesn’t actually happen and that loadFrameScript and broadcastAsyncMessage are guaranteed to affect frame scripts in the order called. It would be nice to see this documented somewhere; until then it is an implementation detail that cannot be relied on. The work-around I found: since the frame script URL is randomized anyway (due to bug 1051238), I can send it along with the “disable” message:

messageManager.broadcastAsyncMessage("myaddon@example.com:disable", frameScriptURL);

The frame script then processes the message only if the URL matches its own URL:

addMessageListener("myaddon@example.com:disable", function(message)
{
  if (message.data == Components.stack.filename)
  {
    ...
  }
});
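
Putting the pieces together, here is a minimal sketch of the parent-process side (the add-on ID and chrome URL are placeholders, and messageManager stands for the global message manager, as in the snippets above):

// Work around bug 1051238: randomize the URL so an updated add-on
// doesn't end up running a cached copy of the old frame script.
var frameScriptURL = "chrome://myaddon/content/frame-script.js?" + Math.random();

// On startup: load into all existing tabs and into tabs opened later.
messageManager.loadFrameScript(frameScriptURL, true);

// On shutdown/uninstall: stop injecting into new tabs...
messageManager.removeDelayedFrameScript(frameScriptURL);
// ...and tell the already-loaded copies (and only ours, thanks to the
// randomized URL) to stop doing anything.
messageManager.broadcastAsyncMessage("myaddon@example.com:disable", frameScriptURL);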

Planet MozillaMathML November Meeting

MathML November Meeting

Note

Sorry for the delay in writing this.

This is a report about the Mozilla MathML November IRC Meeting (see the announcement here). The topics of the meeting can be found in this PAD (local copy of the PAD) and the IRC log (local copy of the IRC log) is also available.

The next meeting will be on January 7th at 8pm UTC (check the time at your location here). Please add topics to the PAD.

Note

Yes. Our December meeting was cancelled. =(

Leia mais...

Planet MozillaHappy 10th Birthday, MozillaWiki!

Last Monday, Firefox turned 10 years old. Thunderbird turns 10 on 7 December.

This week we celebrate another birthday: MozillaWiki turns 10 on Wednesday, 18 November!

I’m immensely proud of our wiki, its ten year history, and of all the work Mozillians do to make MozillaWiki a hub of collaboration and a living memory for the Mozilla Project.

To show our appreciation for your efforts over the last decade, the MozillaWiki team has created a 10th Birthday badge.

MozillaWiki 10th Birthday Badge

All you need to do to join in the celebration and claim the badge is log in to MozillaWiki. Once you’ve done that, you’ll see a link to claim the badge at the top of the page. Don’t have a MozillaWiki account? No worries! Create one during this Birthday celebration and you can claim the badge too.

A bit of MozillaWiki history

Before I talk about all the good work we’ve done, and what we have planned for the remainder of this year and beyond, let’s take a quick stroll through the last 10 years. Thank you Internet Archive for hosting these snapshots of the wiki!

July 2004

The earliest snapshot I could find of the domain wiki.mozilla.org was from July 2004. It looks like we were hosting separate wiki installations, which may or may not have been Mediawiki.

wiki.mozilla.org July 2004
wiki.mozilla.org/GeckoDev August 2004

November-December 2004

According to WikiApiary, the current installation of MozillaWiki was created on 18 November 2004. The closest snapshot to this date in the Internet Archive is 11 December 2004:

MozillaWiki December 2004

April 2005

By April 2005, the wiki had been upgraded, had a new theme (Cavendish), and had started using Apache rewrite rules to make the URLs pretty (e.g. no index.php).

Mozilla Wiki, April 2005

August 2008

Three years later, in August 2008, we were still rockin’ the Cavendish theme and the Main Page had some more content, including links to the weekly project call that continues to this day.

MozillaWiki August 2008

December 2010

We started tracking releases in December 2007 (see version). Here’s what the Releases page looked like in December 2010.

MozillaWiki December 2010 Releases page

May 2011

In May 2011, after 6 years of service, Cavendish was retired as the default skin and replaced with GMO.

MozillaWiki May 2011 – New GMO skin

July 2012

A year later, July 2012, MozillaWiki looked much the same.

MozillaWiki July 2012

July 2013

By July 2013, the Main Page was edited to include a few recent changes, but otherwise looked very similar.

MozillaWiki July 2013

August 2014

By August 2014, the revitalization of MozillaWiki was in full swing and we were preparing for a major update to both the skin (GMO to Vector) and the underlying software (Mediawiki 1.19 to 1.23). We had also made significant changes to the content of the Main Page based on the results of our recent user survey.

MozillaWiki August 2014

November 2014

Here’s what the wiki looks like today, 17 November, the day before its birthday. We’re running a slightly modified Vector skin and the Mediawiki 1.23.x branch.

MozillaWiki November 2014

MozillaWiki today

Pages, visitors and accounts

As of 16 November, MozillaWiki has 115,912 pages, all public, and nearly 10k uploaded files. About 630 people per month, on average, log in and make contributions to the wiki. These include both staff and volunteers. Want to track these stats yourself? Visit Special:Statistics.

The number of daily visitors ranges from 9k-30k, with an average likely around 13-14k. Who are these visitors? According to our analytics software we get visitors from all over the world, with the greatest concentration being from the US, Canada and UK.

The wiki has over 330,000 registered user accounts. I estimate that about 300k of these are inactive spam accounts, so the real number for user accounts is probably closer to 30,000.

What kind of content is hosted on MozillaWiki?

All kinds of project activity is coordinated and recorded on the wiki. This includes activity related to our products: Firefox, Firefox OS, WebMaker, etc. It also includes community activities such as Reps, Firefox Student Ambassadors, etc. Most project activities have some representation on MozillaWiki. People also use the wiki to track projects and goals on an individual level. In this regard, it served as a place for Mozillians’ profiles long before we had mozillians.org.

The MozillaWiki isn’t set up for localized content now, but this hasn’t stopped our localization communities from translating content. Every day a significant portion of account requests comes from volunteers in regional communities, often in a language other than English. In 2015, depending on the resources available, we plan to significantly improve support for localized content on MozillaWiki.

2014 Accomplishments

This year we’ve made significant progress towards revitalizing MozillaWiki.

Accomplishments include:

  • Forming a team of dedicated volunteers to lead a revitalization effort.
  • Creating an About page for MozillaWiki that clarifies its scope and role in the project, including what is appropriate content and how to report issues.
  • Fixing years-old bugs that cause significant usability problems (table sorting, unavailability of Wikieditor, etc.).
  • Identifying a product owner for MozillaWiki and creating a Module for it, lead by a mix of staff and contributors.
  • Halting the creation of new spam and cleaning up significant amounts of spam content.
  • Upgrading Mediawiki from 1.19.x branch to 1.23.x branch AND changing the default theme without any significant downtime or disruptions to users.
  • Organizing a user survey and using those results to guide much of our roadmap, including the redesign of the Main Page and sidebar navigation.

Thank you everyone who has been a part of this work!

There’s still plenty to do, and many ways to contribute

We’ve made so much progress on the technical and infrastructure debt of MozillaWiki that we’re now ready to focus on improving content and collaboration mechanisms.

How can I help?

There are many ways you can help, and we have contribution opportunities for all kinds of skill levels and time commitments.

We’re working on documenting and organizing these contribution opportunities here: https://wiki.mozilla.org/MozillaWiki:Contribute so check that page often.

Join our mailing-list or community call

If you’d like to help us organize those opportunities, or have other ideas for improving the wiki, join one of our MozillaWiki Team communication channels or one of our community meetings. These meetings are held twice a month on Tuesday at 8:30 PST / 15:30 UTC. Our next meeting is 16 December. All who are interested in contributing to the wiki are welcome.

In the meantime, log in to MozillaWiki and celebrate its birthday with us by claiming the birthday badge!

Planet MozillaWe have a massive recruitment problem

A few months ago, I flew over to see my parents for their 50th wedding anniversary. As some of you may know, I have a humble background. My dad was a coal miner and then factory worker and my mother has always been a home maker / housewife. I am the only one in my family that went to college and I skipped university as the thing to do was to make money in a job as soon as you are 18.

It was humbling and almost embarrassing to have conversations with my family. Half of them are either unemployed or worried about their jobs. The rest are unhappy in their jobs but see no way to change that, as they need the security. Finding joy in family life and leisure time is more important than enjoying the work. A job is a job, you got to do what you got to do and all that.

Futurama: you gotta do what you gotta do

That’s why it feels surreal to come back into “our world” and get offers for jobs I don’t want. A lot of them. Some with ridiculous amounts of money offered and most with perks that would make my family blush and sense a trap.

Why are we not recruiter compatible and vice versa?

We’re lucky to be that sought after and yet it seems there is no happy symbiosis between us and recruiters. On the contrary, as soon as you even mention recruiting most of us techies start ranting.

I feel uneasy doing that. I feel like an arrogant ass and I feel that we should be more grateful about the opportunities we get. The relationship of recruiter and job seeker should be high-fives and unicorns. Instead there is a massive sense of dread: “Oh god, another job offer, how tiring”.

There are reasons for our dismay:

  • A lot of headhunters/recruiters work on a commission basis and are measured by how many contacts they had that day. This leads to a scattergun approach and you get offers that are not “you” at all.
  • Many recruiters seem to just look for keywords and then send the offer out to you. That’s why you get Java positions when you have a JavaScript background. Just like you would send car mechanic jobs to a carpet expert.
  • Others go for company names. A great example was this recruiter trying to hire someone’s dog as a Java/Python developer.
  • Many recruiting sites are very pushy to get you into their database to show potential hiring companies just how many job searchers they have. This leads to very old and outdated profiles and you get offers for jobs you’ve done years ago. Basically they don’t want to find you a job, they want you as an ad.
  • People write ridiculous job descriptions and send them to us. In the past I wrote up what kind of people I was trying to hire, and once it went through HR and recruitment review something completely ridiculous ended up online. You’ve seen those: asking people for two degrees, but not older than 20; seven years of experience in a half-year-old technology; and similar confusing points.
  • There is probably nothing more intrusive to someone who feels at home online than to be called by somebody. Recruiters, however, seem to see the “personal touch” as the most important thing.

On the other side of this issue, we are not innocent either:

  • Instead of telling people why we didn’t want their offer, we just ignore them. There is no learning on either side.
  • We love our own tools and are not too interested in changing that. Every recruitment department I worked with needed a CV in a document format for filing and keeping. Instead of having one of those at hand we love to create online CVs and portfolios or point people to our GitHub account as “real people who would hire me find all they need there”. This is navel gazing and arrogant. If I want to go on a bus, I need a bus ticket. A macaroni picture with glitter on it saying “most amazing responsive bus ticket” will not get me on there. Have the tool for the task.
  • We don’t keep our presence up-to-date. If you’re not seeking, say so on LinkedIn. Have a template to send back to recruiters telling them “thanks, but no.”
  • We also shouldn’t create profiles of our dog on LinkedIn. This is a professional tool, if we don’t use these in a professional manner we shouldn’t be surprised that they go to the dogs.
  • Keep your skills up-to-date. If you never ever want to work with a certain product any longer, remove it from your online presence. That way keyword searchers don’t find you.

We need to communicate better

I feel there is a massive waste going on and an accumulation of frustration on both sides. We need to get better at helping one another to make this the natural partnership it should be. I feel terrible hearing about friends not in our world who send out hundreds of applications and don’t get answers whilst we complain about people trying to offer us jobs. It feels almost unreal.

There are a few good ideas around and there is a start to clean this mess up. Joblint is a tool that comes to mind. It is an analysing tool that takes job descriptions and allows you to

“Test tech job specs for issues with sexism, culture, expectations, and recruiter fails”.

A lot of miscommunication could be avoided simply by using that.

Considering giving a helping hand

Maybe I should do something about this and use my time off to reach out and try to change something. I wonder if a workshop for recruiters about issues to avoid would be of interest? In any case, let’s try to be more understanding. Recruiters do their jobs the same way we do ours. By understanding their drives and goals, we can make both of our lives easier. By being arrogant and coming across as divas, we shouldn’t be surprised if job descriptions start calling out for rockstars, ninjas, gurus and mavens.

Let’s highlight the great experiences we had, and share what worked. Maybe that could be the lever we need to crack this nut open.

Planet MozillaLet’s Encrypt

Today we announced a project that I’ve been working on for a while now – Let’s Encrypt. This is a new Certificate Authority (CA) that is intended to be free, fully automated, and transparent. We want to help make the dream of TLS everywhere a reality. See the official announcement blog post I wrote for more information.

Eric Rescorla and I decided to try to make this happen during the summer of 2012. We were trying to figure out how to increase SSL/TLS deployment, and felt that an innovative new CA would likely be the best way to do so. Mozilla agreed to help us out as our first major sponsor, and by May of 2013 we had incorporated Internet Security Research Group (ISRG). By September 2013 we had merged a similar project started by EFF and researchers from the University of Michigan into ISRG, and submitted our 501(c)(3) application. Since then we’ve put a lot of work into ISRG’s governance, found the right sponsors, and put together the plans for our CA, Let’s Encrypt.

I’ll be serving as ISRG’s Executive Director while we search for more permanent leadership. During this time I’ll remain with Mozilla.

Too many people to thank for their help here, many of whom work for our sponsors, but I want to call out Eric Rescorla (Mozilla) and Kevin Dick (Right Side Capital Management) in particular. Eric was my original co-conspirator, and Kevin has spent innumerable hours with me helping to create partnerships and the necessary legal infrastructure for ISRG. Both are incredible at what they do, and I’ve learned a lot from working with them.

Now it’s time to finish building the CA – lots of software to write, hardware to install, and auditing to complete. If you have relevant skills, we hope you’ll join us.


Planet MozillaDaala: Perceptual Vector Quantization (PVQ)

Here's my new contribution to the Daala demo effort. Perceptual Vector Quantization has been one of the core ideas in Daala, so it was time for me to explain how it works. The details involve lots of maths, but hopefully this demo will make the general idea clear enough. I promise that the equations in the top banner are the only ones you will see!

Read more!

Planet MozillaFirefox and Cisco’s Project Squared

Yesterday I was at Cisco’s Collaboration Summit where Cisco’s CTO for Collaboration Jonathan Rosenberg and I showed Cisco’s new WebRTC-based Project Squared collaboration service running in Firefox, talking to a Cisco Collaboration Desktop endpoint without requiring transcoding.

This demo is the culmination of a year long collaboration between Cisco and Mozilla in the WebRTC space. WebRTC enables voice and video communication directly from within the browser. This means that anyone can build a video conferencing service just using WebRTC and HTML5 standards, without the need for the user to download a plugin or a native application.

Cisco is not only developing WebRTC-based services that run on the Web. They have  also joined a growing number of organizations and companies helping Mozilla to build a better Web. Over the last year Cisco has contributed numerous technical improvements to Mozilla’s WebRTC implementation, including support for screen sharing and the H.264 video codec. These features are now shipping in Firefox. We intend to use them in the future in Mozilla’s own Hello communication service that we are bringing to Firefox.

Cisco’s contributions to the Web go beyond just advancing Firefox. For the last three years the IETF, the standards body defining the networking protocols for WebRTC, has been unable to agree on a mandatory video codec for WebRTC, putting ubiquitous interoperability in doubt.

One of the major blockers to coming to a consensus was that H.264 is subject to royalty-bearing patents, which made it problematic for open source projects such as Firefox to deploy it. To break this logjam, Cisco open-sourced its H.264 code base and made it available in plugin form. Any product  — not just Firefox — can download the plugin and use it to enable H.264 without paying any royalties.

This collaboration between Mozilla and Cisco enabled Firefox to add support for H.264 in WebRTC, and also played a significant role in the compromise reached at the last IETF meeting to adopt both H.264 and VP8 as mandatory video codecs for WebRTC in browsers. As a result of this compromise, in the future all browsers should match the capabilities already available in Firefox.

Mozilla will continue to work on advancing Firefox and the Web, and we are excited to have strong partners like Cisco who share our commitment to the open Web as a shared technology platform.


Filed under: Mozilla Tagged: Mozilla, WebRTC

Planet MozillaAccessibility goes into DOM

The PWFG group has suggested two new methods for the DOM Element interface. These methods reflect the accessibility concepts of role and name, and were named computedRole and computedLabel accordingly.
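To make the proposal concrete, here is a hypothetical sketch of how the two additions might be used; the names come from the proposal, but whether they end up as properties or callable methods, and exactly what they return, is an assumption on my part.

// Hypothetical usage of the proposed computedRole / computedLabel
// additions; property-style access and the return values shown are
// assumptions, not something the proposal guarantees.
var saveButton = document.querySelector('#save');

// The role an accessibility API would expose for the element, after
// resolving ARIA attributes and native semantics.
console.log(saveButton.computedRole);   // e.g. "button"

// The accessible name, computed from aria-label, aria-labelledby,
// text content and so on.
console.log(saveButton.computedLabel);  // e.g. "Save document"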

I have a bunch of issues with this approach that I wanted to outline here, just to keep things in one place.

The purpose


I've been told that the primary reason is testing, but having role and name only is not enough to run UAIG tests or any accessibility automation tool, since those would require other accessibility properties.

They also say that it might be used for non-accessibility purposes. I realize that the semantics ARIA adds can be used by non-assistive technologies. In Firefox we have a large number of non-AT consumers, but in most cases we don't have a good idea what they are for. So I don't really have the use case, and thus it's hard to say whether accessible role and name alone work well for non-a11y purposes.

As for assistive technologies, I think they also need a much larger API.

Blowing the DOM


As I said above, anything useful will require extra accessible properties: accessible description, states, relations, the ability to navigate the hierarchy, etc. That means sooner or later the Element interface would have to be changed to a great extent. Check out AtkObject to get an idea of the possible changes.

In the beginning, accessibility interfaces were built on top of the DOM, and later they were turned into full APIs. Now we face the reverse process: accessibility APIs are making their way back into the DOM. I'm not sure that's a good idea, because accessibility tasks are very specific, and an accessibility API might not be suitable for the common needs of web apps.

Restrictions


Not every semantically meaningful piece on the screen has a DOM node; for example, list bullets don't necessarily have DOM elements associated with them. So an Element-based accessibility API is too restrictive to fit the requirements of assistive technologies.

Performance


Last but not least is the performance issue. In most browsers the accessibility engine is kept separate and only runs on demand. If accessibility is merged with the DOM, then nothing tells the developer that calling this method may trigger heavy accessibility computations and make their app slower. Surely browsers will learn to get smarter, but the approach will carry a perf hit either way.

What's it going to be then, eh?


The idea is to provide a separate accessibility interface. If you like, it can be done in parts: for example, introduce role and name only in the first round, just as the original proposal says, and later think about adding all the other properties.

This idea was welcomed initially, but later it was rejected as being too complex and accessibility-centric. But - and that's the most important thing - it doesn't have the disadvantages the Element approach has.

Planet MozillaLet’s Encrypt: One more step on the road to TLS Everywhere

Principle 4 of the Mozilla Manifesto states: Individuals’ security and privacy on the Internet are fundamental and must not be treated as optional.

Unfortunately treating user security as optional is exactly what happens when sites let users connect over insecure HTTP rather than HTTP over TLS (HTTPS). What insecure means here is that your network traffic is totally unprotected and can be read and/or modified by anyone who shares a network with you, including random people sharing Starbucks or airport WiFi.

One of the biggest reasons that web sites don’t deploy TLS is the requirement to get a digital certificate — a cryptographic credential which allows a user’s browser to know it’s talking to the right site and not to an attacker. Certificates are issued by Certificate Authorities (CAs) often using a clumsy and error-prone manual process. A further disincentive to deployment is that  most CAs charge a fee for their certificates, which not only prices some people out of the market but also interferes with automatic issuance and renewal.

Mozilla, along with our partners Akamai, Cisco, EFF, and Identrust decided to do something about this situation. Together, we’ve formed a new consortium, the Internet Security Research Group, which is starting Let’s Encrypt, a new certificate authority designed to bring security to everyone. Let’s Encrypt is built around a few key principles:

  • Free: Certificates will be offered at no cost.
  • Automatic: Certificates will be issued via a public and published API, allowing Web server software to automatically obtain new certificates at installation time and without manual intervention.
  • Independent: No piece of infrastructure this important should be controlled by a single company. ISRG, the parent entity of Let’s Encrypt, is governed by a board drawn from industry, academia, and nonprofits, ensuring that it will be operated in the public interest.
  • Open: Let’s Encrypt will be publishing its source code and protocols, as well as submitting the protocols for standardization so that server software as well as other CAs can take advantage of them.

Let’s Encrypt will be issuing its first real certificates in Q2 2015. In the meantime, we have published some initial protocol drafts along with a demonstration client and server at: https://github.com/letsencrypt/node-acme and https://github.com/letsencrypt/heroku-acme. These are functional today and can be used to issue test certificates.

It’s been a long road getting here and we’re not done yet, but this is an important step towards a world with TLS Everywhere.


Filed under: Mozilla

Planet MozillaTools for the 21st century musician—super abridged dotJS edition

I attended dotJS yesterday, where I gave a very short version of the talk I gave at Full Frontal the week before last (18 minutes versus 40).

The conference happened in a theatre and we were asked not to use bright background so I changed my slides to be darker and classier.

It didn’t really go as smoothly as I expected (a kernel panic a bit before the start of the talk, and I got nervous and distracted so I got more nervous and…), but I guess I can’t always WIN! It was fun to speak in French, if only one line, though: Je suis très contente d’être parmi vous! (“I am very happy to be here with you!”) – thanks to Thomas for the assistance in coming up with the perfect presentation line, and Guillaume and Sasha for listening to me repeat it until it resembled passable French!

While the video is being edited and released, here’s a sample in the form of the slides, online, and their source code on GitHub.

It was fun to use CSS filters to invert the images so they would not be a big white block on top of a dark background. Yay CSS filters!

.filter-invert {
    filter: invert(100%) brightness(2);
}

I also used them in transitions between slides: I discovered that I could blur between slides. Cinematic effects! (sorta, as I cannot get vertical/horizontal blur). For example:

.bespoke-active.emphatic-text {
  filter: none;
}
.bespoke-inactive.emphatic-text {
  filter: blur(10px);
}

I use my custom plugin presentation-fullscreen for getting real fullscreen in my slides. It’s on npm:

npm install presentation-fullscreen --save

then just

require('presentation-fullscreen');

will add a new option to the contextual menu for making the whole body go fullscreen.

I shall soon write about this tip and how I use bespoke.js in general, plus a couple of thoughts and ideas I had during the conference. Topics include (so I don’t forget): why a mandatory lack of anonymity is not the solution to doxxing, and the ideal talk length.


Planet MozillaFirefox 34 release date moving to Dec 1/2

The Firefox 34 release date will move out one week from Nov 25 to Dec 1/2. This change impacts Firefox Desktop, Firefox for Android, Firefox ESR, and Thunderbird.
The purpose of this change is to allow for an additional week of stabilization during the 34 cycle.

Details of the change:

  • Release date change from Nov 25 to Dec 1/2 (need to determine the date that works best given the work week)
  • Merge date change from Tue, Nov 24 to Fri, Nov 28
  • Two additional desktop betas (10 and 11) will be added to the calendar this week on our usual beta build schedule (build Mon and Thu, release Tue and Fri)
  • One additional mobile beta (beta 11) will be added to the schedule.
    Note that mobile beta 10 will gtb on schedule on Mon.
    Mobile beta 11 will gtb on Thu with desktop in order to be ready early the following week.
  • RC builds will happen on Mon, Nov 24
Note that we are effectively moving an extra week that we had previously added to the 35 Beta cycle into the 34 Beta cycle. 35 will have a 7-week Aurora cycle instead of a 7-week Beta cycle.

Planet MozillaIs the Web dying?

This article may or may not be pay-walled, depending on how you arrive at it.   It is an exploration of the shift to apps.

The history of computing is companies trying to use their market power to shut out rivals, even when it’s bad for innovation and the consumer….That doesn’t mean the Web will disappear. Facebook and Google still rely on it to furnish a stream of content that can be accessed from within their apps. But even the Web of documents and news items could go away. Facebook has announced plans to host publishers’ work within Facebook itself, leaving the Web nothing but a curiosity, a relic haunted by hobbyists.

This is something I was getting at with my post yesterday: that advertising remains one of the Web’s unique selling points.  It is much more effective as an advertising platform than mobile apps are.  At the moment, the Internet giants extract an enormous amount of value from the content on the Web, using it to drive engagement with their services.  The Web has very low barriers to entry, but economic sustainability is difficult and the only proven revenue model appears to be advertising at scale.  The model needs liberating.

(Note: The source of this article, the Wall Street Journal, may appear to refute that (given it has a paywall), but I believe that their model is essentially freemium and it isn’t clear to me what revenues they derive from subscription customers.)


Planet MozillaAltering large tables without bringing down your service

When we run ALTER statements on our big tables we have to plan ahead to keep from breaking whatever service is using the database. In MySQL, many times* a simple change to a column (say, from being a short varchar to being a text field) can read-lock the entire table for however long it takes to make the change. If you have a service using the table when you begin the query you'll start eating into your downtime budget.

If you have a large enough site to have database slaves you'll have a double-whammy - all reads will block on the master altering the table, and then, by default, the change will be replicated out to your slaves and not only will they read-lock the table while they alter it, but they will pause any further replication until the change is done, potentially adding many more hours of outdated data being returned to your service as the replication catches up.

The good news is, in some situations, we can take advantage of having database slaves to keep the site at 100% uptime while we make time consuming changes to the table structure. The notes below assume a single master with multiple independent slaves (meaning, the slaves aren't replicating to each other).

Firstly, it should go without saying, but the client application needs to gracefully handle both the existing structure and the anticipated structure.

When you're ready to begin, pull a slave out of rotation and run your alter statement on it. When it completes, put the slave back into the cluster and let it catch up on replication. Repeat those steps for each slave. Then fail over one of the slaves to become the new master, pull the old master out of rotation, and run the alter statement on it. Once it has finished, put it back in the cluster as a slave. When the replication catches up you can promote it back to master and switch the temporary master back to a slave.

At this point you should have the modified table structure everywhere and be back to your original cluster configuration.

Special thanks to Sheeri who explained how to do all the above and saved us from temporarily incapacitating our service.

*Which changes will lock a table varies depending on the version of MySQL. Look for "Allows concurrent DML?" in the table on this manual page.

Planet Mozillahappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1096565] GET REST calls should allow arbitrary URL parameters to be passed in addition the values in the path
  • [1097813] Bug.search causes error when using simple token auth and specifying ‘token’ instead of ‘Bugzilla_token’
  • [1036802] Requests to the native rest/bzapi endpoints with gzip encoding always result in HTTP/200 responses
  • [1097382] OS sniffing should detect Windows 10 from “Windows NT 6.4” instead of detecting Windows NT
  • [1098956] remove autoland support
  • [1100368] css concatenation breaks data: urls

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Planet MozillaAnother 100K bug reports… and nobody noticed

Bugzilla bug report #1,100,000

We used to have a little cheer for every 100,000 bug reports filed.  Bugzilla hasn’t keeled over yet!

But somehow, in the aftermath of the megabug, I think we forgot to plan for this one.  Oops!

It looks like it took us about 19 months to get here from the previous one.  I’ll leave it to Gervase Markham to dig up the appropriate statistics.

Planet MozillaRedesign of reddit’s Login/Account Creation Window and reddit.com/login

We’ve just launched a cleanup of our login and account creation dialog and reddit.com/login. Here’s a comparison between the old version and new version: Props to new engineer aurora-73 for […]

Planet MozillaThe sad state of server-side TLS Session Resumption implementations

Probably the oldest complaint about TLS is that its handshake is slow and that, together with the transport encryption, it has a lot of CPU overhead. This is certainly no longer true if configured correctly (even if some companies choose to ignore that).

One of the most important features to improve user experience for visitors accessing your site via TLS is session resumption. Session resumption is the general idea of avoiding a full TLS handshake by storing the secret information of previous sessions and reusing those when connecting to a host the next time. This drastically reduces latency and CPU usage.

Enabling session resumption in web servers and proxies can however easily compromise forward secrecy. To find out why having a de facto standard TLS library (i.e. OpenSSL) can be a bad thing, and how to avoid botching PFS, let us take a closer look at forward secrecy and the current state of server-side implementations of session resumption features.

What is (Perfect) Forward Secrecy?

(Perfect) Forward Secrecy is an important part of modern TLS setups. The core of it is to use ephemeral (short-lived) keys for key exchange so that an attacker gaining access to a server cannot use any of the keys found there to decrypt past TLS sessions they may have recorded previously.

We must not use a server’s RSA key pair, whose public key is contained in the certificate, for key exchanges if we want PFS. This key pair is long-lived and will most likely outlive certificate expiration dates, as you would just use the same key pair to generate a new certificate after the current one expires. If the server is compromised it would be far too easy to determine the location of the private key on disk or in memory and use it to decrypt recorded TLS sessions from the past.

Using Diffie-Hellman key exchanges where key generation is a lot cheaper we can use a key pair exactly once and discard it afterwards. An attacker with access to the server can still compromise the authentication part as shown above and {M,W}ITM everything from here on using the certificate’s private key, but past TLS sessions stay protected.

How can Session Resumption botch PFS?

TLS provides two session resumption features: Session IDs and Session Tickets. To better understand how those can be attacked it is worth looking at them in more detail.

Session IDs

In a full handshake the server sends a Session ID as part of the “hello” message. On a subsequent connection the client can use this session ID and pass it to the server when connecting. Because both server and client have saved the last session’s “secret state” under the session ID they can simply resume the TLS session where they left off.

To support session resumption via session IDs the server must maintain a cache that maps past session IDs to those sessions’ secret states. The cache itself is the main weak spot: stealing the cache contents allows an attacker to decrypt all sessions whose session IDs are contained in it.

The forward secrecy of a connection is thus bounded by how long the session information is retained on the server. Ideally, your server would use a medium-sized cache that is purged daily. Purging your cache might however not help if the cache itself lives on persistent storage, as it might be feasible to restore deleted data from it. In-memory storage should be more resistant to these kinds of attacks if it turns over about once a day and ensures old data is overwritten properly.
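To make the idea of a time-bounded, in-memory session cache concrete, here is a minimal sketch using Node.js's tls module and its 'newSession'/'resumeSession' events; the 24-hour lifetime, the certificate paths and the port are placeholders, not recommendations taken from any of the servers discussed below.

// Minimal sketch: an in-memory session ID cache whose entries expire
// and are purged after one day. Paths, port and lifetime are placeholders.
var tls = require('tls');
var fs = require('fs');

var ONE_DAY = 24 * 60 * 60 * 1000;
var sessions = {};   // session ID (hex) -> { data: Buffer, created: timestamp }

var server = tls.createServer({
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem')
});

// Store the session's secret state after a full handshake.
server.on('newSession', function (id, data, done) {
  sessions[id.toString('hex')] = { data: data, created: Date.now() };
  done();
});

// Only resume sessions that are younger than one day.
server.on('resumeSession', function (id, done) {
  var entry = sessions[id.toString('hex')];
  if (entry && Date.now() - entry.created < ONE_DAY) {
    done(null, entry.data);
  } else {
    done(null, null);
  }
});

// Periodically purge stale entries so old secret state does not linger in memory.
setInterval(function () {
  Object.keys(sessions).forEach(function (id) {
    if (Date.now() - sessions[id].created > ONE_DAY) {
      delete sessions[id];
    }
  });
}, 60 * 60 * 1000);

server.listen(8443);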

Session Tickets

The second mechanism to resume a TLS session are Session Tickets. This extension transmits the server’s secret state to the client, encrypted with a key only known to the server. That ticket key is protecting the TLS connection now and in the future and is the weak spot an attacker will target.

The client will store its secret information for a TLS session along with the ticket received from the server. By transmitting that ticket back to the server at the beginning of the next TLS connection both parties can resume their previous session, given that the server can still access the secret key that was used to encrypt it.

We ideally want the same secrecy bounds for Session Tickets as for Session IDs. To achieve this we need to ensure that the key used to encrypt tickets is rotated about daily. Just like the session cache, it should not live on persistent storage, so as not to leave any trace.
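To illustrate the kind of rotation this calls for, here is a minimal sketch using Node.js's tls module, assuming a Node version that supports the ticketKeys option and server.setTicketKeys(); the daily interval and the file paths are placeholders.

// Minimal sketch: rotate the session ticket key daily instead of keeping
// the key generated at startup forever. Assumes setTicketKeys() is available.
var tls = require('tls');
var fs = require('fs');
var crypto = require('crypto');

var server = tls.createServer({
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem'),
  // Start with a fresh random 48-byte ticket key rather than a key file.
  ticketKeys: crypto.randomBytes(48)
});

// Replace the ticket key once a day. Previously issued tickets simply stop
// being resumable, which is exactly the bound on secrecy we are after.
setInterval(function () {
  server.setTicketKeys(crypto.randomBytes(48));
}, 24 * 60 * 60 * 1000);

server.listen(8443);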

Apache configuration

Now that we have determined how we ideally want session resumption features to be configured, we should take a look at popular web servers and load balancers to see whether that is supported, starting with Apache.

Configuring the Session Cache

The Apache HTTP Server offers the SSLSessionCache directive to configure the cache that contains the session IDs of previous TLS sessions along with their secret state. You should use shmcb as the storage type; that is a high-performance cyclic buffer inside a shared memory segment in RAM. It will be shared between all threads or processes and allow session resumption no matter which of those handles the visitor’s request.

SSLSessionCache shmcb:/path/to/ssl_gcache_data(512000)

The example shown above establishes an in-memory cache via the path /path/to/ssl_gcache_data with a size of 512 KiB. Depending on the amount of daily visitors the cache size might be too small (i.e. have a high turnover rate) or too big (i.e. have a low turnover rate).

We ideally want a cache that turns over daily and there is no really good way to determine the right session cache size. What we really need is a way to tell Apache the maximum time an entry is allowed to stay in the cache before it gets overridden. This must happen regardless of whether the cyclic buffer has actually cycled around yet and must be a periodic background job to ensure the cache is purged even when there have not been any requests in a while.

You might wonder whether the SSLSessionCacheTimeout directive can be of any help here - unfortunately no. The timeout is only checked when a session ID is given at the start of a TLS connection. It does not cause entries to be purged from the session cache.

Configuring Session Tickets

While Apache offers the SSLSessionTicketKeyFile directive to specify a key file that should contain 48 random bytes, it is recommended to not specify one at all. Apache will simply generate a random key on startup and use that to encrypt session tickets for as long as it is running.

The good thing about this is that the session ticket key will not touch persistent storage; the bad thing is that it will never be rotated. Generated once on startup, it is only discarded when Apache restarts. For most of the servers out there that means they use the same key for months, if not years.

To provide forward secrecy we need to rotate the session ticket key about daily, and current Apache versions provide no way of doing that. The only way to achieve that might be to use a cron job that gracefully restarts Apache daily to ensure a new key is generated. That does not sound like a real solution though, and nothing ensures the old key is properly overwritten.

Changing the key file while Apache is running does not do it either; you would still need to gracefully restart the service to apply the new key. And do not forget that if you use a key file, it should be stored on a temporary file system like tmpfs.

Disabling Session Tickets

Although disabling session tickets will undoubtedly have a negative performance impact, for the time being you will need to do that in order to provide forward secrecy:

SSLOpenSSLConfCmd Options -SessionTicket

Ivan Ristic adds that to disable session tickets for Apache using SSLOpenSSLConfCmd, you have to be running OpenSSL 1.0.2 which has not been released yet. If you want to disable session tickets with earlier OpenSSL versions, Ivan has a few patches for the Apache 2.2.x and Apache 2.4.x branches.

To securely support session resumption via tickets Apache should provide a configuration directive to specify the maximum lifetime for session ticket keys, at least if auto-generated on startup. That would allow us to simply generate a new random key and override the old one daily.

Nginx configuration

Another very popular web server is Nginx. Let us see how that compares to Apache when it comes to setting up session resumption.

Configuring the Session Cache

Nginx offers the ssl_session_cache directive to configure the TLS session cache. The type of the cache should be shared to share it between multiple workers:

ssl_session_cache shared:SSL:10m;

The above line establishes an in-memory cache with a size of 10 MB. We again have no real idea whether 10 MB is the right size for the cache to turn over daily. Just as Apache, Nginx should provide a configuration directive to allow cache entries to be purged automatically after a certain time. Any entries not purged properly could simply be read from memory by an attacker with full access to the server.

You guessed right, the ssl_session_timeout directive again only applies when trying to resume a session at the beginning of a connection. Stale entries will not be removed automatically after they time out.

Configuring Session Tickets

Nginx allows you to specify a session ticket key file using the ssl_session_ticket_key directive, and again you are probably better off not specifying one and having the service generate a random key on startup. The session ticket key will never be rotated and might be used to encrypt session tickets for months, if not years.

Nginx, too, provides no way to automatically rotate keys. Reloading its configuration daily using a cron job might work but does not come close to a real solution either.

Disabling Session Tickets

The best you can do to provide forward secrecy to visitors is thus, again, to switch off session ticket support until a proper solution is available.

ssl_session_tickets off;

HAproxy configuration

HAproxy, a popular load balancer, suffers from basically the same problems as Apache and Nginx. All of them rely on OpenSSL’s TLS implementation.

Configuring the Session Cache

The size of the session cache can be set using the tune.ssl.cachesize directive that accepts a number of “blocks”. The HAproxy documentation tries to be helpful and explain how many blocks would be needed per stored session but we again cannot ensure an at least daily turnover. We would need a directive to automatically purge entries just as for Apache and Nginx.

And yes, the tune.ssl.lifetime directive does not affect how long entries are persisted in the cache.

Configuring Session Tickets

HAproxy does not allow configuring session ticket parameters. It implicitly supports this feature because OpenSSL enables it by default. HAproxy will thus always generate a session ticket key on startup and use it to encrypt tickets for the whole lifetime of the process.

A graceful daily restart of HAproxy might be the only way to trigger key rotation. This is a pure assumption though; please do your own testing before using that in production.

Disabling Session Tickets

You can disable session ticket support in HAproxy using the no-tls-tickets directive:

ssl-default-bind-options no-sslv3 no-tls-tickets

A previous version of the post said it would be impossible to deactivate session tickets. Thanks to the HAproxy team for correcting me!

Session Resumption with multiple servers

If you have multiple web servers that act as front-ends for a fleet of back-end servers you will unfortunately not get away with not specifying a session ticket key file and a dirty hack that reloads the service configuration at midnight.

Sharing a session cache between multiple machines using memcached is possible but using session tickets you “only” have to share one or more session ticket keys, not the whole cache. Clients would take care of storing and discarding tickets for you.

Twitter wrote a great post about how they manage multiple web front-ends and distribute session ticket keys securely to each of their machines. I suggest reading that if you are planning to have a similar setup and support session tickets to improve response times.

Keep in mind though that Twitter had to write their own web server to handle forward secrecy in combination with session tickets properly and this might not be something you want to do yourselves.

It would be great if either OpenSSL or all of the popular web servers and load balancers would start working towards helping to provide forward secrecy by default and server admins could get rid of custom front-ends or dirty hacks to rotate keys.

Planet MozillaFirefox 34 beta8 to beta9

  • 31 changesets
  • 56 files changed
  • 920 insertions
  • 250 deletions

Extension    Occurrences
cpp          15
js           8
html         4
h            4
c            3
xul          2
mn           2
list         2
ini          2
css          2
build        2
xml          1
xhtml        1
svg          1
sjs          1
mm           1
jsm          1
java         1
conf         1

Module       Occurrences
browser      16
layout       9
dom          6
content      5
netwerk      4
js           3
gfx          3
mobile       2
xpfe         1
widget       1
media        1
intl         1
editor       1
docshell     1

List of changesets:

Matthew Gregan: Bug 1085175. r=roc, a=dveditz - 9cd882996cbe
Jordan Santell: Bug 1078539 - Add a doorhanger widget for the developer edition notification to browser console, developer tools, webide and responsive design mode. r=jryans, a=lmandel - e7f8aa528841
James Willcox: Bug 1097126 - Restrict MediaCodec backend to Android 5.0 and higher r=blassey a=lmandel - 7dfbe52d1a2b
James Willcox: Backed out changeset 7dfbe52d1a2b a=lmandel - b0fea8a116aa
James Willcox: Bug 1097276 - Disable MediaCodec backend r=edwin a=lmandel - cd70fa61662a
Chenxia Liu: Bug 1093619 - Don't display onboarding screen for Webapp profiles. r=margaret, a=lmandel - 5cbc59a67d8c
James Willcox: Bug 1097276 - Disable fragmented MP4 support on Android r=rillian a=lmandel - f7dd649eb2f6
Richard Newman: Bug 1095298 - Ignore 'C' locale when initializing nsLocaleService on Android. r=emk, a=lmandel - 43fd2720be09
Olli Pettay: Bug 1096263 - XMLHttpRequest.send({}) should not throw. r=bz, a=lmandel - 9e57cec588a9
Olli Pettay: Bug 1096263 - XMLHttpRequest.send({}) should not throw, tests. r=bz, a=lmandel - 0197e9eb324f
Botond Ballo: Bug 1068961 - Reset clip rect for color layers. r=roc, a=lmandel - 9f14f2af8bf7
Robert O'Callahan: Bug 1084672 - Call NotifyDidPaint from the refresh driver to ensure it gets called regardless of whether OMTC is used or not. r=mattwoodrow, a=lmandel - 2b08e1cb3c6f
Matthew Gregan: Bug 1096716 - Delay buffer frame calculation in WMF audio decoder until after UpdateOutputType to avoid using obsolete rate/channel values. r=cpearce, a=lmandel - 49f10dbc7d69
Boris Zbarsky: Bug 1090616 - Don't assume that the nodes we just pasted are still in the DOM, because mutation listeners suck like that. r=smaug, a=lmandel - 609915862295
Steven Michaud: Bug 1092855 - Work around bad interaction between jemalloc and Apple uninitialized memory bug. r=spohl, a=lsblakk - 4bdf71e69d10
Matthew Gregan: Bug 1092859 - Always use soft volume in WinMM cubeb backend. r=padenot, a=lmandel - f2dd9f2a084a
Steven Michaud: Bug 1086977 - Facebook's old "Facebook Photo Uploader" (fbplugin) crashes on load on OS X Yosemite. r=bsmedberg a=lmandel - e588ff4e326e
Neil Rashbrook: Bug 1070768 - Fix up more references to XPFE's autocomplete.css. r=Ratty, a=lmandel - aa474c125c53
Gijs Kruitbosch: Bug 1096787 - Copy new logins.json database when using fx reset. r=MattN, a=lsblakk - b718e8c0d423
Randell Jesup: Bug 1080312 - Update iteration code from upstream. r=jesup, a=abillings - 4bb1c6116c39
Gijs Kruitbosch: Bug 1096695 - hardcode strings for beta to give more info about sslv3 being dead, r=bz,gavin, a=lmandel - d585e4e50468
Gijs Kruitbosch: Bug 1096695 - fix test failures in test_aboutCrashed.xul, rs=bustage,a=bustage - 117eb4e49c72
Bas Schouten: Bug 1096913 - Remove the simple cache entry when replacing the cache entry. r=mwu, a=lmandel - e1ee2331bd12
Kannan Vijayan: Bug 1081850 - Check for resolve hooks when optimizing no-such-prop operations in baseline. r=jandem, a=lmandel - e685be9bd4d6
Mike Hommey: Bug 1096651 - Avoid using random bits when determining SSE3/SSE4 availability for the JIT. r=luke, a=lmandel - 3e5cb63660bd
L. David Baron: Bug 1086937 - Patch 0: Add missing null check of root element so this patch series doesn't expose a crash in layout/style/crashtests/472237-1.html. r=birtles, a=lmandel - 4ccd3e117f5d
Robert O'Callahan: Bug 1092842 - When setting cliprects on background color display items, don't shrink them to exclude opaque borders (unless there's nonzero border-radius). r=mattwoodrow, a=lmandel - 19296c34b1ca
Robert O'Callahan: Bug 1097437 - Work around Quartz bug where corners of stroked rects don't get a solid color when they should. r=jrmuizel, a=lmandel - 9e4c3c78fe01
Steven Michaud: Bug 1017595 - Links from external applications sometimes fail to open when Firefox is hidden. r=spohl, a=lmandel - 89c3e0133233
Robert O'Callahan: Reftest manifest changes that were incorrectly landed as part of Bug 1096181, but should have been part of Bug 1097437 or Bug 1092842. a=orange - c486cd17bebb
Jordan Santell: Bug 1078539 - Disable dev edition promo banner in beta temporarily. r=jryans, a=lmandel - 1242fc159d04

Planet MozillaJan de Mooij: “We will keep making Firefox faster”

A few days ago we had the pleasure of sharing some important news for the future of Firefox with you. I am referring to the speed improvements in our favorite browser's engine when executing JavaScript. For that article I was waiting for a reply from one of its developers, and given the importance of the good news it was published.

Question (Q). The last question in our previous conversation was about the Octane score, and today you have reached your goal: we are now in first place. Which aspects of Ion Monkey have you improved to reach this incredible speed when executing JavaScript (JS)?

Answer (A). We have made many improvements to Ion Monkey, but also to other parts of the JS engine. Enabling Generational Garbage Collection has helped us a lot with respect to Octane. This article published by Nicholas Nethercote explains that feature in more detail. Smaller optimizations have also been applied that help us get better scores.

Q. How do these changes affect Firefox end users?

A. These changes will make many things run very fast, mainly when running applications written in JS that require heavy CPU processing. Examples of this are games, emulators and pdf.js, where JS performance is extremely important.

Q. Could you add Linux tests to AWFY? The current platforms only include Windows, Mac, Android and Firefox OS.

A. Yes, we could add Linux! I think the Linux scores should look very similar to those on Mac. We will add this platform in the future.

Q. What do Firefox (Shell) and Firefox (PGO) mean on AWFY?

A. Firefox (Shell) is the JS shell, a small standalone command-line application that SpiderMonkey developers use. It is very useful because we don't have to compile the whole browser when we work on the JS engine. You can find more information about it on the project page.

Firefox (PGO) is a Windows build compiled with profile-guided optimization. All our official releases use PGO. Firefox compiled with PGO is faster than without PGO in many cases.

Q. It is wonderful that Firefox now leads in speed again, as it did several years ago. This is a gift for its 10th anniversary; tell me how Mozilla feels about this achievement.

A. We are very excited about all our performance improvements, because it took us a lot of effort to get here. But we won't stop here; we will keep finding new ways to make Firefox faster.

Jan de Mooij: Thank you very much for writing about what we do at Mozilla.

For now, all that remains is to follow the engine's evolution as we did last year and hope that it advances even further, always trusting in Mozilla and its developers, who have never let us down.

Visit Are We Fast Yet?

Planet MozillaPriceless

Many people don’t know/didn’t realise/don’t care/already know/already know and didn’t care, but I am back full time with Mozilla after a hiatus of a number of months.  In fact, I jumped at the chance to join Mozilla’s Content Services team and after a fairly short conversation with Darren, Mozilla VP of Content Services, I knew it was what I wanted to do.

I feel very happy to be back, and I wanted to put a few thoughts on paper about my motivation.  I think back to a conversation I had earlier this year in Barcelona with Alina, someone who always expands my thinking.  I believe in the web.  I believe in Mozilla’s efforts to maintain its importance in how the Internet is developed and consumed.  I believe it is entirely preferable to the reemergence of technological islands, walled gardens, empires, (call them what you will).  And yet, from a different perspective, isn’t the web evidently facilitating such empires anyway?

Taking it further: the web, this incredible creation that has enriched our lives in ways we could not have imagined for ourselves previously, is also an agent of economic and cultural imperialism, in the same way that free trade and liberalised capital markets arguably have been in the 1980s and 1990s?  I realise that many Mozillians will have an inherent faith in market solutions and I certainly believe in free trade…up to a point.

People will identify Content Services with advertising.  And how do I feel about advertising in general?  About the same way as when I first read J.K. Galbraith on the subject, over 20 years ago. Advertising troubled Galbraith, even in 1958 when he first published The Affluent Society.  Advertising gives cultural force to the means of production.  Or as he put it, “wants are increasingly created by the process by which they are satisfied”.  That is, advertising is the means by which supply creates demand.  It allows capital to influence our psyche and creates new cultural barriers to market entry.  In 2014, it’s hard to imagine an economy without advertising.  And while I do not entirely share Naomi Klein’s wholly negative views on what brands mean, I do find that idea that so many of our cultural signifiers are created with the purpose of persuading us to consume x in preference to y to be more than a little uncomfortable.

Given that perspective, why Content Services?  Well, the first thing to say about Content Services is that it is not all about advertising.  Content Services will help deliver a new voice for Mozilla with its community and with its users.  But having said that, the most important thing for Content Services is advertising.  That is because advertising is the most important economic activity on the web – by a long way.

Look at what advertising has enabled on the web.  How much content is free to consume as a consequence of federated advertising?  Many Firefox users choose to block adverts, and other people round on those users for failing to honour the implicit contract between the publisher and the reader.  I am not sure I subscribe to that point of view entirely, but to fail to view advertising as an exchange of value between the user and the website is to be disingenuous.  And it is something that the web does extraordinarily well – and at scale, and on an aggregated basis.  It has empowered the user, and the search engine, the social network, and it has wreaked havoc on publishers.

Almost every user of the web enjoys a huge consumer surplus.  That is, they would pay far more for the web than it costs them (once you’ve paid for access to the network, you “pay” almost nothing to use the web).  And some consumers enjoy a much larger surplus than others.  Typically, richer consumers, who have a higher propensity to pay, transmit an effective subsidy to poorer consumers who would have a lower propensity to pay.  And this aggregated arbitrage is a good thing. Generally.

Except that it has given rise to incalculably powerful and valuable empires.  These empires might be defined as the ones who own the user’s identity.  The ones you log in to, who know the most about you, about what you do, who you know and what you read.  Empires which extract value from all of the industries they touch, from online publishers to mobile network operators and ISPs.  I must stress none of this is a criticism of a Google or a Facebook: they have delivered huge utility and have successfully exploited their market positions.  But it is notable that a company with no significant patents or copyrights, nor indeed revenues, and which employed a reported 55 people could be valued at $19Bn. It is reasonable to suppose, under such circumstances, that there are major externalities generated by this business, or that this business is a significant free rider, which almost all internet businesses are: something those of us who support net neutrality implicitly agree is a good thing (I do not intend the term pejoratively).

What are the externalities?  As we’re fond of telling each other, if you’re not paying, you’re the product.  The price we pay is our attention and exposure to adverts, and knowledge about ourselves.  We are being broken down, analysed, reassembled as segments, profiles, tracked as individuals and as sub-markets and, yes, being spied upon.  Some people are relaxed about this, perhaps feeling that they have nothing to hide, and besides, they haven’t even come for the Socialists yet…

What’s more, the cultural impact is abysmal.  In the old world, advertising inventory was finite, confined to the space for adverts on billboards, in newspapers and so on.  When the Mozilla community created its iconic New York Times advert, it was an incredible demonstration of the power of a community – placing such an advert cost real money.  But in the online world, inventory is flexible, and theoretically infinite.  You can grow your inventory by retaining users, by getting more clicks.  And you do this by writing clickbait headlines, by instrumenting your users, by taking your publication downmarket, by designing your site so that one article extends over multiple webpages, and so on and so forth.  The effects on our culture are obvious.  The effects on our psyche, we’re only just starting to understand.

And then, there is a battle over online privacy.  Or perhaps, more aptly, an arms race.  Privacy measures are met with counter-measures.  Tracking cookies, which still seem important today, may seem positively anodyne in years to come.  The more intimacy we gain with the internet, and the more capabilities it assumes, the deeper and deeper this problem becomes.

So, where does all this leave us?  Well, there is another Mozillian I would mention who has frequently inspired me: former CEO John Lilly.  Almost exactly four years ago, John gave a talk for the House of Commons, in which he presciently suggested that just as troubling as the Orwellian aspects of the internet are, so too should we be aware of the dangers of a culture that is amusing us into bovine submission. John is a man who reads books, and as he points out, the internet is as much Brave New World as 1984.  And John also spelled out the importance of mass participation in the creation of counter-measures to this.  Actually, just go and read his post again, if you haven’t already.

Advertising on the web is a problem, it risks trivialising our culture, creating a mass surveillance system and is supporting new forms of digital empires.  And yet, it is better than the alternatives: all this economic value being pushed to proprietary technology platforms.  And it is in danger: it is in danger of being unpalatable in a modern democracy, and of being superseded by proprietary technologies with even worse consequences.  That is why Mozilla has to act, and why it is entirely appropriate that we involve ourselves in this industry.  It is why we should conceive and build solutions to these problems, and look to empower all parts of the internet ecosystem that generate value for the consumer.  This problem is our problem.  We must not just try to wish it out of existence.

Our first duty is clear: it is to the Firefox user, and the trust they have in Mozilla.  It would not be right for us to send our users to whichever service on the internet and rule the consequences for them out of scope (and nor do we).  We build Firefox users the tools to be in charge of their experience.  But we must help instantiate the rest of the world we want to see, bringing advertisers and publishers who share these values into the Mozilla community.  We will understand their needs, and where they are transparent, where they scale and support heterogeneity, where they offer a reasonable, knowable and workable exchange of value, we should find ways to facilitate them.  Until that happens, the concentration of power on the internet will only continue.  And honestly, who else but Mozilla is going to address this problem?

And more important to me than any of this, is to be working side-by-side again with my many wonderful friends at Mozilla.


Planet MozillaWhy curl defaults to stdout

(Recap: I founded the curl project, I am still the lead developer and maintainer)

When asking curl to get a URL it’ll send the output to stdout by default. You can of course easily change this behavior with options or just using your shell’s redirect feature, but without any option it’ll spew it out to stdout. If you’re invoking the command line on a shell prompt you’ll immediately get to see the response as soon as it arrives.

I decided curl should work like this, and it was a natural decision I made back in 1997 or so, when I worked on the predecessors that would later turn into curl.

On Unix systems there’s a common mantra that “everything is a file” but also in fact that “everything is a pipe”. You accomplish things on Unix by piping the output of one program into the input of another program. Of course I wanted curl to work as well as the other components and I wanted it to blend in with the rest. I wanted curl to feel like cat but for a network resource. And cat is certainly not the only pre-curl command that writes to stdout by default; they are plentiful.

And then, once I had made that decision and released curl for the first time on March 20, 1998, the call was made. The default was set. I will not change a default and hurt millions of users. I’d rather continue to be questioned by newcomers, but now at least I can point to this blog post! :-)

About the wget rivalry

As I mention in my curl vs wget document, a very common comment to me about curl as compared to wget is that wget is “easier to use” because it needs no extra argument in order to download a single URL to a file on disk. I get that: if you type the full commands by hand you’ll use about three keys less to write “wget” instead of “curl -O”, but on the other hand if this is an operation you do often and you care so much about saving key presses I would suggest you make an alias anyway that is even shorter, and then the amount of options for the command really doesn’t matter at all anymore.

I put that argument in the same category as the people who argue that wget is easier to use because you can type it with your left hand only on a qwerty keyboard. Sure, that is indeed true but I read it more like someone trying to come up with a reason when in reality there’s actually another one underneath. Sometimes that other reason is a philosophical one about preferring GNU software (which curl isn’t) or one that is licensed under the GPL (which wget is) or simply that wget is what they’re used to and they know its options and recognize or like its progress meter better.

I enjoy our friendly competition with wget and I seriously and honestly think it has made both our projects better, and I like that users can throw arguments in our face like “but X can do Y” and X can alternate between curl and wget depending on which camp you talk to. I also really like wget as a tool and I am the occasional user of it, just like most Unix users. I contribute to the wget project as well, both with code and with general feedback. I consider myself a friend of the current wget maintainer as well as former ones.

Planet MozillaVP8 and H.264 to both become mandatory for WebRTC

WebRTC is one of the most exciting things to happen to the Web in years: it has the potential to bring instant voice and video calling to anyone with a browser, finally unshackling us from proprietary plugins and installed apps. Firefox, Chrome, and Opera already support WebRTC, and Microsoft recently announced future support.

Unfortunately, the full potential of the WebRTC ecosystem has been held back by a long-running disagreement about which video codec should be mandatory to implement. The mandatory to implement audio codecs were chosen over two years ago with relatively little contention: the legacy codec G.711 and Opus, an advanced codec co-designed by Mozilla engineers. The IETF RTCWEB Working Group has been deadlocked for years over whether to pick VP8 or H.264 for the video side.

Both codecs have merits. On the one hand, VP8 can be deployed without having to pay patent royalties. On the other hand, H.264 has a huge installed base in existing systems and hardware. That is why we worked with Cisco to develop their free OpenH264 plugin and as of October this year, Firefox supports both H.264 and VP8 for WebRTC.

At the last IETF meeting in Hawaii the RTCWEB working group reached strong consensus to follow in our footsteps and make support for both H.264 and VP8 mandatory for browsers. This compromise was put forward by Mozilla, Cisco and Google. The details are a little bit complicated, but here’s the executive summary:

  • Browsers will be required to support both H.264 and VP8 for WebRTC.
  • Non-browser WebRTC endpoints will be required to support both H.264 and VP8. However, if either codec becomes definitely royalty free (with no outstanding credible non-RF patent claims) then endpoints will only have to do that codec.
  • “WebRTC-compatible” endpoints will be allowed to do either codec, both, or neither.

See the complete proposal by Mozilla Principal Engineer Adam Roach here. There are still a few procedural issues to resolve, but given the level of support in the room, things are looking good.

We believe that this compromise is the best thing for the Web at this time: It lets us move forward with confidence in WebRTC interoperability and allows people who for some reason or another really can’t do one of these two codecs to be “WebRTC-compatible” and know they can interoperate with any WebRTC endpoint. This is an unmitigated win for users and Web application developers, as it provides broad interoperability within the WebRTC ecosystem.

It also puts a stake in the ground that what the community really needs is a codec that everyone agrees is royalty-free, and provides a continuing incentive for proponents of each codec to work towards this target.

Mozilla has been working for some time on such a new video codec which tries to avoid the patent thickets around current codec designs while surpassing the quality of the latest royalty-bearing codecs. We hope to contribute this technology to an IETF standardization effort following the same successful pattern as with Opus.


Filed under: Mozilla Tagged: Mozilla, VP8, WebRTC

Planet MozillaKilling the office suite

Have you ever had the experience of trying to write a document in MS Word (or Open/LibreOffice) and it keeps "correcting" your formatting to something you don't want? The last time I experienced that was about a year ago, and that was when I decided "screw it, I'll just write this in HTML instead". That was a good decision.

Pretty much anything you might want to use a word processor for, you can do in HTML - and oftentimes it's simpler. Sure, there's a bit of a learning curve if you don't know HTML, but that's true for anything. Now anytime I need to create "a document" (a letter, random notes or signs to print, etc.) I always do it in HTML rather than LibreOffice, and I'm the happier for it. I keep all my data in git repositories, and so it's a bonus that these documents are now in a plaintext format rather than a binary blob.

I realized that this is probably part of a trend - a lot of people I know nowadays do "powerpoint" presentations using web technologies such as reveal.js. I haven't seen many people comment on using web tech to do word processing, but I know I do it. The only big "office suite" thing left is the spreadsheet. It would be awesome if somebody wrote a drop-in JS spreadsheet library that you could include in an HTML page and instantly turn a table element into a spreadsheet.
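As a rough illustration of what such a drop-in could look like, here is a toy sketch that makes the cells of a plain HTML table editable and keeps a totals row up to date; the table.sheet class and the footer-as-totals convention are made up for this example.

// Toy sketch of a "drop-in" spreadsheet behaviour: make every body cell of
// a <table class="sheet"> editable and recompute the footer row as totals.
// The class name and the footer convention are invented for this example.
function enliven(table) {
  var rows = Array.prototype.slice.call(table.tBodies[0].rows);
  var totals = table.tFoot ? table.tFoot.rows[0] : null;

  rows.forEach(function (row) {
    Array.prototype.forEach.call(row.cells, function (cell) {
      cell.contentEditable = 'true';
    });
  });

  function recompute() {
    if (!totals) return;
    for (var col = 0; col < totals.cells.length; col++) {
      var sum = 0;
      rows.forEach(function (row) {
        var cell = row.cells[col];
        sum += cell ? parseFloat(cell.textContent) || 0 : 0;
      });
      totals.cells[col].textContent = String(sum);
    }
  }

  // 'input' events bubble up from contenteditable cells in current browsers.
  table.addEventListener('input', recompute);
  recompute();
}

Array.prototype.forEach.call(document.querySelectorAll('table.sheet'), enliven);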

I'm reminded of this old blog post by Joel Spolsky: How Trello is different. He talks about how most of the people who use Excel really just use it because it provides a table format for entering things, rather than for its computational ability. HTML already provides that, but whenever I've tried doing that I find the markup/content ratio too high, so it always seemed like a pain. It would be nice to have a WYSIWYG tool that let you build a table (or spreadsheet) and import/export it as raw HTML that you can publish, print, share, etc.

As an addendum, that blog post by Joel also introduced me to the concept of unshipped code as "inventory", which is one of the reasons I really hate finding old bugs sitting around in Bugzilla with perfectly good patches that never landed!

Planet MozillaAllocators in Rust

There has been a lot of discussion lately about Rust’s allocator story, and in particular our relationship to jemalloc. I’ve been trying to catch up, and I wanted to try and summarize my understanding and explain for others what is going on. I am trying to be as factually precise in this post as possible. If you see a factual error, please do not hesitate to let me know.

The core tradeoff

The story begins, like all interesting design questions, with a trade-off. The problem with trade-offs is that neither side is 100% right. In this case, the trade-off has to do with two partial truths:

  1. It is better to have one global allocator than two. Allocators like jemalloc, dlmalloc, and so forth are all designed to be the only allocator in the system. Of necessity they permit a certain amount of “slop”, allocating more memory than they need so that they can respond to requests faster, or amortizing the cost of metadata over many allocations. If you use two different allocators, you are paying those costs twice. Moreover, the allocator tends to be a hot path, and you wind up with two copies of it, which leaves less room in the instruction cache for your actual code.
  2. Some allocators are more efficient than others. In particular, the default allocators shipped with libc on most systems tend not to be very good, though there are exceptions. One particularly good allocator is jemalloc. In comparison to the default glibc or windows allocator, jemalloc can be noticeably more efficient both in performance and memory use. Moreover, jemalloc offers an extended interface that Rust can take advantage of to gain even more efficiency (for example, by specifying the sizes of a memory block when it is freed, or by asking to reallocate memory in place when possible).

Clearly, the best thing is to use just one allocator that is also efficient. So, to be concrete, whenever we produce a Rust executable, everyone would prefer if that Rust executable – along with any C code that it uses – would just use jemalloc everywhere (or whatever allocator we decide is ‘efficient’ tomorrow).

However, in some cases we can’t control what allocator other code will use. For example, if a Rust library is linked into a larger C program. In this case, we can opt to continue using jemalloc from within that Rust code, but the C program may simply use the normal allocator. And then we wind up with two allocators in use. This is where the trade-off comes into play. Is it better to have Rust use jemalloc even when the C program within which Rust is embedded does not? In that case, the Rust allocations are more efficient, but at the cost of having more than one global allocator, with the associated inefficiencies. I think this is the core question.

Two extreme designs

Depending on whether you want to prioritize using a single allocator or using an efficient allocator, there are two extreme designs one might advocate for the Rust standard library:

  1. When Rust needs to allocate memory, just call malloc and friends.
  2. Compile Rust code to invoke jemalloc directly. This is what we currently do. There are many variations on how to do this. Regardless of which approach you take, this has the downside that when Rust code is linked into C code, there is the possibility that the C code will use one allocator, and Rust code another.

It’s important to clarify that what we’re discussing here is really the default behavior, to some extent. The Rust standard library already isolates the definition of the global allocator into a particular crate. End users can opt to change the definition of that crate. However, it would require recompiling Rust itself to do so, which is at least a mild pain.
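
As a rough illustration of that boundary (in reality it is a crate whose function definitions get swapped at build time, not a trait; the names here are hypothetical), the allocator crate essentially exposes a handful of hooks that the rest of Rust calls. Note that the size and alignment are available on free as well as on allocation, which is exactly what a jemalloc-backed implementation can exploit:

    /// Hypothetical shape of the interface behind which the standard
    /// library isolates its global allocator. Swapping the allocator
    /// means supplying a different implementation of these hooks.
    pub trait GlobalHeap {
        unsafe fn allocate(&self, size: usize, align: usize) -> *mut u8;
        unsafe fn reallocate(&self, ptr: *mut u8, old_size: usize,
                             new_size: usize, align: usize) -> *mut u8;
        unsafe fn deallocate(&self, ptr: *mut u8, size: usize, align: usize);
    }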

Calling malloc

If we opted to default to just calling malloc, this does not mean that end users are locked into the libc allocator or anything like that. There are existing mechanisms for changing what allocator is used at a global level (though I understand this is relatively hard on Windows). Presumably when we produce an actual Rust executable, we would default to using jemalloc.

Calling malloc has the advantage that if a Rust library is linked into a C program, both of them will be using the same global allocator, whatever it is (unless of course that C program itself doesn’t call malloc).
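
To make that concrete, a malloc-backed version of the allocation hooks sketched earlier might look roughly like the following. This is a simplification: it drops the size hint on free and does not handle over-aligned requests, which real code would have to do (e.g. via posix_memalign):

    use libc::{c_void, free, malloc, realloc, size_t};

    // Backed by whatever `malloc` resolves to at link/load time: the libc
    // allocator by default, or jemalloc if the user injects it globally
    // (e.g. via LD_PRELOAD on Linux).
    pub unsafe fn allocate(size: usize, _align: usize) -> *mut u8 {
        malloc(size as size_t) as *mut u8
    }

    pub unsafe fn reallocate(ptr: *mut u8, _old_size: usize,
                             new_size: usize, _align: usize) -> *mut u8 {
        realloc(ptr as *mut c_void, new_size as size_t) as *mut u8
    }

    pub unsafe fn deallocate(ptr: *mut u8, _size: usize, _align: usize) {
        // The size is known to the caller but cannot be passed to free();
        // this is the lost optimization discussed below.
        free(ptr as *mut c_void)
    }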

However, one downside of this is that we are not able to take advantage of the more advanced jemalloc APIs for sized deallocation and reallocation. This has a measurable effect in micro-benchmarks. I am not aware of any measurements on larger scale Rust applications, but there are definitely scenarios where the advanced APIs are useful.

Another potential downside of this approach is that malloc is called via indirection (because it is part of libc; I’m a bit hazy on the details of this point, and would appreciate clarification). This implies a somewhat higher overhead for calls to malloc/free than if we fixed the allocator ahead of time. It’s worth noting that this is the normal setup that all C programs use by default, so relative to a typical C program, this setup carries no overhead.

(When compiling a statically linked executable, rustc could opt to redirect malloc and friends to jemalloc at this point, which would eliminate the indirection overhead but not take advantage of the specialized jemalloc APIs. This would be a simplified variant of the hybrid scheme I eventually describe below.)

Calling jemalloc directly

If we opt to hardcode Rust’s default allocator to be jemalloc, we gain several advantages. The performance of Rust code, at least, is not subject to the whims of whatever global allocator the platform or end-user provides. We are able to take full advantage of the specialized jemalloc APIs. Finally, as the allocator is fixed to jemalloc ahead of time, static linking scenarios do not carry the additional overhead that calling malloc implies (though, as I noted, one can remove that overhead also when using malloc via a simple hybrid scheme).
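
For contrast with the malloc-backed sketch earlier, here is a rough idea of how the same hooks could call jemalloc's extended API directly. Again, this is illustrative rather than the actual standard library code, and the flag computation approximates jemalloc's MALLOCX_ALIGN macro for power-of-two alignments:

    use libc::{c_int, c_void, size_t};

    extern "C" {
        fn mallocx(size: size_t, flags: c_int) -> *mut c_void;
        fn rallocx(ptr: *mut c_void, size: size_t, flags: c_int) -> *mut c_void;
        fn dallocx(ptr: *mut c_void, flags: c_int);
    }

    // jemalloc encodes the requested alignment as log2(align) in the low
    // bits of `flags` (what MALLOCX_ALIGN computes in C).
    fn align_flag(align: usize) -> c_int {
        align.trailing_zeros() as c_int
    }

    pub unsafe fn allocate(size: usize, align: usize) -> *mut u8 {
        mallocx(size as size_t, align_flag(align)) as *mut u8
    }

    pub unsafe fn reallocate(ptr: *mut u8, _old_size: usize,
                             new_size: usize, align: usize) -> *mut u8 {
        rallocx(ptr as *mut c_void, new_size as size_t, align_flag(align)) as *mut u8
    }

    pub unsafe fn deallocate(ptr: *mut u8, _size: usize, align: usize) {
        // The alignment (and, with newer jemalloc releases, the size) can be
        // handed back at free time, sparing the allocator a metadata lookup.
        dallocx(ptr as *mut c_void, align_flag(align))
    }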

Having Rust code unilaterally call jemalloc also carries downsides. For example, if Rust code is embedded as a library, it will not adopt the global allocator of the code that it is embedded within. This carries the performance downsides of multiple allocators but also a certain amount of risk, because a pointer allocated on one side cannot be freed on the other (some argue this is bad practice; that is certainly true if you do not know that the two sides are using the same allocator, but it is otherwise legitimate; see the section below for more details).

The same problem can also occur in reverse, when C code is used from within Rust. This happens today with rustc: due to the specifics of our setup, LLVM uses the system allocator, not the jemalloc allocator that Rust is using. This causes extra fragmentation and memory consumption, and it also means LLVM misses out on jemalloc, which is better than the system allocator in many cases.

To prefix or not to prefix

One specific aspect of calling jemalloc directly concerns how it is built. Today, we build jemalloc using name prefixes, effectively “namespacing” it so that it does not interfere with the system allocator. This is what causes LLVM to use a different allocator in rustc. This has the advantage of clarity and side-stepping certain footguns around dynamic linking that could otherwise occur, but at the cost of forking the allocators.
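
To illustrate, a prefixed build (for example, one configured with something like --with-jemalloc-prefix=je_) exports namespaced symbols, so the Rust bindings refer to those and never collide with whatever malloc/free the rest of the process uses:

    use libc::{c_int, c_void, size_t};

    // Bindings against a prefixed jemalloc build. `malloc` and `free` keep
    // meaning the system allocator; only Rust's own allocations go through
    // the je_* symbols.
    extern "C" {
        fn je_mallocx(size: size_t, flags: c_int) -> *mut c_void;
        fn je_rallocx(ptr: *mut c_void, size: size_t, flags: c_int) -> *mut c_void;
        fn je_dallocx(ptr: *mut c_void, flags: c_int);
    }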

A recent PR aimed to remove the prefix. It was rejected because in a dynamic linking scenario, this creates a fragile situation. Basically, the dynamic library (“client”) defines malloc to be jemalloc. The host process also has a definition for malloc (the system allocator). The precise result will depend on the flags and platform that you’re running on, but there are basically two possible outcomes, and both can cause perfectly legitimate code to crash:

  1. The host process wins, malloc means the same thing everywhere (this occurs on linux by default).
  2. malloc means different things in the host and the client (this occurs on mac by default, and on linux with the DEEPBIND flag).

In the first case, crashes can arise if the client code should try to intermingle usage of the nonstandard jemalloc API (which maps to jemalloc) with the standard malloc API (which the client believes to also be jemalloc, but which has been remapped to the system allocator by the host). The jemalloc documentation isn’t 100% explicit on the matter, but I believe it is legal for code to (e.g.) call mallocx and then call free on the result. Hence if Rust should link some C code that did that, it would crash under the first scenario.
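
As a contrived but concrete illustration of this failure mode, consider the following, written here in Rust FFI terms (the same applies to C code linked into the process). It is perfectly legal when one jemalloc serves the whole process, but becomes undefined behaviour if `free` has been remapped to the system allocator while `mallocx` still resolves to jemalloc:

    use libc::{c_int, c_void, free, size_t};

    extern "C" {
        // jemalloc's non-standard allocation entry point.
        fn mallocx(size: size_t, flags: c_int) -> *mut c_void;
    }

    unsafe fn looks_fine_but_is_fragile() {
        let p = mallocx(64, 0); // allocated by jemalloc
        free(p);                // freed by whichever allocator `free` resolved to
    }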

In the second case, crashes can arise if the host/client attempt to transfer ownership of memory. Some claim that this is not a legitimate thing to do, but that is untrue: it is (usually) perfectly legal for client code to (e.g.) call strdup and then pass the result back to the host, expecting the host to free it. (Granted, it is best to be cautious when transferring ownership across boundaries like this, and one should never call free on a pointer unless you can be sure of the allocator that was used to allocate that pointer in the first place. But if you are sure, then it should be possible.)

UPDATE: I’ve been told that on Windows, freeing across DLL boundaries is something you can never do. On Reddit, Mr_Alert writes: “In Windows, allocating memory in one DLL and freeing it in another is very much illegitimate. Different compiler versions have different C runtimes and therefore different allocators. Even with the same compiler version, if the EXE or DLLs have the C runtime statically linked, they’ll have different copies of the allocator. So, it would probably be best to link rust_alloc to jemalloc unconditionally on Windows.” Given the number of differences between platforms, it seems likely that the best behavior will ultimately be platform dependent.

Fundamentally, the problems here are due to the fact that the client is attempting to redefine the allocator on behalf of the host. Forcing this kind of name conflict to occur intentionally seems like a bad idea if we can avoid it.

A hybrid scheme

There is also the possibility of various hybrid schemes. One such option that Alex Crichton and I put together, summarized in this gist, would be to have Rust call neither the standard malloc nor the jemalloc symbols, but rather an intermediate set of APIs (let’s call them rust_alloc). When compiling Rust libraries (“rlibs”), these APIs would be unresolved. These rust allocator APIs would take all the information they need to take full advantage of extended jemalloc APIs, if they are available, but could also be “polyfilled” using the standard system malloc interface.

So long as Rust libraries are being compiled into “rlibs”, these rust_alloc dependencies would remain unresolved. An rlib is basically a statically linked library that can be linked into another Rust program. At some point, however, a final artifact is produced, at which point the rust_alloc dependency must be fulfilled. The way we fulfill this dependency will ultimately depend on what kind of artifact is produced:

  • Static library for use in a C program: link rust_alloc to malloc
  • Dynamic library (for use in C or Rust): link rust_alloc to malloc
  • Executable: resolve rust_alloc to jemalloc, and override the system malloc with jemalloc as well.
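
Concretely, and with names modeled loosely on the gist (treat them as illustrative), the intermediate layer is just a set of symbols that Rust code references but that rlibs leave undefined; the final link step then picks a definition, either a thin shim over malloc/free or one that calls into jemalloc's extended API:

    // What Rust code compiled into an rlib would reference. The symbols
    // stay unresolved until the final artifact is linked.
    extern "C" {
        fn rust_alloc(size: usize, align: usize) -> *mut u8;
        fn rust_realloc(ptr: *mut u8, old_size: usize,
                        new_size: usize, align: usize) -> *mut u8;
        fn rust_dealloc(ptr: *mut u8, size: usize, align: usize);
    }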

This seems to offer the best of both worlds. Standalone, statically linked Rust executables (the recommended, default route) get the full benefit of jemalloc. Code that is linked into C or dynamically loaded uses the standard allocator by default. And any C code used from within a Rust executable will call into jemalloc as well.

However, there is one major caveat. While it seems that this scheme would work well on linux, the behavior on other platforms is different, and it’s not yet clear if the same scheme can be made to work as well on Mac and Windows.

Naturally, even if we sort out the cross-platform challenges, this hybrid approach too is not without its downsides. It means that Rust code built for libraries will not take full advantage of what jemalloc has to offer, and in the case of dynamic libraries there may be more overhead per malloc invocation than if jemalloc were statically linked. However, by the same token, Rust libraries will avoid the overhead of using two allocators and they will also be acting more like normal C code. And of course the embedding program may opt, in its linking phase, to redirect malloc (globally) to jemalloc.

So what should we do?

The decision about what to do has a couple of facets. In the immediate term, however, we need to take steps to improve rustc’s memory usage. It seems to me that, at minimum, we ought to accept strcat’s PR #18915, which ensures that Rust executables can use jemalloc for everything, at least on linux. Everyone agrees that this is a desirable goal.

Longer term, it is somewhat less clear. The reason that this decision is difficult is that there is no choice that is “correct” for all cases. The most performant choice will depend on the specifics of the case:

  • Is the Rust code embedded?
  • How much allocation takes place in Rust vs in the other language?
  • What allocator is the other language using?

(As an example, the performance and memory use of rustc improved when we adopted jemalloc, even partially, but other applications will fare differently.)

At this point I favor the general principle that Rust code, when compiled as a library for use within C code, should more-or-less behave like C code would behave. This seems to suggest that, when building libraries for C consumption, Rust should just call malloc, and people can use the normal mechanisms to inject jemalloc if they so choose. However, when compiling Rust executables, it seems advantageous for us to default to a better allocator and to get the maximum efficiency we can from that allocator. The hybrid scheme aims to achieve both of these goals but there may be a better way to go about it, particularly around the area of dynamic linking.

I’d like to see more measurement regarding the performance impact of foregoing the specialized jemalloc APIs and using weak linking. I’ve seen plenty of numbers suggesting jemalloc is better than other allocators on the whole, and plenty of numbers saying that using specialized APIs helps in microbenchmarks. But it is unclear what the impact of such APIs (or weak linking) is on the performance of larger applications.

I’d also like to get the input from more people who have experience in this area. I’ve talked things over with strcat a fair amount, who generally favors using jemalloc even if it means two allocators. We’ve also reached out to Jason Evans, the author of jemalloc, who stressed the fact that multiple global allocators is generally a poor choice. I’ve tried to reflect their points in this post.

Note though that whatever we decide we can evolve it as we go. There is time to experiment and measure. One thing that is clear to me is that we do not want Rust to “depend on” jemalloc in any hard sense. That is, it should always be possible to switch from jemalloc to another allocator. This is both because jemalloc, good as it is, can’t meet everyone’s needs all the time, and because it’s just not a necessary dependency for Rust to take. Establishing an abstraction boundary around the “Rust global allocator” seems clearly like the right thing to do, however we choose to fulfill it.

Planet MozillaThunderbird Summit 2014

Last month (Oct. 15th to Oct. 18th, to be precise), twenty volunteers descended on Mozilla’s Toronto office to discuss Mozilla Thunderbird. This included Mozilla employees, Thunderbird contributors of all sorts (developers, user interface designers, add-on reviewers), Lightning contributors, and chat/Instantbird contributors.

The entire group of volunteers.

It was great to spend some quality hacking time with Florian and to meet Nihanth, both Instantbird guys who I talk to most days on IRC! I also had the pleasure of re-meeting a few people from the Mozilla Summit last year (I attended in Toronto) and to meet some brand new people!

A few pictures of the chat contributors: Nihanth; me, Florian and Nihanth; and Daniel (dialing in!)

It was really nice to actually sit down for a few days and work on Instantbird/Thunderbird without the distractions of "real life". I, unfortunately, spent the first day fixing an Instantbird bustage (from a mozilla-central change that removed some NSS symbols…why, I have no idea). But after that, we got some really exciting work done! We started cleaning up and finalizing some patches from Google Summer of Code 2014 to add WebRTC support to XMPP! You can check out the progress in bug 1018060.

First working call over Instantbird WebRTC.

Eating some poutine!

Other highlights of the trip include eating the "Canadian delicacy" of poutine (with pulled pork on it)!

Planet MozillaReps Weekly Call – November 13th 2014

Last Thursday we had our regular weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

Summary

  • New Reps dashboard
  • Portal UX initiative
  • WoMoz update
  • FX10 celebrations

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!
