Planet Mozilla: How to Invent the Future

Alan Kay famously said “The best way to predict the future is to invent it.” But how do we go about inventing a future that isn’t a simple linear extrapolation of the present?

Kay and his colleagues at Xerox PARC did exactly that over the course of the 1970s and early 1980s. They invented and prototyped the key concepts of the Personal Computing Era. Concepts that were then realized in commercial products over the subsequent two decades.

So, how was PARC so successful at “inventing the future”? Can that success be duplicated or perhaps applied at a smaller scale? I think it can. To see how, I decided to try to sketch out what happened at Xerox PARC as a pattern language.


Look Twenty Years Into the Future

If your time horizon is short you are doing product development or incremental research. That’s all right; it’s probably what most of us should be doing. But if you want to intentionally “invent the future” you need to choose a future sufficiently distant to allow time for your inventions to actually have an impact.

Extrapolate Technologies

What technologies will be available to us in twenty years? Start with the current and emerging technologies that already exist today. Which relevant  technologies are likely to see exponential improvement over the next twenty years? What will they be like as they mature over that period? Assume that as the technical foundation for your future.

Focus on People

Think about how those technologies may affect people. What new human activities do they enable? Is there a human problem they may help solve? What role might those technologies have in everyday life? What could be the impact upon society as a whole?

Create a Vision

Based upon your technology and social extrapolations, create a clearly articulated vision of your desired future. It should be radically different from the present in some respects. If it isn’t, then invention won’t be required to achieve it.

A Team of Dreamers and Doers

Inventing a future requires a team with a mixture of skills.  You need dreamers who are able to use their imagination to create and refine the core future vision. You also need doers who are able to take ill-defined dreams and turn them into realities using available technologies. You must have both and they must work closely together.

Prototype the Vision

Try to create a high fidelity functional approximation of your vision of the future. Use the best of today’s technology as stand-ins for your technology extrapolations. Remember what is expensive and bulky today may be cheap and tiny in your future. If the exact technical combination you need doesn’t exist today, build it.

Live Within the Prototype

It’s not enough to just build a prototype of your envisioned future. You have to use the prototype as the means for experiencing that future. What works? What doesn’t? Use your experience with the prototype to iteratively refine the vision and the prototypes.

Make It Useful to You

You’re a person who hopes to live in this future, so prototype things that will be useful to you. You will know you are on to something when your prototype becomes an indispensable part of your life. If it isn’t there yet, keep iterating until it is.

Amaze the World

If you are successful in applying these patterns you will invent things that are truly amazing.  Show those inventions to the world. Demonstrate that your vision of the future is both compelling and achievable. Inspire other people to work towards that same future. Build products and businesses if that is your inclination, but remember that inventing the future takes more than a single organization or project. The ultimate measure of your success will be your actual impact on the future.

Planet Mozilla: Replay Pulse messages

If you know what Pulse is and you would like to write some integration tests for an app that consumes its messages, pulse_replay might make your life a bit easier.

You can learn more about it by reading this quick README.md.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Planet Mozilla: Cloud Services QA Team Sync, 03 May 2016

Cloud Services QA Team Sync Weekly sync-up, volunteer, round-robin style, on what folks are working on, having challenges with, etc.

Planet Mozilla: Webdev Extravaganza: May 2016

Webdev Extravaganza: May 2016 Once a month web developers from across Mozilla get together to share news about the things we've shipped, news about open source libraries we maintain...

Planet Mozilla: Connected Devices Weekly Program Update, 03 May 2016

Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

Planet Mozilla: Mailing-List Mush: End of Life for Firefox on OSX 10.6-8, ICU dropping Windows XP Support

Apparently I’m now Windows XP Firefox Blogging Guy. Ah well, everyone’s gotta have a hobby.

End of Life for Firefox on OSX 10.6-8

The Firefox Future Releases Blog announced the end of support for Mac OSX 10.6-10.8 for Firefox. This might be our first look at how Windows XP’s end of life might be handled. I like the use of language:

All three of these versions are no longer supported by Apple. Mozilla strongly encourages our users to upgrade to a version of OS X currently supported by Apple. Unsupported operating systems receive no security updates, have known exploits, and are dangerous for you to use.

You could apply that just as easily and even more acutely to Windows XP.

But, then, why isn’t Mozilla ending support for XP in a similar announcement? Essentially it is because Windows XP is still too popular amongst Firefox users. The Windows XP Firefox population still outnumbers the Mac OSX (all versions) and Linux populations combined.

My best guess is that we’ll be able to place the remaining Windows XP Firefox users on ESR 52 which should keep the last stragglers supported into 2018. That is, if the numbers don’t suddenly decrease enough that we’re able to drop support completely before then, shuffling the users onto ESR 45 instead.

What’s nice is the positive-sounding emails at the end of the thread announcing the gains in testing infrastructure and the near-term removal of code that supported now-unused build configurations. The cost of supporting these platforms is non-zero, and gains can be realized immediately after dropping support.

ICU Planning to Drop Support for Windows XP

A key internationalization library in use by Firefox, ICU, is looking to drop Windows XP support in their next version. The long-form discussion is on dev-platform (you might want to skim the unfortunate acrimony over Firefox for Android (Fennec) present in that thread) but it boils down to: do we continue shipping old software to support Windows XP? For how long? Is this the straw that will finally break WinXP support’s back?

:milan made an excellent point on how the Windows XP support decision is likely to be made:

Dropping the XP support is *completely* not an engineering decision.  It isn’t even a community decision.  It is completely, 100% MoCo driven Firefox product management decision, as long as the numbers of users are where they are.

On the plus side, ICU seems to be amenable to keep Windows XP support for a little longer if we need it… but we really ought to have a firm end-of-life date for the platform if we’re to make that argument in a compelling fashion. At present we don’t have (or at least haven’t communicated) such a date. ICU may just march on without us if we don’t decide on one.

For now I will just keep an eye on the numbers. Expect a post when the Windows XP numbers finally dip below the combined Linux+OSX numbers, as that will be a huge psychological barrier broken.

But don’t expect that post for a few months, at least.

:chutten

Planet Mozilla: Firefox integrates GTK3 on Linux and improves the security of the JavaScript compiler (JIT)

On April 26, Mozilla released a new version of Firefox whose highlights include GTK3 integration on Linux, security improvements in the just-in-time JS compiler, changes in WebRTC, and new features for Android and iOS. As @Pochy announced yesterday, you can get this Firefox update from our Downloads area.

After several months of testing and development, GTK3 has finally been included in the Linux version. This reduces the dependency on old versions of the X11 server, improves HiDPI support and, above all, provides better integration with themes.

The new browser also improves the security of SpiderMonkey's Just-in-Time (JIT) JavaScript compiler. The idea is to deal with RWX (read-write-execute) code, which can pose a risk: it essentially represents an exception to the operating system's rules, namely storing data in a memory area where it can be executed (read) but not written.

To address this problem, Mozilla has adopted a mechanism called W^X. Its purpose is to forbid JavaScript from writing to memory areas that contain JIT code. This change comes at the cost of a slight performance drop, estimated by the vendor at between 1 and 4%. Authors of some extensions are also invited to test that their code remains compatible with this mechanism.

The performance and reliability of WebRTC connections have also been improved, and the content decryption module can now be used for H.264 and AAC content where possible. Meanwhile, developers get new tools they can use in their work; you can learn more about them in this article published on the Labs blog.

What's new on Android

  • History and Bookmarks have been added to the menu.
  • Installation of unverified add-ons will be cancelled.
  • Cached pages are now shown when you are offline.
  • Notifications about tabs opened in the background now show the URL.
  • Firefox will request certain permissions at runtime rather than when the application is installed (Android 6.0 or later).
  • Autocompletion while typing an address now includes the domain to make browsing easier.

Firefox for iOS

  • Added the Danish [da] localization.
  • The default suggested sites can now be removed.
  • The top 5 sites from the Alexa ranking are shown to new users.
  • Improved the browser's handling of links to Apple Maps and other third-party applications such as Twitter.

If you prefer to see the full list of changes, you can check the release notes (in English).

A clarification about the mobile version.

Three Android versions can be found in the downloads. The file containing i386 is for devices with Intel architecture. Among those named arm, the one labelled api11 works on Honeycomb (3.0) or later, and the api9 one is for Gingerbread (2.3).

You can get this version from our Downloads area in Spanish for Linux, Mac, Windows and Android. If you liked it, please share this news with your friends on social networks. Don't hesitate to leave us a comment ;-).

Planet MozillaRelEng & RelOps Weekly highlights - May 2, 2016

Two weeks’ worth of awesome crammed into one blog post. Can you dig it?

Modernize infrastructure:

Kendall and Greg have deployed new hg web nodes! They’re bigger, better, faster! The four new nodes have more processing power than the old ten nodes combined. In addition, all of the web and ssh nodes have been upgraded to CentOS 7, giving us a modern operating system and better security.

Relops and jmaher certified Windows 7 in the cloud for 40% of tests. We’re now prepping to move those tests. The rest should follow soon. From a capacity standpoint, moving any Windows testing volume into the cloud is huge.

Mark deployed new versions of hg and git to the Windows testing infrastructure.

Rob’s new mechanisms for building TaskCluster Windows workers give us transparency on what goes into a builder (single page manifests) and have now been used to successfully build Firefox under mozharness for TaskCluster with an up-to-date toolchain (mozilla-build 2.2, hg 3.7.3, python 2.7.11, vs2015 on win 2012) in ec2.

Improve Release Pipeline:

Firefox 46.0 Release Candidates (RCs) were all done with our new Release Promotion process. All that work in the beta cycle for 46.0 paid off.

Varun began work on improving Balrog’s backend to make multifile responses (such as GMP) easier to understand and configure. Historically it has been hard for releng to enlist much help from the community due to the access restrictions inherent in our systems. Kudos to Ben for finding suitable community projects in the Balrog space, and then more importantly, finding the time to mentor Varun and others through the work.

Improve CI Pipeline:

Aki’s async code has landed in taskcluster-client.py! Version 0.3.0 is now on pypi, allowing us to async all the python TaskCluster things.

Nick’s finished up his work to enable running localized (l10n) desktop builds on Try. We’ve wanted to be able to verify changes against l10n builds for a long time…this particular bug is 3 years old. There are instructions in the wiki: https://wiki.mozilla.org/ReleaseEngineering/TryServer#Desktop_l10n_jobs

With build promotion well sorted for the Firefox 46 release, releng is switching gears and jumping into the TaskCluster migration with both feet this month. Kim and Mihai will be working full-time on migration efforts, and many others within releng have smaller roles. There is still a lot of work to do just to migrate all existing Linux workloads into TaskCluster, and that will be our focus for the next 3 months.

Operational:

Vlad and Amy landed patches to decommission the old b2g bumper service and its infrastructure.

Alin created a dedicated server to run buildduty tools. This is part of an ongoing effort to separate services and tools that had previously been piggybacking on other hosts.

Amy and Jake beefed up our AWS puppetmasters and tweaked some time out values to handle the additional load of switching to puppet aspects. This will ensure that our servers stay up to date and in sync.

What’s better than handing stuff off? Turning stuff off. Hal started disabling no-longer-needed vcs-sync jobs.

Release:

Shipped Firefox 46.0RC1 and RC2, Fennec 46.0b12, Firefox and Fennec 46.0, ESR 45.1.0 and 38.8.0, Firefox and Fennec 47.0beta1, and Thunderbird 45.0b1. The week before, we shipped Firefox and Fennec 45.0.2 and 46.0b10, Firefox 45.0.2esr and Thunderbird 45.0.

For further details, check out the release notes here:

See you next week!

Planet Mozilla: Tracelogger gui updates

Tracelogger is one of the tools JIT devs (especially me) use to look into performance issues and to improve the JS engine of Firefox performance-wise. It traces which functions are executing, together with extra information like which engine is running, how long compilation took, how many times we are GC’ing, and whether we are calling VM functions…

I made the GUI a bit more powerful. First of all I moved the computation of the overview to a web worker. This should help the usability of the tool. Next to that I made it possible to toggle the categories on and off. That might make it easier to understand the graphs. I also introduced a settings popup. In the settings popup you can now choose to see absolute (cpu ticks) or relative (%) timings.
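
As an illustration of the web-worker change (this is only a sketch; the file and function names here are hypothetical, not the actual tracelogger code), the overview computation can be handed off like this so the main thread stays responsive:

var overviewWorker = new Worker("overview-worker.js");   // hypothetical worker script
overviewWorker.onmessage = function (event) {
  drawOverview(event.data);   // hypothetical function that renders the computed overview
};
// Send the trace data and the current settings to the worker.
overviewWorker.postMessage({tree: traceData, relative: useRelativeTimings});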

(Screenshot from 2016-05-03 09:36:41)

There are still a lot of improvements that are possible. Eventually it should be possible to zoom on graphs, toggle scripts on/off, see full times of scripts (instead of self time only) and maybe make it possible to show another graph (like a flame chart). Hopefully one day.

This is of course open source and available at:
https://github.com/h4writer/tracelogger/tree/master/website

Dev.Opera: For a Better Extensions Ecosystem

In 2013, when Opera desktop and Opera for Android switched over to Chromium, we faced a choice on how to go about our extensions ecosystem. We decided to look to the future and anticipated that someday developers might want to have a standardised way to make extensions (or at least, have a common set of APIs) so that the effort of making a separate extension for each browser is reduced.

That is why we chose .nex as our extension packaging format, which stands for ‘navigator extensions’, signifying our take on a vendor-neutral format for extensions. Back in 2013, we wrote the following:

…NEX — a vendor-neutral packaging format for browser add-ons that we have initially implemented on top of the Chromium .crx format in Opera. We intend to further develop this as an open add-ons format through international standards bodies. Initially we intend to consolidate the differences between add-on execution environments themselves considering their current similarities. As a secondary objective we then aim to kick-start meaningful discussion around shared system and device-level APIs with a view to making browser add-ons first-class citizens of the web core.

Years later, it seems a lot of other browser makers are also on board with getting some common extension APIs which work cross-browser. In order to do this, Opera, Microsoft, and Mozilla have begun work in the Extensions Community Group where we will try to agree upon a set of common APIs, as well as a common manifest and packaging format for browser extensions. The goal is to enable extension developers to write one extension and have it work cross-browser.

This does not mean that browsers will not have their own proprietary APIs — they will, for some features. However, if we can work out a common manifest and packaging format, along with a core set of APIs to have in common, then this will go a long way in having interoperability of extensions across browsers. This could also open the door to a type of ‘progressive enhancement’ model for developing extensions (where instead of making a separate extension for each browser, we could make one extension and feature detect for various APIs), much like we have for web sites.

We appeal to the community to participate in the discussion in the group, and to give their feedback and ideas. We hope other browser makers will join us, so that together we can make extension development for multiple browsers much smoother than it is right now.

Planet Mozilla: This is what a rightsholder looks like in 2016

In today’s policy discussions around intellectual property, the term ‘rightsholder’ is often misconstrued to mean someone who supports maximalist protection and enforcement of intellectual property, rather than simply someone who holds the rights to intellectual property. This false assumption can at times create a kind of myopia, in which the breadth and variety of actors, interests, and viewpoints in the internet ecosystem – all of whom are rightsholders to one degree or another – are lost.

This is not merely a process issue – it undermines constructive dialogues aimed at achieving a balanced policy. Copyright law is, ostensibly, designed and intended to advance a range of beneficial goals, such as promoting the arts, growing the economy, and making progress in scientific endeavour. But maximalist protection policies and draconian enforcement benefit the few and not the many, hindering rather than helping these policy goals. For copyright law to enhance creativity, innovation, and competition, and ultimately to benefit the public good, we must all recognise the plurality and complexity of actors in the digital ecosystem, who can be at once IP rightsholders, creators, and consumers.

Mozilla is an example of this kind of complex rightsholder. As a technology company, a non-profit foundation, and a global community, we hold copyrights, trademarks, and other exclusive rights. Yet, in the pursuit of our mission, we’ve also championed open licenses to share our works with others. Through this, we see an opportunity to harness intellectual property to promote openness, competition and participation in the internet economy.

We are a rightsholder, but we are far from maximalists. Much of the code produced by Mozilla, including much of Firefox, is licensed using a free and open source software licence called the Mozilla Public License (MPL), developed and maintained by the Mozilla Foundation. We developed the MPL to strike a real balance between the interests of proprietary and open source developers in an effort to promote innovation, creativity and economic growth to benefit the public good.

Similarly, in recognition of the challenges the patent system raises for open source software development, we’re pioneering an innovative approach to patent licensing with our Mozilla Open Software Patent License (MOSPL). Today, the patent system can be used to hinder innovation by other creators. Our solution is to create patents that expressly permit everyone to innovate openly. You can read more in our terms of license here.

While these are just two initiatives from Mozilla amongst many more in the open source community, we need more innovative ideas in order to fully harness intellectual property rights to foster innovation, creation and competition. And we need policy makers to be open (pun intended) to such ideas, and to understand the place they have in the intellectual property ecosystem.

More than just our world of software development, the concept of a rightsholder is in reality broad and nuanced. In practice, we’re all rightsholders – we become rightsholders by creating for ourselves, whether we’re writing, singing, playing, drawing, or coding. And as rightsholders, we all have a stake in this rich and diverse ecosystem, and in the future of intellectual property law and policy that shapes it.

Here is some of our most recent work on IP reform:

Planet Mozilla: May 2016 Featured Add-ons

Pick of the Month: uBlock Origin

by Raymond Hill
Very efficient blocker with a low CPU footprint.

”Wonderful blocker, part of my everyday browsing arsenal, highly recommended.”

Featured: Download Plan

by Abraham
Schedule download times for large files during off-peak hours.

”Absolutely beautiful interface!!”

Featured: Emoji Keyboard

by Harry N.
Input emojis right from the browser.

”This is a good extension because I can input emojis not available in Hangouts, Facebook, and email.”

Featured: Tab Groups

by Quicksaver
A simple way to organize a ton of tabs.

”Awesome feature and very intuitive to use.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months, so there’s always an opportunity to participate. Stay tuned to this blog for the next call for applications. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured@mozilla.org for the board’s consideration. We welcome you to submit your own add-on!

Planet Mozilla: Adventures porting Easy Passwords to Chrome and back to Firefox

Easy Passwords is based on the Add-on SDK and runs in Firefox. However, people need access to their passwords in all kinds of environments, so I created an online version of the password generator. The next step was porting Easy Passwords to Chrome and Opera. And while at it, I wanted to see whether that port will work in Firefox via Web Extensions. After all, eventually the switch to Web Extensions will have to be done.

Add-on SDK to Chrome APIs

The goal was using the same codebase for all variants of the extension. Most of the logic is contained in HTML files anyway, so it wouldn’t have to be changed. As to the remaining code, it should just work with some fairly minimal implementation of the SDK APIs on top of the Chrome APIs. Why not the other way round? Well, I consider the APIs provided by the Add-on SDK much cleaner and easier to use.

It turned out that Easy Passwords used twelve SDK modules, but many of these could be implemented in a trivial way. For example, the timers module merely exports functions that are defined anyway (unlike SDK extensions, Chrome extensions run in a window context).
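
A shim for that module can therefore be as small as this (an illustrative sketch, not the actual Easy Passwords code):

// In a window context the SDK's "sdk/timers" module can simply re-export
// globals that already exist; browserify gives this file its own module scope.
module.exports = {
  setTimeout: window.setTimeout.bind(window),
  clearTimeout: window.clearTimeout.bind(window),
  setInterval: window.setInterval.bind(window),
  clearInterval: window.clearInterval.bind(window)
};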

There were a few conceptual differences however. For example, Chrome extensions don’t support modularization — all background scripts execute in a single shared scope of the background page. Luckily, browserify solves this problem nicely by compiling all the various modules into a single background.js script while giving each one its own scope.

The other issue is configuration: Chrome doesn’t generate settings UI automatically the way simple-prefs module does it. No way around creating a special page for the two settings. Getting automatic SDK-style localization of HTML pages on the other hand was only a matter of a few lines (Chrome makes it a bit more complicated by disallowing dashes in message names).

A tricky issue was unifying the way scripts are attached to HTML pages. With the Add-on SDK these are content scripts which are defined in the JavaScript code — otherwise they wouldn’t be able to communicate with the extension. In Chrome, however, you use regular <script> tags; the scripts get the necessary privileges automatically. In the end I had to go with conditional comments interpreted by the build system; for the Chrome build these would become regular HTML code. This had the advantage that I could have additional scripts for Chrome only, in order to emulate the self variable which is available to SDK content scripts.

Finally, communication turned out tricky as well. The Add-on SDK automatically connects a content script to whichever code is responsible for it. Whenever some code creates a panel it gets a panel.port property which can be used to communicate with that panel — and only with that panel. Chrome’s messaging on the other hand is all-to-all, the code is meant to figure out itself whether it is supposed to process a particular message or leave it for somebody else. And while Chrome also has a concept of communication ports, these can only be distinguished by their name — so my implementations of the SDK modules had to figure out which SDK object a new communication port was meant for by looking at its name. In the end I implemented a hack: since I had exactly one panel, exactly one page and exactly one page worker, I only set the type of the port as its name. Which object it should be associated with? Who cares, there is only one.
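
In code, the hack boils down to something like this (illustrative only; the background-side object names are hypothetical, not the actual Easy Passwords code):

// Panel side: there is exactly one panel, so the port name only carries its type.
var port = chrome.runtime.connect({name: "panel"});
port.postMessage({eventName: "show"});

// Background side: whatever connects under that name must be "the" panel.
chrome.runtime.onConnect.addListener(function (port) {
  if (port.name == "panel")
    panelObject.attachPort(port);   // panelObject/attachPort are made-up names
});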

And that’s mostly it as far as issues go. Quite surprisingly, fancy JavaScript syntax is no longer an issue as of Chrome 49 — let statements, for..of loops, rest parameters, destructuring assignments, all of this works. The only restrictions I noticed: node lists like document.forms cannot be used in for..of loops, and calling Array.filter() as opposed to Array.prototype.filter.call() isn’t supported (the former isn’t documented on MDN either, it seems to be non-standard). And a bunch of stuff which requires extra code with the Add-on SDK “just works”: pop-up size is automatically adjusted to content, switching tabs closes pop-up, tooltips and form validation messages work inside the pop-up like in every webpage.

The result was a Chrome extension that works just as well as the one for Firefox, with the exception of not being able to show the Easy Passwords icon in pop-up windows (sadly, I suspect that this limitation is intentional). It works in Opera as well and will be available in their add-on store once it is reviewed.

Chrome APIs to Web Extensions?

And what about running the Chrome port in Firefox now? Web Extensions are compatible to Chrome APIs, so in theory it shouldn’t be a big deal. And in fact, after adding applications property to manifest.json the extension could be installed in Firefox. However, after it replaced the version based on the Add-on SDK all the data was gone. This is bug 1214790 and I wonder what kind of solution the Mozilla developers can come up with.

It wasn’t really working either. Turned out, crypto functionality wasn’t working because the code was running in a context without access to Web Extensions APIs. Also, messages weren’t being received properly. After some testing I identified bug 1269327 as the culprit: proxied objects in messages were being dropped silently. Passing the message through JSON.stringify() and JSON.parse() before sending solved the issue, this would create a copy without any proxies.
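
The workaround amounts to a JSON round-trip before sending, roughly like this (a sketch of the idea, not the exact code):

function postMessageSafe(port, message) {
  // Serializing and re-parsing yields a plain copy without proxy wrappers,
  // so nothing gets silently dropped on the receiving end (see bug 1269327).
  port.postMessage(JSON.parse(JSON.stringify(message)));
}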

And then there were visuals. One issue turned out to be a race condition which didn’t occur on Chrome, I guess that I made too many assumptions. Most of the others were due to bug 1225633 — somebody apparently considered it a good idea to apply a random set of CSS styles to unknown content. I filed bug 1269334 and bug 1269336 on the obvious bugs in these CSS styles, and overwrote some of the others in the extension. Finally, the nice pop-up sizing automation doesn’t work in Firefox, so the size of the Easy Passwords pop-up is almost always wrong.

Interestingly, pretty much everything that Chrome does better than the Add-on SDK isn’t working with Web Extensions right now. It isn’t merely the pop-up sizing: HTML tooltips in pop-ups don’t show up, and pop-ups aren’t being closed when switching tabs. In addition, tabs.query() doesn’t allow searching extension pages and submitting passwords produces bogus error messages.

While most of these issues can be worked around easily, some are not. So I guess that it will take a while until I replace the SDK-based version of Easy Passwords by one based on Web Extensions.

Planet Mozilla: Open Platform Operations’ logo design

Last year, the Platform Operations organization was born and it brought together multiple teams across Mozilla which empower development with tools and processes.

This year, we've decided to create a logo that identifies us as an organization and builds our self-identity.

We've filed this issue for a logo design [1] and we would like to have a call for any community members to propose their designs. We would like to have all applications in by May 13th. Soon after that, we will figure out a way to narrow it down to one logo! (details to be determined).

We would also like to thank whoever made the logo which we pick at the end (details also to be determined).

Looking forward to collaborating with you and seeing what we create!

[1] https://github.com/mozilla/Community-Design/issues/62


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Planet Mozilla: BlueGriffon officially recommended by the French Government

en-US TL;DR: BlueGriffon is now officially recommended as the html editor for the French Administration in its effort to rely on and promote Free Software!

I am very happy to report that BlueGriffon, my cross-platform Wysiwyg Web editor, is officially recommended by the Socle Interministériel de Logiciels Libres for 2016! You will find the official list of recommended software here (pdf document).

Planet Mozilla: DNSSEC on gerv.net

My ISP, the excellent Mythic Beasts, has started offering a managed DNSSEC service for domains they control – just click one button, and you’ve got DNSSEC on your domain. I’ve just enabled it on gerv.net (which, incidentally, as of a couple of weeks ago, is also available over a secure channel thanks to MB and Let’s Encrypt).
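
If you want to check that signed records are actually being served, a standard dig query (nothing specific to this setup) should show RRSIG records in the answer and, when asked through a validating resolver, the 'ad' (authenticated data) flag:

dig +dnssec gerv.net SOA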

If you have any problems accessing any resources on gerv.net, please let me know by email – gerv at mozilla dot org should be unaffected by any problems.

Planet Mozilla: Release of SlimerJS 0.10

I'm pleased to announce the release of SlimerJS 0.10!

SlimerJS is a scriptable browser. It is a tool like PhantomJS except it is based on Firefox and it is not (yet) "headless" (if some Mozillians could help me to have a true headless browser ;-)...).

This new release brings new features and compatibility with Firefox 46. Among them:

  • support of PDF export
  • support of Selenium with a "web driver mode"
  • support of stdout, stderr and stdin streams with the system module
  • support of exit code with phantom.exit() and slimer.exit()
  • support of node_modules with require()
  • support of special files (/dev/* etc) with the fs module
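
A minimal script exercising two of the features above might look like this (a sketch assuming the PhantomJS-compatible API that SlimerJS implements; example.org is just a placeholder URL):

var system = require('system');
var page = require('webpage').create();

page.open('https://example.org/', function (status) {
    // The system module now exposes real stdout/stderr streams...
    system.stdout.write('load status: ' + status + '\n');
    // ...and the exit code passed here is propagated to the shell.
    phantom.exit(status === 'success' ? 0 : 1);
});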

This version also fixes many bugs and conformance issues with PhantomJS 1.9.8 and 2.x. It also fixes some issues with running CasperJS 1.1.

See change details in release notes. As usual, you can download SlimerJS from the download page.

Note that there is no longer a "standalone edition" (with embedded XulRunner), because Mozilla ceased to maintain and build XulRunner. Only the "lightweight" edition is available from now on, and you must install Firefox to run SlimerJS.

Consider this release a "1.0pre". I'll try to release the next major version, 1.0, in a few weeks. It will only fix bugs found in 0.10 (if any), and will implement the last few features needed to match the PhantomJS 2.1 API.

Planet Mozilla: On GitLab growing an OStatus extension

Finally, this issue ticket gave me the opportunity to write what I think about OStatus. So, I did.

  1. http://www.joelonsoftware.com/articles/fog0000000018.html -- I am sorry, if you like OStatus, but it is the most insane open source example of this disease. After the astro design, we have no working and stable platform for the thing.

    There was old identi.ca, which was scrapped (I know the more polite term is "moved to GNU/Social", yeah … how many users does this social software have? And yes, I know, the protocol is named differently, but it is just another version of the same thing from my point of view), and pump.io, which is … I have just upgraded my instance to see whether I can honestly write that it is a well-working proprietary (meaning, used by one implementation by the author of the protocol only) distributed network, and no, it is broken.

    And even if the damned thing worked, it would not offer me the functionality I really want: which is to connect with my real friends, who are all on Twitter, Facebook, or even some on G+. Heck, pump.io would be useless even if these friends were on Diaspora (no, they are not, nobody is there). So, yes, if you want something which is useless, go and write an OStatus component.

  2. I don't know what happens when we want to share issues, etc. I don't know and I don't care (for example, it seems to me that issues are something which is way more linked to one particular project). And yes, I am the reporter of https://bugzilla.mozilla.org/show_bug.cgi?id=719725 (and author of http://article.gmane.org/gmane.linux.redhat.fedora.devel/79936/), and I think that it is impossible to do it. At least, nobody has managed to do it and it was not for lack of trying. How is OpenID or other federated identity doing?

    Besides, Do The Simplest Thing That Could Possibly Work because You Aren’t Gonna Need It . I vote for git request-pull(1) parser. And, no, not just sending an URL in HTTP GET, I would like to see that button (when the comment is recognized as being parseable) next to the comment with the plain text output of git request-pull.

  3. Actually, a git request-pull(1) parser not only follows YAGNI, but it also in a loving way steps around the biggest problem of all federated solutions: broken or missing federated identity.

Planet Mozilla: This Week in Rust 128

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: Vikrant and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Project Updates

  • Rust project changelog for 2016-04-29. Updates to bitflags, lazy_static, regex, rust-mode, rustup, uuid.
  • Xi Editor. A modern editor with a backend written in Rust.
  • rure. A C API for the regex crate.
  • cassowary-rs. A Rust implementation of the Cassowary constraint solving algorithm.
  • Sapper. A lightweight web framework built on async hyper, implemented in Rust language.
  • servo-vdom. A modified servo browser which accepts content patches over an IPC channel.
  • rustr and rustinr. Rust library for working with R, and an R package to generate Rust interfaces.
  • Rorschach. Pretty print binary blobs based on common layout definition.

Crate of the Week

This week's Crate of the Week is arrayvec, which gives us a Vec-like interface over plain arrays for those instances where you don't want the indirection. Thanks to ehiggs for the suggestion!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

92 pull requests were merged in the last week.

New Contributors

  • Andy Russell
  • Brayden Winterton
  • Demetri Obenour
  • Ergenekon Yigit
  • Jonathan Turner
  • Michael Tiller
  • Timothy McRoy
  • Tomáš Hübelbauer

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week!

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

In general, enough layers of Rc/RefCell will make anything work.

gkoz on TRPLF.

Thanks to birkenfeld for the suggestion.

Submit your quotes for next week!

Planet Mozilla: Not Testing a Firefox Build (Generic Tasks in TaskCluster)

A few months ago I wrote about my tentative setup of a TaskCluster task that was neither a build nor a test. Since then, gps has implemented “generic” in-tree tasks so I adapted my initial work to take advantage of that.

Triggered by file changes

All along I wanted to run some in-tree tests without having them wait around for a Firefox build or any other dependencies they don’t need. So I originally implemented this task as a “build” so that it would get scheduled for every incoming changeset in Mozilla’s repositories.

But forget “builds”, forget “tests” — now there’s a third category of tasks that we’ll call “generic” and it’s exactly what I need.

In base_jobs.yml I say, “hey, here’s a new task called marionette-harness — run it whenever there’s a change under (branch)/testing/marionette/harness”. Of course, I can also just trigger the task with try syntax like try: -p linux64_tc -j marionette-harness -u none -t none.

When the task is triggered, a chain of events follows:

For Tasks that Make Sense in a gecko Source Checkout

As you can see, I made the build.sh script in the desktop-build docker image execute an arbitrary in-tree JOB_SCRIPT, and I created harness-test-linux.sh to run mozharness within a gecko source checkout.

Why not the desktop-test image?

But we can also run arbitrary mozharness scripts thanks to the configuration in the desktop-test docker image! Yes, and all of that configuration is geared toward testing a Firefox binary, which implies downloading tools that my task either doesn’t need or already has access to in the source tree. Now we have a lighter-weight option for executing tests that don’t exercise Firefox.

Why not mach?

In my lazy work-in-progress, I had originally executed the Marionette harness tests via a simple call to mach, yet now I have this crazy chain of shell scripts that leads all the way to mozharness. The mach command didn’t disappear — you can run Marionette harness tests with ./mach python-test .... However, mozharness provides clearer control of Python dependencies, appropriate handling of return codes to report test results to Treeherder, and I can write a job-specific script and configuration.

Planet Mozilla: These Weeks In Servo 61

In the last two weeks, we landed 228 PRs in the Servo organization’s repositories.

Planning and Status

Our overall roadmap and quarterly goals are available online.

This week’s status updates are here.

Zhen Zhang and Rahul Sharma were selected as 2016 GSoC students for Servo! They will be working on the File API and foundations for Service Workers respectively.

Notable Additions

  • nox landed Windows support in the upgraded SpiderMonkey - now we just need to land it in Servo!
  • bholley implemented Margin, Padding, font-size, and has_class for the Firefox/Gecko support in Servo’s style system
  • pcwalton fixed a bug that was preventing us from hitting 60fps reliably with browser.html and WebRender!
  • mbrubeck changed Servo to use the line-breaking algorithm from Raph Levien’s xi-unicode project
  • frewsxcv removed the horrific Dock-thrashing while running the WPT and CSS tests on OSX
  • vramana implemented fetch support for file:// URLs
  • fabrice implemented armv7 support across many of our dependencies and in Servo itself
  • larsberg re-enabled gating checkins on Windows builds, now that the Windows Buildbot instance is more reliable
  • asajeffrey added reporting of backtraces to the Constellation during panic!, which will allow better reporting in the UI
  • danl added the style property for flex-basis in Flexbox
  • perlun improved line heights and fonts in input and textarea
  • jdm re-enabled the automated WebGL tests
  • ms2ger updated the CSS tests
  • dzbarsky implemented glGetVertexAttrib
  • jdm made canvas elements scale based on the DOM width and height
  • edunham improved our ability to correctly recognize and validate licenses
  • pcwalton implemented overflow:scroll in WebRender
  • KiChjang added support for multipart/form-data submission
  • fitzgen created a new method for dumping time profile info to an HTML file
  • mrobinson removed the need for StackingLevel info in WebRender
  • ddefisher added initial support for persistent sessions in Servo
  • cgwalters added an option to Homu to support linear commit histories better
  • simonsapin promoted rust-url to version 1.0
  • wafflespeanut made highfive automatically report test failures from our CI infrastructure
  • connorgbrewster finished integrating the experimental XML5 parser
  • emilio added some missing WebGL APIs and parameter validation
  • izgzhen implemented the scrolling-related CSSOM View APIs
  • wafflespeanut redesigned the network error handling code
  • jdm started an in-tree glossary

New Contributors

Get Involved

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

Screenshot of Firefox browsing a very simple page using Servo’s Stylo style system implementation: (screenshot)

Logic error that caused the page to redraw after every HTML parser operation: (screenshot)

Meetings and Mailing List

Nick Fitzgerald made a thread describing his incredibly awesome profiler output for Servo: https://groups.google.com/forum/#!topic/mozilla.dev.servo/KmzdXoaKo9s

Planet Mozilla: [worklog] Kusunoki, that smell.

This feeling when things finally get fixed after 2 years of negotiating. Sometimes things take longer. Sometimes policies change on the other side. All in all, that's very good fortune for the Web and the users. It's a bit like that smell for the last two years in summer time in my street; I finally got to ask the gardener of one of the houses around and it revealed what I should have known: Camphor tree (楠). Good week. Tune of the week: Carmina Burana - O Fortuna - Carl Orff.

Webcompat Life

Progress this week:

Today: 2016-05-02T09:21:45.583211
368 open issues
----------------------
needsinfo       4
needsdiagnosis  108
needscontact    35
contactready    93
sitewait        119
----------------------

You are welcome to participate

London agenda.

We had a meeting this week: Minutes

Webcompat issues

(a selection of some of the bugs worked on this week).

Webcompat development

Gecko Bugs

Updating Our Webkit Prefixing Policy

This is the big news of the week. And that's a lot of good for the Web. WebKit (aka Apple) is switching from vendor prefixes to feature flags. What does it mean? It means that new features will be available only to developers who activate them. It allows for testing without polluting the feature-space.

The current consensus among browser implementors is that, on the whole, prefixed properties have hurt more than they’ve helped. So, WebKit’s new policy is to implement experimental features unprefixed, behind a runtime flag. Runtime flags allow us to continue to get experimental features into developers’ hands while avoiding the various problems vendor prefixes had.

Also

We’ll be evaluating existing features on a case-by-case basis. We expect to significantly reduce the number of prefixed properties supported over time but Web compatibility will require us to keep around prefixed versions of some features.

HTTP Cache Invalidation, Facebook, Chrome and Firefox

Facebook is proposing to change the policy for HTTP Cache invalidation. This thread is really interesting. It started as a simple question on changing the behavior of Firefox to align with changes planned for Chrome, but it is evolving into a discussion about how to do cache invalidation the right way. Really cool.

I remember this study, quoted below, from a little while ago (March 3, 2012). And I was wondering if we had similar data for Firefox.

for those users who filled up their cache, - 25% of them fill it up in 4 hours. - 50% of them fill it up within 20 hours. - 75% of them fill it up within 48 hours. Now, that's just wall clock time... but how many hours of "active" browsing does it take to fill the cache? - 25% in 1 hour, - 50% in 4 hours, - and 75% in 10 hours.

Found again through Cache me if you can.

I wonder how many times a resource which is set up with a max-age of 1 year is still around in the cache after 1 year. And if indeed Web developers set a long cache lifetime to mean "never reload it", it seems wise to have something in Cache-Control: to allow this. There is must-revalidate; I was wondering if immutable is another way of saying never-revalidate. Maybe a max-age value is even not necessary at all. Anyway, read the full thread on the bug.
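
For illustration, such a response header might look like this (treat the immutable token here as hypothetical; the exact directive was still being discussed in the thread):

Cache-Control: max-age=31536000, immutable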

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: rounding numbers in CSS for width
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Planet Mozilla: Guess who got into Google Summer of Code?

I'm really excited to say that I'll be participating in Google Summer of Code this year, with Mozilla! I'm going to be working on the Balrog update server this summer, under Ben's guidance. Thank you Mozilla and Google for giving me this chance!

I'll be optimizing Balrog by devising a mechanism to handle update races. A future blog post will describe how exactly these races occur and how I aim to resolve them. Basically, an algorithm similar to what git uses for 3-way merges is required, but we also need to take care of nesting since the 'Blob' data structure, that we use, has nested data. I will also share my proposal and the timeline I'll be following in the coming weeks.
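
To give a rough feel for the idea (my own illustration only, sketched in JavaScript over JSON-like data; Balrog itself is written in Python and its real algorithm will differ), a recursive three-way merge over nested objects could look like this:

function isObject(value) {
  return value !== null && typeof value === "object" && !Array.isArray(value);
}

function deepEqual(a, b) {
  // Naive structural equality; good enough for a sketch.
  return JSON.stringify(a) === JSON.stringify(b);
}

function threeWayMerge(base, ours, theirs) {
  if (deepEqual(ours, theirs)) return ours;    // both sides made the same change
  if (deepEqual(base, ours)) return theirs;    // only "theirs" changed this part
  if (deepEqual(base, theirs)) return ours;    // only "ours" changed this part
  if (isObject(base) && isObject(ours) && isObject(theirs)) {
    // Both sides changed a nested object: recurse key by key.
    var result = {};
    Object.keys(ours).concat(Object.keys(theirs)).forEach(function (key) {
      if (!(key in result))
        result[key] = threeWayMerge(base[key], ours[key], theirs[key]);
    });
    return result;
  }
  throw new Error("conflict: both sides changed the same value differently");
}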

These three months are going to be amazing and I'm really looking forward to working with the Mozilla RelEng community! I will be blogging weekly once the coding period commences, and I welcome any suggestions that might lead to me presenting a better final product.

Planet Mozilla: Mozillian profile: Jayesh

Hello, SUMO Nation!

Do you still remember Dinesh? Turns out he’s not the only Mozillian out there who’s happy to share his story with us. Today, I have the pleasure of introducing Jayesh, one of the many SUMOzillians among you, with a really inspiring story of his engagement in the community to share. Read on!

Jayesh

I’m Jayesh from India. I’ve been contributing to Mozilla as a Firefox Student Ambassador since 2014. I’m a self-made entrepreneur, tech lover, and passionate traveller. I am also an undergraduate with a Computer Science background.

During my university days I used to waste a lot of time playing games as I did not have a platform to showcase my technical skills. I thought working was only useful when you had a “real” job. I had only heard about open source, but in my third year I came to know about open source contributors – through my friend Dinesh, who told me about the FSA program – and this inspired me a lot. I thought it was the perfect platform for me to kickstart my career as a Mozillian and build a strong, bright future.

Being a techie, I could identify with Mozilla and its efforts to keep the web open. I registered for the FSA program with the guidance of my friend, and found a lot of students and open source enthusiasts from India contributing to Mozilla in many ways. I was very happy to join the Mozilla India Community.

Around 90% of Computer Science students at the university learn the technology but don’t actually try to implement working prototypes using their knowledge, as they don’t know about the possibility of open source contributions – they just believe that showcasing counts only during professional internships and work training. Thus, I thought of sharing my knowledge about open source contributors through the Mozilla community.

I gained experience conducting events for Mozilla in the Tirupati Community, where my friend was seeking help in conducting events as he was the only Firefox Student Ambassador in that region. Later, to learn more, we travelled to many places and attended various events in Bengaluru and Hyderabad, where we met a very well developed Mozilla community in southern India. We met many Mozilla Representatives and sought help from them. Vineel and Galaxy helped us a lot, guiding us through our first steps.

Later, I found that I was the only Mozillian in my region – Kumbakonam, where I do my undergrad studies – within a 200 miles radius. This motivated me to personally build a new university club – SRCMozillians. I inaugurated the club at my university with the help of the management.

More than 450 students in the university registered for the FSA program in the span of two days, and we have organized more than ten events, including FFOS App days, Moz-Quiz, Web-Development-Learning, Connected Devices-Learning, Moz-Stall, a sponsored fun event, community meet-ups – and more! All this in half a year. For my efforts, I was recognized as FSA of the month, August 2015 & FSA Senior.

The biggest problems we faced while building our club were the studying times, when we’d be having lots of assignments, cycle tests, lab internals, and more – with everyone really busy and working hard, it took time to bridge the gap and realise grades alone are not the key factor to build a bright future.

My contributions to the functional areas in Mozilla varied from time to time. I started with Webmaker by creating educational makes about X-Ray Goggles, App-Maker and Thimble. I’m proud of being recognized as a Webmaker Mentor for that. Later, I focused on Army of Awesome (AoA) by tweeting and helping Firefox users. I even developed two Firefox OS applications (Asteroids – a game and a community application for SRCMozillians), which were available in the Marketplace. After that, I turned my attention to Quality Assurance, as Software Testing was one of the subject in my curriculum. I started testing tasks in One And Done – this helped me understand the key concepts of software testing easily – especially checking the test conditions and triaging bugs. My name was even mentioned on the Mozilla blog about the Firefox 42.0 Beta 3 Test day for successfully testing and passing all the test cases.

I moved on to start localization for Telugu, my native language. I started translating KB articles – with time, my efforts were recognized, and I became a Reviewer for Telugu. This area of contribution proved to be very interesting, and I even started translating projects in Pontoon.

As you can see from my Mozillian story above, it’s easy to get started with something you like. I guarantee that every individual student with passion to contribute and build a bright career within the Mozilla community, can discover that this is the right platform to start with. The experience you gain here will help you a lot in building your future. I personally think that the best aspect of it is the global connection with many great people who are always happy to support and guide you.

– Jayesh , a proud Mozillian

Thank you, Jayesh! A great example of turning one’s passion into a great initiative that enables many people around you understand and use technology better. We’re looking forward to more open source awesomeness from you!

SUMO Blog readers – are you interested in posting on our blog about your open source projects and adventures? Let us know!

Planet Mozilla: Loading TaskCluster Docker Images

When TaskCluster builds a push to a Gecko repository, it does so in a docker image defined in that very push. This is pretty cool for developers concerned with the build or test environment: instead of working with releng to deploy a change, now you can experiment with that change in try, get review, and land it like any other change. However, if you want to actually download that docker image, docker pull doesn’t work anymore.

The image reference in the task description looks like this now:

"image": {
    "path": "public/image.tar",
    "taskId": "UDZUwkJWQZidyoEgVfFUKQ",
    "type": "task-image"
},

This is referring to an artifact of the task that built the docker image. If you want to pull that exact image, there’s now an easier way:

./mach taskcluster-load-image --task-id UDZUwkJWQZidyoEgVfFUKQ

will download that docker image:

dustin@dustin-moz-devel ~/p/m-c (central) $ ./mach taskcluster-load-image --task-id UDZUwkJWQZidyoEgVfFUKQ
Task ID: UDZUwkJWQZidyoEgVfFUKQ
Downloading https://queue.taskcluster.net/v1/task/UDZUwkJWQZidyoEgVfFUKQ/artifacts/public/image.tar
######################################################################## 100.0%
Determining image name
Image name: mozilla-central:f7b4831774960411275275ebc0d0e598e566e23dfb325e5c35bf3f358e303ac3
Loading image into docker
Deleting temporary file
Loaded image is named mozilla-central:f7b4831774960411275275ebc0d0e598e566e23dfb325e5c35bf3f358e303ac3
dustin@dustin-moz-devel ~/p/m-c (central) $ docker images
REPOSITORY          TAG                                                                IMAGE ID            CREATED             VIRTUAL SIZE
mozilla-central     f7b4831774960411275275ebc0d0e598e566e23dfb325e5c35bf3f358e303ac3   51e524398d5c        4 weeks ago         1.617 GB

But if you just want to pull the image corresponding to the codebase you have checked out, things are even easier: give the image name (the directory under testing/docker), and the tool will look up the latest build of that image in the TaskCluster index:

dustin@dustin-moz-devel ~/p/m-c (central) $ ./mach taskcluster-load-image desktop-build
Task ID: TjWNTysHRCSfluQjhp2g9Q
Downloading https://queue.taskcluster.net/v1/task/TjWNTysHRCSfluQjhp2g9Q/artifacts/public/image.tar
######################################################################## 100.0%
Determining image name
Image name: mozilla-central:f5e1b476d6a861e35fa6a1536dde2a64daa2cc77a4b71ad685a92096a406b073
Loading image into docker
Deleting temporary file
Loaded image is named mozilla-central:f5e1b476d6a861e35fa6a1536dde2a64daa2cc77a4b71ad685a92096a406b073
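
Once loaded, the image behaves like any other local docker image. If you want to poke around inside it, a regular docker run should work from there – something along these lines, assuming the image ships a shell at /bin/bash (as the build images do):

docker run -ti --rm \
  mozilla-central:f5e1b476d6a861e35fa6a1536dde2a64daa2cc77a4b71ad685a92096a406b073 \
  /bin/bash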

Planet MozillaA Fast, Constant-time AEAD for TLS

The only TLS v1.2+ cipher suites with a dedicated AEAD scheme are the ones using AES-GCM, a block cipher mode that turns AES into an authenticated cipher. From a cryptographic point of view these are preferable to non-AEAD-based cipher suites (e.g. the ones with AES-CBC) because getting authenticated encryption right is hard without using dedicated ciphers.

For CPUs without the AES-NI instruction set, however, constant-time AES-GCM is slow and also hard to write and maintain. The majority of mobile phones, as well as most cheaper devices like tablets and notebooks on the market, thus cannot support efficient and safe AES-GCM cipher suite implementations.

Even if we ignored all those aforementioned pitfalls we still wouldn’t want to rely on AES-GCM cipher suites as the only good ones available. We need more diversity. Having widespread support for cipher suites using a second AEAD is necessary to defend against weaknesses in AES or AES-GCM that may be discovered in the future.

ChaCha20 and Poly1305, a stream cipher and a message authentication code, were designed with fast and constant-time implementations in mind. A combination of those two algorithms yields a safe and efficient AEAD construction, called ChaCha20/Poly1305, which allows TLS with a negligible performance impact even on low-end devices.

Firefox 47 will ship with two new ECDHE/ChaCha20 cipher suites as specified in the latest draft. We are looking forward to seeing the adoption of these increase and will, as a next step, work on prioritizing them over AES-GCM suites on devices not supporting AES-NI.
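
If you are curious whether a given server already negotiates one of the new suites, one rough way to probe it is with a recent OpenSSL build (1.1.0 or later, assuming it was compiled with ChaCha20-Poly1305 support); a completed handshake means the server offers that suite:

openssl s_client -connect example.com:443 -cipher ECDHE-RSA-CHACHA20-POLY1305 < /dev/null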

Planet MozillaFirefox 47 Beta 3 Testday, May 6th

Hey everyone,

I am happy to announce that this coming Friday, May 6th, we are organizing a new event – Firefox 47 Beta 3 Testday. The main focus will be on the Synced Tabs Sidebar and YouTube Embedded Rewrite features. The detailed instructions are available via this etherpad.

No previous testing experience is needed, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! 😉

See you all on Friday!

Planet MozillaWebExtensions in Firefox 48

We last updated you on our progress with WebExtensions when Firefox 47 landed in Developer Edition (Aurora), and today we have an update for Firefox 48, which landed in Developer Edition this week.

With the release of Firefox 48, we feel WebExtensions are in a stable state. We recommend developers start to use the WebExtensions API for their add-on development. Over the last release more than 82 bugs were closed on WebExtensions alone.

If you have authored an add-on in the past and are curious how it’s affected by the upcoming changes, please use the lookup tool. There is also a wiki page filled with resources to support you through the changes.

APIs Implemented

Many APIs gained improved support in this release, including: alarms, bookmarks, downloads, notifications, webNavigation, webRequest, windows and tabs.

The options v2 API is now supported so that developers can implement an options UI for their users. We do not plan to support the options v1 API, which is deprecated in Chrome. You can see an example of how to use this API in the WebExtensions examples on Github.
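
For illustration, a minimal manifest.json entry for the options v2 API looks roughly like this (the page name is made up; it is just an HTML file bundled with the add-on):

"options_ui": {
  "page": "options.html"
}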

image08

In Firefox 48 we pushed hard to make the WebRequest API a solid foundation for privacy and security add-ons such as Ghostery, RequestPolicy and NoScript. With the current implementation of the onErrorOccurred function, it is now possible for Ghostery to be written as a WebExtension.

The addition of reliable origin information was a major requirement for existing Firefox security add-ons performing cross-origin checks such as NoScript or uBlock Origin. This feature is unique to Firefox, and is one of our first expansions beyond parity with the Chrome APIs for WebExtensions.
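
As a rough sketch (not code from any of the add-ons mentioned), a background script with the webRequest and host permissions can watch both the origin of each request and any failures – assuming the origin is exposed on the request details as originUrl:

// Requires "webRequest" and "<all_urls>" permissions in manifest.json.
browser.webRequest.onBeforeRequest.addListener(
  details => { console.log(details.url, "requested from", details.originUrl); },
  { urls: ["<all_urls>"] }
);

// Notified when a request fails, e.g. because it was blocked or cancelled.
browser.webRequest.onErrorOccurred.addListener(
  details => { console.log("failed:", details.url, details.error); },
  { urls: ["<all_urls>"] }
);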

Although requestBody support is not in Firefox 48 at the time of publication, we hope it will be uplifted. This change to Gecko is quite significant because it will allow NoScript’s XSS filter to perform much better as a WebExtension, with huge speed gains (20 times or more) in some cases over the existing XUL and XPCOM extension for many operations (e.g. form submissions that include file uploads).

We’ve also had the chance to dramatically increase our unit test coverage again across the WebExtensions API, and now our modules have over 92% test coverage.

Content Security Policy Support

By default WebExtensions now use a Content Security Policy, limiting the location of resources that can be loaded. The default policy for Firefox is the same as Chrome’s:

"script-src 'self'; object-src 'self';"

This has many implications, such as the following: eval will no longer work, inline JavaScript will not be executed and only local scripts and resources are loaded. To relax that and define your own, you’ll need to define a new CSP using the content_security_policy entry in the WebExtension’s manifest.

For example, to load scripts from example.com, the manifest would include a policy configuration that would look like this:

"content_security_policy": "script-src 'self' https://example.com; object-src 'self'"

Please note: this will be a backwards incompatible change for any Firefox WebExtensions that did not adhere to this CSP. Existing WebExtensions that do not adhere to the CSP will need to be updated.

Chrome compatibility

To improve compatibility with Chrome, a change has landed in Firefox that allows an add-on to be run in Firefox without the add-on id specified. That means that Chrome add-ons can now be run in Firefox with no manifest changes, by using about:debugging and loading them as temporary add-ons.

Support for WebExtensions with no add-on id specified in the manifest is being added to addons.mozilla.org (AMO) and our other tools, and should be in place on AMO for when Firefox 48 lands in release.

Android Support

With the release of Firefox 48 we are announcing Android support for WebExtensions. WebExtensions add-ons can now be installed and run on Android, just like any other add-on. However, because Firefox for Android makes use of a native user interface, anything that involves user interface interaction is currently unsupported (similar to existing extensions on Android).

You can see the full list of APIs supported on Android in the WebExtensions documentation on MDN; these include alarms, cookies, i18n and runtime.

Developer Support

In Firefox 45 the ability to load add-ons temporarily was added to about:debugging. In Firefox 48, several exciting enhancements have been added to about:debugging.

If your add-on fails to load for some reason in about:debugging (most commonly due to JSON syntax errors), then you’ll get a helpful message appearing at the top of about:debugging. In the past, the error would be hidden away in the browser console.

image02

The error still remains in the browser console, but it is now also visible right in the page where loading was triggered.

image04

Debugging

You can now debug background scripts and content scripts in the debugging tools. In this example, to debug background scripts I loaded the add-on bookmark-it from the MDN examples. Next click “Enable add-on debugging”, then click “debug”:

image03

You will need to accept the incoming remote debugger session request. Then you’ll have a Web Console for the background page. This allows you to interact with the background page. In this case I’m calling the toggleBookmark API.

image06

This will call the toggleBookmark function and bookmark the page (note the bookmark icon is now blue). If you want to debug the toggleBookmark function, just add the debugger statement at the appropriate line. When you trigger toggleBookmark, you’ll be dropped into the debugger:

image09

You can now debug content scripts. In this example I’ve loaded the beastify add-on from the MDN examples using about:debugging. This add-on runs a content script to alter the current page by adding a red border.

All you have to do to debug it is to insert the debugger statement into your content script, open up the Developer Tools debugger and trigger the debug statement:

image05

You are then dropped into the debugger ready to start debugging the content script.

Reloading

As you may know, restarting Firefox and adding in a new add-on can be slow, so about:debugging now allows you to reload an add-on. This will remove the add-on and then re-enable it, so that you don’t have to keep restarting Firefox. This is especially useful for changes to the manifest, which will not be automatically refreshed. It also resets UI buttons.

In the following example the add-on just calls setBadgeText to add “Test” onto the browser action button (in the top right) when you press the button added by the add-on.
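
One plausible shape for the background script of such an add-on (illustrative, not the exact example add-on) is as small as this:

// Put "Test" on the browser action badge whenever its button is clicked.
browser.browserAction.onClicked.addListener(() => {
  browser.browserAction.setBadgeText({ text: "Test" });
});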

image03

Hitting reload for that add-on clears the state for that button and reloads the add-on from the manifest, meaning that after a reload, the “Test” text has been removed.

image07

This makes developing and debugging WebExtensions really easy. Coming soon, web-ext, the command line tool for developing add-ons, will gain the ability to trigger this each time a file in the add-on changes.

There are also lots of other ways to get involved with WebExtensions, so please check them out!

Update: clarified that no add-on id refers to the manifest as a WebExtension.

Planet Mozillacurl 7.49.0 goodies coming

Here’s a closer look at three new features that we’re shipping in curl and libcurl 7.49.0, to be released on May 18th 2016.

connect to this instead

If you’re one of the users who thought --resolve and doing Host: header tricks with --header weren’t good enough, you’ll appreciate that we’re adding yet another option for you to fiddle with the connection procedure. Another “Swiss army knife style” option for you who know what you’re doing.

With --connect-to you basically provide an internal alias for a certain name + port pair, telling curl to internally use another name + port pair to connect to instead.

Instead of connecting to HOST1:PORT1, connect to HOST2:PORT2

It is very similar to --resolve which is a way to say: when connecting to HOST1:PORT1 use this ADDR2:PORT2. --resolve effectively prepopulates the internal DNS cache and makes curl completely avoid the DNS lookup and instead feeds it with the IP address you’d like it to use.

--connect-to doesn’t avoid the DNS lookup, but it will make sure that a different host name and destination port pair is used than what was found in the URL. A typical use case for this would be to make sure that your curl request asks a specific server out of several in a pool of many, where each has a unique name but you normally reach them with a single URL whose host name is otherwise load balanced.
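
For example, to pin a request for https://www.example.com/ to one specific backend behind the load balancer, something along these lines should do (host names here are made up):

curl --connect-to www.example.com:443:backend3.example.net:443 https://www.example.com/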

--connect-to can be specified multiple times to add mappings for multiple names, so that even following HTTP redirects to other host names etc can be handled. You don’t even necessarily have to redirect the first used host name.

The libcurl option name for this feature is CURLOPT_CONNECT_TO.

Michael Kaufmann brought this feature.

http2 prior knowledge

In our ongoing quest to provide more and better HTTP/2 support in a world that is slowly but steadily doing more and more transfers over the new version of the protocol, curl now offers --http2-prior-knowledge.

As the name might hint, this is a way to tell curl that you have “prior knowledge” that the URL you specify goes to a host that you know supports HTTP/2. The term prior knowledge is in fact used in the HTTP/2 spec (RFC 7540) for this scenario.

Normally, when given an HTTP:// or HTTPS:// URL, there will be no assumption that the server supports HTTP/2, and curl will instead try to upgrade the connection from HTTP/1. The command line tool even tries to upgrade all HTTPS:// URLs by default, and libcurl can be told to do so.
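
On the command line, asking for prior knowledge is a single flag – roughly like this, against a server you already know speaks HTTP/2 in the clear (host name made up):

curl --http2-prior-knowledge http://h2c.example.com/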

libcurl wise, you ask for a prior knowledge use by setting CURLOPT_HTTP_VERSION to CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE.

Asking for http2 prior knowledge when the server does in fact not support HTTP/2 will give you an error back.

Diego Bes brought this feature.

TCP Fast Open

TCP Fast Open is documented in RFC 7413 and is basically a way to pass on data to the remote machine earlier in the TCP handshake – already in the SYN and SYN-ACK packets. This is of course a means to get data over faster and reduce latency.

The --tcp-fastopen option is supported on Linux and OS X only for now.

This is an idea and technique that has been around for a while and it is slowly getting implemented and supported by servers. There have been some reports of problems in the wild when “middle boxes” that fiddle with TCP traffic see these packets, which sometimes results in breakage. So this option is opt-in, to avoid the risk that it causes problems for users.

A typical real-world case where you would use this option is when sending an HTTP POST to a site you don’t already have a connection established to. Just note that TFO relies on the client having had contact established with the server before and having a special TFO “cookie” stored and non-expired.
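
On the command line that might look something like this (host name made up; remember the option is Linux and OS X only for now):

curl --tcp-fastopen --data "name=daniel" http://www.example.com/submit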

TCP Fast Open is so far only used for clear-text TCP protocols in curl. These days more and more protocols switch over to their TLS counterparts (and there’s room for future improvements to add the initial TLS handshake parts with TFO). A related option to speed up TLS handshakes is --false-start (supported with the NSS or the secure transport backends).

With libcurl, you enable TCP Fast Open with CURLOPT_TCP_FASTOPEN.

Alessandro Ghedini brought this feature.

Planet MozillaWhat’s Up with SUMO – 28th April

Hello, SUMO Nation!

Did you know that in Japanese mythology, foxes with nine tails are over 100 years old and have the power of omniscience? I think we could get the same result if we put a handful of SUMO contributors in one room – maybe except for the tails ;-)

Here are the news from the world of SUMO!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on WEDNESDAY the 4th of May – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

  • Hackathons everywhere! Find your people and get organized!
  • We have three upcoming iOS articles that will need localization. Their drafts are still in progress (pending review from the product team). Coming your way real soon – watch your dashboards!
  • New l10n milestones coming to your dashboards soon, as well.

Firefox – RELEEEEAAAAASE WEEEEEEK ;-)

What’s your experience of release week? Share with us in the comments or our forums! We are looking forward to seeing you all around SUMO – KEEP ROCKING THE HELPFUL WEB!

Planet MozillaWeb QA Weekly Meeting, 28 Apr 2016

Web QA Weekly Meeting This is our weekly gathering of Mozilla's Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

Planet MozillaReps weekly, 28 Apr 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet MozillaIn Lighter News…

…Windows XP Firefox users may soon be able to properly render poop.

winxpPoo

Here at Mozilla, we take these things seriously.

:chutten


Planet MozillaA Product Journal: Data Up and Data Down

I’m blogging about the development of a new product in Mozilla, look here for my other posts in this series

We’re in the process of reviewing the KPI (Key Performance Indicators) for Firefox Hello (relatedly I joined the Firefox Hello team as engineering manager in October). Mozilla is trying (like everyone else) to make data-driven decisions. Basing decisions on data has some potential to remove or at least reveal bias. It provides a feedback mechanism that can provide continuity even as there are personnel changes. It provides some accountability over time. Data might also provide insight about product opportunities which we might otherwise miss.

Enter the KPI: for Hello (like most products) the key performance indicators are number of users, growth in users over time, user retention, and user sentiment (e.g., we use the Net Promoter Score). But like most projects those are not actually our success criteria: product engagement is necessary but not sufficient for organizational goals. Real goals might be revenue, social or political impact, or improvement in brand sentiment.

The value of KPI is often summarized as “letting us know how we’re doing”. I think the value KPI offers is more select:

  1. When you think a product is doing well, but it’s not, KPI is revealing.
  2. When you know a product isn’t doing well, KPI lets you triage: is it hopeless? Do we need to make significant changes? Do we need to maintain our approach but try harder?
  3. When a product is doing well the KPI gives you a sense of the potential. You can also triage success: Should we invest heavily? Stay the path? Is there no potential to scale the success far enough?

I’m skeptical that KPI can provide the inverse of 1: when you think a product is doing poorly, can KPI reveal that it is doing well? Because there’s another set of criteria that defines “success”, KPI is necessary but not sufficient. It requires a carefully objective executive to revise their negative opinion about the potential of a project based on KPI, and they may have reasonably lost faith that a project’s KPI-defined success can translate into success given organizational goals.

The other theoretical value of KPI is that you could correlate KPI with changes to the product, testing whether each change improves your product’s core value. I’m sure people manage to do this, with both very fine grained measurements and fine grained deployments of changes. But it seems more likely to me that for most projects given a change in KPI you’ll simply have to say “yup” and come up with unverified theories about that change.

The metrics that actually support the development of the product are not “key”, they are “incidental”. These are metrics that find bugs in the product design, hint at unexplored opportunities, confirm the small wins. These are metrics that are actionable by the people making the product: how do people interact with the tool? What do they use it for? Where do they get lost? What paths lead to greater engagement?

What is KPI for?

I’m trying to think more consciously about the difference between managing up and managing down. A softer way of phrasing this is managing in and managing out – but in this case I think the power dynamics are worth highlighting.

KPI is data that goes up. It lets someone outside the project – and above the project – make choices: about investment, redirection, cancellation. KPI data doesn’t go down, it does little to help the people doing the work. Feeling joy or despair about your project based on KPI is not actionable for those people on the inside of a project.

Incentive or support

I would also distinguish two kinds of management here: one perspective on management is that the organization should set up the right incentives and consequences so that rewards are aligned with organizational goals. The right incentives might make people adapt their behavior to get alignment; how they adapt is undefined. The right incentives might also exclude those who aren’t in alignment, culling misalignment from the organization. Another perspective is that the organization should work to support people, that misalignment of purpose between a person and the organization is more likely a bug than a misalignment of intention. Are people black boxes that we can nudge via punishment and reward? Are there less mechanical ways to influence change?

Student performance measurement is another kind of KPI. It lets someone on the outside (of the classroom) know if things are going well or poorly for the students. It says little about why, and it doesn’t support improvement. School reform based on measurement presumes that teachers and schools are able to achieve the desired outcomes, but simply not willing. A risk of top-down reform: the people on the top use a perspective from the top. As an authority figure, how do I make decisions? The resulting reform is disempowering, supporting decisions from above, as opposed to using data to support the empowerment of those making the many day-to-day decisions that might effect a positive outcome.

Of course, having data available to inform decisions at all levels – from the executive to the implementor – would be great. But there’s a better criteria for data: it should support decision making processes. What are your most important decisions?

As an example from Mozilla, we have data about how much Firefox is used and its marketshare. How much should we pay attention to this data? We certainly don’t have the granularity to connect changes in this KPI to individual changes we make in the project. The only real way to do that is through controlled experiments (which we are trying). We aren’t really willing to triage the project; no one is asking “should we just give up on Firefox?” The only real choice we can make is: are we investing enough in Firefox, or should we invest more? That’s a question worth asking, but we need to keep our attention on the question and not the data. For instance, if we decide to increase investment in Firefox, the immediate questions are: what kind of investment? Over what timescale? Data can be helpful to answer those questions, but not just any data.

Exploratory data

Weeks after I wrote (but didn’t publish) this post I encountered Why Greatness Cannot Be Planned: The Myth of the Objective, a presentation by Kenneth Stanley:

“Setting an objective can block its own achievement. It can be an obstacle to creativity and innovation in general. Without protection of individual autonomy collaboration can become dangerously objective.”

The example he uses is manually searching a space of nonlinear image generation to find interesting images. The positive example is one where people explore, branching from novel examples until something recognizable emerges:

One negative example is one where an algorithm explores with a goal in mind:

Another negative example is selection by voting, instead of personal exploration; a product of convergent consensus instead of divergent treasure hunting:

If you decide what you are looking for, you are unlikely to find it. This generated image search space is deliberately nonlinear, so it’s difficult to understand how actions affect outcomes. Though artificial, I think the example is still valid: in a competitive environment, the thing you are searching for is hard to find, because if it was not hard then someone would have found it. And it’s probably hard because actions affect outcomes in unexpected ways.

You could describe this observation as another way of describing the pitfalls of hill climbing: getting stuck at local maximums. Maybe an easy fix is to add a little randomness, to bounce around, to see what lies past the hill you’ve found. But the hills themselves can be distractions: each hill supposes a measurement. The divergent search doesn’t just reveal novel solutions, but it can reveal a novel rubric for success.

This is also a similar observation to that in Innovator’s Dilemma: specifically that in these cases good management consistently and deliberately keeps a company away from novelty and onto the established track, and it does so by paying attention to the feedback that defines the company’s (current) success. The disruptive innovation, a term somewhat synonymous with the book, is an innovation that requires a change in metrics, and that a large portion of the innovation is finding the metric (and so finding the market), not implementing the maximizing solution.

But I digress from the topic of data. If we’re going to be data driven to entirely new directions, we may need data that doesn’t answer a question, doesn’t support a decision, but just tells us about things we don’t know. To support exploration, not based on a hypothesis which we confirm or reject based on the data, because we are still trying to discover our hypothesis. We use the data to look for the hidden variable, the unsolved need, the desire that has not been articulated.

I think we look for this kind of data more often than we would admit. Why else would we want complex visualizations? The visualizations are our attempt at finding a pattern we don’t expect to find.

In Conclusion

I’m lousy at conclusions. All those words up there are like data, and I’m curious what they mean, but I haven’t figured it out yet.

Planet MozillaDoes Firefox update despite being set to "never check for updates"? This might be why.

If, like me, you have set Firefox to "never check for updates" for some reason, and yet it sometimes updates anyway, this could be your problem: the chrome debugger.

The chrome debugger uses a separate profile, with the preferences copied from your normal profile. But, if your prefs (such as app.update.enabled) have changed, they remain in the debugger profile as they were when you first opened the debugger.

App update can be started by any profile using the app, so the debugger profile sees the pref as it once was, and goes looking for updates.

Solution? Copy the app update prefs from the main profile to the debugger profile (mine was at ~/.cache/mozilla/firefox/31392shv.default/chrome_debugger_profile), or just destroy the debugger profile and have a new one created next time you use it.
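
If you go for the “destroy it” route, with Firefox closed something like this does the trick – adjust the path to match your own profile, of course:

rm -rf ~/.cache/mozilla/firefox/31392shv.default/chrome_debugger_profile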

Just thought you might like to know.

Planet MozillaPrivacy Lab - April 2016 - Encryption vs. the FBI

Privacy Lab - April 2016 - Encryption vs. the FBI Riana Pfefferkorn, Cryptography Fellow at the Stanford Center for Internet and Society, will talk about the FBI's dispute with Apple over encrypted iPhones.

Planet MozillaAnnouncing git-cinnabar 0.3.2

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

This is mostly a bug and regression-fixing release.

What’s new since 0.3.1?

  • Fixed a performance regression when cloning big repositories on OSX.
  • git configuration items with line breaks are now supported.
  • Fixed a number of issues with corner cases in mercurial data (such as, but not limited to nodes with no first parent, malformed .hgtags, etc.)
  • Fixed a stack overflow, a buffer overflow and a use-after-free in cinnabar-helper.
  • Better behavior with git worktrees, or when called from subdirectories.
  • Updated git to 2.7.4 for cinnabar-helper.
  • Properly remove all refs meant to be removed when using git version lower than 2.1.

Planet MozillaJoin the Featured Add-ons Community Board

Are you a big fan of add-ons? Think you can help identify the best content to spotlight on AMO? Then let’s talk!

All the add-ons featured on addons.mozilla.org (AMO) are selected by a board of community members. Each board consists of 5-8 members who nominate and select featured add-ons once a month for six months. Featured add-ons help users discover what’s new and useful, and downloads increase dramatically in the months they’re featured, so your participation really makes an impact.

And now the time has come to assemble a new board for the months July – December.

Anyone from the add-ons community is welcome to apply: power users, theme designers, developers, and evangelists. Priority will be given to applicants who have not served on the board before, followed by those from previous boards, and finally from the outgoing board. This page provides more information on the duties of a board member. To be considered, please email us at amo-featured@mozilla.org with your name, and tell us how you’re involved with AMO. The deadline is Tuesday, May 10, 2016 at 23:59 PDT. The new board will be announced about a week after.

We look forward to hearing from you!

Planet MozillaBroken Add-ons in Firefox 46

A lot of add-ons are being broken by a subtle change in Firefox 46, in particular the removal of legacy array/generator comprehension.

Most of these add-ons (including mine) did not use array comprehension intentionally, but they copied some code from this page on developer.mozilla.org for doing an md5 hash of a string. It looked like this:

var s = [toHexString(hash.charCodeAt(i)) for (i in hash)].join("");

You should search through your source code for toHexString and make sure you aren’t using this. MDN was updated in January to fix this. Here’s what the new code looks like:

var s = Array.from(hash, (c, i) => toHexString(hash.charCodeAt(i))).join("");

The new code will only work in Firefox 32 and beyond. If for some reason you need an older version, you can go through the history of the page to find the array based version.

Using this old code will cause a syntax error, so it will cause much more breakage than you realize. You’ll want to get it fixed sooner than later because Firefox 46 started rolling out yesterday.

As a side note, Giorgio Maone caught this in January, but unfortunately all that was updated was the MDN page.

Planet WebKitRelease Notes for Safari Technology Preview 3

Safari Technology Preview Release 3 is now available for download. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. Release 3 of Safari Technology Preview covers WebKit revisions 199086–199865.

JavaScript

  • Added support for Symbol.isConcatSpreadable per the ES6 spec (r199397)
  • Made RegExp constructor get the Symbol.match property to decide if an object should be constructed like a RegExp object per the ES6 spec (r199106)
  • Changed String.match and String.search to use RegExp constructor per the ES6 spec (r199144)
  • Corrected how function declarations are hoisted per the ES6 spec (r199179)
  • Improved parsing of ES6 arrow functions (r199352)
  • Added RegExp.prototype[@@split] and made String.prototype.split use it per the ES6 spec (r199731)
  • Added RegExp.prototype[@@search] (r199748)
  • Updated the treatment of invoking methods on RegExp.prototype per the ES6 spec (r199545)
  • Made more test cases pass with ES6 RegExp unicode flag (r199523)
  • Added support for caching accesses to arguments.length for a performance speed-up (r199240)
  • Corrected the behavior of throw() for generators yielding to an inner generator per draft ECMAScript spec (r199652)

CSS

  • Implemented the functional :host() pseudo class (r199291)
  • Improved support for SVG cursor images (r199625)
  • Started using OpenType math fonts by default for MathML (r199773)
  • Fixed measurement of hanging punctuation (r199777)
  • Improved hyphenation when the last line in a paragraph only contains one syllable of a word (r199818)
  • Fixed a layout problem affecting CSS Grid items without a static inline position in RTL languages (r199098)
  • Fixed positioned items with gaps for CSS Grid (r199223)
  • Added support for CSS Grid grid-template-columns repeat(auto-fill, …) and repeat(auto-fit, …) (r199343)
  • Fixed positioned items with content alignment in CSS Grids (r199657)
  • Started using grid-template-areas to determine the explicit grid (r199661)
  • Corrected CSS Grid layout by using the margin box for non-auto minimum sizes (r199728)

Web APIs

  • Added support setting and retrieving Blob values in IndexedDB (r199120, r199230, r199499, r199524, r199708, r199730)
  • Corrected MessageEvent.source result once window has been fully created (r199087)
  • Improved stability when the first child of a shadow root is a comment node (r199097)
  • Made CSS be a proper constructor on the window object with static functions (r199112)
  • Exposed the Crypto constructor on the window object (r199159)
  • Added support for display: contents on <slot> elements (r199151)
  • Fixed FontFace so it will properly reject the returned promise if Content Security Policy blocks all the URLs (r199611)
  • Made FontFaceSet handle null correctly (r199216)
  • Corrected DOMTokenList.contains() so it does not throw an exception (r199296)
  • Made Selection.deleteFromDocument not delete a character when the selection is a caret per the spec (r199585)
  • Improved the IndexedDB bindings to better match the spec (r199750, r199774)
  • Made AudioBufferSourceNode.buffer nullable (r199751)
  • Improved stability handling a wheel event that closes a frame (r199181)

Web Inspector

  • Made it possible to expand objects in the Instances heap snapshot view to see what it retains (r199379)
  • Improved performance dramatically in the Timelines tab when recording pages with a lot of rapid activity and for long recordings (r199747)
  • Improved JavaScript pretty printing performance by using Esprima and by no longer blocking the main thread (r199168, r199169)
  • Improved the profiler’s sampling rate to get closer to a 1ms sample frequency (r199092)
  • Improved filtering in Open Quickly dialog (r199143, r199226)
  • Made the Open Quickly dialog keep its resource list up to date (r199207)
  • Stopped trying to match color patterns in JavaScript source code to improve performance of large resources (r199095)
  • Changed take snapshot navigation button to a camera glyph (r199177)
  • Corrected source code location links in the JavaScript profile Call Trees view (r199201)
  • Made XHRs and Web Workers full-text searchable (r199263)
  • Improved the appearance of DOM nodes in object previews (r199322)
  • Improved the tab bar rendering when the tabs are small (r199325)
  • Corrected dock controls disappearing from the toolbar after leaving fullscreen (r199395)
  • Started remembering the zoom factor as a persistent setting across sessions (r199396)
  • Corrected sourceMappingURL not being used when sourceURL was also set (r199688)
  • Started localizing sizes and times by using Number.prototype.toLocaleString (r199635)
  • Made sourceMappingURL work more reliably across reloads (r199852)

Rendering

  • Improved the time to display for some pages — allowing a short page header to render immediately before other content populates later (r199155)
  • Fixed page tile layers disappearing when graphics acceleration is unavailable (r199130)
  • Made font-size: 0 render as 0 width when text-rendering: optimizeLegibility is used (r199150)
  • Corrected focus ring drawing at incorrect location on image map with a CSS transform (r199247)
  • Made negative letter-spacing affect the right edge of the content’s visual overflow (r199516)
  • Corrected compositing for WebGL based canvases after they changed size (r199536)
  • Started clearing the rendered icon on <input type=file> when an empty files list is set (r199540)
  • Improved performance of border-collapse: collapse on tables (r199552)
  • Improved rendering of select[multiple] to better match other browsers (r199553)
  • Fixed backdrop filter so it honors visibility: hidden (r199862)

Security

  • Made nested browsing context created for <object> or <embed> respect Content Security Policy’s object-src directive (r199527)
  • Started ignoring Content Security Policy meta tags if it is not a descendent of <head> per the spec (r199163)
  • Started ignoring report-only Content Security Policy directives delivered via meta tag per the spec (r199538)
  • Started ignoring paths in Content Security Policy URL matching after redirects per spec (r199612)
  • Removed support for X-Frame-Options in <meta> per the spec (r199696)

Networking

  • Stopped speculatively revalidating cached redirects (r199521)
  • Stopped caching responses with Content-Range headers to avoid serving incorrect results (r199090)
  • Fixed clearing the application cache when removing website data in Privacy preferences (r199204)

Accessibility

  • Changed the application role description to “web application” to avoid confusion with the top-level system application description (r199260)
  • Made presentation role be preferred over child <title> and <desc> elements in SVG content (r199588)

Planet MozillaThe Joy of Coding - Episode 55

The Joy of Coding - Episode 55 mconley livehacks on real Firefox bugs while thinking aloud.

Planet MozillaApril 2016 Speaker Series: When Change is the Only Constant, Org Structure Doesn't Matter - Kirsten Wolberg

April 2016 Speaker Series: When Change is the Only Constant, Org Structure Doesn't Matter - Kirsten Wolberg Regardless of whether an organization is decentralized or command & control, large-scale changes are never simple nor straightforward. There's no silver bullets. And yet, when...

Planet MozillaFirefox 46.0 and SHA512SUMS

In my previous post I introduced the new release process we have been adopting in the 46.0 release cycle.

Release build promotion has been in production since Firefox 46.0 Beta 1. We have discovered some minor issues; some of them are already fixed, some still waiting.

One of the visible bugs is Bug 1260892. We generate a big SHA512SUMS file, which should contain all important checksums. With numerous changes to the process the file doesn't represent all required files anymore. Some files are missing, some have different names.

We are working on fixing the bug, but you can use the following workaround to verify the files.

For example, if you want to verify http://ftp.mozilla.org/pub/firefox/releases/46.0/win64/ach/Firefox%20Setup%2046.0.exe, you need to use the following 2 files:

http://ftp.mozilla.org/pub/firefox/candidates/46.0-candidates/build5/win64/ach/firefox-46.0.checksums

http://ftp.mozilla.org/pub/firefox/candidates/46.0-candidates/build5/win64/ach/firefox-46.0.checksums.asc

Example commands:

# download all required files
$ wget -q http://ftp.mozilla.org/pub/firefox/releases/46.0/win64/ach/Firefox%20Setup%2046.0.exe
$ wget -q http://ftp.mozilla.org/pub/firefox/candidates/46.0-candidates/build5/win64/ach/firefox-46.0.checksums
$ wget -q http://ftp.mozilla.org/pub/firefox/candidates/46.0-candidates/build5/win64/ach/firefox-46.0.checksums.asc
$ wget -q http://ftp.mozilla.org/pub/firefox/releases/46.0/KEY
# Import Mozilla Releng key into a temporary GPG directory
$ mkdir .tmp-gpg-home && chmod 700 .tmp-gpg-home
$ gpg --homedir .tmp-gpg-home --import KEY
# verify the signature of the checksums file
$ gpg --homedir .tmp-gpg-home --verify firefox-46.0.checksums.asc && echo "OK" || echo "Not OK"
# calculate the SHA512 checksum of the file
$ sha512sum "Firefox Setup 46.0.exe"
c2ed64298ac2140d8dbdaed28cabc90b38dd9444e9c0d6dd335a2a32cf043a35314945536a5c75124a88bf418a4e2ba77256be223425380e7fcc45a97da8f479  Firefox Setup 46.0.exe
# lookup for the checksum in the checksums file
$ grep c2ed64298ac2140d8dbdaed28cabc90b38dd9444e9c0d6dd335a2a32cf043a35314945536a5c75124a88bf418a4e2ba77256be223425380e7fcc45a97da8f479 firefox-46.0.checksums
c2ed64298ac2140d8dbdaed28cabc90b38dd9444e9c0d6dd335a2a32cf043a35314945536a5c75124a88bf418a4e2ba77256be223425380e7fcc45a97da8f479 sha512 46275456 install/sea/firefox-46.0.ach.win64.installer.exe

This is just a temporary workaround, and the bug will be fixed ASAP.

Planet MozillaSuMo Community Call 27th April 2016

SuMo Community Call 27th April 2016 This is the sumo weekly call We meet as a community every Wednesday 17:00 - 17:30 UTC The etherpad is here: https://public.etherpad-mozilla.org/p/sumo-2016-04-27

Planet MozillaNon-lexical lifetimes: introduction

Over the last few weeks, I’ve been devoting my free time to fleshing out the theory behind non-lexical lifetimes (NLL). I think I’ve arrived at a pretty good point and I plan to write various posts talking about it. Before getting into the details, though, I wanted to start out with a post that lays out roughly how today’s lexical lifetimes work and gives several examples of problem cases that we would like to solve.

The basic idea of the borrow checker is that values may not be mutated or moved while they are borrowed. But how do we know whether a value is borrowed? The idea is quite simple: whenever you create a borrow, the compiler assigns the resulting reference a lifetime. This lifetime corresponds to the span of the code where the reference may be used. The compiler will infer this lifetime to be the smallest lifetime that it can that still encompasses all the uses of the reference.

Note that Rust uses the term lifetime in a very particular way. In everyday speech, the word lifetime can be used in two distinct – but similar – ways:

  1. The lifetime of a reference, corresponding to the span of time in which that reference is used.
  2. The lifetime of a value, corresponding to the span of time before that value gets freed (or, put another way, before the destructor for the value runs).

This second span of time, which describes how long a value is valid, is of course very important. We refer to that span of time as the value’s scope. Naturally, lifetimes and scopes are linked to one another. Specifically, if you make a reference to a value, the lifetime of that reference cannot outlive the scope of that value. Otherwise, your reference would be pointing into free memory.

To better see the distinction between lifetime and scope, let’s consider a simple example. In this example, the vector data is borrowed (mutably) and the resulting reference is passed to a function capitalize. Since capitalize does not return the reference back, the lifetime of this borrow will be confined to just that call. The scope of data, in contrast, is much larger, and corresponds to a suffix of the fn body, stretching from the let until the end of the enclosing scope.

fn foo() {
    let mut data = vec!['a', 'b', 'c']; // --+ 'scope
    capitalize(&mut data[..]);          //   |
//  ^~~~~~~~~~~~~~~~~~~~~~~~~ 'lifetime //   |
    data.push('d');                     //   |
    data.push('e');                     //   |
    data.push('f');                     //   |
} // <---------------------------------------+

fn capitalize(data: &mut [char]) {
    // do something
}

This example also demonstrates something else. Lifetimes in Rust today are quite a bit more flexible than scopes (if not as flexible as we might like, hence this RFC):

  • A scope generally corresponds to some block (or, more specifically, a suffix of a block that stretches from the let until the end of the enclosing block) [1].
  • A lifetime, in contrast, can also span an individual expression, as this example demonstrates. The lifetime of the borrow in the example is confined to just the call to capitalize, and doesn’t extend into the rest of the block. This is why the calls to data.push that come below are legal.

So long as a reference is only used within one statement, today’s lifetimes are typically adequate. Problems arise however when you have a reference that spans multiple statements. In that case, the compiler requires the lifetime to be the innermost expression (which is often a block) that encloses both statements, and that is typically much bigger than is really necessary or desired. Let’s look at some example problem cases. Later on, we’ll see how non-lexical lifetimes fixes these cases.

Problem case #1: references assigned into a variable

One common problem case is when a reference is assigned into a variable. Consider this trivial variation of the previous example, where the &mut data[..] slice is not passed directly to capitalize, but is instead stored into a local variable:

fn bar() {
    let mut data = vec!['a', 'b', 'c'];
    let slice = &mut data[..]; // <-+ 'lifetime
    capitalize(slice);         //   |
    data.push('d'); // ERROR!  //   |
    data.push('e'); // ERROR!  //   |
    data.push('f'); // ERROR!  //   |
} // <------------------------------+

The way that the compiler currently works, assigning a reference into a variable means that its lifetime must be as large as the entire scope of that variable. In this case, that means the lifetime is now extended all the way until the end of the block. This in turn means that the calls to data.push are now in error, because they occur during the lifetime of slice. It’s logical, but it’s annoying.

In this particular case, you could resolve the problem by putting slice into its own block:

fn bar() {
    let mut data = vec!['a', 'b', 'c'];
    {
        let slice = &mut data[..]; // <-+ 'lifetime
        capitalize(slice);         //   |
    } // <------------------------------+
    data.push('d'); // OK
    data.push('e'); // OK
    data.push('f'); // OK
}

Since we introduced a new block, the scope of slice is now smaller, and hence the resulting lifetime is smaller. Of course, introducing a block like this is kind of artificial and also not an entirely obvious solution.

Problem case #2: conditional control flow

Another common problem case is when references are used in only one match arm. This most commonly arises around maps. Consider this function, which, given some key, processes the value found in map[key] if it exists, or else inserts a default value:

fn process_or_default<K,V:Default>(map: &mut HashMap<K,V>,
                                   key: K) {
    match map.get_mut(&key) { // -------------+ 'lifetime
        Some(value) => process(value),     // |
        None => {                          // |
            map.insert(key, V::default()); // |
            //  ^~~~~~ ERROR.              // |
        }                                  // |
    } // <------------------------------------+
}

This code will not compile today. The reason is that the map is borrowed as part of the call to get_mut, and that borrow must encompass not only the call to get_mut, but also the Some branch of the match. The innermost expression that encloses both of these expressions is the match itself (as depicted above), and hence the borrow is considered to extend until the end of the match. Unfortunately, the match encloses not only the Some branch, but also the None branch, and hence when we go to insert into the map in the None branch, we get an error that the map is still borrowed.

This particular example is relatively easy to workaround. One can (frequently) move the code for None out from the match like so:

fn process_or_default1<K,V:Default>(map: &mut HashMap<K,V>,
                                    key: K) {
    match map.get_mut(&key) { // -------------+ 'lifetime
        Some(value) => {                   // |
            process(value);                // |
            return;                        // |
        }                                  // |
        None => {                          // |
        }                                  // |
    } // <------------------------------------+
    map.insert(key, V::default());
}

When the code is adjusted this way, the call to map.insert is not part of the match, and hence it is not part of the borrow. While this works, it is of course unfortunate to require these sorts of manipulations, just as it was when we introduced an artificial block in the previous example.

Problem case #3: conditional control flow across functions

While we were able to work around problem case #2 in a relatively simple, if irritating, fashion, there are other variations of conditional control flow that cannot be so easily resolved. This is particularly true when you are returning a reference out of a function. Consider the following function, which returns the value for a key if it exists, and inserts a new value otherwise (for the purposes of this section, assume that the entry API for maps does not exist):

fn get_default<'m,K,V:Default>(map: &'m mut HashMap<K,V>,
                               key: K)
                               -> &'m mut V {
    match map.get_mut(&key) { // -------------+ 'm
        Some(value) => value,              // |
        None => {                          // |
            map.insert(key, V::default()); // |
            //  ^~~~~~ ERROR               // |
            map.get_mut(&key).unwrap()     // |
        }                                  // |
    }                                      // |
}                                          // v

At first glance, this code appears quite similar to the code we saw before. And indeed, just as before, it will not compile. But in fact the lifetimes at play are quite different. The reason is that, in the Some branch, the value is being returned out to the caller. Since value is a reference into the map, this implies that the map will remain borrowed until some point in the caller (the point 'm, to be exact). To get a better intuition for what this lifetime parameter 'm represents, consider some hypothetical caller of get_default: the lifetime 'm then represents the span of code in which that caller will use the resulting reference:

fn caller() {
    let mut map = HashMap::new();
    ...
    {
        let v = get_default(&mut map, key); // -+ 'm
          // +-- get_default() -----------+ //  |
          // | match map.get_mut(&key) {  | //  |
          // |   Some(value) => value,    | //  |
          // |   None => {                | //  |
          // |     ..                     | //  |
          // |   }                        | //  |
          // +----------------------------+ //  |
        process(v);                         //  |
    } // <--------------------------------------+
    ...
}

If we attempt the same workaround for this case that we tried in the previous example, we will find that it does not work:

fn get_default1<'m,K,V:Default>(map: &'m mut HashMap<K,V>,
                                key: K)
                                -> &'m mut V {
    match map.get_mut(&key) { // -------------+ 'm
        Some(value) => return value,       // |
        None => { }                        // |
    }                                      // |
    map.insert(key, V::default());         // |
    //  ^~~~~~ ERROR (still)                  |
    map.get_mut(&key).unwrap()             // |
}                                          // v

Whereas before the lifetime of value was confined to the match, this new lifetime extends out into the caller, and therefore the borrow does not end just because we exited the match. Hence it is still in scope when we attempt to call insert after the match.

The workaround for this problem is a bit more involved. It relies on the fact that the borrow checker uses the precise control-flow of the function to determine what borrows are in scope.

fn get_default2<'m,K,V:Default>(map: &'m mut HashMap<K,V>,
                                key: K)
                                -> &'m mut V {
    if map.contains(&key) {
    // ^~~~~~~~~~~~~~~~~~ 'n
        return match map.get_mut(&key) { // + 'm
            Some(value) => value,        // |
            None => unreachable!()       // |
        };                               // v
    }

    // At this point, `map.get_mut` was never
    // called! (As opposed to having been called,
    // but its result no longer being in use.)
    map.insert(key, V::default()); // OK now.
    map.get_mut(&key).unwrap()
}

What has changed here is that we moved the call to map.get_mut inside of an if, and we have set things up so that the if body unconditionally returns. What this means is that a borrow begins at the point of get_mut, and that borrow lasts until the point 'm in the caller, but the borrow checker can see that this borrow will not have even started outside of the if. So it does not consider the borrow in scope at the point where we call map.insert.

This workaround is more troublesome than the others, because the resulting code is actually less efficient at runtime, since it must do multiple lookups.

It’s worth noting that Rust’s hashmaps include an entry API that one could use to implement this function today. The resulting code is both nicer to read and even more efficient than the original version, since it avoids extra lookups on the not-present path as well:

fn get_default3<'m,K,V:Default>(map: &'m mut HashMap<K,V>,
                                key: K)
                                -> &'m mut V {
    map.entry(key)
       .or_insert_with(|| V::default())
}

Regardless, the problem exists for other data structures besides HashMap, so it would be nice if the original code passed the borrow checker, even if in practice using the entry API would be preferable. (Interestingly, the limitation of the borrow checker here was one of the motivations for developing the entry API in the first place!)

Conclusion

This post looked at various examples of Rust code that do not compile today, and showed how they can be fixed using today’s system. While it’s good that workarounds exist, it’d be better if the code just compiled as is. In an upcoming post, I will outline my plan for how to modify the compiler to achieve just that.

Endnotes

1. Scopes always correspond to blocks with one exception: the scope of a temporary value is sometimes the enclosing statement.

Planet MozillaBay Area Rust Meetup April 2016

Bay Area Rust Meetup April 2016: a Rust meetup on the subject of operating systems.

Planet MozillaConnected Devices Weekly Program Review, 26 Apr 2016

Connected Devices Weekly Program Review: weekly project updates from the Mozilla Connected Devices team.

Planet MozillaDifferent kinds of storage

I’ve been spending most of my time so far on Project Tofino thinking about how a user agent stores data.

A user agent is software that mediates your interaction with the world. A web browser is one particular kind of user agent: one that fetches parts of the web and shows them to you.

(As a sidenote: browsers are incredibly complicated, not just for the obvious reasons of document rendering and navigation, but also because parts of the web need to run code on your machine and parts of it are actively trying to attack and track you. One of a browser’s responsibilities is to keep you safe from the web.)

Chewing on Redux, separation of concerns, and Electron’s process model led us to draw a distinction between a kind of ‘profile service’ and the front-end browser itself, with ‘profile’ defined as the data stored and used by a traditional browser window. You can see the guts of this distinction in some of our development docs.

The profile service stores full persistent history and data like it. The front-end, by contrast, has a pure Redux data model that’s much closer to what it needs to show UI — e.g., rather than all of the user’s starred pages, just a list of the user’s five most recent.

The front-end is responsible for fetching pages and showing the UI around them. The back-end service is responsible for storing data and answering questions about it from the front-end.

To build that persistent storage we opted for a mostly event-based model: simple, declarative statements about the user’s activity, stored in SQLite. SQLite gives us durability and known performance characteristics in an embedded database.

On top of this we can layer various views (materialized or not). The profile service takes commands as input and pushes out diffs, and the storage itself handles writes by logging events and answering queries through views. This is the CQRS concept applied to an embedded store: we use different representations for readers and writers, so we can think more clearly about the transformations between them.
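To make that split concrete, here is a minimal sketch of the idea in plain JavaScript. This is not Tofino's actual code: the event shapes and function names are invented for illustration, and a real implementation would persist the log to SQLite rather than an in-memory array.

const eventLog = [];

// Write side: commands are turned into simple, declarative events
// and appended to the log.
function recordCommand(command) {
  if (command.type === 'star-page') {
    eventLog.push({ kind: 'page-starred', url: command.url, at: Date.now() });
  } else if (command.type === 'visit-page') {
    eventLog.push({ kind: 'page-visited', url: command.url, at: Date.now() });
  }
}

// Read side: a view derives exactly what the front-end needs --
// here, the five most recently starred pages.
function recentStarredPages(limit = 5) {
  return eventLog
    .filter(event => event.kind === 'page-starred')
    .sort((a, b) => b.at - a.at)
    .slice(0, limit)
    .map(event => event.url);
}

The point is simply that the writer's representation (an append-only event log) and the reader's representation (small, purpose-built views) do not have to be the same thing.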

Where next?

One of the reasons we have a separate service is to acknowledge that it might stick around when there are no browser windows open, and that it might be doing work other than serving the immediate needs of a browser window. Perhaps the service is pre-fetching pages, or synchronizing your data in the background, or trying to figure out what you want to read next. Perhaps you can interact with the service from something other than a browser window!

Some of those things need different kinds of storage. Ad hoc integrations might be best served by a document store; recommendations might warrant some kind of graph database.

When we look through that lens we no longer have just a profile service wrapping profile storage. We have a more general user agent service, and one of the data sources it manages is your profile data.

Planet MozillaMigrating Popup ALT Attribute from XUL/XPCOM to WebExtensions

Today’s post comes from Piro, the developer of Popup ALT Attribute, in addition to 40 other add-ons. He shares his thoughts about migrating XUL/XPCOM add-ons to WebExtensions, and shows us how he did it with Popup ALT Attribute. You can see the full text of this post on his personal blog.

***

Hello, add-on developers. My name is YUKI Hiroshi aka Piro, a developer of Firefox add-ons. For many years I developed Firefox and Thunderbird add-ons personally and for business, based on XUL and XPCOM.

I recently started to research which APIs are required to migrate my add-ons to WebExtensions, because Mozilla announced that XUL/XPCOM add-ons will be deprecated at the end of 2017. I realized that only some add-ons can be migrated with currently available APIs, and Popup ALT Attribute is one such add-on.

Here is the story of how I migrated it.

What’s the add-on?

Popup ALT Attribute is an ancient add-on started in 2002, to show what is written in the alt attribute of img HTML elements on web pages. By default, Firefox shows only the title attribute as a tooltip.

Initially, the add-on was implemented to replace an internal function FillInHTMLTooltip() of Firefox itself.

In February 2016, I migrated it to be e10s-compatible. It is worth noting that depending on your add-on, if you can migrate it directly to WebExtensions, it will be e10s-compatible by default.

Re-formatting in the WebExtensions style

I read the tutorial on how to build a new simple WebExtensions-based add-on from scratch before migration, and I realized that bootstrapped extensions are similar to WebExtensions add-ons:

  • They are dynamically installed and uninstalled.
  • They are mainly based on JavaScript code and some static manifest files.

My add-on was easily re-formatted as a WebExtensions add-on, because I had already migrated it to a bootstrapped extension.

This is the initial version of the manifest.json I wrote. There was no localization or options UI yet:

{
  "manifest_version": 2,
  "name": "Popup ALT Attribute",
  "version": "4.0a1",
  "description": "Popups alternate texts of images or others like NetscapeCommunicator(Navigator) 4.x, and show long descriptions in the multi-row tooltip.",
  "icons": { "32": "icons/icon.png" },
  "applications": {
    "gecko": { "id": "{61FD08D8-A2CB-46c0-B36D-3F531AC53C12}",
               "strict_min_version": "48.0a1" }
  },
  "content_scripts": [
    { "all_frames": true,
      "matches": ["<all_urls>"],
      "js": ["content_scripts/content.js"],
      "run_at": "document_start" }
  ]
}

I had already separated the main script into a frame script and a loader for it. manifest.json, on the other hand, has manifest keys that describe how scripts are loaded, which means I no longer need to ship my custom loaders in the package. A script can be loaded into any web page with the content_scripts rule in the sample above. See the documentation for content_scripts for more details.

So finally only 3 files were left.

Before:

+ install.rdf
+ icon.png
+ [components]
+ [modules]
+ [content]
    + content-utils.js

And after:

+ manifest.json (migrated from install.rdf)
+ [icons]
|   + icon.png (moved)
+ [content_scripts]
    + content.js (moved and migrated from content-utils.js)

And I still had to isolate my frame script from XPCOM.

  • The script touched nsIPrefBranch and some XPCOM components via XPConnect, so they were temporarily commented out.
  • User preferences were not available and only default configurations were there as fixed values.
  • Constant properties such as Ci.nsIDOMNode.ELEMENT_NODE had to be replaced with Node.ELEMENT_NODE.
  • The mousemove listener used to be attached to the frame script’s global namespace; now it is attached to the document of each web page, because the script runs in each page directly (a minimal sketch follows this list).
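The following is a simplified illustration of that last point, not the actual Popup ALT Attribute code: a content script listening on the page's document directly and, as a stand-in for the real popup, copying an image's alt text into its title so the default tooltip shows it.

// Illustrative sketch only, not the real content.js.
document.addEventListener('mousemove', event => {
  const target = event.target;
  if (target.localName === 'img' && target.alt && !target.title) {
    // Copy the alt text into title so Firefox's default tooltip shows it.
    target.title = target.alt;
  }
}, { capture: true });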

Localization

For the old install.rdf I had a localized description. In WebExtensions add-ons I had to do it in a different way. See how to localize messages for details. In short, I did the following:

Added files to define localized descriptions:

+ manifest.json
+ [icons]
+ [content_scripts]
+ [_locales]
    + [en_US]
    |   + messages.json (added)
    + [ja]
        + messages.json (added)

Note that the directory is named en_US, not en-US as in install.rdf.

English locale, _locales/en_US/messages.json was:

{
  "name": { "message": "Popup ALT Attribute" },
  "description": { "message": "Popups alternate texts of images or others like NetscapeCommunicator(Navigator) 4.x, and show long descriptions in the multi-row tooltip." }
}

Japanese locale, _locales/ja/messages.json was also included. And, I had to update my manifest.json to embed localized messages:

{
  "manifest_version": 2,
  "name": "__MSG_name__",
  "version": "4.0a1",
  "description": "__MSG_description__",
  "default_locale": "en_US",
  ...

__MSG_****__ placeholders in string values are automatically replaced with the localized messages. You need to specify the default locale manually via the default_locale key.
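The same messages.json entries can also be read from extension code via the i18n API. A quick sketch, assuming the messages.json shown above:

// Reading the localized strings defined in _locales/*/messages.json.
const localizedName = browser.i18n.getMessage('name');
const localizedDescription = browser.i18n.getMessage('description');
console.log(localizedName, localizedDescription);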

Sadly, Firefox 45 does not support the localization feature, so you need to use Nightly 48.0a1 or newer to try localization.

User preferences

Currently, WebExtensions does not provide any feature fully compatible with nsIPrefBranch. Instead, there are simple storage APIs, which can be used as an alternative to nsIPrefBranch for getting and setting user preferences. This add-on had no configuration UI but did have some hidden preferences controlling its advanced features, so I migrated them as a trial for future migrations of my other add-ons.

Then I encountered a major limitation: the storage API is not available in content scripts. I had to create a background script just to access the storage, and communicate with it via the messaging system between the two sandboxes. [Updated 4/27/16: bug 1197346 has been fixed on Nightly 49.0a1, so you no longer need any hack to access the storage system from content scripts. Now, my library (Configs.js) just provides convenient access to configuration values instead of the raw storage API.]
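For illustration, here is a minimal sketch of that kind of workaround. It is not Configs.js; the message shape and the preference name are made up. A background script owns the storage and answers requests that content scripts send over runtime messaging.

// background.js -- owns storage.local and answers content scripts.
browser.runtime.onMessage.addListener(message => {
  if (message.type === 'get-config') {
    return browser.storage.local.get(message.keys);   // returns a Promise
  }
  if (message.type === 'set-config') {
    return browser.storage.local.set(message.values); // returns a Promise
  }
});

// content.js -- cannot touch browser.storage directly on those versions,
// so it asks the background script instead.
browser.runtime.sendMessage({ type: 'get-config', keys: ['showTooltip'] })
  .then(config => {
    // ... use config.showTooltip here ...
  });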

In the end, I created a tiny library to do that. I won’t describe how I did it here, but if you want the details, please see the source. It is just 177 lines.

I had to update my manifest.json to use the library from both the background page and the content script, like:

  "background": {
    "scripts": [
      "common/Configs.js", /* the library itself */
      "common/common.js"   /* codes to use the library */
    ]
  },
  "content_scripts": [
    { "all_frames": true,
      "matches": ["<all_urls>"],
      "js": [
        "common/Configs.js", /* the library itself */
        "common/common.js",  /* codes to use the library */
        "content_scripts/content.js"
      ],
      "run_at": "document_start" }
  ]

Scripts listed in the same section share a namespace. I didn’t have to write anything like require() to load one script from another; instead, I had to be careful about the listing order, putting a script that depends on a library after the library itself in each list.

One last problem remained: how to provide something like about:config or MCD, that is, a general way to control hidden preferences across add-ons.

For my business clients, I usually provide add-ons and use MCD to lock their configurations. (There are some common requirements for business use of Firefox, so combinations of add-ons and MCD are more reasonable than creating private builds of Firefox with different configurations for each client.)

I think I still have to research around this point.

Options UI

WebExtensions provides a feature to create options pages for add-ons. It is also not supported on Firefox 45, so you need to use Nightly 48.0a1 for now. As I said above, this add-on didn’t have a configuration UI, but I implemented one as a trial.

In XUL/XPCOM add-ons, rich UI elements like <checkbox>, <textbox>, <menulist>, and more are available, but these are going away at the end of next year. So I had to implement a custom configuration UI based on plain HTML and JavaScript. (If you need richer UI elements, some well-known libraries for web applications will help you.)
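As a rough sketch of what such a plain HTML options page can look like (not the add-on's actual UI; the showTooltip preference is made up), the page is declared with the options_ui manifest key and talks to the storage API:

// manifest.json fragment
"options_ui": { "page": "options/options.html" }

<!-- options/options.html -->
<label><input type="checkbox" id="showTooltip"> Show tooltips for alt text</label>
<script src="options.js"></script>

// options/options.js -- load and save a single (made-up) preference.
const checkbox = document.getElementById('showTooltip');
browser.storage.local.get({ showTooltip: true })
  .then(config => { checkbox.checked = config.showTooltip; });
checkbox.addEventListener('change', () => {
  browser.storage.local.set({ showTooltip: checkbox.checked });
});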

On this step I created two libraries:

Conclusion

I’ve successfully migrated my Popup ALT Attribute add-on from XUL/XPCOM to WebExtensions. For now it lives in a branch, but I’ll release it after Firefox 48 is available.

Here are reasons why I could do it:

  • It was a bootstrapped add-on, so I had already isolated the add-on from all destructive changes.
  • The core implementation of the add-on was similar to a simple user script. The essential actions of the add-on were confined to the content area, and no privileges were required.

However, this is a rare case for me. My other 40+ add-ons require some privileges and/or work outside the content area; most of my add-ons are such non-typical cases.

I have to triage, plan, and request new APIs, not only for myself but also for other XUL/XPCOM add-on developers.

Thank you for reading.

Planet MozillaUpdate to Firefox Released Today

The latest version of Firefox was released today. It features an improved look and feel for Linux users, a minor security improvement and additional updates for all Firefox users.

The update to Firefox for Android features minor changes, including an improvement to user notifications and clearer homescreen shortcut icons.

More information:

Planet MozillaMartes mozilleros, 26 Apr 2016

Martes mozilleros Bi-weekly meeting, held in Spanish, to talk about the state of Mozilla, the community, and its projects.

Planet MozillaNightly is where I will live

After some time working on Firefox OS and Connected Devices, I am moving back to Desktop land. Going forward I will be working with the Release Management Team as the Nightly Program Manager. That means I would love to work with all of you to identify any potential issues in Nightly and help bring them to resolution. To that end, I have done a few things. First, we now have a Telegram Group for Nightly Testers. Feel free to join that group if you want to keep up with issues we are

Planet MozillaHappy BMO Push Day!

The following changes have been pushed to bugzilla.mozilla.org:

  • [1195736] intermittent internal error: “file error – nav_link: not found” (also manifests as fields_lhs: not found)

Discuss these changes on mozilla.tools.bmo.


Planet WebKitUpdating Our Prefixing Policy

When implementing new features for the Web, it’s important for us to be able to get them into the hands of developers early, so they can give new things a try. (Of course, this also helps us identify and fix bugs!) In the past, browsers did this by using vendor-prefixed names for features. This was intended to protect the Web from the churn of spec and implementation changes. Browsers would eventually implement the standard version with no prefix and drop support for the prefixed version.

Over time this strategy has turned out not to work so well. Many websites came to depend on prefixed properties. They often used every prefixed variant of a feature, which makes CSS less maintainable and JavaScript programs trickier to write. Sites frequently used just the prefixed version of a feature, which made it hard for browsers to drop support for the prefixed variant when adding support for the unprefixed, standard version. Ultimately, browsers felt pressured by compatibility concerns to implement each other’s prefixes.

The current consensus among browser implementors is that, on the whole, prefixed properties have hurt more than they’ve helped. So, WebKit’s new policy is to implement experimental features unprefixed, behind a runtime flag. Runtime flags allow us to continue to get experimental features into developers’ hands while avoiding the various problems vendor prefixes had. Runtime flags also make it easier for us to have different default settings between stable builds and preview builds such as Safari Technology Preview.

We’ll be applying our updated policy to new feature work going forward. Whether a runtime flag is on or off on WebKit trunk (and thus in nightly builds) depends on the maturity of the feature, both in terms of its spec stability and its implementation maturity.

What does this mean for Web developers?

Initially, developers shouldn’t notice anything different. In the longer term we hope this change will make it easier for you to try out upcoming features. As always, we encourage you to give in-progress features a try. Feedback and bug reports on experimental features are very welcome.

What about currently prefixed features?

We’ll be evaluating existing features on a case-by-case basis. We expect to significantly reduce the number of prefixed properties supported over time but Web compatibility will require us to keep around prefixed versions of some features.

We invite comments and feedback on the new policy from Web developers, educators, and our colleagues working on other browser engines. Feel free to reach out to me on Twitter (@hober), Jon Davis (@jonathandavis), @webkit, or email me directly at hober@apple.com.

Planet MozillaFirst things first

Currently implementing many new features into Postbox, I carefully read (several times) Mark Surman's recent article on Thunderbird's future. I also read Simon Phipps's report twice. The contract offer Mozilla posted for a Thunderbird Architect is also worth reading:

... Thunderbird is facing a number of technical challenges, including but not limited to:

  • ...
  • The possible future deprecation of XUL, its current user interface technology and XPCOM, its current component technology, by Mozilla
  • ...

In practice, the last line above means for Thunderbird:

  1. rewrite the whole UI and the whole JS layer with it
  2. most probably rewrite the whole SMTP/MIME/POP/IMAP/LDAP/... layer
  3. most probably have a new Add-on layer or, far worse, no more Add-ons

Well, sorry to say, but that's a bit of a « technical challenge »... So yes, that's indeed a « fork in the road », but let's be serious for a second: it's unfortunately this kind of fork; rewriting the app is not a question of if but only of when. Unless Thunderbird dies entirely, of course.

Evaluating potential hosts for Thunderbird, and a fortiori choosing one, seems to me rather difficult without first discussing the XUL/XPCOM-less future of the app, i.e. without having in hand the second milestone delivered by the Thunderbird Architect. First things first. I would also be interested in knowing how many people MoCo will dedicate to the deXULXPCOMification of Firefox; that would allow some extrapolation and some pretty solid (and probably rather insurmountable...) requirements for TB's host.

Last but not least, and from a more personal point of view, I feel devastated comparing Mark's article with the Mozilla Manifesto.

Planet MozillaAbsorbing 1,000 emails per day

Some people say email is dead. Some people say there are “email killers” and bring up a bunch of chat and instant messaging services. I think those people communicate far too little to understand how email can scale.

I receive up to around 1,000 emails per day. I average a little less, but I do have spikes way above.

Why do I get a thousand emails?

Primarily because I participate on a lot of mailing lists. I run a handful of open source projects myself, each with at least one list. I follow a bunch more projects; more mailing lists. We have a whole set of mailing lists at work (Mozilla) and I participate and follow several groups in the IETF. Lists and lists. I discuss things with friends on a few private mailing lists. I get notifications from services about things that happen (commits, bugs submitted, builds that break, things that need to get looked at). Mails, mails and mails.

Don’t get me wrong. I prefer email to web forums and stuff because email allows me to participate in literally hundreds of communities from a single spot in an asynchronous manner. That’s a good thing. I would not be able to do the same thing if I had to use one of those “email killers” or web forums.

Unwanted email

I unsubscribe from lists that I grow tired of. I stamp down on spam really hard and I run aggressive filters and blacklists that actually make me receive rather few spam emails these days, percentage-wise. About 3,000 emails per month addressed to me are accepted by my mail server and then classified as spam by SpamAssassin. I used to receive a lot more before we started using better blacklists. (During some periods in the past I received well over a thousand spam emails per day.) Only 2-3 spam emails per day fail to get marked as spam correctly and subsequently show up in my inbox.

Flood management

My solution to handling this steady high paced stream of incoming data is prioritization and putting things in different bins. Different inboxes.

  1. Filter incoming email. Save the email into its corresponding mailbox. At this very moment, I have about 30 named inboxes that I read. I read them in order, top to bottom as they’re sorted in roughly importance order (to me).
  2. Mails that don’t match an existing mailing list or topic (and thus don’t get stored into one of the 28 “topic boxes”) run into another check: is the sender a known “friend”? That’s a loose term I use, but it basically means the mail is from an email address I have had conversations with before, or that I know or trust. Mails from “friends” get the honor of being put in mailbox 0, the primary one. If the mail comes from someone not listed as a friend, it ends up in my “suspect” mailbox. That’s mailbox 1.
  3. Some of the emails get the honor of being forwarded to a cloud email service for which I have an app on my phone, so that I can get a sense of the important mail that arrives. But I basically never respond to email using my phone or a web interface.
  4. I also use the “spam level” in spams to save them in different spam boxes. The mailbox receiving the highest spam level emails is just erased at random intervals without ever being read (unless I’m tracking down a problem or something) and the “normal” spam mailbox I only check every once in a while just to make sure my filters are not hiding real mails in there.

Reading

I monitor my incoming mails pretty frequently all through the day – every day. My wife calls me obsessed and maybe I am. But I find it much easier to handle the emails a little at a time rather than to wait and have them pile up into huge lumps to deal with.

I receive mail at my own server and I read/write my email using Alpine, a text-based mail client that really excels at letting me plow through vast amounts of email in a short time – something I can’t say about any graphical or web-based mail client I’ve tried.

A snapshot from my mailbox from a while ago looked like this, with names and some topics blurred out. This is ‘INBOX’, which is the main and highest prioritized one for me.

alpine screenshot

I have my mail client automatically go to the next inbox when I’m done reading the current one. That makes me read them in priority order. I start with the INBOX, where supposedly the most important email arrives, then I check the “suspect” one, and then I go down the topic inboxes one by one (my mail client moves on to the next one automatically), until either I get overwhelmed and return to the main box for now, or I finish them all.

I tend to try to deal with mails immediately, or I mark them as ‘important’ and store them in the main mailbox so that I can find them again easily and quickly.

I try to keep only mails that concern ongoing topics, discussions or current matters in my mailbox. Everything else gets stored away. It is hard work to keep the number of emails in there low. As you all know.

Writing email

I averaged less than 200 emails written per month during 2015. That’s 6-7 per day.

That makes over 150 received emails for every email sent.
