Planet Mozilla: Mozilla Switzerland Goals H1 2016

Back in November we held a Community Meetup. The goal was to take stock of the current status of the community and define plans and goals for 2016. To do that, we started with a SWOT analysis. You can find it here.

With these remarks in mind, we started to define goals for 2016. Since a lot can change within one year, the goals currently focus only on the first part of the year. Then we can evaluate them, shift metrics if needed, and define new goals. This allows us to be more flexible.

The goals are highly influenced by the OKR (Objectives and Key Results) framework. To document open issues that support these goals, I have created a repository in our MozillaCH GitHub organization. The goal is to assign the “overall goal” label to each issue. GitHub’s own documentation covers issues well. There is a template you can use for new issues.

  • Objective 1: The community is vibrant and active due to structured contribution areas
  • Objective 2: MozillaCH is a valuable partner for privacy in Switzerland
  • Objective 3: There is a vibrant community in the “Romandie” which is part of the overall community
  • Objective 4: The MozillaCH website is the place to link to for community topics
  • Objective 5: With talks and events we increase our reach and provide a valuable information source regarding the Open Web
  • Objective 6: Social Media is a crucial part of our activities providing valuable information about Mozilla and the Open Web


We know that not all of these goals are easily achievable, but this gives us a good way to be ambitious. Here’s to a successful first half of 2016: let’s bring our community further and keep rocking the Open Web!


Planet Mozilla: Imagine there's no Intel transition ...

... and with a 12-core POWER8 workstation, it's easy if you try. Reported to me by a user is the Raptor Engineering Talos Secure Workstation, the first POWER workstation I've seen since the last of the PowerPC IntelliStations years ago. You can sign up for preorders so they know you're interested. (I did, of course. Seriously. In fact, I'm actually considering buying two: one as a workstation and the second as a new home server.) Since it's an ATX board, you can just stick it in any case you like with whatever power supply and options you want, and configure to taste.

Before you start hyperventilating over the $3100 estimated price (which includes the entry-level 8-core CPU), remember that the Quad G5, probably the last major RISC workstation, cost $3300 new and this monster would drive it into the ground. Plus, at "only" 130 watts TDP, it certainly won't run anywhere near as hot as the G5 did either. Likely it will run some sort of Linux, though I can't imagine with its open architecture that the *BSDs wouldn't be on it like a toupee on William Shatner. Let's hope they get enough interest to produce a few, because I'd love to have an excuse to buy one and I don't need much of an excuse.

Planet Mozilla: Hi, I’m Your New AMO Editor

You may have wondered who this “Scott DeVaney” is who posted February’s featured add-ons. Well it’s me. I just recently joined AMO as your new Editorial & Campaign Manager. But I’m not new to Mozilla; I’ve spent the past couple years managing editorial for Firefox Marketplace.

This is an exciting deal, because my job will be to not only maintain the community-driven editorial processes we have in place today, but to grow the program and build new endeavors designed to introduce even more Firefox users to the wonders of add-ons.

In terms of background, I’ve been editorializing digital content since 1999 when I got my first internet job as a video game editor for the now-dead CheckOut.com. That led to other editorial gigs at DailyRadar, AtomFilms, Shockwave, Comedy Central, and iTunes (before all that I spent a couple years working as a TV production grunt where my claim to fame is breaking up a cast brawl on the set of Saved by the Bell—The New Class; but that’s a story for a different blog.)

I’m sdevaney on IRC, so don’t be a stranger.

Planet Mozilla: Add-on Compatibility for Firefox 45

Firefox 45 will be released on March 8th. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 45 for Developers, so you should also give it a look.

General

UI

XPCOM

Signing

  • Firefox currently enforces add-on signing, with a preference to override it. Firefox 46 will remove the preference entirely, which means your add-on will need to be signed in order to run in release versions of Firefox. You can read about your options here.

New

  • Support a simplified JSON add-on update protocol. Firefox now supports a JSON update file for add-ons that manage their own automatic updates, as an alternative to the existing XML format. For new add-ons, we suggest using the JSON format; for older add-ons, you shouldn’t switch until most of your users are on 45 or later. A sketch of the new format follows below.
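For reference, here is a minimal sketch of what such a JSON update manifest might look like; the add-on ID, URL, and version numbers below are placeholders, not a real add-on:

{
  "addons": {
    "my-addon@example.com": {
      "updates": [
        {
          "version": "2.0",
          "update_link": "https://example.com/my-addon-2.0.xpi",
          "applications": {
            "gecko": { "strict_min_version": "45.0" }
          }
        }
      ]
    }
  }
}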

Let me know in the comments if there’s anything missing or incorrect on these lists. If your add-on breaks on Firefox 45, I’d like to know.

The automatic compatibility validation and upgrade for add-ons on AMO will happen in the coming weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 44.

Planet Mozilla: Giving up democracy to get it back

Do services like Facebook and Twitter really help worthwhile participation in democracy, or are they the most sinister and efficient mechanism ever invented to control people while giving the illusion that they empower us?

Over the last few years, groups on the left and right of the political spectrum have spoken more and more loudly about the problems in the European Union. Some advocate breaking up the EU, while behind the scenes milking it for every handout they can get. Others seek to reform it from within.

Yanis Varoufakis on motorbike

Most recently, former Greek finance minister Yanis Varoufakis has announced plans to found a movement (not a political party) that claims to "democratise" the EU by 2025. Ironically, one of his first steps has been to create a web site directing supporters to Facebook and Twitter. A groundbreaking effort to put citizens back in charge? Or further entangling activism in the false hope of platforms that are run for profit by their Silicon Valley overlords? A Greek tragedy indeed, in the classical sense.

Varoufakis rails against authoritarian establishment figures who don't put citizens' interests first. Ironically, big data and the cloud are a far bigger threat than Brussels. The privacy and independence of each citizen is fundamental to a healthy democracy. Companies like Facebook are obliged, by law and by contract, to serve the needs of their shareholders and of the advertisers paying to study and influence the poor user. If "Facebook privacy" settings were actually credible, who would want to buy their shares any more?

Facebook is more akin to an activism placebo: people sitting in their armchairs clicking to "Like" whales or trees have hardly any impact at all. Maintaining democracy requires a sufficient number of people to be actively involved, whether it is raising funds for worthwhile causes, scrutinizing the work of our public institutions, or even writing blogs like this. Keeping them busy on Facebook and Twitter renders them impotent in the real world (but please feel free to alert your friends with a tweet).

Big data is one of the areas that requires the greatest scrutiny. Many of the professionals working in the field are actually selling out their own friends and neighbours, their own families and even themselves. The general public and the policy makers who claim to represent us are oblivious or reckless about the consequences of this all-you-can-eat feeding frenzy on humanity.

Pretending to be democratic is all part of the illusion. Facebook's recent announcement to deviate from their real-name policy is about as effective as using sunscreen to treat HIV. By subjecting themselves to the laws of Facebook, activists have simply given Facebook more status and power.

Data means power. Those who are accumulating it from us, collecting billions of tiny details about our behavior, every hour of every day, are fortifying a position of great strength with which they can personalize messages to condition anybody, anywhere, to think the way they want us to. Does that sound like the route to democracy?

I would encourage Mr Varoufakis to get up to speed with Free Software and come down to Zurich next week to hear Richard Stallman explain it, the day before he launches his DiEM25 project in Berlin.

Will the DiEM25 movement invite participation from experts on big data and digital freedom and make these issues a core element of their promised manifesto? Is there any credible way they can achieve their goal of democracy by 2025 without addressing such issues head-on?

Or put that the other way around: what will be left of democracy in 2025 if big data continues to run rampant? Will it be as distant as the gods of Greek mythology?

Still not convinced? Read about Amazon secretly removing George Orwell's 1984 and Animal Farm from Kindles while people were reading them, Apple filtering the availability of apps with a pro-Life bias and Facebook using algorithms to identify homosexual users.

Planet Mozilla: RelEng & RelOps Weekly Highlights - February 5, 2016

This week, we have two new people starting in Release Engineering: Aki Sasaki (:aki) and Rok Garbas (:garbas). Please stop by #releng and say hi!

Modernize infrastructure:

This week, Jake and Mark added check_ami.py support to runner for our Windows 2008 instances running in Amazon. This is an important step towards parity with our Linux instances in that it allows our Windows instances to check when a newer AMI is available and terminate themselves to be re-created with the new image. Until now, we’ve needed to manually refresh the whole pool to pick up changes, so this is a great step forward.

Also on the Windows virtualization front, Rob and Mark turned on puppetization of Windows 2008 golden AMIs this week. This particular change has taken a long time to make it to production, but it’s hard to overstate the importance of this development. Windows is definitely *not* designed to manage its configuration via puppet, but being able to use that same configuration system across both our POSIX and Windows systems will hopefully decrease the time required to update our reference platforms by substantially reducing the cognitive overhead required for configuration changes. Anyone who remembers our days using OPSI will hopefully agree.

Improve CI pipeline:

Ben landed a Balrog patch that implements JSONSchemas for Balrog Release objects. This will help ensure that data entering the system is more consistent and accurate, and allows humans and other systems that talk to Balrog to be more confident about the data they’ve constructed before they submit it.
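To illustrate the idea (this is an invented example for illustration, not Balrog's actual schema), a JSON Schema lets the server reject malformed release objects before they are stored:

{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "required": ["name", "schema_version", "platforms"],
  "properties": {
    "name": { "type": "string" },
    "schema_version": { "type": "integer" },
    "platforms": { "type": "object" }
  },
  "additionalProperties": false
}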

Ben also enabled caching for the Balrog admin application. This dramatically reduces the database and network load it uses, which makes it faster, more efficient, and less prone to update races.

Release:

We’re currently on beta 3 for Firefox 45. After all the earlier work to unhork gtk3 (see last week’s update), it’s good to see the process humming along.

A small number of stability issues have precipitated a dot release for Firefox 44. A Firefox 44.0.1 release is currently in progress.

Operational:

Kim implemented changes to consume SETA information for Android API 15+ test jobs, using data from API 11+ until we have sufficient data for API 15+. This reduced the high pending counts for the AWS instance types used by Android. (https://bugzil.la/1243877)

Coop (hey, that’s me!) did a long-overdue pass of platform support triage. Lots of bugs got closed out (30+), a handful actually got fixed, and a collection of Windows test failures got linked together under a root cause (thanks, philor!). Now all we need to do is find time to tackle the root cause!

See you next week!

Planet Mozilla: Foundation Demos February 5 2016

Mozilla Foundation Demos for February 5 2016.

Planet Mozilla: Extravaganza – February 2016

Once a month, web developers from across Mozilla get together to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Git Submodules are Gone from MDN

First up was jezdez with news about MDN moving away from using git submodules to pull in dependencies. Instead, MDN now uses pip to pull in dependencies during deployment. Hooray!

Careers now on AWS/Deis

Next was giorgos who let us know that careers.mozilla.org has moved over to the Engagement Engineering Deis cluster on AWS. For deployment, the site has Travis CI build a Docker image and run tests against it. If the tests pass, the image is deployed directly to Deis. Neat!
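The setup looks roughly like the following .travis.yml sketch; the image name and deploy script here are invented for illustration and are not the actual careers.mozilla.org configuration:

sudo: required
services:
  - docker
script:
  # Build the site's Docker image and run the test suite inside it
  - docker build -t careers .
  - docker run careers py.test
after_success:
  # Only deploy images whose tests passed, and only from master
  - test "$TRAVIS_BRANCH" = "master" && ./bin/deploy-to-deis.sh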

Privacy Day

jpetto helped ship the Privacy Day page. It includes a mailing list signup form as well as instructions for several platforms on how to update your software to stay secure.

Automated Functional Testing for Mozilla.org

agibson shared news about the migration of previously-external functional tests for mozilla.org to live within the Bedrock repository itself. This allows us to run the tests, which previously were run by the WebQA team against live environments, whenever the site is deployed to dev, stage, or production. Having the functional tests be a part of the build pipeline ensures that developers are aware when the tests are broken and can fix them before deploying broken features. A slide deck is available with more details.

Peep 3.x

ErikRose shared news about the 3.0 (and 3.1) release of Peep, which helps smooth the transition from Peep to pip 8, which now supports hashed requirements natively. The new Peep includes a peep port command for porting Peep-compatible requirements files to the new pip 8 format; a usage sketch follows.
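Porting should be a one-liner; a sketch, assuming the converted requirements are printed to stdout:

$ peep port requirements.txt > requirements-pip8.txt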

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

Jazzband

jezdez shared news about JazzBand, a cooperative experiment to reduce the stress of maintaining Open Source software alone. The group operates as a GitHub organization that anyone can join and transfer projects to. Anyone in the JazzBand can access JazzBand projects, allowing projects that would otherwise die due to lack of activity to thrive thanks to the community of co-maintainers.

Notable projects already under the JazzBand include django-pipeline and django-configurations. The group is currently focused on Python projects and is still figuring out things like how to secure releases on PyPI.

django-configurations 1.0

Speaking of the JazzBand, members of the collective pushed out the 1.0 release of django-configurations, which is an opinionated library for writing class-based settings files for Django. The new release adds Django 1.8+ support as well as several new features.

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Travis CI Sudo for Specific Environments

Next was ErikRose with an undocumented tip for Travis CI builds. As seen on the LetsEncrypt travis.yml, you can specify sudo: required for a specific entry in the build matrix to run only that entry on Travis’ sudo-enabled infrastructure, keeping the rest of the matrix on the faster container-based infrastructure.
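A condensed sketch of the pattern (the entries are invented; see the LetsEncrypt travis.yml for the real thing):

sudo: false
matrix:
  include:
    - python: "2.7"
      env: TOXENV=py27
    - python: "2.7"
      env: TOXENV=integration
      sudo: required  # this entry alone runs on a sudo-enabled VM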

Docker on OS X via xhyve

Erik also shared xhyve, which is a lightweight OS X hypervisor. It’s a port of bhyve, and can be used as the backend for running Docker containers on OS X instead of VirtualBox. Recent changes that have made this more feasible include the removal of a 3 gigabyte RAM limit and experimental NFS support that, according to Erik, is faster than VirtualBox’s shared folder functionality. Check it out!


If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

Planet Mozilla: Mozilla Winter of Security-2015 MozDef: Virtual Reality Interface

Mozilla runs Winter of Security (MWoS) every year to give folks an opportunity to contribute to ongoing security projects. This year an ambitious group took on the task of creating a new visual interface in our SIEM overlay for Elasticsearch that we call MozDef: The Mozilla Defense Platform.

Security personnel are in high demand and analyst skill sets are difficult to maintain. Rather than only focusing on making people better at security, I’m a firm believer that we need to make security better at people. Interfaces that are easier to comprehend and use seem to be a worthwhile investment in that effort and I’m thrilled with the work this team has done.

They’ve wrapped up their project with a great demo of their work. If you are interested in security automation tools and alternative user interfaces, take a couple minutes and check out their work over at Air Mozilla.

 

Planet Mozilla: Mozilla Winter of Security-2015 MozDef: Virtual Reality Interface

MWoS students give an awesome demo of their work adding a unique interface to MozDef: The Mozilla Defense Platform.

Planet Mozilla: Welcome (back), Aki!

Aki in Slave Unit: This actually is Aki.

In addition to Rok, who also joined our team this week, I’m ecstatic to welcome back Aki Sasaki to Mozilla release engineering.

If you’ve been a Mozillian for a while, Aki’s name should be familiar. In his former tenure in releng, he helped bootstrap the build & release process for both Fennec *and* FirefoxOS, and was also the creator of mozharness, the python-based script harness that has allowed us to push so much of our configuration back into the development tree. Essentially he was devops before it was cool.

Aki’s first task in this return engagement will be to figure out a generic way to interact with Balrog, the Mozilla update server, from TaskCluster. You can follow along in bug 1244181.

Welcome back, Aki!

Planet Mozilla: Welcome, Rok!

The Rock: This is *not* our Rok.

I’m happy to announce a new addition to Mozilla release engineering. This week, we are lucky to welcome Rok Garbas to the team.

Rok is a huge proponent of Nix and NixOS. Whether we end up using those particular tools or not, we plan to leverage his experience with reproducible development/production environments to improve our service deployment story in releng. To that end, he’s already working with Dustin who has also been thinking about this for a while.

Rok’s first task is to figure out how the buildbot-era version of clobberer, a tool for clearing and resetting caches on build workers, can be rearchitected to work with TaskCluster. You can follow along in bug 1174263 if you’re interested.

Welcome, Rok!

Planet Mozilla: Sheriff Newsletter for January 2016

Hi,
To give a little insight into our work and make it more visible to our community, we decided to create a monthly report of what’s going on in the Sheriffs Team.
If you have questions or feedback, just let us know!
In case you don’t know who the sheriffs are, or to check if there are current issues on the tree, see:
Topics of this month!
1. How-To article of the month
2. Get involved
3. Statistics for January
4. Orange Factor
5. Contact
1. How-To article of the month and notable things!
-> In the Sheriff Newsletter we mentioned the “Orange Factor”, but what is it? It is simply the ratio of oranges (test failures) to test runs. The ideal value is, of course, zero.

In practice, this is virtually impossible for a code base of any substantial size, so it is a matter of policy as to what an acceptable Orange Factor is. (By this definition, the value of 12.92 quoted below means almost 13 oranges per run, on average.)

It is worth noting that the overall orange factor indicates nothing about the severity of the oranges. [4]

The main site where you can check out the “Orange Factor” is https://brasstacks.mozilla.com/orangefactor/ and some interesting info is at https://wiki.mozilla.org/Auto-tools/Projects/OrangeFactor
-> As you might be aware, Firefox OS has moved into Tier 3 support [5] – this means that there is no sheriff support anymore for the b2g-inbound tree.

Also, with the move to Tier 3, b2g tests have moved to Tier 3 as well, and these tests are hidden by default on Treeherder. To view these test results, for example on mozilla-central, you need to click the “show/hide excluded jobs” checkbox in the tree view.

2. Get involved!
Are you interested in helping out by becoming a Community Sheriff? Let us know!
3. Statistics
Intermittent bugs filed in January [1]: 667
Of those, 107 have been closed [2].
For Tree Closing times and reasons see:
4. Orange Factor
Current Orangefactor [3]: 12.92
5.  How to contact us
There are a lot of ways to contact us. The fastest one is to contact
the sheriff on duty (the one with the |sheriffduty tag on their nick
:) or by emailing sheriffs @ mozilla dot org.

Planet Mozilla: [worklog] Outreach is hard, Webkit aliasing big progress

Tunes of the week: Earth, Wind and Fire. Maurice White, the founder, died at 74.

WebCompat Bugs

WebKit aliasing

  • When looking for usage of -webkit-mask-*, I remembered that Google Images was a big offender. So I tested again this morning and… delight! They now use SVG. So now I need to test Google search extensively and check if they can just send us the version they send to Chrome.
  • Tested Google Calendar again on Gecko with a Chrome user agent to see how close we are to receiving a better user experience. We can't really yet ask Google to send us the same thing they send to Chrome: a couple of glitches here and there, but we are very close. The best outcome would be for Google to fix their CSS, specifically to make the flexbox and gradients standards-compatible.
  • The code for the max-width issue (not a bug, but implementation differences due to an undefined scenario in the CSS specification) is being worked on by David Baron and reviewed by Daniel Holbert. This makes me happy: it should solve a lot of the web compat bug reports. Look at the list of See Also bugs in that bug.

Webcompat Life and Working with Developer Tools

  • Changing preferences all the time through "about:config" takes multiple steps. I liked how in Opera Presto you could link to a specific preference, so I filed a bug for Firefox. RESOLVED. It already exists: about:config?filter=webkit, and it's bookmarkable.
  • Bug 1245365 - searching attribute values through CSS selectors override the search terms
  • A discussion has been started on improving the Responsive Design Mode of Firefox Developer Tools. I suggested a (too big) list of features that would make my life easier.

Firefox OS Bugs to Firefox Android Bugs

  • Mozilla's Web Compatibility employees reduced their support for solving Firefox OS bugs to a minimum. The community is welcome to continue to work on them, but some of these bugs still have an impact on Firefox Android. One good example of this is Bug 959137. Let's come up with a process to deal with those.
  • Another todo from last week: I have been closing a lot of old bugs (around 600 in a couple of days) filed against Firefox OS and Firefox Android in the Tech Evangelism product. The reasons for closing them are mostly:
    • the site doesn't exist anymore. (This goes into my list of Web Compatibility axioms: "Wait long enough, every bug disappears.")
    • the site fixed the initial issue
    • setting layout.css.prefixes.webkit to true fixes it (see Bug 1213126)
    • the site has moved to a responsive design

Bug 812899 - absolutely positioned element should be vertically centered even if the height is bigger than that of the containing block

This bug seemed simple at the beginning, but when providing the fix, it broke other tests. That's normal. Boris explained which parts of the code were impacted. But I don't feel I'm good enough yet to touch this; it would require patience and step-by-step guidance. It could be interesting, though, but I have the feeling I have too much on my plate right now. So: a bug to take over!

Testing Google Search On Gecko With Different UA Strings

So last week I gave myself a todo: "test Google search properties and see if we can find a version which works better on Firefox Android than the current default version sent by Google, maybe by testing with the Chrome UA and iPhone UA." My preliminary tests look pretty good.

Reading List

Follow Your Nose

Otsukare!

Planet Mozilla: Steps Before Considering a Bug "Ready for Outreach"

Sometimes another team at Mozilla will ask the Webcompat team for help contacting site owners, to get an issue fixed on their Web site which hinders the user experience in Firefox. Let's go through some tips to maximize the chances of getting results when we do outreach.

Bug detection


A bug has been reported by a user or a colleague. They probably hit the issue at the moment they tested, and its source is still unknown: a network glitch, a specific add-on configuration, a particular version of Firefox, a broken build of Nightly. Assess whether the bug is reproducible in the most neutral environment possible. And if it's not already done, write "Steps to reproduce" in the comments, describing all the steps required to reproduce the bug.

Analyzing the issue


You have been able to reproduce it; now it is time to understand it. Explain it in very clear terms. Think about the person on the other end who will need to fix the bug. That person might not be a native English speaker, and might not be as knowledgeable as you about Web technologies. Provide links to, and samples of, the code at stake. This will help the person find the appropriate place in the code.

Providing a fix for the issue


While explaining the issue, you might also have found out how to fix it, or at least one way to fix it. It might not be the way the contacted person will fix it. We do not know their tools, but it will help them create an equivalent fix that fits their process. If your proposal is a better practice, explain why it is beneficial: performance, longevity, resilience, etc.

Partly a Firefox bug


The site is not working, but it's not entirely their fault: Firefox changed behavior. The browser became more compliant, or the feature had a bug which is in the process of being fixed. Think twice before asking for outreach. Sometimes it's just better to push a bit more on fixing the bug in Firefox: that has more chance of being useful for all the unknown sites using the feature. If the site is a big popular site, you might still want to ask for outreach, but you need a very good incentive, such as improving performance.

Provide a contact hint


If by chance you already have contacts at the company, share the data, or even try to contact that person directly. If you have information about the company that even bookies don't know, be sure to pass it on to maximize the chances of successful outreach. The hardest part is often finding the right person who can help you fix the issue.

Outreach might fail

And here's the dirty secret: the outreach might not work, or might not be effective right away. Be patient. Be perseverant.


Fixing a Web site costs a lot more than you can imagine. Time and frustration are part of the equation. Outreach is not a magic bullet; sometimes it takes months or even years to fix an issue. Some reasons why the outreach might fail:

  • Impossible to find the right contacts. Sometimes you can send bug reports through the company's official channels of communication and have your bug ignored or misunderstood. For one site, I had been reporting for months through the bug reporting system until I finally decided to try a back door: emailing a developer directly whose details I happened to find online. The bug was fixed in a couple of days.
  • Developers have bosses. They first need to comply with what their bosses told them to do. They might not be in a very good position in the company, have conflicts with the management, etc. Or they just don't have the freedom to take the decision that will fix the issue, even a super simple one.
  • Another type of boss is the client. The client was sold a Web site with a certain budget, and maintenance is always a contentious issue. Web agencies do not work for free, even if the bug was there in the first place, and the client might not have asked for the site to be tested in that specific browser. Channeling a bug up to the client means the Web agency has to bill the client, and the client might not want to pay.
  • Sometimes you will think that you got an easy win: the bug was solved right away. What you do not know is that the developer in charge just put a hack in their code with a beautiful TOFIX that will be crushed at the next change of tools or update.
  • "You just need to upgrade to version X of your JS library": updating the library will break everything else, or will require testing the zillion other features that use this lib in the rest of the site. In a cost/benefit scenario, you have to demonstrate to the dev that the fix is worth their time and the testing.
  • Wrong department. Sometimes you reach the press service, sometimes the communications department, sometimes the IT department in charge of the office backend or commercial operations systems, but not the Web site.
  • The Twitter person is not techy. This happens very often. With the blossoming of social managers (do we still say that?), the people on the front line are usually helpless when it gets really technical. Your only chance is to convince them to communicate with the tech team. But the tech team often despises them, because too often the bugs they bring along are just useless. If the site is an airline company, a bank, or a very consumer-oriented service, just forget trying to contact them through Twitter.
  • The Twitter account is a bot. Check the replies on the account: if there is no meaningful interaction with the public, just find another way.
  • You made contact, and nothing happened. People on the other side forget. I'm pretty sure you are also late replying to this one email or patching that annoying bug. People ;)
  • The site is just not maintained anymore. No budget, no team, nobody to move the issue forward.
  • You might have pissed someone off when making contact. You will never know why: maybe it was not the right day, maybe it was something in your signature, maybe it was the way you addressed them. Be polite, have empathy.

In the end, my message is: look for the bare necessities of life.

Bug images are from American entomology: or description of the insects of North America, illustrated by coloured figures from original drawings executed from nature. Thanks to the New York Public Library.

Otsukare!

Planet Mozilla: Going beyond NS_ProcessNextEvent

If you’ve been debugging Gecko, you’ve probably hit the frustration of having the code you’re inspecting being called asynchronously, and your stack trace rooting through NS_ProcessNextEvent, which means you don’t know at first glance how your code ended up being called in the first place.

Events running from the Gecko event loop are all nsRunnable instances. So at some level close to NS_ProcessNextEvent, in your backtrace, you will see Class::Run. If you’re lucky, you can find where the nsRunnable was created. But that requires the stars to be perfectly aligned. In many cases, they’re not.

There comes your savior: rr. If you don’t know it, check it out. The downside is that you must first rr record a Firefox session doing what you’re debugging. Then, rr replay will give you a debugger with the capabilities of a time machine.
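The workflow, in a nutshell (a sketch; -no-remote and -P are the usual Firefox flags for running against a separate profile):

$ rr record firefox -no-remote -P debug    # reproduce the bug while recording
$ rr replay                                # replay the recording under gdb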

Note, I’m kind of jinxed, I don’t do much C++ debugging these days, so every time I use rr replay, I end up hitting a new error. Tip #1: try again with rr’s current master. Tip #2: roc is very helpful. But my takeaway is that it’s well worth the trouble. It is a game changer for debugging.

Anyways, once you’re in rr replay and have hit your crasher or whatever execution path you’re interested in, and you want to go beyond that NS_ProcessNextEvent, here is what you can do:

(rr) break nsEventQueue.cpp:60
(rr) reverse-continue

(Adjust the line number to match wherever the *aResult = mHead->mEvents[mOffsetHead++]; line is in your tree).

(rr) disable
(rr) watch -l mHead->mEvents[mOffsetHead]
(rr) reverse-continue
(rr) disable

And there you are, you just found where the exact event that triggered the executed code you were looking at was put on the event queue. (assuming there isn’t a nested event loop processed during the first reverse-continue)

Rinse and repeat.

Planet Mozilla: rr Talk At linux.conf.au

For the last few days I've been attending linux.conf.au, and yesterday I gave a talk about rr. The talk is now online. It was a lot of fun and I got some good questions!

Planet Mozilla: February 2016 Featured Add-ons

Pick of the Month: Proxy Switcher

by rNeomy
Access all of Firefox’s proxy settings right from the toolbar panel.

“Exactly what I need to switch on the fly from Uni/Work to home.”

Featured: cyscon Security Shield

by patugo GmbH
Cybercrime protection against botnets, malvertising, data breaches, phishing, and malware.

“The plugin hasn’t slowed down my system in any way. Was especially impressed with the Breach notification feature—pretty sure that doesn’t exist anywhere else.”

Featured: Decentraleyes

by Thomas Rientjes
Evade ad tracking without breaking the websites you visit. Decentraleyes works great with other content blockers.

“I’m using it in combination with uBlock Origin as a perfect complement.”

Featured: VimFx

by akhodakivkiy, lydell
Reduce mouse usage with these Vim-style keyboard shortcuts for browsing and navigation.

“It’s simple and the keybindings are working very well. Nice work!!”

Featured: Saved Password Editor

by Daniel Dawson
Adds the ability to create and edit entries in the password manager.

“Makes it very easy to login to any sight, saves the time of manually typing everything in.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months, so there’s always an opportunity to participate. Stay tuned to this blog for the next call for applications.

If you’d like to nominate an add-on for featuring, please send it to amo-featured@mozilla.org for the board’s consideration. We welcome you to submit your own add-on!

Planet Mozilla: What’s up with SUMO – 4th February

Hello, SUMO Nation!

Last week went by like lightning, mainly due to FOSDEM 2016, but also due to the year speeding up – we’re already in February! What are the traditional festivals in your region this month? Let us know in the comments!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

  • Philipp – for his continuous help with Firefox Desktop and many other aspects of Mozilla and SUMO – Vielen Dank!

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting…

  • is happening on Monday the 8th of February – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Developers

Community

Social

Support Forum

Knowledge Base

Localization

  • Please check the Firefox for iOS section below for an important announcement!

Firefox

And that’s it – short and sweet for your reading pleasure. We hope you have a great weekend and we are looking forward to seeing you on Monday! Take it easy and keep rocking the helpful web. Over & out!

Planet Mozilla: Web QA Weekly Meeting, 04 Feb 2016

This is our weekly gathering of Mozilla's Web QA team, filled with discussion on our current and future projects, ideas, demos, and fun facts.

Planet Mozilla: Reps weekly, 04 Feb 2016

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet Mozilla: MOSS Applications Still Open

I am currently running the MOSS (Mozilla Open Source Support) program, which is Mozilla’s program for assisting other projects in the open source ecosystem. We announced the first 7 awardees in December, giving away a total of US$533,000.

The application assessment process has been on hiatus while we focussed on getting the original 7 awardees paid, and while the committee were on holiday for Christmas and New Year. However, it has now restarted. So if you know of a software project that could do with some money and that Mozilla uses or relies on (note: that list is not exhaustive), now is the time to encourage them to apply. :-)

Planet WebKit: Claudio Saavedra: Thu 2016/Feb/04

We've opened a few positions for developers in the fields of multimedia, networking, and compilers. I could say a lot about why working at Igalia is very different from working at your average tech company or start-up, but I think the way it's summarized in the announcements is pretty good. Have a look at them if you are curious, and don't hesitate to apply!

Planet Mozilla: Ahmedabad, India



Planet Mozilla: Why was Tab Groups (Panorama) removed?

Firefox 44 has been released, and it has started warning users of Tab Groups about its removal in Firefox 45. There were a number of reasons that led to the removal of Tab Groups. This post will aim to talk about each in a little bit more detail.

The removal happened in the context of “Great or Dead”, where we examine parts of Firefox, look at their cost/benefit balance, and sometimes decide to put resources into improving them, and sometimes decide to recognize that they don’t warrant that and remove that part of the browser.

For Tab Groups, here are some of the things we considered:

  • It had a lot of bugs: a number of serious issues relating to performance; a lot of its tests failed intermittently; group and window closing was buggy; and there was a huge pile of smaller issues: you couldn’t move tabs represented as large squares to groups represented as small squares; sometimes you could get stuck in it, or groups would randomly move; and the list goes on. The quality simply wasn’t what it should be, considering we ship it to millions of users, which was part of the reason why it was hidden away as much as it was.
  • The Firefox team does not believe that the current UI is the best way to manage large numbers of tabs. Some of the user experience and design folks on our team have ideas in this area, and we may revisit “managing large numbers of tabs” at some point in the future. We do know that we wouldn’t choose to re-implement the same UI again. It wouldn’t make sense to heavily invest in a feature that we should be replacing with something else.
  • It was interfering with other important projects, like electrolysis (multi-process Firefox). When using separate processes for separate tabs, we need to make certain behaviours that used to be synchronous deal with being asynchronous. The way that Tab Groups’ UI was interwoven with the tabbed browser code, and the way the UI effectively hid all the tabs and showed thumbnails for all of them instead, made this harder for things like tab switching and tab closing.
  • It had a number of serious code architecture problems. Some of the animation and library choices caused intermittent issues for users as linked to earlier. All of the groups were stored with absolute pixel positions, creating issues if you change your window size, use a different screen or resolution, etc. When we added a warning banner to the bottom of the UI telling users we were going to remove it, that interfered with displaying search results. The code is very fragile.
  • It was a large feature. By removing tab groups we removed more than 24,000 lines of code from Firefox.

With all these issues in mind, we had to decide whether it was better to invest in making it a great feature in Firefox, or to remove the code and focus on other improvements to Firefox. When we investigated usage data, we found that only an extremely small fraction of Firefox users were making use of Tab Groups: around 0.01%. Such low usage couldn’t justify the massive amount of work it would take to improve Tab Groups to an acceptable quality level, and so we chose to remove it.

If you use Tab Groups, don’t worry: we will preserve your data for you, and there are add-ons available that can make the transition completely painless.

Planet Mozilla: A quiz about ES2015 block-scoped function declarations (in a with block statement)

Quiz time, nerds.

Given the following bit of JS, what's the value of window.f when lol gets called outside of the with statement?

with (NaN) {
  window.f = 1;
  function lol(){window.f = 2};
  function lol(){window.f = 3};
}
lol()

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Trick question, your program crashed before it got to call lol()!

According to ES2015, it should be a SyntaxError, because you're redefining a function declaration in the same scope. Just like if you were re-declaring a let thingy more than once.
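For comparison, here is the let version of the same mistake:

let dupe = 1;
let dupe = 2; // SyntaxError: redeclaration of let dupe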

However, the real trick is that Chrome and Firefox will just ignore the first declaration so your program doesn't explode (for now, anyways). So the answer is really just 3 (which you probably guessed).

Double-tricked!

Not surprisingly there are sites out there that depend on this funky declared-my-function-twice-in-the-same-scope pattern. webex.com was one (bug here), but they were super cool and fixed their code already. The Akamai Media Player on foxnews.com (bug here) is another (classic foxnews.com move).

It would be really cool if browsers didn't have to do this, so if you know anybody who works on the Akamai Advanced Media Player, tell them to delete the second declaration of RecommendationsPlugin()? And if you see an error in Firefox that says redeclaration of block-scoped function 'coolDudeFunction' is deprecated, go fix that too — it might stop working one day.

Now don't forget to like and subscribe to my Youtube ES2016 Web Compat Pranks channel.

(The with(NaN){} bit isn't important, but it was lunch time when I wrote this.)

Planet WebKit: Michael Catanzaro: On Subresource Certificate Validation

Ryan Castellucci has a quick read on subresource certificate validation. It is accurate; I fixed this shortly after joining Igalia. (Update: This was actually in response to a bug report from him.) Run his test to see if your browser is vulnerable.

Epiphany, Xombrero, Opera Mini and Midori […] were loading subresources, such as scripts, from HTTPS servers without doing proper certificate validation. […] Unfortunately Xombrero and Midori are still vulnerable. Xombrero seems to be dead, and I’ve gotten no response from them. I’ve been in touch with Midori, but they say they don’t have the resources to fix it, since it would require rewriting large portions of the code base in order to be able to use the fixed webkit.

I reported this to the Midori developers in late 2014 (private bug). It’s hard to overstate how bad this is: it makes HTTPS completely worthless, because an attacker can silently modify JavaScript loaded via subresources.

This is actually a unique case in that it’s a security problem that was fixed only thanks to the great API break, which has otherwise been the cause of many security problems. Thanks to the API break, we were able to make the new API secure by default without breaking any existing applications. (But this does no good for applications unable to upgrade.)

(A note to folks who read Ryan’s post: most mainstream browsers do silently block invalid certificates, but Safari will warn instead. I’m not sure which behavior I prefer.)

Planet Mozilla: Prototyping Firefox Mobile (or, Designers Aren’t Magicians)

I like to prototype things. To me—and surely not me alone—design is how something looks, feels, and works. These are very hard to gauge just by looking at a screen. Impossible, some would argue. Designs (and by extension, designers) solve problems. They address real needs. During the creative process, I need to see them in action, to hold them in my hand, on the street and on the train. In real life. So, as often as not, I need to build something.

Recently we’ve been exploring a few interesting ideas for Firefox Mobile. One advantage we have, as a mobile browser, is the idea of context—we can know where you are, what time it is, what your network connection is like—and more so than on other platforms utilize that context to provide a better experience. Not many people shop at the mall or wait in line at the bank with their laptop open, but in many of Firefox’s primary markets, people will have their phone with them. Some of this context could help us surface better content or shortcuts in different situations… we think.

The first step was to decide on scope. I sketched a few of these ideas out and decided which I would test as a proof of concept: location-aware shortcut links, a grouped history view, and some attempt at time-of-day recommendations. I wanted to test these ideas with real data (which, in my opinion, is the only legitimate way to test features of this nature), so I needed to find a way to make my history and other browser data available to my prototype. This data is available in our native apps, so whatever form my prototype took, it would need access to this data in some way. In many apps or products, the content is the primary focus of the experience, so making sure you shift from static/dummy content to real/dynamic content as quickly as possible is important. Edge cases like ultra-long titles or poor-quality images are real problems your design should address, and these will surface far sooner if you’re able to see your design with real content.

Next I decided (quickly) on some technology. The only criteria here was to use things that would get me to a testable product as quickly as possible. That meant using languages I know, frameworks to take shortcuts, and to ask for help when I was beyond my expertise. Don’t waste time writing highly abstracted, super-modular code or designing an entire library of icons for your prototypes… take shortcuts, use open-source artwork or frameworks, and just write code that works.

I am most comfortable with web technologies—I do work at Mozilla, after all—so I figured I’d make something with HTML and CSS, and likely some Javascript. However, our mobile clients (Firefox for Android and Firefox for iOS) are written in native code. I carry an iPhone most days, so I looked at our iOS app, which is written in Swift. I figured I could swap out one of the views with a web view to display my own page, but I still needed some way to get my browsing data (history, bookmarks, etc.) down into that view. Turns out, the first step in my plan was a bit of a roadblock.

Thankfully, I work with a team of incredible engineers, and my oft-co-conspirator Steph said he could put something together later that week. It took him an afternoon, I think. Onward. Even if I thought I could hack this together myself, I wasn’t sure, and didn’t want to waste time.

🔑 Whenever possible, use tools and frameworks you’ve used before. It sounds obvious, but I could tell you some horror stories of times where I wasted countless hours just trying to get something new to work. Save it for later.

In the meantime, I got my web stack all set up: using an off-the-shelf boilerplate for webpack and React (which I had used before), I got the skeleton of my idea together. Maybe overkill at this point, but having this in place would let me quickly swap new components in and out to test other ideas down the road, so I figured the investment was worth it. Because the location idea was not dependent on the user’s existing browser data, I could get started on that while Steph built the WebPanel for me.

Working for now in Firefox on the desktop, I used the Geolocation API to get the current coordinates of the user. Appending that to a Foursquare API url and performing a GET request, I now had a list of nearby locations. Using Lodash.js I filtered them to only include records with attached URLs, then sorted by proximity.


var query = "FoursquareAPI+MyClientID"

navigator.geolocation.getCurrentPosition(function(position){
  var ll = position.coords.latitude + "," + position.coords.longitude
  $.get(query + ll, function(data) {
    // keep only venues that have a URL attached
    data = _.filter(data.response.venues, function(venue){
      return venue.url != null
    })
    // sort by proximity before storing in component state
    comp.setState({
      foursquareData: _.sortBy(data, function(venue){
        return venue.location.distance
      })
    })
  })
})

 

Step 1 of my prototype, done. Well, it worked in the desktop browser at least. I knew our mobile web view supported the same Geo API, so I was confident this would work there as well (and, it did).

At this point, Steph had built some stuff I could work with. By building a special branch of Firefox iOS, I now had a field in the settings app which let me define a URL which would load in one of my home panels instead of the default native views. One of the benefits of this approach is that I could update the web-app remotely and not have to rebuild/redeploy the native app with each change. And by using a tool like ngrok I could actually have that panel powered by a dev server running on my machine’s localhost.
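Exposing that local server is a one-liner (assuming the dev server listens on port 8080):

$ ngrok http 8080    # prints a public forwarding URL to paste into the panel setting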


Steph’s WebPanel.swift provided me with a simple API to query the native profile for data, seen here:


window.addEventListener("load", function () {
  webkit.messageHandlers.mozAPI.postMessage({
    method: "getSitesByLastVisit",
    params: {
      limit: 10000
    },
    callback: "receivedHistory"
  });
});

Here, I’m firing off a message to our mozAPI once the page has loaded, and passing it some parameters: the method I’d like to run and the limit on the number of records returned. Lastly, the name of a callback for the iOS app to pass the result of the query to.


window.receivedHistory = function(err, data) {
  store.dispatch(updateHistory(data));
}

This is the callback in my app, which just updates the flux store with the data passed from the native code.

At this point, I had a flux-powered app that could display native browser data through react views. This was enough to get going with, and let me start to build some of the UI.

Steph had stubbed out the API for me and was passing down a JSONified collection of history visits, including the URL and title for each visit. To build the UI I had in mind, however, I needed the timestamps and icons too. Thankfully, I contributed a few hundred lines of Swift to Firefox for iOS 1.0, and could hack these in:


extension Site: DictionaryView {
    func toDictionary() -> [String: AnyObject] {
        let iconURL = icon?.url != nil ? icon?.url : ""
        return [
            "title": title,
            "url": url,
            "date": NSNumber(unsignedLongLong: (latestVisit?.date)!),
            "iconURL": iconURL!,
        ]
    }
}

Which gave me the following JSON passed to the web view:


[
  {
    "title": "We’re building a better internet — Mozilla",
    "url": "http://mozilla.org",
    "date": "1454514630131",
    "iconURL": "/media/img/favicon.52506929be4c.ico"
  },
  …
]

Firstly, try not to judge my Swift skills. The purpose here was to get it working as quickly as possible, not to ship this code. Hacks are allowed, and encouraged, when prototyping. I added date and iconURL fields to the history record object and before long, I was off to the races.

With timestamps and icons in hand, I could build the rest of the UI. A simple history view that batched visits by domain (so 14 Gmail links would collapse to 3 and “11 more…”), and a quick attempt at a time-based recommendation engine.

This algorithm may be ugly, but it naively does one thing: depending on the time of day and day of the week, it returns some guesses about which sites I may be interested in, based on past browsing behaviour. It worked by following these steps (a rough sketch in code follows the list):

  1. Filter my entire history to only include visits from the same day type (weekday vs. weekend)
  2. Exclude domains that are useless, like t.co or bit.ly
  3. Further filter the set of visits to only include visits +/- some buffer around the current time: the initial prototype used a buffer of +/- one hour
  4. Group the visits by their TLD + one level of path (i.e. google.com/document), which gave me better groups to work with
  5. Sort these groups by length, to provide an array with the most popular domain at the beginning (and limit this output to the top n domains, 10 in my case)
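Here is that sketch, in the same ES5/Lodash style as the rest of the prototype. The history array shape (url plus millisecond date string, as in the history JSON earlier) and the one-hour buffer are assumptions:

var BUFFER = 60 * 60 * 1000; // +/- one hour around "now"
var now = new Date();
var isWeekend = now.getDay() === 0 || now.getDay() === 6;

// milliseconds elapsed since midnight, for time-of-day comparisons
function msIntoDay(d) {
  return ((d.getHours() * 60 + d.getMinutes()) * 60 + d.getSeconds()) * 1000;
}

var recommendations = _.chain(history)
  // 1. same day type (weekday vs. weekend) as today
  .filter(function (visit) {
    var day = new Date(Number(visit.date)).getDay();
    return (day === 0 || day === 6) === isWeekend;
  })
  // 2. exclude useless shortener domains
  .reject(function (visit) {
    return /(^|\.)(t\.co|bit\.ly)$/.test(new URL(visit.url).hostname);
  })
  // 3. only visits within the buffer around the current time of day
  .filter(function (visit) {
    var then = new Date(Number(visit.date));
    return Math.abs(msIntoDay(then) - msIntoDay(now)) <= BUFFER;
  })
  // 4. group by TLD + one level of path, e.g. "google.com/document"
  .groupBy(function (visit) {
    var u = new URL(visit.url);
    return u.hostname + "/" + (u.pathname.split("/")[1] || "");
  })
  // 5. most-visited groups first, top 10 only
  .map(function (visits, domain) {
    return { domain: domain, date: visits[0].date, count: visits.length };
  })
  .sortBy(function (group) { return -group.count; })
  .take(10)
  .value();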

The output is similar to the following:


[
  {
    "domain": "http://flickr.com/photos",
    "date": "Wed Feb 03 2016 10:54:02 GMT-0500",
    "count": 3
  },
  {
    "domain": "www.dpreview.com/forums",
    "date": "Wed Feb 03 2016 10:54:02 GMT-0500",
    "count": 2
  },
  …
]

Awesome. Now I have a Foursquare-powered component at the top which lists nearby URLs. Below that, a component that shows me the 5 websites I visit most often around this time of day. And next, a component that shows my history in a slightly improved format, with domain-grouping and truncation of long lists of related visits. All with my actual data, ready for me to use this week and evaluate these ideas.

One problem surfaces, though. Any visits that came from another device (through Firefox Sync) have no icon attached to them (right now, we don’t sync favicons across devices), which leaves us with long series of visits without icons. One of the hypotheses we want to confirm is that the favicon (among other visual cues) helps the user parse and understand the lists of URLs we present them with.

🔑 Occasionally I’ll be faced with a problem like this: one where I know the desired outcome but have not tackled something like it before, and so have low confidence in my ability to fix it quickly. I knew I needed some way to get icons for a set of URLs, but not how exactly that would work. At this point it’s crucial to remember one of the goals of a prototype: get to a testable artifact as quickly as possible. Often in this situation I’ll time-box myself: if I can get something working in a few hours, great. If not, move on or just fake it (maybe having a preset group of icons I could assign at random would help address the question).

Again, I turned to my trusty toolbox, where I know the tools and how to use them. In this case that was Node and Express, and after a few hours I had an app running on Heroku with a simple API. I could POST an array of URLs to my /icons endpoint, and my Node app would spin up a series of parallel tasks (using async.parallel). Each task would load the URL via Node’s request module and hand the HTML in the response over to cheerio, a server-side analog for jQuery. Using cheerio I could grab all the <meta> and <link> tags, check for a number of known values (‘icon’, ‘apple-touch-icon’, etc.) and grab the URL associated with each. While I was there, I figured I might as well capture a few other tags, such as Facebook’s OpenGraph og:image tag. A condensed sketch follows.
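This sketch compresses the endpoint described above (the real app was 164 lines; the in-memory cache and most error handling are omitted, and the selector list is trimmed):

var express = require("express");
var bodyParser = require("body-parser");
var request = require("request");
var cheerio = require("cheerio");
var async = require("async");

var app = express();
app.use(bodyParser.json());

app.post("/icons", function (req, res) {
  // One scraping task per URL; async.parallel keys the results object
  // by task name, i.e. by URL, which gives us the response shape below.
  var tasks = {};
  req.body.urls.forEach(function (url) {
    tasks[url] = function (done) {
      // URLs arrive without a scheme ("facebook.com"), so prepend one
      request({ url: "http://" + url, timeout: 5000 }, function (err, resp, html) {
        if (err) return done(null, { icons: [], images: [] });
        var $ = cheerio.load(html);
        var icons = [], images = [];
        $("link[rel='shortcut icon'], link[rel='icon'], link[rel='apple-touch-icon']").each(function () {
          icons.push({ type: $(this).attr("rel"), url: $(this).attr("href") });
        });
        // grab OpenGraph lead images while we're in there
        $("meta[property='og:image']").each(function () {
          images.push({ type: "og-image", url: $(this).attr("content") });
        });
        done(null, { icons: icons, images: images });
      });
    };
  });
  async.parallel(tasks, function (err, results) {
    res.json(results);
  });
});

app.listen(process.env.PORT || 3000);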

Once each of the parallel requests either completed or timed out, I combined all the extracted data into one JSON object and sent it back down the wire. A sample request may look like this:


POST to '/icons'

{ "urls": ["facebook.com"] }

And the response would look like this (keyed to the URL so the app that requested it can associate icons and images with the right URL; the above array could contain any number of URLs, and the below response would just have more top-level keys):


{
  "facebook.com": {
    "icons": [
      {
        "type": "shortcut-icon",
        "url": "https://static.xx.fbcdn.net/rsrc.php/yV/r/hzMapiNYYpW.ico"
      }, …
    ],
    "images": [
      {
        "type": "og-image",
        "url": "http://images.apple.com/ca/home/images/og.jpg?201601060653"
      }, …
    ]
  }
}

Again, maybe not the best API design, but it works and only took a few hours. I added a simple in-memory cache so that subsequent requests for icons or images for URLs we’ve already fetched are returned instantly. The entire Express app was 164 lines of JavaScript, including all requires, comments, and error handling. It’s also generic enough that I can now use it for other prototypes where metadata such as favicons or lead images is needed for any number of URLs.


So why do all this work? Easy: because we have to. Things that just look pretty have become a commodity, and beyond being nice to look at they don’t serve much purpose. As designers, product managers, engineers—anyone who makes things—we are responsible for delivering real value to our users. Features and apps they will actually use, and when they use them, they will work well. They will work as expected, and even go out of their way to provide a moment of delight from time to time. It should be clear that the people who designed this “thing” actually used it. That it went through a number of iterations to get right. That it was no accident or coincidence that what you are holding in your hands ended up the way it is. That the designers didn’t just guess at how to solve the problem, but actually tried a few things to really understand it at a fundamental level.

Designers are problem solvers, not magicians. It is relatively cheap to pivot an idea or tweak an interface in the design phase, versus learning something in development (or worse, post-launch) and having to eat the cost of redesigning and rebuilding the feature. Simple ideas often become high-value features once their utility is seen with real use. Sometimes you get 95% of the way, and see how a minor revision can really push an idea across the finish line. And, realistically, sometimes great ideas on paper utterly flop in the field. Better to crash and burn in the hands of your trusted peers than out in the market, though.

Test your ideas, test them with real content or data, and test them with real people.

Planet MozillaThe Joy of Coding - Episode 43

The Joy of Coding - Episode 43 mconley livehacks on real Firefox bugs while thinking aloud.

Planet Mozilla38 hours

[Map: train route, Bengaluru to Ahmedabad]
From Bengaluru to Ahmedabad, immersed in the train ecosystem for 38 hours.

[Photos: Caravan 2016 India, 1608 km in 37:45 hours]
Filed under: Mozilla, Photography, Research Tagged: Connected Spaces, ethnography, India, journey, mobile office, research, train

Planet Mozillarr 4.1.0 Released

This release mainly improves replay performance dramatically, as I documented in November. It took a while to stabilize for release, partly because we ran into a kernel bug that caused rr tests (and sometimes real rr usage) to totally lock up machines. This release contains a workaround for that kernel bug. It also contains support for the gdb find command, and fixes for a number of other bugs.

Planet MozillaWebExtensions in Firefox 46

We last updated you on our progress with WebExtensions when Firefox 45 landed in Developer Edition (Aurora), and today we have an update for Firefox 46, which landed in Developer Edition last week.

While WebExtensions will remain in an alpha state in Firefox 46, we’ve made lots of progress, with 40 bugs closed since the last update. As of this update, we are still on track for a milestone release in Firefox 48 when it hits Developer Edition. We encourage you to get involved early with WebExtensions, since this is a great time to participate in its evolution.

A focus of this release was quality. All code in WebExtensions now passes eslint, and we’ve fixed a number of issues with intermittent test failures and timeouts. We’ve also introduced new APIs in this release (a short usage sketch follows the list), including:

  • chrome.notifications.getAll
  • chrome.runtime.sendMessage
  • chrome.webRequest.onBeforeRedirect
  • chrome.tabs.move
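
As a rough, untested sketch (the tab query and message payload here are made up for illustration), a background script might exercise a few of these like so:

// List the IDs of all notifications created by this extension.
chrome.notifications.getAll(notifications => {
  console.log(Object.keys(notifications));
});

// Move the active tab to the front of the tab strip.
chrome.tabs.query({ active: true, currentWindow: true }, tabs => {
  chrome.tabs.move(tabs[0].id, { index: 0 });
});

// Ping another part of the extension and log the reply.
chrome.runtime.sendMessage({ action: 'ping' }, reply => {
  console.log('got reply', reply);
});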

Create customizable views

In addition to the new APIs, support was added for second-level popup views in bug 1217129, giving WebExtension add-ons the ability to create customizable views.

Check out this example from the Whimsy add-on:
[Screenshot: the Whimsy add-on’s two-level browser action popup (bug 1217129)]

Create an iFrame within a page

The ability to create an iFrame that is connected to the content script was added in bug 1214658. This allows you to create an iFrame within a rendered page, which gives WebExtension add-ons the ability to add additional information to a page, such as an in-page toolbar:

[Demo: an in-page toolbar built with a content-script iframe (bug 1214658)]

For additional information on how to use these additions to WebExtensions, (and WebExtensions in general), please check out the examples on MDN or GitHub.

Upload and sign on addons.mozilla.org (AMO)

WebExtension add-ons can now be uploaded to and signed on addons.mozilla.org (AMO). This means you can sign WebExtension add-ons for release. Listed WebExtension add-ons can be uploaded to AMO, reviewed, published and distributed to Firefox users just like any other add-on. The use of these add-ons on AMO is still in beta and there are areas we need to improve, so your feedback is appreciated in the forum or as bugs.

Get involved

Over the coming months we will work our way towards a beta in Firefox 47 and the first stable release in Firefox 48. If you’d like to jump in to help, or get your APIs added, please join us on our mailing list or at one of our public meetings, or check out this wiki page.

Planet MozillaWebdev Extravaganza: February 2016

Webdev Extravaganza: February 2016 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.

Planet MozillaFirefox Desktop automation goals Q1 2016

As promised in my last blog post, I don’t want to blog only about the goals from past quarters, but also about planned work and what’s currently in progress. So this post will be the first to shed some light on my active work.

First, let’s get started with my goals for this quarter.

Execute firefox-ui-tests in TaskCluster

Now that our tests are located in mozilla-central, mozilla-aurora, and mozilla-beta, we want to see them run on a per-check-in basis, including on try. Usually you would set up Buildbot jobs to run your desired tasks. But given that the build system will be moved to Taskcluster in the next couple of months, we decided to start directly with the new CI infrastructure.

So what will this look like, and how will mozmill-ci cope with it? For the latter I can say that we don’t want to run more tests than we do right now. This is mostly due to our limited infrastructure, which I have to maintain myself. Running firefox-ui-tests for each check-in on all platforms, and even for try pushes, would totally exceed our machine capacity. Therefore we will continue to use mozmill-ci for now to test nightly and release builds for en-US, as well as a couple of other locales. This might change later this year, when mozmill-ci can be replaced by running all the tasks in Taskcluster.

Anyway, for now my job is to get the firefox-ui-tests running in Taskcluster once a build task has finished. Although this can only be done for Linux right now, it shouldn’t matter much, given that nothing in our firefox-puppeteer package is platform-dependent so far. Expanding testing to other platforms should be trivial later on. For now the primary goal is to see test results of our tests in Treeherder and let developers know what needs to be changed if, e.g., UI changes are causing a regression for us.

If you are interested in more details have a look at bug 1237550.

Documentation of firefox-ui-tests and mozmill-ci

We have been submitting our test results to Treeherder for a while now, and they are pretty stable. But the jobs are still listed as Tier-3 and are not taken care of by sheriffs. To reach the Tier-2 level we definitely need proper documentation for our firefox-ui-tests, and especially mozmill-ci. In case of test failures or build bustage, the sheriffs have to know what needs to be done.

Now that the dust from all the refactoring and the move of the firefox-ui-tests to hg.mozilla.org has settled a bit, we want to start working more with contributors again. To make contributing easy, I will create various pieces of project documentation showing how to get started and how to submit patches. Ultimately I want to see a Quarter of Contribution project for our firefox-ui-tests around the middle of this year. Let’s see how this goes…

More details about that can be found on bug 1237552.

Planet MozillaAll the small things at Awwwards Amsterdam

Last week, I cut my holiday in the Bahamas short to go to the Awwwards conference in Amsterdam and deliver yet another fire and brimstone talk about performance and considering people outside of our sphere of influence.

[Photo: me at Awwwards. Photo by Trine Falbe]

The slides are on SlideShare:

The screencast of the talk is on YouTube:

I want to thank the organisers for allowing me to vent a bit and I was surprised to get a lot of good feedback from the audience. Whilst the conference, understandably, is very focused on design and being on the bleeding edge, some of the points I made hit home with a lot of people.

Especially the mention of Project Oxford and its possible implementations in CMS got a lot of interest, and I’m planning to write a larger article for Smashing Magazine on this soon.

Planet MozillaMartes mozilleros, 02 Feb 2016

Martes mozilleros Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Planet MozillaEncapsulation in Redux: the Right Way to Write Reusable Components


Redux is one of the most discussed JavaScript front-end technologies. It has defined a few paradigms that vastly improve developer efficiency and give us a solid foundation for building apps in a simple yet robust way. No framework is perfect, though, and even Redux has its drawbacks.

Redux has the special power to make you feel like a super-programmer who can effortlessly write flawless code. Enjoyable though this may be, it’s actually a huge issue, because it blinds you with its simplicity and may therefore lead to situations where your shiny code turns into an unmaintainable mess. This is especially true for those who start a new project without any prior experience with Redux.

Encapsulation Matters

Global application state is arguably the biggest benefit of Redux. On the other hand, it is a bit of a mind-bender because we were always told that global stuff is evil. Not necessarily: there is nothing wrong with keeping the entire application state snapshot in one place, as long as you don't break encapsulation. Unfortunately, with Redux this is all too easy.

Just imagine a simple case where we have a team of two people: Dan and Andre. Both of them were given a very simple task: to implement an independent component. Dan is supposed to create Foo, and Andre is supposed to create Bar. The components are similar in that each should contain only a button which acts as a toggle for some string. (Obviously this is an artificial example provided for illustrative purposes.)


Since the state of each component is distinct, and there are no dependencies between them, we can encapsulate them simply by using combineReducers. This gives each reducer its own slice of the application state, so it can only access the relevant part: fooReducer will get foo and barReducer will get bar. Andre and Dan can work independently, and they can even open-source the components, because they are completely self-contained.
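
A minimal sketch of that wiring, assuming fooReducer and barReducer are defined as described:

import { combineReducers } from 'redux';

// Each reducer sees only its own slice: fooReducer gets state.foo,
// barReducer gets state.bar.
const rootReducer = combineReducers({
  foo: fooReducer,
  bar: barReducer
});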

But one day, some evil person (from marketing, no doubt) tells Andre that he must extend the app and should embed a Qux component in his Bar. Qux is a simple counter with a button that just increments a number when clicked. So far so good: Andre just needs to embed the Qux in his Bar and provide an isolated application state slice so that Bar does not know anything about the internal implementation of Qux. This ensures that the principle of encapsulation is not violated. The application state shape could look like this:

{
  foo: {
    toggled: false
  },
  bar: {
    toggled: false,
    qux: {
      counter: 0
    }
  }
}

However, there was one more requirement: Andre should also cooperate with Dan, because the counter must be incremented anytime Foo is toggled. Now it's getting more complicated, and it looks like some sort of interface will be needed.


Now we can see the problem with the way Redux blinds developers to potential architectural missteps. People tend to seize on the fact that Redux provides global application state. The simplest solution would therefore be to get rid of combineReducers and provide the entire application state to fooReducer, which could increment counter internally in the bar branch of the app state tree. Naturally this is totally wrong and breaks the principle of encapsulation. You never want to do this, because the logic hidden behind incrementing the counter may be much more complicated than it seems. As a result, this solution does not scale. Anytime you change the implementation of qux, you’ll need to change the implementation of foo as well, which is a maintainability nightmare.

"But wait!" I hear you saying. The whole point of Redux is that you can handle a given action in multiple reducers, right? This suggests that we should handle the TOGGLE_FOO action in quxReducer and increment the counter there. This is a bad idea for a couple of reasons.

For starters, Qux becomes coupled with Foo because it needs to know about its internal details. More importantly, Reto Schläpfer makes a compelling case that it quickly becomes difficult to reason about code where the results of an action are spread across the codebase. It is much better to compose your reducers so that a higher-level reducer handles each action in a single place and delegates processing to one or more lower-level reducers.

Composition is the New Encapsulation

As you might have spotted, we are suggesting that in addition to composing components and composing state, we should compose reducers as well.

Concretely, this means that Andre needs to define a public interface for his Qux component: an incrementClicks function.

export const incrementClicks = quxState => ({...quxState, counter: quxState.counter + 1});

This implementation is completely encapsulated and is specific to the Qux component. The function is exported because it is part of the public interface. Now we use this function to implement the reducer:

const quxReducer = (quxState = {counter: 0}, { type }) => {
  switch (type) {
    case 'QUX_INCREMENT':
      return incrementClicks(quxState); // The public method is called
    default:
      return quxState;
  }
};

Because Bar embeds Qux's application state slice, barReducer is responsible for delegating all actions to quxReducer using straightforward function composition. The barReducer might look something like this:

const barReducer = (barState = {toggled: false}, action) => {  
  const { type } = action;

  switch (type) {
    case 'TOGGLE_BAR':
      return {...barState, toggled: !barState.toggled};
    default:
      return {
        ...barState,
        qux: quxReducer(barState.qux, action) // Reducer composition
      };
  }
};

Now we are ready for some more advanced reducer composition, since we know that Qux should also increment when Foo is toggled. At the same time, we want Foo to be completely unaware of Qux's internals. We can define a top-level reducer that holds state for Foo and Bar and delegates the right portion of the app state to incrementClicks. Only rootReducer, which aggregates fooReducer and barReducer, will be aware of the interdependency between the two.

const rootReducer = (appState = {}, action) => {  
  const { type } = action;

  switch (type) {
    case 'TOGGLE_FOO':
      return {
        ...appState,
        foo: fooReducer(appState.foo, action),
        bar: {...appState.bar, qux: incrementClicks(appState.bar.qux)}
      };

    default:
      return {
        ...appState,
        foo: fooReducer(appState.foo, action),
        bar: barReducer(appState.bar, action)
      };
  }
};

The default case acts exactly like combineReducers. The TOGGLE_FOO handler, on the other hand, glues the reducers together while handling the interdependency. There is still one line which is pretty ugly:

bar: {...appState.bar, qux: incrementClicks(appState.bar.qux)}  

The problem is that this line depends on implementation details of Bar (state shape). We would rather encapsulate these details behind the public interface of the Bar reducer:

export const incrementClicksExposedByBar = barState => ({...barState, qux: incrementClicks(barState.qux)});  

And now it's just a matter of using the public interface:

    ...
    case 'TOGGLE_FOO':
      return {
        ...appState,
        foo: fooReducer(appState.foo, action),
        bar: incrementClicksExposedByBar(appState.bar)
      };

I've prepared a complete working codepen so that you can try this out on your own.

Planet MozillaEnd of MozCI QoC term

We recently completed another edition of Quarter of Contribution, and I had the privilege of working with MikeLing, F3real & xenny.
I want to take a moment to thank all three of you for your hard work and contributions! It was a pleasure to work together with you during this period.

Some of the highlights of this term are:

You can see all of the other mozci contributions here.

One of the things I learned from this QoC term:
  • Prepare sets of issues that are related which build towards a goal or a feature.
    • The better you think it through the easier it will be for you and the contributors
    • In GitHub you can create milestones of associated issues
  • Remind them to review their own code.
    • This is something I try to do for my own patches and saves me from my own embarrassment :)
  • Put it on the contributors to test their code before requesting formal review
    • It forces them to test that it does what they expect it to do
  • Set expectations for review turn around.
    • I could not be reviewing code every day since I had my own personal deliverables. I set Monday, Wednesday and Friday as code review days.
It was a good learning experience for me and I hope it was beneficial for them as well.



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Planet Mozillahappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1243246] Attachment data submitted via REST API must always be base64 encoded
  • [1213424] The Bugzilla autocomplete dropdown should expand the width to show the full text of a match
  • [1241667] Trying to report a bug traps the user in an infinite loop
  • [1188236] “Congratulations on having your first patch approved” email should be clearer about how to get the patch landed.
  • [1243051] Create one off script to output cpanfile with all modules and their current versions to be used for version pinning
  • [1244604] configure nagios alerting for the bmo/reviewboard connector
  • [1244996] add a script to manage a user’s settings

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Dev.OperaOpera 35 released

Opera 35 (based on Chromium 48) for Mac, Windows, Linux and Android is out! To find out what’s new for users, see our Desktop and Mobile blogs. Here’s what it means for web developers.

ES6: more well-known symbols

Two more well-known symbols have been implemented:

  • Symbol.isConcatSpreadable points to a boolean value that indicates if an object should be flattened to its array elements by Array.prototype.concat (true) or not (false).
  • Symbol.toPrimitive points to a method that converts an object to a corresponding primitive value.
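
A quick illustration of both (the object shapes are made up for the example):

// Opt an array-like object into being flattened by concat.
const arrayLike = { length: 2, 0: 'a', 1: 'b', [Symbol.isConcatSpreadable]: true };
console.log(['x'].concat(arrayLike)); // ['x', 'a', 'b']

// Control object-to-primitive conversion.
const money = {
	[Symbol.toPrimitive](hint) {
		return hint === 'number' ? 42 : '€42';
	}
};
console.log(+money);     // 42
console.log(`${money}`); // '€42'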

CSS: improved auto

The implied minimum size of a flex item (i.e. min-width: auto or min-height: auto) is now also computed correctly when flex-basis is not auto.

CSS Writing Modes updates

The CSS properties text-combine-upright, text-orientation, and writing-mode are now available without the -webkit- prefix, and with new syntax matching the spec.

The isolate, isolate-override, and plaintext values for the unicode-bidi CSS property can now be used without the -webkit- prefix. Support for the non-standard horizontal-bt value (as in -webkit-writing-mode: horizontal-bt) has been removed.

Unprefixed CSS font-feature-settings

The CSS font-feature-settings property now works without the -webkit- prefix. This property provides low-level control over OpenType font features.

The -webkit-prefixed version is now deprecated and will be removed in a future release.

CSS Font Loading API improvements

Our implementation of the FontFaceSet interface (e.g. document.fonts) is now set-like, matching the spec. This means it has entries(), keys(), and values() iterators, and is itself an iterator (over the individual FontFace entries).

Also, the add() and remove() methods no longer throw an InvalidModificationError when adding a CSS-connected FontFace to, or deleting one from, the same FontFaceSet. Here’s an example of that:

<style>
	@font-face {
		font-family: Test;
		src: local('Helvetica');
	}
</style>
<script>
	var face;
	document.fonts.forEach(function(f) { face = f; });
	document.fonts.add(face);    // no-op
	document.fonts.remove(face); // no-op, returns `false`
</script>

Previously, the above add() and remove() both threw InvalidModificationError exceptions.

A demo is available.

Fetch API: data: and blob: URL scheme support

Our Fetch API implementation now supports the data: and blob: URL schemes.
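
For example (the base64 payload below decodes to 'Hello, world'):

fetch('data:text/plain;base64,SGVsbG8sIHdvcmxk')
	.then(response => response.text())
	.then(text => console.log(text)); // 'Hello, world'

// blob: URLs created with URL.createObjectURL work the same way.
const blobURL = URL.createObjectURL(new Blob(['hi']));
fetch(blobURL).then(response => response.text()).then(console.log); // 'hi'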

IndexedDB API additions

The IndexedDB getAll() and getAllKeys() methods are now supported on the IDBObjectStore and IDBIndex interfaces. Additionally, IDBObjectStore.prototype.openKeyCursor() and IDBTransaction.prototype.objectStoreNames have been implemented.
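
A quick sketch (the database and store names are made up for the example):

const open = indexedDB.open('demo-db', 1);
open.onupgradeneeded = () => {
	open.result.createObjectStore('entries', { autoIncrement: true });
};
open.onsuccess = () => {
	const store = open.result
		.transaction('entries', 'readonly')
		.objectStore('entries');
	// A single request returns every record, instead of walking a cursor.
	store.getAll().onsuccess = event => {
		console.log(event.target.result);
	};
};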

A demo is available.

MediaStreamTrack.prototype.remote

The remote property on WebRTC MediaStreamTrack instances is now available. It can be used to determine whether a stream track is from a remote source or a local one.
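
For example, a track captured locally reports remote as false (sketched here with the promise-based getUserMedia; the older callback form behaves the same way):

navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
	const [track] = stream.getAudioTracks();
	console.log(track.remote); // false: the source is a local microphone
});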

ServiceWorkerRegistration.prototype.update() more spec-compliant

The update method on ServiceWorkerRegistration instances used to always bypass the browser cache. Now, it only bypasses the browser cache if the previous update check occurred over 24 hours ago.
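
For example (assuming a service worker script at /sw.js):

navigator.serviceWorker.register('/sw.js').then(registration => {
	// This now consults the browser cache, unless the previous update
	// check happened more than 24 hours ago.
	registration.update();
});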

Touch and TouchEvent constructors

The Touch and TouchEvent constructors make it easy to programmatically create Touch and TouchEvent instances from an object literal.

const t = new Touch({
	'identifier': 42,
	'target': document.body,
	'clientX': 200,
	'clientY': 200,
	'screenX': 300,
	'screenY': 300,
	'pageX': 250,
	'pageY': 250,
	'radiusX': 2.5,
	'radiusY': 2.5,
	'rotationAngle': 10,
	'force': 0.5
});

A demo with more examples is available.

KeyboardEvent.prototype.code

The code property on KeyboardEvent instances is now implemented. The value of this property identifies the physical key that generated the keyboard event, regardless of the user’s current keyboard layout. For example, the physical Q key represents the symbol q on a QWERTY keyboard layout, but represents a on an AZERTY keyboard layout — but its .code value is 'KeyQ' in both configurations.
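
A simple way to see the difference between the physical key and the layout-dependent symbol:

document.addEventListener('keydown', event => {
	// event.code names the physical key; event.key reflects the layout.
	// The physical Q key logs 'KeyQ q' on QWERTY and 'KeyQ a' on AZERTY.
	console.log(event.code, event.key);
});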

A demo is available.

Web Audio API: .connect() chaining

The connect method on AudioNode and AudioParam instances now supports chaining, as per the spec.

Before this change, you’d write code like this:

sourceNode.connect(gainNode);
sourceNode.connect(filterNode);
sourceNode.connect(destination);

Now that can be simplified as follows:

sourceNode
	.connect(gainNode)
	.connect(filterNode)
	.connect(destination);

A demo is available.

Deprecated and removed features

Support for the non-standard CSS values intrinsic and min-intrinsic has been removed. Use the standardized values max-content and min-content instead.

Support for CSS composite-mode: darker has been removed since darker is a non-standard value.

The glyph-orientation-horizontal and glyph-orientation-vertical CSS properties for SVG elements have been removed. Use text-orientation instead, just like you would for HTML elements.

The offsetParent, offsetTop, offsetLeft, offsetWidth, and offsetHeight properties on SVGElement instances are now deprecated and will be removed in an upcoming release. Per the spec, these properties should only exist on HTMLElements. For SVGElements, you can use getBoundingClientRect() instead.
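
A minimal migration sketch, assuming some element inside an inline SVG:

const rect = document.querySelector('svg circle').getBoundingClientRect();
// rect.top, rect.left, rect.width, and rect.height replace the
// deprecated offsetTop, offsetLeft, offsetWidth, and offsetHeight.
console.log(rect.top, rect.left, rect.width, rect.height);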

The getTransformToElement method on SVGGraphicsElement instances has been removed, matching the spec.

The SVGPathSeg and SVGPathSegList constructors have been removed. They were part of the old SVG 1.1 spec, but have since been removed from the standard. If you rely on these APIs, you’ll be pleased to hear that a polyfill is available for them.

The getSVGDocument method is no longer available on HTMLFrameElement instances. We now match the spec, which makes this method available on HTMLEmbedElement, HTMLIFrameElement, and HTMLObjectElement instances only.

The non-standard item() method on TextTrackList and TextTrackCueList has been removed. Use tracks[index] instead of tracks.item(index).

Following the recommendation in RFC 7465, support for the RC4 cipher has finally been removed. HTTPS connections that rely on RC4 exclusively cannot be considered secure anymore and will therefore fail from now on.

What’s next?

If you’re interested in experimenting with features that are in the pipeline for future versions of Opera, we recommend following our Opera Developer stream.

Planet MozillaOur Evolving Approach to the Curriculum Database

When we first envisioned the Curriculum Database, we had a variety of different focuses. The biggest one, in my mind, was helping people sort through the many different types of learning materials offered by various teams and projects at Mozilla in order to find the right content for their needs. Secondarily, we wanted to improve the creation process—make it easier to publish, remix, and localize content.

Since then, the vision has changed a bit, or rather, our approach has changed. Several of our designers (Luke, Sabrina, and Ricardo) have been conducting a small research project to find out how educators currently find learning materials. We’ve also been learning from conversations with the cross-team Curriculum working group, led by Chad Sansing. Finally, we’ve been looking at existing offerings in the world.

Content Creation

Our approach has evolved so that now we’re focusing on content creation first. Luke and Chad have been working together to improve the web literacy curriculum template.

[Screenshot: the web literacy curriculum template]

Luke also worked with the Science Lab team to design new templates for their learning handouts:

[Screenshot: a Science Lab learning handout template]

Next up is working with the Mozilla Clubs team to create new templates for their onboarding and training materials. The goal with each of these projects is to provide scalable, reusable templates that, once created and refined, don’t require involvement from designers or developers as new content is created. Right now we’re using a combination of Thimble projects, markdown-to-HTML conversion, and GitHub Pages. We’re in experimentation mode.

Editing, localization, quality control, and more

Some of the next hurdles to cross are around mechanisms for localization and allowing for community contribution. One thing I’ve heard from the Curriculum working group is that collaborative editing workflows are needed. One idea I really like came from Emma Irwin who suggested an “Ask for Help” workflow that triggers a ticket in a github repo. That repo is triaged and curated by community contributors, so this could be a useful way to solicit testing and edits on new content.

I’ve also heard a (resolvable) tension between wanting to encourage remixes, but also wanting to have clear quality control mechanisms. User ratings and comments can help with this. Emma has also suggested an Education Council that would review materials and mark those meeting certain criteria as “Mozilla Approved.”

And then what?

All of the above ideas fall under the “content creation” half of the puzzle. We’ve not forgotten about the “content discovery” half, but we’ve put creation first. Once we’ve developed and tested these new workflows for creation, collaborative editing, remixing, quality control, and localization, it will make sense to then switch our focus to content distribution and discovery.

Please do comment below if you have any questions or ideas about this approach.


Planet MozillaAnnouncing the 2016 Open Web Fellows Program Host Organizations

Last year was a big year for the open Web: net neutrality became a mainstream phrase in the United States, data retention and surveillance were hotly contested at government levels in the European Union, and India’s government suspended Free Basics’ zero-rating practices despite Mark Zuckerberg’s insistence that he was working in the interest of the poor. Much of this was done in collaboration with organizations that share the mission to protect the open Web as a global public resource. It’s partnerships and knowledge-sharing initiatives that support these movements.

One such initiative is the Ford-Mozilla Open Web Fellows program, an international leadership program that brings together technology talent and civil society organizations to advance and protect the open Web. The Fellows embedded at these organizations will work on salient issues like privacy, access, and online rights. And this Fellowship program offers unique opportunities to learn, innovate, and gain credentials in a supportive environment while working to protect the open Web.

We are proud to announce our second cohort of host organizations, who are looking for 8 talented individuals to advise, build, and learn during their 10-month fellowships.

Apply now to become a Ford-Mozilla Open Web Fellow!
Deadline for applications: 11:59pm PST March 20, 2016


Centre for Intellectual Property and Information Technology Law (CIPIT)
CIPIT is an evidence-based research and training center based at Strathmore Law School in Nairobi, Kenya. Working with communities under extreme instances of censorship, their mission is to study and share knowledge on the development of cyberspace, and conduct research from a multidisciplinary approach. In 2016 CIPIT will be focusing on Internet freedom in Eastern Africa, intellectual property in African development, and network measurements in election monitoring.

CIPIT is looking for an inquisitive, focused Fellow with tech expertise who can consult on a policy-oriented research process. This Fellow could help shape the next generation of Internet laws in Africa, and see the real-life needs met by the tools and code they generate. For example, the Fellow could develop user-focused tools that support real-life events, like the Ugandan election. Learn more here.


Citizen Lab
Citizen Lab is an interdisciplinary laboratory based at the Munk School of Global Affairs, University of Toronto, that focuses on advanced research and development at the intersection of ICTs, human rights, and global security. They provide impartial, evidence-based, peer-reviewed research on information controls to help advocacy and policy engagement on an open and secure Internet, and help secure civil society organizations from targeted attacks.

Citizen Lab is looking for a Fellow who is motivated to apply their technical skills to questions concerning technology and human rights, and who brings excellent communication skills. The Fellow could develop new tools to measure Internet filtering and network interference, investigate malware attacks or the privacy and security of apps and social media, and empower citizens by developing platforms for corporate and public transparency. Learn more here.


ColorOfChange
ColorOfChange is a leading civil rights organization that works to strengthen the voice of Black America and create positive change around political and social issues that affect the Black community. ColorOfChange supports net neutrality and the reclassification of broadband as a public utility, and works to give their members a voice — hugely consequential, as Black and brown Americans are least able to afford the tollbooths and obstacles that come with a closed Internet.

ColorOfChange is looking for a Fellow who is passionate about ensuring the US national conversation around net neutrality includes arguments in favor of net neutrality from a civil rights perspective. This Fellow would have the opportunity to pioneer tools for rapid-response campaigning that could be replicated and used by millions; find a compelling approach for users to engage with data that is integrated into the presentation itself; and leverage mobile (and even wearables) for activism. Learn more here.


Data & Society
Data & Society is a research institute that is committed to identifying issues at the intersection of technology and society. They focus on social, cultural, and ethical issues arising from data-centric technological development. In 2016, they will focus on identifying major emergent issues stemming from new data-driven technologies, developing tools to help people better understand those issues, and building a diverse network of researchers and practitioners.

Data & Society is looking for a Fellow who is deeply versed in technical conversations, and understands that new massive technologies are creating disruption. This Fellow would work with people from other fields to raise the technical capacity of others in the network, and engage technical communities core to Data & Society’s mission. Learn more here.


Derechos Digitales
Derechos Digitales is an organization that promotes human rights in digital environments. Their work focuses on the nuanced realities of Latin American countries, bringing these perspectives to discussions around issues like cybersecurity and corporate transparency. They work to shape policy-making on issues such as mass surveillance, digital threats to activists, and legislative work on Internet governance. In 2016 they will focus on privacy, freedom of expression, and access to knowledge.

Derechos Digitales is looking for a Fellow with tech expertise who is passionate about working at the intersection of human rights and tech policy in the global south. The Fellow could provide technical advice on the tools and resources needed in these contexts, and develop tech policy documents that can bridge the human rights and tech communities. Derechos Digitales is looking for a Spanish-speaking Fellow who would be comfortable supporting capacity-building sessions with local civil society organizations. Learn more here.


European Digital Rights (EDRi)
EDRi is an association of 33 civil rights organizations from across Europe, and works to promote, protect and uphold civil and human rights in the digital environment in the European Union. Their four key priorities for 2016 are data protection and privacy, mass surveillance, copyright reform and net neutrality. EDRi supports Europe’s data protection reform and campaigned against EU state surveillance proposals. The current onslaught of “counter-terrorism” proposals after recent attacks sees European governments adopting new laws with little consideration of effectiveness, proportionality, or whether privacy is being sacrificed.

EDRi is looking for a Fellow who is passionate about raising awareness about EU digital rights, and can use their technical expertise to help educate the general public, tech-policy community, and policy-makers. For example, the Fellow could explain existing data collection practices and newly gained online rights to users via an app or other tool, depending on the Fellow’s talents and preferences. The Fellow could provide technical assistance to help policy-makers and regulators understand the tools used by online companies for tracking and monitoring. Learn more here.


Freedom of the Press Foundation
Freedom of the Press Foundation is a non-profit organization that supports and defends journalism dedicated to transparency and accountability. They believe one of the most critical press freedom issues of the 21st Century is digital security, and work to ensure journalists can use technology to do their jobs safely and without the constant fear of surveillance.

Freedom of the Press Foundation is looking for a Fellow who has strong technical abilities and is interested in helping journalists work safely and communicate securely. The Fellow would apply their skills to build and support tools like SecureDrop, working with Freedom of the Press Foundation’s talented staff of technologists and engineers to help journalists communicate securely with sources and whistleblowers. Learn more here.


Privacy International
Privacy International focuses on privacy issues around the world. They advocate for strong privacy protection laws, investigate government surveillance, conduct research to enact policy change, and raise awareness amongst the public about technologies that place privacy at risk. In 2016 Privacy International is partnering with organizations in the global south to identify privacy challenges, and doing more work on data exploitation.

Privacy International is looking for a Fellow who’s eager to learn and find new challenges. The Fellow would use their strong technical skills to translate technology to policy-makers, and help others around the world do the same. The Fellow would work with Privacy International’s Tech Team to analyze surveillance documentation and data, identify and analyze new technologies, and help develop briefings and educational programming with a technical understanding. Learn more here.


Apply now to become a Ford-Mozilla Open Web Fellow!
Deadline for applications: 11:59pm PST March 20, 2016

Planet WebKitMichael Catanzaro: On WebKit Security Updates

Linux distributions have a problem with WebKit security.

Major desktop browsers push automatic security updates directly to users on a regular basis, so most users don’t have to worry about security updates. But Linux users are dependent on their distributions to release updates. Apple fixed over 100 vulnerabilities in WebKit last year, so getting updates out to users is critical.

This is the story of how that process has gone wrong for WebKit.

Before we get started, a few disclaimers. I want to be crystal clear about these points:

  1. This post does not apply to WebKit as used in Apple products. Apple products receive regular security updates.
  2. WebKitGTK+ releases regular security updates upstream. It is safe to use so long as you apply the updates.
  3. The opinions expressed in this post are my own, not my employer’s, and not the WebKit project’s.

Browser Security in a Nutshell

Web engines are full of security vulnerabilities, like buffer overflows, null pointer dereferences, and use-after-frees. The details don’t matter; what’s important is that skilled attackers can turn these vulnerabilities into exploits, using carefully-crafted HTML to gain total control of your user account on your computer (or your phone). They can then install malware, read all the files in your home directory, use your computer in a botnet to attack websites, and do basically whatever they want with it.

If the web engine is sandboxed, then a second type of attack, called a sandbox escape, is needed. This makes it dramatically more difficult to exploit vulnerabilities. Chromium has a top-class Linux sandbox. WebKit does have a Linux sandbox, but it’s not any good, so it’s (rightly) disabled by default. Firefox does not have a sandbox due to major architectural limitations (which Mozilla is working on).

For this blog post, it’s enough to know that attackers use crafted input to exploit vulnerabilities to gain control of your computer. This is why it’s not a good idea to browse to dodgy web pages. It also explains how a malicious email can gain control of your computer. Modern email clients render HTML mail using web engines, so malicious emails exploit many of the same vulnerabilities that a malicious web page might. This is one reason why good email clients block all images by default: image rendering, like HTML rendering, is full of security vulnerabilities. (Another reason is that images hosted remotely can be used to determine when you read the email, violating your privacy.)

WebKit Ports

To understand WebKit security, you have to understand the concept of WebKit ports, because different ports handle security updates differently.

While most code in WebKit is cross-platform, there’s a large amount of platform-specific code as well, to improve the user and developer experience in different environments. Different “ports” run different platform-specific code. This is why two WebKit-based browsers, say, Safari and Epiphany (GNOME Web), can display the same page slightly differently: they’re using different WebKit ports.

Currently, the WebKit project consists of six different ports: one for Mac, one for iOS, two for Windows (Apple Windows and WinCairo), and two for Linux (WebKitGTK+ and WebKitEFL). There are some downstream ports as well; unlike the aforementioned ports, downstream ports are, well, downstream, and not part of the WebKit project. The only one that matters for Linux users is QtWebKit.

If you use Safari, you’re using the Mac or iOS port. These ports get frequent security updates from Apple to plug vulnerabilities, which users receive via regular updates.

Everything else is broken.

Since WebKit is not a system library on Windows, Windows applications must bundle WebKit, so each application using WebKit must be updated individually, and updates are completely dependent on the application developers. iTunes, which uses the Apple Windows port, does get regular updates from Apple, but beyond that, I suspect most applications never get any security updates. This is a predictable result, the natural consequence of environments that require bundling libraries.

(This explains why iOS developers are required to use the system WebKit rather than bundling their own: Apple knows that app developers will not provide security updates on their own, so this policy ensures every iOS application rendering HTML gets regular WebKit security updates. Even Firefox and Chrome on iOS are required to use the system WebKit; they’re hardly really Firefox or Chrome at all.)

The same scenario applies to the WinCairo port, except this port does not have releases or security updates. Whereas the Apple ports have stable branches with security updates, with WinCairo, companies take a snapshot of WebKit trunk, make their own changes, and ship products with that. Who’s using WinCairo? Probably lots of companies; the biggest one I’m aware of uses a WinCairo-based port in its AAA video games. It’s safe to assume few to no companies are handling security backports for their downstream WinCairo branches.

Now, on to the Linux ports. WebKitEFL is the WebKit port for the Enlightenment Foundation Libraries. It’s not going to be found in mainstream Linux distributions; it’s mostly used in embedded devices produced by one major vendor. If you know anything at all about the internet of things, you know these devices never get security updates, or if they do, the updates are superficial (updating only some vulnerable components and not others), or end a couple months after the product is purchased. WebKitEFL does not bother with pretense here: like WinCairo, it has never had security updates. And again, it’s safe to assume few to no companies are handling security backports for their downstream branches.

None of the above ports matter for most Linux users. The ports available on mainstream Linux distributions are QtWebKit and WebKitGTK+. Most of this blog will focus on WebKitGTK+, since that’s the port I work on, and the port that matters most to most of the people who are reading this blog, but QtWebKit is widely-used and deserves some attention first.

It’s broken, too.

QtWebKit

QtWebKit is the WebKit port used by Qt software, most notably KDE. Some cherry-picked examples of popular applications using QtWebKit are Amarok, Calligra, KDevelop, KMail, Kontact, KTorrent, Quassel, Rekonq, and Tomahawk. QtWebKit provides an excellent Qt API, so in the past it’s been the clear best web engine to use for Qt applications.

After Google forked WebKit, the QtWebKit developers announced they were switching to work on QtWebEngine, which is based on Chromium, instead. This quickly led to the removal of QtWebKit from the WebKit project. This was good for the developers of other WebKit ports, since lots of Qt-specific code was removed, but it was terrible for KDE and other QtWebKit users. QtWebKit is still maintained in Qt and is getting some backports, but from a quick check of their git repository it’s obvious that it’s not receiving many security updates. This is hardly unexpected; QtWebKit is now years behind upstream, so providing security updates would be very difficult. There’s not much hope left for QtWebKit; these applications have hundreds of known vulnerabilities that will never be fixed. Applications should port to QtWebEngine, but for many applications this may not be easy or even possible.

Update: As pointed out in the comments, there is some effort to update QtWebKit. I was aware of this and in retrospect should have mentioned this in the original version of this article, because it is relevant. Keep an eye out for this; I am not confident it will make its way into upstream Qt, but if it does, this problem could be solved.

WebKitGTK+

WebKitGTK+ is the port used by GTK+ software. It’s most strongly associated with its flagship browser, Epiphany, but it’s also used in other places. Some of the more notable users include Anjuta, Banshee, Bijiben (GNOME Notes), Devhelp, Empathy, Evolution, Geany, Geary, GIMP, gitg, GNOME Builder, GNOME Documents, GNOME Initial Setup, GNOME Online Accounts, GnuCash, gThumb, Liferea, Midori, Rhythmbox, Shotwell, Sushi, and Yelp (GNOME Help). In short, it’s kind of important, not only for GNOME but also for Ubuntu and Elementary. Just as QtWebKit used to be the web engine for choice for Qt applications, WebKitGTK+ is the clear choice for GTK+ applications due to its nice GObject APIs.

Historically, WebKitGTK+ has not had security updates. Of course, we released updates with security fixes, but not with CVE identifiers, which is how software developers track security issues; as far as distributors are concerned, without a CVE identifier, there is no security issue, and so, with a few exceptions, distributions did not release our updates to users. For many applications, this is not so bad, but for high-risk applications like web browsers and email clients, it’s a huge problem.

So, we’re trying to improve. Early last year, my colleagues put together our first real security advisory with CVE identifiers; the hope was that this would encourage distributors to take our updates. This required data provided by Apple to WebKit security team members on which bugs correspond to which CVEs, allowing the correlation of Bugzilla IDs to Subversion revisions to determine in which WebKitGTK+ release an issue has been fixed. That data is critical, because without it, there’s no way to know if an issue has been fixed in a particular release or not. After we released this first advisory, Apple stopped providing the data; this was probably just a coincidence due to some unrelated internal changes at Apple, but it certainly threw a wrench in our plans for further security advisories.

This changed in November, when I had the pleasure of attending the WebKit Contributors Meeting at Apple’s headquarters, where I was finally able to meet many of the developers I had interacted with online. At the event, I gave a presentation on our predicament, and asked Apple to give us information on which Bugzilla bugs correspond to which CVEs. Apple kindly provided the necessary data a few weeks later.

During the Web Engines Hackfest, a yearly event that occurs at Igalia’s office in A Coruña, my colleagues used this data to put together WebKitGTK+ Security Advisory WSA-2015-0002, a list of over 130 vulnerabilities disclosed since the first advisory. (The Web Engines Hackfest was sponsored by Igalia, my employer, and by our friends at Collabora. I’m supposed to include their logos here to advertise how cool it is that they support the hackfest, but given all the doom and gloom in this post, I decided perhaps they would perhaps prefer not to have their logos attached to it.)

Note that 130 vulnerabilities is an overcount, as it includes some issues that are specific to the Apple ports. (In the future, we’ll try to filter these out.) Only one of the issues — a serious error in the networking backend shared by WebKitGTK+ and WebKitEFL — resided in platform-specific code; the rest of the issues affecting WebKitGTK+ were all cross-platform issues. This is probably partly because the trickiest code is cross-platform code, and partly because security researchers focus on Apple’s ports.

Anyway, we posted WSA-2015-0002 to the oss-security mailing list to make sure distributors would notice, crossed our fingers, and hoped that distributors would take the advisory seriously. That was one month ago.

Distribution Updates

There are basically three different approaches distributions can take to software updates. The first approach is to update to the latest stable upstream version as soon as, or shortly after, it’s released. This is the strategy employed by Arch Linux. Arch does not provide any security support per se; it’s not necessary, so long as upstream projects release real updates for security problems and not simply patches. Accordingly, Arch almost always has the latest version of WebKitGTK+.

The second main approach, used by Fedora, is to provide only stable release updates. This is more cautious, reflecting that big updates can break things, so they should only occur when upgrading to a new version of the operating system. For instance, Fedora 22 shipped with WebKitGTK+ 2.8, so it would release updates to new 2.8.x versions, but not to WebKitGTK+ 2.10.x versions.

The third approach, followed by most distributions, is to take version upgrades only rarely, or not at all. For smaller distributions this may be an issue of manpower, but for major distributions it’s a matter of avoiding regressions in stable releases. Holding back on version updates actually works well for most software. When security problems arise, distribution maintainers for major distributions backport fixes and release updates. The problem is that this is not feasible for web engines; due to the huge volume of vulnerabilities that need to be fixed, security issues can only practically be handled upstream.

So what’s happened since WSA-2015-0002 was released? Did it convince distributions to take WebKitGTK+ security seriously? Hardly. Fedora is the only distribution that has made any changes in response to WSA-2015-0002, and that’s because I’m one of the Fedora maintainers. (I’m pleased to announce that we have a 2.10.7 update headed to both Fedora 23 and Fedora 22 right now. In the future, we plan to release the latest stable version of WebKitGTK+ as an update to all supported versions of Fedora shortly after it’s released upstream.)

Ubuntu

Ubuntu releases WebKitGTK+ updates somewhat inconsistently. For instance, Ubuntu 14.04 came with WebKitGTK+ 2.4.0. 2.4.8 is available via updates, but even though 2.4.9 was released upstream over eight months ago, it has not yet been released as an update for Ubuntu 14.04.

By comparison, Ubuntu 15.10 (the latest release) shipped with WebKitGTK+ 2.8.5, which has never been updated; it’s affected by about 40 vulnerabilities fixed in the latest upstream release. Ubuntu organizes its software into various repositories, and provides security support only to software in the main repository. This version of WebKitGTK+ is in Ubuntu’s “universe” repository, not in main, so it is excluded from security support. Ubuntu users might be surprised to learn that a large portion of Ubuntu software is in universe and therefore excluded from security support; this is in contrast to almost all other distributions, which typically provide security updates for all the software they ship.

I’m calling out Ubuntu here not because it is especially negligent, but simply because it is our biggest distributor. It’s not doing any worse than most of our other distributors.

Debian

Debian provides WebKit updates to users running unstable, and to testing except during freeze periods, but not to released version of Debian. Debian is unique in that it has a formal policy on WebKit updates. Here it is, reproduced in full:

Debian 8 includes several browser engines which are affected by a steady stream of security vulnerabilities. The high rate of vulnerabilities and partial lack of upstream support in the form of long term branches make it very difficult to support these browsers with backported security fixes. Additionally, library interdependencies make it impossible to update to newer upstream releases. Therefore, browsers built upon the webkit, qtwebkit and khtml engines are included in Jessie, but not covered by security support. These browsers should not be used against untrusted websites.

For general web browser use we recommend Iceweasel or Chromium.

Chromium – while built upon the Webkit codebase – is a leaf package, which will be kept up-to-date by rebuilding the current Chromium releases for stable. Iceweasel and Icedove will also be kept up-to-date by rebuilding the current ESR releases for stable.

(Iceweasel and Icedove are Debian’s de-branded versions of Firefox and Thunderbird, the product of an old trademark spat with Mozilla.)

Debian is correct that we do not provide long term support branches, as it would be very difficult to backport security fixes. But it is not correct that “library interdependencies make it impossible to update to newer upstream releases.” This might have been true in the past, but for several years now, we have avoided requiring new versions of libraries whenever it would cause problems for distributions, and — with one big exception that I will discuss below — we ensure that each release maintains both API and ABI compatibility. (Distribution maintainers should feel free to get in touch if we accidentally introduce some compatibility issue for your distribution; if you’re having trouble taking our updates, we want to help. I recently worked with openSUSE to make sure WebKitGTK+ can still be compiled with GCC 4.8, for example.)

The risk in releasing updates is that WebKitGTK+ is not a leaf package: a bad update could break some application. This seems to me like a good reason for application maintainers to carefully test the updates, rather than a reason to withhold security updates from users, but it’s true there is some risk here. One possible solution would be to have two different WebKitGTK+ packages, say, webkitgtk-secure, which would receive updates and be used by high-risk software like web browsers and email clients, and a second webkitgtk-stable package that would not receive updates to reduce regression potential.

Recommended Distributions

We regularly receive bug reports from users with very old versions of WebKit, who trust their distributors to handle security for them and might not even realize they are running ancient, unsafe versions of WebKit. I strongly recommend using a distribution that releases WebKitGTK+ updates shortly after they’re released upstream. That is currently only Arch and Fedora. (You can also safely use WebKitGTK+ in Debian testing — except during its long freeze periods — and Debian unstable, and maybe also in openSUSE Tumbleweed. Just be aware that the stable releases of these distributions are currently not receiving our security updates.) I would like to add more distributions to this list, but I’m currently not aware of any more that qualify.

The Great API Break

So, if only distributions would ship the latest release of WebKitGTK+, then everything would be good, right? Nope, because of a large API change that occurred two and a half years ago, called WebKit2.

WebKit (an API layer within the WebKit project) and WebKit2 are two separate APIs around WebCore. WebCore is the portion of the WebKit project that Google forked into Blink; it’s too low-level to be used directly by applications, so it’s wrapped by the nicer WebKit and WebKit2 APIs. The difference between the WebKit and WebKit2 APIs is that WebKit2 splits work into multiple secondary processes. Aside from the UI process, an application will have one or more separate web processes (for the actual page rendering), possibly a separate network process, and possibly a database process for IndexedDB. This is good for security, because it allows the secondary processes to be sandboxed: the web process is the one that’s likely to be compromised first, so it should not have the ability to access the filesystem or the network. (Remember, though, that there is no Linux sandbox yet, so this is currently only a theoretical benefit.) The other main benefit is robustness. If a web site crashes the renderer, only a single web process crashes (corresponding to one tab in Epiphany), not the entire browser. UI process crashes are comparatively rare.
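To make that architecture concrete, here is a minimal sketch of a WebKit2-based viewer, assuming the modern webkit2gtk-4.0 API discussed below; the URL is a placeholder. All of this code runs in the UI process, and the library transparently spawns the web process that does the rendering:

    /* Minimal WebKit2GTK+ viewer sketch; build with something like:
     *   gcc viewer.c $(pkg-config --cflags --libs gtk+-3.0 webkit2gtk-4.0) */
    #include <gtk/gtk.h>
    #include <webkit2/webkit2.h>

    int main(int argc, char *argv[])
    {
        gtk_init(&argc, &argv);

        GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        gtk_window_set_default_size(GTK_WINDOW(window), 800, 600);

        /* The WebKitWebView widget lives here in the UI process; the page
         * itself is rendered in a separate, crash-isolated web process. */
        WebKitWebView *view = WEBKIT_WEB_VIEW(webkit_web_view_new());
        gtk_container_add(GTK_CONTAINER(window), GTK_WIDGET(view));
        webkit_web_view_load_uri(view, "https://example.org/");

        g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);
        gtk_widget_show_all(window);
        gtk_main();
        return 0;
    }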

Intermission: Certificate Verification

Another advantage provided by the API change is the opportunity to handle HTTPS connections more securely. In the original WebKitGTK+ API, applications must handle certificate verification on their own. This was a serious mistake; predictably, applications performed no verification at all, or did so improperly. For instance, take this Shotwell bug which is not fixed in any released version of Shotwell, or this Banshee bug which is still open. Probably many more applications are affected, because I have not done a comprehensive check. The new API is secure by default; applications can ignore verification errors, but only if they go out of their way to do so.
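As an illustration of "going out of their way": with the modern API, a page load simply fails on a certificate error unless the application explicitly opts out of verification, along these lines (a sketch using the webkit2gtk-4.0 API):

    /* TLS errors abort page loads by default. Ignoring them requires
     * an explicit, deliberate call like this one: */
    WebKitWebContext *context = webkit_web_context_get_default();
    webkit_web_context_set_tls_errors_policy(context,
                                             WEBKIT_TLS_ERRORS_POLICY_IGNORE);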

Remember that even though WebKitGTK+ 2.4.9 was released upstream over eight months ago, Ubuntu 14.04 is still on 2.4.8? It’s worth mentioning that 2.4.9 contains the fix for that serious networking backend issue I mentioned earlier (CVE-2015-2330). The bug is that TLS certificate verification was not performed until an HTTP response was received from the server; it’s supposed to be performed before sending an HTTP request, to prevent secure cookies from leaking. This is a disaster, as attackers can easily use it to get your session cookie and then control your user account on most websites. (Credit to Ross Lagerwall for reporting that issue.) We reported this separately to oss-security due to its severity, but that was not enough to convince distributions to update. But most applications in Ubuntu 14.04, including Epiphany and Midori, would not even benefit from this fix, because the change only affects WebKit2; remember, there’s no certificate verification in the original WebKitGTK+ API. (Modern versions of Epiphany do use WebKit2, but not the old version included in Ubuntu 14.04.) Old versions of Epiphany and Midori load pages even if certificate verification fails; the verification result is only used to change the status of a security indicator, basically giving up your session cookies to attackers.

Removing WebKit1

WebKit2 has been around for Mac and iOS for longer, but the first stable release for WebKitGTK+ was the appropriately-versioned WebKitGTK+ 2.0, in March 2013. This release actually contained three different APIs: webkitgtk-1.0, webkitgtk-3.0, and webkit2gtk-3.0. webkitgtk-1.0 was the original API, used by GTK+ 2 applications. webkitgtk-3.0 was the same thing for GTK+ 3 applications, and webkit2gtk-3.0 was the new WebKit2 API, available only for GTK+ 3 applications.

Maybe it should have remained that way.

But, since the original API was a maintenance burden and not as stable or robust as WebKit2, it was deleted after the WebKitGTK+ 2.4 release in March 2014. Applications had had a full year to upgrade; surely that was long enough, right? The original WebKit API layer is still maintained for the Mac, iOS, and Windows ports, but the GTK+ API for it is long gone. WebKitGTK+ 2.6 (September 2014) was released with only one API, webkit2gtk-4.0, which was basically the same as webkit2gtk-3.0 except for a couple small fixes; most applications were able to upgrade by simply changing the version number. Since then, we have maintained API and ABI compatibility for webkit2gtk-4.0, and intend to do so indefinitely, hopefully until GTK+ 4.0.
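For most of those applications, the upgrade really was just a version-number change in the build system; a sketch using pkg-config:

    # before: the WebKit2 API as shipped in WebKitGTK+ 2.0 through 2.4
    pkg-config --cflags --libs webkit2gtk-3.0
    # after: the renamed API introduced in WebKitGTK+ 2.6
    pkg-config --cflags --libs webkit2gtk-4.0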

A lot of good that does for applications using the API that was removed.

WebKit2 Adoption

While upgrading to the WebKit2 API will be easy for most applications (it took me ten minutes to upgrade GNOME Initial Setup), for many others it will be a significant challenge. Since rendering occurs out of process in WebKit2, the DOM API can only be accessed by means of a shared object injected into the web process (see the sketch below). For applications that perform only a small amount of DOM manipulation, this is a minor inconvenience compared to the old API. For applications that use extensive DOM manipulation — the email clients Evolution and Geary, for instance — it’s not just an inconvenience, but a major undertaking to upgrade to the new API. Worse, some applications (including both Geary and Evolution) placed GTK+ widgets inside the web view; this is no longer possible, so such widgets need to be rewritten using HTML5. To say nothing of applications like GIMP and Geany that are stuck on GTK+ 2. They first have to upgrade to GTK+ 3 before they can consider upgrading to modern WebKitGTK+. GIMP is working on a GTK+ 3 port anyway (GIMP uses WebKitGTK+ for its help browser), but many applications like Geany (the IDE, not to be confused with Geary) are content to remain on GTK+ 2 forever. Such applications are out of luck.
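To give a feel for what that shared object looks like, here is a minimal sketch of a WebKit2 web extension; webkit_web_extension_initialize() is the real entry point the web process looks for, while the logging is just a placeholder for actual DOM work. The UI process tells the library where to find such modules with webkit_web_context_set_web_extensions_directory().

    /* A web extension: compiled as a shared object and loaded into the
     * web process, the only place where the DOM API can be used. */
    #include <webkit2/webkit-web-extension.h>

    static void
    document_loaded_cb(WebKitWebPage *web_page, gpointer user_data)
    {
        WebKitDOMDocument *document = webkit_web_page_get_dom_document(web_page);
        gchar *title = webkit_dom_document_get_title(document);
        g_print("Loaded page: %s\n", title);
        g_free(title);
    }

    static void
    page_created_cb(WebKitWebExtension *extension,
                    WebKitWebPage      *web_page,
                    gpointer            user_data)
    {
        g_signal_connect(web_page, "document-loaded",
                         G_CALLBACK(document_loaded_cb), NULL);
    }

    G_MODULE_EXPORT void
    webkit_web_extension_initialize(WebKitWebExtension *extension)
    {
        g_signal_connect(extension, "page-created",
                         G_CALLBACK(page_created_cb), NULL);
    }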

As you might expect, most applications are still using the old API. How does this work if it was already deleted? Distributions maintain separate packages, one for old WebKitGTK+ 2.4, and one for modern WebKitGTK+. WebKitGTK+ 2.4 has not had any updates since last May, and the last real comprehensive security update was over one year ago. Since then, almost 130 vulnerabilities have been fixed in newer versions of WebKitGTK+. But since distributions continue to ship the old version, few applications are even thinking about upgrading. In the case of the email clients, the Evolution developers are hoping to upgrade later this year, but Geary is completely dead upstream and probably will never be upgraded. How comfortable are you with using an email client that has now had no security updates for a year?

(It’s possible there might be a further 2.4 release, because WebKitGTK+ 2.4 is incompatible with GTK+ 3.20, but maybe not, and if there is, it certainly will not include many security fixes.)

Fixing Things

How do we fix this? Well, for applications using modern WebKitGTK+, it’s a simple problem: distributions simply have to start taking our security updates.

For applications stuck on WebKitGTK+ 2.4, I see a few different options:

  1. We could attempt to provide security backports to WebKitGTK+ 2.4. This would be very time-consuming and therefore very expensive, so count this out.
  2. We could resurrect the original webkitgtk-1.0 and webkitgtk-3.0 APIs. Again, this is not likely to happen; it would be a lot of work to restore them, and they were removed to reduce maintenance burden in the first place. (I can’t help but feel that removing them may have been a mistake, but my colleagues reasonably disagree.)
  3. Major distributions could remove the old WebKitGTK+ compatibility packages. That will force applications to upgrade, but many will not have the manpower to do so: good applications will be lost. This is probably the only realistic way to fix the security problem, but it’s a very unfortunate one. (But don’t forget about QtWebKit. QtWebKit is based on an even older version of WebKit than WebKitGTK+ 2.4. It doesn’t make much sense to allow one insecure version of WebKit but not another.)

Or, a far more likely possibility: we could do nothing, and keep using insecure software.

Planet Mozilla“Distributed” ER#5 now available!

“Distributed” Early Release #5 is now publicly available, just 23 days after ER#4 came out.

Early Release #5 (ER#5) contains everything in ER#4 plus:
* Ch.12 group chat etiquette
* In Ch.2, the section on diversity was rewritten and enhanced by consolidating a few different passages that had been scattered across different chapters.
* Many tweaks/fixes to pre-existing Chapters.

You can buy ER#5 by clicking here, or clicking on the thumbnail of the book cover. Anyone who already has ER#1, ER#2, ER#3 or ER#4 should get prompted with a free update to ER#5 – if you don’t, please let me know! And yes, you’ll get updated when ER#6 comes out later this month.

Thanks again to everyone for their ongoing encouragement, proof-reading help and feedback so far. I track every piece of feedback and review/edit/merge as fast as I can. To make sure that your feedback doesn’t get lost or caught in spam filters, please email comments to feedback at oduinn dot com.

Now time to brew more coffee and get back to typing!

John.
=====
ps: For the curious, here is the current list of chapters and their status:

Chapter 1: Remoties trend – AVAILABLE
Chapter 2: The real cost of an office – AVAILABLE
Chapter 3: Disaster Planning – AVAILABLE
Chapter 4: Mindset – AVAILABLE
Chapter 5: Physical Setup – AVAILABLE
Chapter 6: Video Etiquette – AVAILABLE
Chapter 7: Own your calendar – AVAILABLE
Chapter 8: Meetings – AVAILABLE
Chapter 9: Meeting Moderator – AVAILABLE
Chapter 10: Single Source of Truth
Chapter 11: Email Etiquette – AVAILABLE
Chapter 12: Group Chat Etiquette – AVAILABLE
Chapter 13: Culture, Trust and Conflict
Chapter 14: One-on-Ones and Reviews – AVAILABLE
Chapter 15: Joining and Leaving
Chapter 16: Bring Humans Together – AVAILABLE
Chapter 17: Career path – AVAILABLE
Chapter 18: Feed your soul – AVAILABLE
Chapter 19: Final Chapter

=====

Planet MozillaFirefox Accounts on AMO

In order to provide a more consistent experience across all Mozilla products and services, addons.mozilla.org (AMO) will soon begin using Firefox Accounts.

During the first stage of the migration, which will begin in a few weeks, you can continue logging in with your current credentials and use the site as you normally would. Once you’re logged in, you will be asked to log in with a Firefox Account to complete the migration. If you don’t have a Firefox Account, you can easily create one during this process.

Once you are done with the migration, everything associated with your AMO account, such as add-ons you’ve authored or comments you’ve written, will continue to be linked to your account.

A few weeks after that, when enough people have migrated to Firefox Accounts, old AMO logins will be disabled. This means when you log in with your old AMO credentials, you won’t be able to use the site until you follow the prompt to log in with or create a Firefox Account.

For more information, please take a look at the Frequently Asked Questions below, or head over to the forums. We’re here to help, and we apologize for any inconvenience.

Frequently asked questions

What happens to my add-ons when I convert to a new Firefox Account?

All of your add-ons will remain accessible through the new Firefox Account.

Why do I want a Firefox Account?

Firefox Accounts is the identity system that is used to synchronize Firefox across multiple devices. Many Firefox products and services will soon begin migrating over, simplifying your sign-in process and making it easier for you to manage all your accounts.

Where do I change my password?

Once you have a Firefox Account, you can go to accounts.firefox.com, sign in, and click on Password.

If you have forgotten your current password:

  1. Go to the AMO login page
  2. Click on I forgot my password
  3. Proceed to reset the password

Planet MozillaThis Week In Servo 49

In the last week, we landed 87 PRs in the Servo organization’s repositories.

Mátyás Mustoha has been doing awesome work bringing fixes and stability to our cross-compilation to both AArch64 and ARM 32-bit. He has a nightly build that runs here, which we hope to integrate into our own CI systems. Thanks for your great work improving the experience targeting those platforms!

Notable Additions

  • pcwalton removed some potential deadlock situations in ipc-channel
  • ms2ger moved gaol (our sandboxing library) to the Servo organization and enabled CI support for it
  • manish upgraded homu to pick up a bunch of new updates, including UI cleanups!
  • nox upgraded our Rust build to January 31st
  • mmatyas added AArch64 support to gaol, and generally cleaned it up in other repos, too
  • larsberg added some instructions on how to build and run Servo on Windows
  • simon landed CSS Multicolumn support with block fragmentation

New Contributors

Screenshot

No screenshot this week.

Meetings

We had a meeting on changing our weekly meeting time, our build time trend, potential student projects, and the Windows support.

Planet MozillaDr. Karim Lakhani Appointed to Mozilla Corporation Board of Directors

As we just posted on the Mozilla Blog, today we are very pleased to announce an addition to the Mozilla Corporation Board of Directors, Dr. Karim Lakhani, a scholar in innovation theory and practice. Dr. Lakhani is the first of the new appointments we expect to make this year. We are working to expand our […]

Planet MozillaDr. Karim Lakhani Appointed to Mozilla Corporation Board of Directors

Image from Twitter @klakhani

Today we are very pleased to announce an addition to the Mozilla Corporation Board of Directors, Dr. Karim Lakhani, a scholar in innovation theory and practice.

Dr. Lakhani is the first of the new appointments we expect to make this year. We are working to expand our Board of Directors to reflect a broader range of perspectives on people, products, technology and diversity. That diversity encompasses many factors: from geography to gender identity and expression, cultural to ethnic identity, expertise to education.

Born in Pakistan and raised in Canada, Karim received his Ph.D. in Management from Massachusetts Institute of Technology (MIT) and is Associate Professor of Business Administration at the Harvard Business School, where he also serves as Principal Investigator for the Crowd Innovation Lab and NASA Tournament Lab at the Harvard University Institute for Quantitative Social Science.

Karim’s research focuses on open source communities and distributed models of innovation. Over the years I have regularly reached out to Karim for advice on topics related to open source and community based processes. I’ve always found the combination of his deep understanding of Mozilla’s mission and his research-based expertise to be extremely helpful. As an educator and expert in his field, he has developed frameworks of analysis around open source communities and leaderless management systems. He has many workshops, cases, presentations, and journal articles to his credit. He co-edited a book of essays about open source software titled Perspectives on Free and Open Source Software, and he recently co-edited the upcoming book Revolutionizing Innovation: Users, Communities and Openness, both from MIT Press.

However, what is most interesting to me is the “hands-on” nature of Karim’s research into community development and activities. He has been a supporter and ready advisor to me and Mozilla for a decade.

Please join me now in welcoming Dr. Karim Lakhani to the Board of Directors. He supports our continued investment in open innovation and joins us at the right time, in parallel with Katharina Borchert’s transition off of our Board of Directors into her role as our new Chief Innovation Officer. We are excited to extend our Mozilla network with these additions, as we continue to ensure that the Internet stays open and accessible to all.

Mitchell

Planet MozillaBetter Living through Tracking Protection

There's been a bit of a hullabaloo in the press recently about blocking of ads in web browsers. Very little of the conversation is new, but the most recent round of discussion has been somewhat louder and more excited, in part because of Apple's recent decision to allow web content blockers on the iPhone and iPad.

In this latest round of salvos, the online ad industry has taken a pretty brutal beating, and key players appear to be rethinking long-entrenched strategies. Even the Interactive Advertising Bureau -- which has referred to ad blocking as "robbery" and "an extortionist scheme" -- has gone on record to admit that Internet ads got so bad that users basically had no choice but to start blocking them.

So maybe things will get better in the coming months and years, as online advertisers learn to moderate their behavior. Past behavior shows a spotty track record in this area, though, and change will come slowly. In the meanwhile, there are some pretty good tools that can help you take back control of your web experience.

How We Got Here

While we probably all remember the nadir of online advertising -- banners exhorting users to "punch the monkey to win $50", epilepsy-inducing ads for online gambling, and out-of-control popup ads for X10 cameras -- the truth is that most ad networks have already pulled back from the most obvious abuses of users' eyeballs. It would appear that annoying users into spending money isn't a winning strategy.

Unfortunately, the move away from hyperkinetic ads to more subtle ones was not a retreat as much as a carefully calculated refinement. Ads nowadays are served by colossal ad networks with tendrils on every site -- and they're accompanied by pretty sophisticated code designed to track you around the web.

The thought process that went into this is: if we can track you enough, we learn a lot about who you are and what your interests are. This is driven by the premise that people will be less annoyed by ads that actually fit their interests; and, at the same time, such ads are far more likely to convert into a sale.

Matching relevant ads to users was a reasonable goal. It should have been a win-win for both advertisers and consumers, as long as two key conditions were met: (1) the resulting system didn't otherwise ruin the web browsing experience, and (2) users who didn't want their personal movements across the web tracked could tell advertisers not to track them, and have those requests honored.

Neither is true.

Tracking Goes off the Rails

Just like advertisers went overboard with animated ads, pop-ups, pop-unders, noise-makers, interstitials, and all the other overtly offensive behavior, they've gone overboard with tracking.

You hear stories of overreach all the time: just last night, I had a friend recount how she got an email (via Gmail) from a friend that mentioned front-loaders, and had to suffer through weeks of banner ads for construction equipment on unrelated sites. The phenomenon is so bad and so well-known, even The Onion is making fun of it.

Beyond the "creepy" factor of having ad agencies build a huge personal profile of you and follow you around the web to use it, user-tracking code itself has become so bloated as to ruin the entire web experience.

In fact, on popular sites such as CNN, code to track users can account for somewhere on the order of three times as much memory usage as the actual page content: a recent demo of the Firefox memory tracking tool found that 30 MB of the 40 MB used to render a news article on CNN was consumed by code whose sole purpose was user tracking.

This drags your browsing experience to a crawl.

Ad Networks Know Who Doesn't Want to be Tracked, But Don't Care.

Under the assumption that advertisers were actually willing to honor user choice, there has been a large effort to develop and standardize a way for users to indicate to ad networks that they didn't want to be tracked: the "Do Not Track" (DNT) signal. It's been implemented by all major browsers, and endorsed by the FTC.
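The mechanism itself is tiny: a browser with the preference enabled adds a single header to every request it sends, along these lines (the header is real; the request around it is illustrative):

    GET /news/article HTTP/1.1
    Host: www.example.com
    DNT: 1

Everything else depends on the ad network choosing to honor that header.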

For this system to work, though, advertisers need to play ball: they need to honor user requests not to be tracked. As it turns out, advertisers aren't actually interested in honoring users' wishes; as before, they see a tiny sliver of utility in abusing web users with the misguided notion that this somehow translates into profits. Attempts to legislate conformance were made several years ago, but these never really got very far.

So what can you do? The balance of power seems so far out of whack that consumers have little choice but to sit back and take it.

You could, of course, run one of any number of ad blockers -- Adblock Plus is quite popular -- but this is a somewhat nuclear option. You're throwing out the slim selection of good players with the bad ones; and, let's face it, someone's gotta provide money to keep the lights on at your favorite website.

Even worse, many ad blockers employ techniques that consume as much (or more) memory and as much (or more) time as the trackers they're blocking -- and Adblock Plus is one of the worst offenders. They'll stop you from seeing the ads, but at the expense of slowing down everything you do on the web.

What you can do

When people ask me how to fix this, I recommend a set of three tools to make their browsing experience better: Firefox Tracking Protection, Ghostery, and (optionally) Privacy Badger. (While I'm focusing on Firefox here, it's worth noting that both Ghostery and Privacy Badger are also available for Chrome.)

1. Turn on Tracking Protection

Firefox Tracking Protection is automatically activated in recent versions of Firefox whenever you enter "Private Browsing" mode, but you can also manually turn it on to run all the time. If you go to the URL bar and type in "about:config", you'll get into the advanced configuration settings for Firefox (you may have to agree to be careful before it lets you in). Search for a setting called "privacy.trackingprotection.enabled", and then double-click next to it where it says "false" to change it to "true." Once you do that, Tracking Protection will stay on regardless of whether you're in private browsing mode.
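If you prefer editing files to clicking through about:config, the same change can be made with a user.js file in your Firefox profile directory; this is a sketch using the pref name described above, applied at every startup:

    // user.js in the Firefox profile directory
    user_pref("privacy.trackingprotection.enabled", true);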

Firefox Tracking Protection uses a curated list of sites that are known to track you and known to ignore the "Do Not Track" setting. Basically, it's a list of known bad actors. And a study of web page load times determined that just turning it on improves them by a median of 44%.

2. Install and Configure Ghostery

There's also an add-on that works similar to Tracking Protection, called Ghostery. Install it from addons.mozilla.org, and then go into its configuration (type "about:addons" into your URL bar, and select the "Preferences" button next to Ghostery). Now, scroll down to "blocking options," near the bottom of the page. Under the "Trackers" tab, click on "select all." Then, uncheck the "widgets" category. (Widgets can be used to track you, but they also frequently provide useful functions for a web page: they're a mixed bag, but I find that their utility outweighs their cost).

Ghostery also uses a curated list, but it's far more aggressive in what it considers to be tracking. It also allows you fine-grained control over what you block, and lets you easily whitelist sites, if you find that they're not working quite right with all the potential trackers removed.

Poke around at the other options in there, too. It's really a power user's tracker blocker.

3. Optionally, Install Privacy Badger

Unlike Tracking Protection and Ghostery, Privacy Badger isn't a curated list of known trackers. Instead, it's a tool that watches what webpages do. When it sees behavior that could be used to track users across multiple sites, it blocks that behavior from ever happening again. So, instead of knowing ahead of time what to block, it learns what to block. In other words, it picks up where the other two tools leave off.

This sounds really good on paper, and does work pretty well in practice. I ran with Privacy Badger turned on for about a month, with mostly good results. Unfortunately, its "learning" can be a bit aggressive, and I found that it broke sites far more frequently than Ghostery. So there's a trade-off here: if you run Privacy Badger, you'll have much better protection against tracking, but you'll also have to be alert to the kinds of defects that it can introduce, and go turn it off when it interferes with what you're trying to do. Personally, I turned it off a few months ago, and haven't bothered to reactivate it yet; but I'll be checking back periodically to see if they've tuned their algorithms (and their yellow-list) to be more user-friendly.

If you're interested in giving it a spin, you can download Privacy Badger from the addons.mozilla.org website.

Planet MozillaMozilla, Caribou Digital Release Report Exploring the Global App Economy

Mozilla is a proud supporter of research carried out by Caribou Digital, the UK-based think tank dedicated to building sustainable digital economies in emerging markets. Today, Caribou has released a report exploring the impact of the global app economy and international trade flows in app stores. You can find it here.

The findings highlight the app economy’s unbalanced nature. While smartphones are helping connect billions more to the Web, the effects of the global app economy are not yet well understood. Key findings from the report include:

  • Most developers are located in high-income countries. The geography of where app developers are located is heavily skewed toward the economic powerhouses, with 81% of developers in high-income countries — which are also the most lucrative markets. The United States remains the dominant producer, but East Asia, fueled by China, is growing past Europe.
  • App stores are winner-take-all. The nature of the app stores leads to winner-take-all markets, which skews value capture even more heavily toward the U.S. and other top producers. Conversely, even for those lower-income countries that do have a high number of developers — e.g., India — the amount of value capture is disproportionately small relative to the number of developers participating.
  • The emerging markets are the 1% — meaning, they earn 1% of total app economy revenue. 95% of the estimated value in the app economy is captured by just 10 countries, and 69% of the value is captured by just the top three countries. Excluding China, the 19 countries considered low- or lower-income accounted for only 1% of total worldwide value.
  • Developers in low-income countries struggle to export to the global stage. About one-third of developers in the sample appeared only in their domestic market. But this inability to export to other markets was much more pronounced for developers in low-income countries, where 70% of developers were not able to export, compared to high-income countries, where only 29% of developers were not able to export. For comparison, only 3% of U.S. developers did not export.
  • U.S. developers dominate almost all markets. On average, U.S. apps have 30% of the market across the 37 markets studied, and the U.S. is the dominant producer in every market except for China, Japan, South Korea, and Taiwan.

Mozilla is proud to support Caribou Digital’s research, and the goal of working toward a more inclusive Internet, rich with opportunity for all users. Understanding the effects of the global app economy, and helping to build a more inclusive mobile Web, are key. We invite readers to read the full report here, and Caribou Digital’s blog post here.

Planet MozillaFxOS QA at FOSDEM 2016

Several members of the QA team attended FOSDEM this year and gave presentations on a variety of subjects – both the BuddyUp Pilot Project and FxOS Automation were presented. All of the FOSDEM presentations were recorded and will eventually be available online. Mozilla also had a booth, staffed by a group of community members who volunteered to answer questions. There was a VR display as well as some FxOS devices on display.

You can read more about the event here.

Pictures of the event are here.

Planet MozillaAn email conversation summary visualization

We’ve been overhauling the Firefox OS Gaia Email app and its back-end to understand email conversations.  I also created a react.js-based desktop-ish development UI, glodastrophe, that consumes the same back-end.

My first attempt at summaries for glodastrophe was the following:

old summaries; 3 message tidbits

The back-end derives a conversation summary object from all of the messages that make up the conversation whenever any message in the conversation changes.  While there are some things that are always computed (the number of messages in the conversation, whether there are any unread messages, any starred/flagged messages, etc.), the back-end also provides hooks for the front-end to provide application logic to do its own processing to meet its UI needs.

In the case of this conversation summary, the application logic finds the first 3 unread messages in the conversation and stashes their date, author, and extracted snippet (if any) in a list of “tidbits”.  This also is used to determine the height of the conversation summary in the conversation list.  (The virtual list is aware of a quantized coordinate space where each conversation summary object is between 1 and 4 units high in this case.)

While this is interesting because it’s something Thunderbird’s thread pane could not do, it’s not clear that the tidbits are an efficient use of screen real-estate.  At least not when the number of unread messages in the conversation exceeds the 3 we cap the tidbits at.

time-based thread summary visualization

But our app logic can actually do anything it wants.  It could, say, establish the threading relationship of the messages in the conversation to enable us to make a highly dubious visualization of the thread structure in the conversation as well as show the activity in the conversation over time.  Much like the visualization you already saw before you read this sentence.  We can see the rhythm of the conversation.  We can know whether this is a highly active conversation that’s still ongoing, or just one that someone has brought back to life.

Here’s the same visualization where we still use the d3 cluster layout but don’t clobber the x-position with our manual-quasi-logarithmic time-based scale:

the visualization without time-based x-positioning

Disclaimer: This visualization is ridiculously impractical in cases where a conversation has only a small number of messages.  But a neat thing is that the application logic could decide to use textual tidbits for small numbers of unread and a cool graph for larger numbers.  The graph’s vertical height could even vary based on the number of messages in the conversation.  Or the visualization could use thread-arcs if you like visualizations but want them based on actual research.

If you’re interested in the moving pieces in the implementation, they’re here:

Planet WebKitXabier Rodríguez Calvar: Web Engines Hackfest according to me

And once again, in December we celebrated the hackfest. This year it happened between Dec 7-9 at the Igalia premises, and the scope was much broader than WebKitGTK+, which is why it was renamed the Web Engines Hackfest. We wanted to gather people working on all open source web engines and we succeeded, as we had people working on WebKit, Chromium/Blink and Servo.

At the previous edition I was working with Youenn Fablet (from Canon) on the Streams API implementation in WebKit, and we spent our time on the same thing again. We have to say that things are much more mature now. During the hackfest we spent our time fixing the JavaScriptCore built-ins inside WebCore, and we advanced on the automatic importation of the specification web platform tests, which are based on our prior test implementation. Since they are now managed there, it does not make sense to maintain them inside WebKit too; we just import them. I must say that our implementation is fairly complete, since we support the current version of the spec and have almost all tests passing, including ReadableStream, WritableStream and the built-in strategy classes. What is missing now is making Streams work together with other APIs, such as Media Source Extensions, Fetch or XMLHttpRequest.

There were some talks during the hackfest and we did not want to be left out, so we gave our own about Streams. You can enjoy it here:

You can see all hackfest talks in this YouTube playlist. The ones I liked most were the one by Michael Catanzaro about HTTP security, which is always interesting given the current clumsy political movements against cryptography, and the one by Dominik Röttsches about font rendering. It is really amazing what a browser has to do just to get some letters painted on the screen (and have them look good).

As usual, the environment was amazing and we had a great time, including the traditional Street Fighter match, where Gustavo found a worthy challenger in Changseok :)

Of course, I would like to thank Collabora and Igalia for sponsoring the event!

And by the way, quite shortly after that, I became a WebKit reviewer!

Planet MozillaDavid Weir: friendly with belief in team work and contribution

David Weir has been involved with Mozilla since 2009. He is from Glasgow, Scotland where he has recently graduated from Glasgow Kelvin College with skills in digital media. In his spare time, he volunteers at local organisations that aim to promote the quality of life in Glasgow’s East End community.


David is from Scotland in Europe.

Hi David! How did you discover the Web?

I used to write letters the old-fashioned way with ink and paper till I got an email address and discovered the Internet. I started going online for stuff like applying for jobs. That’s how I discovered the Web.

How did you hear about Mozilla?

I used Internet Explorer before I found out about Firefox from an advertisement on Facebook.

How and why did you start contributing to Mozilla?

I was a newbie Firefox user and I liked it. As I got to understand it better, I decided to help out other users on live chat. I became a part of SUMO. To date, I’ve answered 39 questions, written 27 documents and earned 3 badges on SUMO.

Have you contributed to any other Mozilla projects in any other way?

I am a community contributor to the QA team. I actively participate in discussions during team meetings, email threads, and IRC. I’ve recently arranged testdays for Windows 10, Windows Nightly 64-bit, Firefox for Android and Firefox for Desktop.

I contribute code to SuMoBot, an IRC bot in Mozilla’s #SuMo IRC channel.

I’m part of Firefox Friends, a team of social-sharers and word-spreaders to promote Firefox. I help run the Mozilla contributor group on Facebook, and I keep an eye out for Mozilla-related news spreading around social channels.

I am a Mozilla Rep and actively recruit Mozillians.

What’s the contribution you’re the most proud of?

I have some disability in the form of visual impairment and autism; my hand-eye co-ordination is not perfect. I help to make the web more accessible for people with disability. I look at Mozilla websites and if I find things like the text is too dark to read, I notify the developers to make fixes for better accessibility. See bugs 721518, 746251, 770248 and 775318.

You belong to the Mozilla UK community. Please tell us more about your community. Is there anything you find particularly interesting or special about it?

The Mozilla UK community consists of a small number of employees and volunteers scattered around the United Kingdom. There is a Community Space in London. Every year in November, community members help to host the Mozilla Festival. Since only a few employees work in the London office, most meetings happen online. You can find us on the #uk IRC channel. Community discussion happens on Discourse.

A recent landmark achievement for the UK community was the rollout of the en-GB locale for Mozilla’s web properties like mozillians.org, addons.mozilla.org, marketplace.firefox.com, input.mozilla.org, webmaker.org and the main Mozilla website, mozilla.org. I personally contributed to the (en-GB) localization of addons.mozilla.org. See bugs 1190535 and 1188470.

There is a Scottish community within the larger UK community that can download Mozilla products localized in Gaelic language and discuss support issues on the Gaelic language discussion forum Fòram na Gàidhlig.

What advice would you give to someone who is new and interested in contributing to Mozilla?

Mozilla is one of the most friendly communities I have ever volunteered with. The whole staff is behind you.

If you had one word or sentence to describe Mozilla, what would it be?

Lots of stuff happening – get involved!

What exciting things do you envision for you and Mozilla in the future?

A Scottish community space would be nice.


The Mozilla QA and SUMO teams would like to thank David Weir for his contributions over the past 7 years.

David has contributed to the Mozilla project for a few years now. I’ve frequently had the opportunity to interact with him through IRC. We would also get to say “hi!” to him face-to-face every so often, because he would attend our weekly team meetings through teleconferencing. I remember he initially started out attending “testdays” where he would help us test new features in Firefox. Later, his collaboration evolved into organizing his own testdays to address issues he identified as problematic. He’s been a very enthusiastic contributor, and he’s never been shy about pointing out when and where we could be doing better, for example in terms of sharing documentation or any other information that could be helpful to other contributors. He has made a memorable impression on me and enriched my Mozilla experience, and I hope he keeps participating in the project. – Juan Carlos Becerra

Every team at Mozilla would be lucky to have a contributor like David (IRC nick satdav). He’s committed, the first to know about anything new going on in our social contributor community, and always open with ideas for how we can improve our programs. – Elizabeth Hull

Over the last few years satdav has stayed on top of many support and QA issues, often bringing new bugs that affect the user community to developer attention. That’s so helpful! He shows up to a wide range of Firefox meetings and IRC channels, and has a good idea of who to ask to get more information on a bug. Because he has a broad and general interest and is not afraid to ask questions, he also sometimes works as a cross-team communicator, letting people know what’s going on in other meetings or discussions. I think of him as one of those people who, in a science fiction future, would be in a spaceship mission control center with 20 monitors, listening on many channels at once. It has been cool to see his enthusiasm on Mozilla projects and to see his knowledge deepen! – Liz Henry

Planet MozillaTesting Google Search On Gecko With Different UA Strings

Google is serving very different versions of its services to individual browsers and devices. A bit more than one year ago, I listed some of the bugs (btw, I need to go through this list again) that Firefox was facing when accessing Google properties. Sometimes, we were not served the tier 1 experience that Chrome was receiving. Sometimes it was just completely broken.

We have an open channel of discussions with Google. Google is also not a monolithic organization: different services have different philosophies with regard to fixing bugs or Web compatibility issues. The good news is that it is improving.

Three Small Important Things About Google Today

  1. mike was looking for usage of the -webkit-mask-* CSS properties on the Web. I was about to reply "Google Search!", which was serving that property to the Chrome browser, but decided to look at the bug again. They were using -webkit-mask-image. To my big surprise, they have switched to an SVG icon. Wonderful!
  2. So it was time for me to test Google Search on Gecko one more time, with both the Firefox Android UA and the Chrome UA. See below.
  3. Tantek started some discussion in the CSS Working Group about Web compatibility issues, including one about getting the members of the CSS Working Group to fix their Web properties.

Testing Google Search on Gecko and Blink

For each test, the first two screenshots are taken on the mobile device itself (Chrome, then Firefox). The third screenshot shows the same site requested with a Chrome user agent string but displayed by Gecko on desktop. Basically, the third one tests whether, if Google served Firefox on Android the same version it serves to Chrome, it would work.
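If you want to reproduce this kind of test yourself, Gecko lets you override the UA string it sends with a single preference. A sketch: the pref name is real, while the Chrome-for-Android UA value below is only an illustrative example and should be replaced with the string of the browser you want to impersonate:

    // user.js: make Gecko send a Chrome-for-Android UA string (example value)
    user_pref("general.useragent.override",
        "Mozilla/5.0 (Linux; Android 5.0; Nexus 5 Build/LRX21O) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.95 Mobile Safari/537.36");

Remove the pref to go back to the browser’s default UA string.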

Home page

We reached the home page of Google.

home page

Home page - search term results

We typed the word "Japan".

home page with search term

Home page - scrolling down

We scrolled down a bit.

scrolling down the page

Home page - bottom

We reached the bottom of the page.

Bottom of the page

Google Images with the search term

We go back to the top of the page and tap on Images menu.

Accessing Google image

Google Images when image has been tapped

We tap on the first image.

focus on the image

Conclusion

We are not there yet. The issue is complex because of the large number of versions served to different browsers and devices, but there is definite progress. At first sight, the version sent to Chrome is compatible with Firefox. We would still need to test while logged in, and to cover all the corner cases of the tools and menus. But it's a lot, lot better than it used to be in the past. We have never been this close to an acceptable user experience.

Planet MozillaMy HTTP/2 slide updates

I did my first HTTP/2 talk of the year for OWASP Stockholm on January 27th, and I subsequently updated my public slide set:

On slideshare here: Http2
I then did a shorter talk at FOSDEM 2016 on January 30th that I called “an HTTP/2 update”. In 25 rushed minutes I presented these slides:

Planet MozillaThis Week in Rust 116

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42, brson, and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Project Updates

Updates from Rust Core

114 pull requests were merged in the last week.

See the triage digest and subteam reports for more details.

Notable changes

New Contributors

  • Ali Clark
  • Daan Sprenkels
  • ggomez
  • tgor
  • Thomas Wickham
  • Tomasz Miąsko
  • Vincent Esche

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Crate of the Week

This week's Crate of the Week is herbie-lint, a miraculous compiler plugin to check the numerical stability of floating-point operations in the code. Another reason to have a nightly Rust handy.

Thanks to redditor protestor for the suggestion.

Submit your suggestions for next week!

Quote of the Week

imo: the opinionated version of mio

durka42 on #rust

Thanks to Steve Klabnik for the suggestion.

Submit your quotes for next week!

Planet Mozillahaxx.se, HTTPS and h2

I previously mentioned my slow-moving plan to get all my sites and servers onto HTTPS and HTTP/2. As of now, I’ve started to activate HTTPS for sites that run on our server and that I admin. First out in the list of sites are this host (daniel.haxx.se) and the curl web site (curl.haxx.se). There are plenty more to set up, but the plan is to have the most important ones on HTTPS really soon.

If you experience problems with any of these, let me know. The long-term plan involves going HTTPS-only for all of them.
