Planet Mozilla: Intel CPU Bug Affecting rr Watchpoints

I investigated an rr bug report and discovered an annoying Intel CPU bug that affects rr replay using data watchpoints. It doesn't seem to be hit very often in practice, which is good because I don't know any way to work around it. It turns out that the bug is probably covered by an existing Intel erratum for Skylake and Kaby Lake (and probably later generations, but I'm not sure), which I even blogged about previously! However, the erratum does not mention watchpoints and the bug I've found definitely depends on data watchpoints being set.

I was able to write a stand-alone testcase to characterize the bug. The issue seems to be that if a rep stos (and probably rep movs) instruction writes between 1 and 64 bytes (inclusive), and you have a read or write watchpoint in the range [64, 128) bytes from the start of the writes (i.e., not triggered by the instruction), then one spurious retired conditional branch is (usually) counted. The alignment of the writes does not matter, and it's not related to speculative execution.

If you find rr failing during replay with watchpoints set, and the failures go away if you remove the watchpoints, it could well be this bug. Broadwell and earlier don't seem to have the bug.

A possible workaround would be to disable "fast-string optimization" in the kernel at boot time. I don't think there's any way for users to force this to happen in Linux, currently, but someone could write a kernel patch adding a command-line option for that and send it upstream. It would be great if they did!

Fortunately this sort of bug does not affect Pernosco.

Planet Mozilla: Reps Weekly Meeting, 24 May 2018

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet Mozilla: These Weeks in Firefox: Issue 39

Highlights

Friends of the Firefox team

Introductions

  • Kanika Saini started an Outreachy internship last week, working on Enterprise Policies
  • Trisha Gupta is working on improving certificate error pages as an Outreachy intern!

Resolved bugs (excluding employees)

Project Updates

Add-ons / Web Extensions

Activity Stream

Browser Architecture

  • 3 XBL bindings have been converted to Custom Elements (stringbundle, deck, dropmarker).
  • Proof of concept implementing XULStore on top of rkv is posted.
  • Improving support for top-level HTML windows (meta).
    • Width/height/position is remembered with the [persist] attr on the <html> tag.
    • Looking into supporting context menus next.

Lint

Fluent

Performance

Policy Engine

  • First new policy checked in by Outreachy intern 🎉🔥

Privacy/Security

Search and Navigation

Address Bar & Search
Places

Sync / Firefox Accounts

  • Welcome Carol Ng, who’s joining the Sync team for the summer! 🎉
  • Mark is adding a new dialog that gives folks an option to wipe local data when they disconnect Sync. 🙈
  • The team is working on a Rust adapter for the Sync server, including a Rust client for FxA and an iOS demo app, to bridge the gap between the existing Sync and the next version of Sync that’s powered by Mentat. 🦀
    • Thom has basic record storage working now. 📦
    • Ed continues to plug away on the Rust FxA client. The demo app can authenticate with FxA and get an OAuth token. 🔐
    • Mark refactored authentication with the token server, and is starting on a telemetry adapter. 📓
    • Kit ported the Sync auth state machine from iOS. 🛠

Test Pilot

  • Mobile news:
    • Android prototypes in progress for Send and Notes experiments
    • Support for mobile experiments coming soon to the Test Pilot website
  • Upcoming experiments
  • Upcoming Shield studies:
    • Cloud Storage: Dropbox / Google Drive integration into Download menu
      • Waiting on StudyUtils v5 and final QA
    • Min Vid: picture-in-picture video viewer
      • Shield study cancelled due to performance concerns
      • Moving forward with a PRD for implementation in Firefox
  • Screenshots updates:
    • Work continues on Chrome webextension prototype, annotation tools
    • Android prototype just getting underway, demo coming at work week

Web Payments

Planet Mozilla: Letting someone ssh into your laptop using Pagekite

In order to investigate a bug I was running into, I recently had to give my colleague ssh access to my laptop behind a firewall. The easiest way I found to do this was to create an account for him on my laptop, and set up a pagekite frontend on my personal server and a pagekite backend on my laptop.

Frontend setup

Setting up my server in order to make the ssh service accessible and proxy the traffic to my laptop was fairly straightforward.

First, I had to install the pagekite package (already in Debian and Ubuntu) and open up a port on my firewall by adding the following to both /etc/network/iptables.up.rules and /etc/network/ip6tables.up.rules:

-A INPUT -p tcp --dport 10022 -j ACCEPT

Then I created a new CNAME for my server in DNS:

pagekite.fmarier.org.   3600    IN  CNAME   fmarier.org.

With that in place, I started the pagekite frontend using this command:

pagekite --clean --isfrontend --rawports=virtual --ports=10022 --domain=raw:pagekite.fmarier.org:Password1

Backend setup

After installing the pagekite and openssh-server packages on my laptop and creating a new user account:

adduser roc

I used this command to connect my laptop to the pagekite frontend:

pagekite --clean --frontend=pagekite.fmarier.org:10022 --service_on=raw/22:pagekite.fmarier.org:localhost:22:Password1

Client setup

Finally, my colleague needed to add the following entry to ~/.ssh/config:

Host pagekite.fmarier.org
  CheckHostIP no
  ProxyCommand /bin/nc -X connect -x %h:10022 %h %p

and install the netcat-openbsd package since other versions of netcat don't work.

On Fedora, we used netcat-openbsd-1.89 successfully, but this newer package may also work.

He was then able to ssh into my laptop via ssh roc@pagekite.fmarier.org.

Making settings permanent

I was initially quite happy setting things up temporarily on the command-line, but it's also possible to persist these settings and to make both the pagekite frontend and backend start up automatically at boot.

I ended up putting the following in /etc/pagekite.d/20_frontends.rc on my server:

#defaults

isfrontend
rawports=virtual
ports=10022
domain=raw:pagekite.fmarier.org:Password1

as well as removing the following lines from /etc/pagekite.d/10_account.rc:

# Delete this line!
abort_not_configured

before restarting the pagekite daemon using:

service pagekite restart

Planet Mozilla: Spectre Number 4, STEP RIGHT UP!

Updated based on IBM's documentation.

The continuing saga of Meltdown and Spectre (tl;dr: G4/7400, G3 and likely earlier 60x PowerPCs don't seem vulnerable at all; G4/7450 and G5 are so far affected by Spectre while Meltdown has not been confirmed, but IBM documentation implies "big" POWER4 and up are vulnerable to both) now brings us Spectre variant 4. In this variant, the fundamental issue -- getting the CPU to speculatively execute code it mistakenly predicts will be executed and observing the effects on cache timing -- is still present, but here the trick has to do with executing a downstream memory load operation speculatively before other store operations that the CPU (wrongly) believes the load does not depend on. The processor will faithfully revert the stores and the register load when the mispredict is discovered, but the loaded address will remain in the L1 cache and be observable through means similar to those in other Spectre-type attacks.

The G5, POWER4 and up are so aggressively out of order with memory accesses that they are almost certainly vulnerable. In an earlier version of this post, I didn't think the G3 and 7400 were vulnerable (as they don't appear to be to other Spectre variants), but after some poring over IBM's technical documentation I now believe with some careful coding it could be possible -- just not very probable. The details have to do with the G3 (and 7400)'s Load-Store Unit, or LSU, which is responsible for reading and writing memory. Unless a synchronizing instruction intervenes, up to one load instruction can execute ahead of a store, which makes the attack theoretically possible. However, the G3 and 7400 cannot reorder multiple stores in this fashion, and because only a maximum of two instructions may be dispatched to the LSU at any time (in practice less since those two instructions are spread across all of the processor's execution units), the victim load and the confounding store must be located immediately together or have no LSU-issued instructions between them. Even then, reliably ensuring that both instructions get dispatched in such a way that the CPU will reorder them in the (attacker-)desired order wouldn't be trivial.

The 7450, as with other Spectre variants, makes the attack a bit easier. It can dispatch up to four instructions to its execution units, which makes the attack more feasible because there is more theoretical flexibility on where the victim load can be located downstream (especially if all four instructions go to its LSU). However, it too can execute at most just one load instruction ahead of a store, and it cannot reorder stores either.

That said, as a practical matter, Spectre in any variant (including this one) is only a viable attack vector on Power Macs through native applications, which have far more effective methods of pwning your Power Mac at their disposal than an intermittently successful attempt to read memory. Although TenFourFox has a JavaScript JIT, no 7450 and probably not even the Quad is fast enough to obtain enough of a memory timing delta to make the attack functional (let alone reliable), and we disabled the high-resolution timers necessary for the exploit "way back" in FPR5 anyway. The new variant 4 is a bigger issue for Talos II owners like myself because such an attack is possible and feasible on the POWER9, but we can confidently expect that there will be patches from IBM and Raptor to address it soon.

Planet Mozilla: Ready for GDPR: Firefox Focus Offers Additional Tracking Protection Against Advertisers

It’s been nearly a year since we launched Firefox Focus for Android, and it has become one of the most popular privacy browsers for mobile around the world. In light of recent events, more and more consumers have a growing awareness of privacy and secure products. The upcoming implementation of the General Data Protection Regulation (GDPR) in Europe later this month reflects this and, at the same time, highlights how important privacy is for all users.

At Mozilla, we’ve always valued people’s privacy and given them the opportunity to determine the data they want to share. Last year we updated our Privacy Notice to make it simple, clear and usable, and we’ve been transparent about how we collect user data. We feel well prepared for GDPR coming into effect and Firefox Focus is one of the best examples of why: This mobile browser has been ahead of its time and is well positioned as the go-to mobile product in the Age of GDPR. Now, we’re making it even more private and convenient.

Less tracking for more privacy

Until now, Firefox Focus has blocked all first party trackers from sites that are commonly known to follow users from site to site, also known as “cross-site tracking.” These sites set “cookies,” small data files stored by your browser that help publishers collect data to personalize your experiences with them. Firefox Focus blocks the first party trackers on the Disconnect list. Today, we are announcing a cookie management feature that also gives you control over the source of the trackers that are following you. Users can now block the cookies that expose their activity on a site to other sites (third party), block cookies from all sites, or block none at all if they choose; advertisers, for example, use third party cookies to track your visits to various websites. You can find this under Settings, Privacy & Security, “Cookies and Site Data” to make your selection. There is a small chance that it might not work on some sites, so we’re giving users the choice to turn it on or off.

 

Once you click on “Block Cookies,” a menu will pop up with options to choose among the different types of cookies.

 

Autocomplete is Complete

In our previous release, we included the ability to add favorite sites to an autocomplete list by adding them manually under Settings. We’ve noticed that this might not be the quickest setup for some users. Starting today, our users will be able to conveniently and easily long-press the URL bar to select a site to add to their URL Autocomplete list. Now adding your frequently visited sites is even easier and will get you to where you want to go even faster.

The latest version of Firefox Focus for Android can be downloaded on Google Play.

 

The post Ready for GDPR: Firefox Focus Offers Additional Tracking Protection Against Advertisers appeared first on The Mozilla Blog.

Planet Mozilla: The General Data Protection Regulation and Firefox

We are only a few days away from May 25th, when the European General Data Protection Regulation (GDPR) will go into full effect. Since we were founded, Mozilla has always stood for and practiced a set of data privacy principles that are at the heart of privacy laws like the GDPR. And we have applied those principles, not just to Europe, but to all our users worldwide.  We feel like the rest of the world is catching up to where we have been all along.

GDPR has implications for many different parts of Mozilla. Rather than give you a laundry list of GDPR stuff, in this post, we want to focus specifically on Firefox and drill down specifically into how we think about privacy-by-design and data protection impact assessments within our browser product.

Privacy By People Who Care About Privacy

Firefox, the web browser that runs on your device, is your gateway to the internet. Your browser will manage a lot of information about the websites you visit, but that information stays on your device. Mozilla, the company that makes Firefox, doesn’t collect it unless you give us permission.

Mozilla does collect a set of data that helps us to understand how people use Firefox. We’ve purposely designed our data collection with privacy protections in mind. So while the browser knows so much about you, Mozilla still knows very little.

Building a browser that is so powerful yet still respectful of our users takes a lot of effort. At Mozilla, we have teams of privacy and security engineers who are responsible for building a trustworthy browser. More than that, we have a workforce and a volunteer community that takes Mozilla’s responsibility to protect you seriously and personally. This responsibility cuts across all areas of Mozilla, including our security engineers, platform and data engineers, data scientists, product managers, marketing managers and so on. We basically have an army of people who have your back.

Rather than Privacy By Design, we do Privacy By People Who Care About Privacy.

It is important to keep this in mind when we think about the GDPR’s privacy-by-design requirements. Regardless of any regulatory requirement, including GDPR, if an organization and its people aren’t rooted in a commitment to privacy, any privacy-by-design process will fail.  It is our people’s commitment to the Mozilla mission that undergirds our design processes and serves as the most important backstop for protecting our users.

Our Process

Okay, enough throat clearing. At Mozilla, we do have plenty of design processes to identify and deeply engage with privacy risks: code reviews, security and privacy reviews, intensive product and infrastructure audits, and public forums for anyone to contribute concerns and solutions.

Our Firefox data collection review process is the cornerstone of our effort to meaningfully practice privacy-by-design and assess privacy impacts to our users. We believe it is consistent with the GDPR’s requirements for privacy impact assessments. Mozilla has had this process in place for several years and revamped it in 2017.

Here are a few key pieces of that process:

  1. Before we look at any privacy risk, we need to know there is a valid analytic basis for the data collection. That is why our review process starts with a few simple questions about why Mozilla needs to collect the data, how much data is necessary, and what specific measurements will be taken. Mozilla employees who propose additional data collection must first answer these questions on our review form.
  2. Second, our Data Stewards – designated individuals on our Firefox team – will review the answers, ensure there is public documentation for data collection, and make sure users can turn data collection on and off.
  3. Third, we categorize data collection by different levels of privacy risk, which you can find in more detail here. The data category for the proposed collection must be identified as part of the review. For proposals to collect data in higher risk categories, the data collection must be default off.
  4. Complex data collection requests, such as those to collect more sensitive data or those that call for a new data collection mechanism, will escalate from our Data Stewards to our Trust and Legal teams. Further privacy, policy, or legal analysis will then be done to assess privacy impact and identify appropriate mitigations.

The results of this review process, as well as in depth descriptions of our data categories and the process itself, can be found publicly on the web. And you can find the full documentation for Firefox data collection here.

But Wait, There’s More!

This process is just one of the many tools we have to protect and empower the people who use our products.  Last year, we completely rewrote our privacy notice to provide clear, simple language about the browser. The notice includes links directly to our Firefox privacy settings page, so users can turn off data collection if they read something on the notice they don’t like.

We redesigned those privacy settings to make them easier to use (check out about:preferences#privacy in the Firefox Browser). This page serves as a one-stop shop for anyone looking to take control of their privacy in Firefox. And we revamped Firefox onboarding by showing new users the Firefox privacy notice right on the second tab the very first time they use the browser.

It’s easier today than ever before to take control of your privacy in the Firefox browser. As you can see, limited data, transparency, choice – all GDPR principles – are deeply embedded in how all of us at Mozilla think about and design privacy for you.

The post The General Data Protection Regulation and Firefox appeared first on The Mozilla Blog.

Planet Mozilla: The Joy of Coding - Episode 140

mconley livehacks on real Firefox bugs while thinking aloud.

Planet Mozilla: Speaker Series: Tracking In The Open with Arvind Narayanan

This event is being streamed live on the new AirMozilla: https://onlinexperiences.com/Launch/Event.htm?ShowKey=44908&DisplayItem=E242198 and on YouTube: https://www.youtube.com/watch?v=EZg1vIpno6I. Webcasts will be migrated from the legacy AirMozilla platform (this one)...

Planet Mozilla: Tags are now available in Pontoon to help you prioritize your work

Almost a couple of years ago I started working on a concept called string tiers. The goal was twofold: on one hand, to help locales, especially those starting from scratch, prioritize their work on a project as large as Firefox, which currently has over 11 thousand strings; on the other, to give project managers a better understanding of the current status of localization.

Given the growth in complexity and update frequency of Developer Tools within Firefox (currently almost 2,600 strings), finding a solution to this problem became more urgent. For example, is a locale in bad shape because it’s missing thousands of strings? The answer would not automatically be “yes”, since the missing strings might have a low priority.

The string tiers concept assigns priority to strings based on their target – who is meant to see them – and their visibility. The idea is quite simple: a string warning the user about an error, or requiring an action from them, is more important than one targeting developers or website owners, and buried in the Error Console of the browser.

If you’re interested in knowing more about string tiers, the full document is available here.

In the past few months the Pontoon team – Ryan and Matjaz in particular – has been working hard to implement the back-end for supporting string tiers. Since the system can be used for more than just priority, we decided to use a more generic term: tags. Translation resources – files in the case of Firefox, not individual strings – can be associated with one or more tags.

A dashboard is available in each localization page, for example for the German localization of Firefox:

At a glance you have access to all the information you need: the priority associated to a tag, its translation status and latest activity. You can use the progress bar to access the strings, for example to translate missing strings or review suggestions, but note that tags are also available in filters.

Pontoon’s documentation is already up to date with information about tags.

Project managers can now see the overall translation status of each tag, but also a breakdown with the status of each locale for a specific tag.

This is a brand new feature, and required a lot of code changes in Pontoon. If you find bugs, or want to add features, feel free to file a bug.

Planet Mozilla: Two-step authentication in Firefox Accounts

Starting on 5/23/2018, we are beginning a phased rollout to allow Firefox Accounts users to opt into two-step authentication. If you enable this feature, then in addition to your password, a security code will be required to log in.

We chose to implement this feature using the well-known authentication standard TOTP (Time-based One-Time Password). TOTP codes can be generated using a variety of authenticator applications. For example, Google Authenticator, Duo and Authy all support generating TOTP codes.
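
For the curious, here is a minimal sketch of what such an authenticator app computes under the hood, following RFC 6238 (TOTP) and RFC 4226 (HOTP). It assumes the shared secret has already been base32-decoded into a Node.js Buffer and only covers the default SHA-1, 6-digit, 30-second case; it is an illustration, not the code Firefox Accounts runs.

const crypto = require('crypto');

// Sketch of TOTP: HMAC the current 30-second time step with the shared
// secret, then apply RFC 4226 dynamic truncation to get a 6-digit code.
function totp(secret /* Buffer: the base32-decoded shared secret */) {
  const step = Math.floor(Date.now() / 1000 / 30);

  // Pack the step counter into an 8-byte big-endian buffer.
  const counter = Buffer.alloc(8);
  counter.writeUInt32BE(Math.floor(step / 0x100000000), 0);
  counter.writeUInt32BE(step >>> 0, 4);

  const hmac = crypto.createHmac('sha1', secret).update(counter).digest();

  // Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
  const offset = hmac[hmac.length - 1] & 0x0f;
  const binary = ((hmac[offset] & 0x7f) << 24) |
                 (hmac[offset + 1] << 16) |
                 (hmac[offset + 2] << 8) |
                 hmac[offset + 3];

  return String(binary % 1000000).padStart(6, '0');
}

The server performs the same computation on its copy of the secret and accepts the code if it matches the current (or a nearby) time step.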

Additionally, we added support for single-use recovery codes in the event you lose access to the TOTP application. It is recommended that you save your recovery codes in a safe spot, since they can be used to bypass TOTP.

To enable two-step authentication, go to your Firefox Accounts preferences and click “Enable” on the “Two-step authentication” panel.

Note: If you do not see the Two-step authentication panel, you can manually enable it by following these instructions.

Using one of the authenticator applications, scan the QR code and then enter the security code it displays. Doing this will confirm your device, enable TOTP and show your recovery codes.

Note: After setup, make sure you download and save your recovery codes in a safe location! You will not be able to see them again, unless you generate new ones.

Once two-step authentication is enabled, every login will require a security code from your TOTP device.

Thanks to everyone that helped to work on this feature including UX designers, engineers, quality assurance and security teams!

Planet Mozilla: Progressive Web Games

With the recent release of the Progressive Web Apps core guides on MDN, it’s easier than ever to make your website look and feel as responsive as native on mobile devices. But how about games?

In this article, we’ll explore the concept of Progressive Web Games to see if the concept is practical and viable in a modern web development environment, using PWA features built with Web APIs.

Let’s look at the Enclave Phaser Template (EPT) — a free, open-source mobile boilerplate for HTML5 games that I created using the Phaser game engine. I’m using it myself to build all my Enclave Games projects.

The template was recently upgraded with some PWA features: Service Workers provide the ability to cache and serve the game when offline, and a manifest file allows it to be installed on the home screen. We also provide access to notifications, and much more. These PWA features are already built-in into the template, so you can focus on developing the game itself.

We will see how those features can solve problems developers have today: adding to home screen and working offline. The third part of this article will introduce the concept of progressive loading.

Add to Home screen

Web games can show their full potential on mobile, especially if we create some features and shortcuts for developers. The Add to Home screen feature makes it easier to build games that can compete with native games for screen placement and act as first class citizens on mobile devices.

Progressive Web Apps can be installed on modern devices with the help of this feature. You enable it by including a manifest file — the icon, modals and install banners are created based on the information from ept.webmanifest:

{
  "name": "Enclave Phaser Template",
  "short_name": "EPT",
  "description": "Mobile template for HTML5 games created using the Phaser game engine.",
  "icons": [
    {
      "src": "img/icons/icon-32.png",
      "sizes": "32x32",
      "type": "image/png"
    },
    // ...
    {
      "src": "img/icons/icon-512.png",
      "sizes": "512x512",
      "type": "image/png"
    }
  ],
  "start_url": "/index.html",
  "display": "fullscreen",
  "theme_color": "#DECCCC",
  "background_color": "#CCCCCC"
}

It’s not the only requirement though — be sure to check the Add to Home Screen article for all the details.
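
Beyond the manifest itself, browsers that support installation fire a beforeinstallprompt event that lets a game choose when to offer installation. A small sketch follows; it is not part of the template, browser support for the event varies, and the onInstallButtonClicked hook is just an assumed place to wire it into your own UI:

// Capture the install prompt so it can be shown later, e.g. from an
// "Install" button in the game's main menu.
var deferredPrompt = null;

window.addEventListener('beforeinstallprompt', function(e) {
  e.preventDefault();  // stop the automatic banner
  deferredPrompt = e;  // stash the event to trigger it on demand
});

function onInstallButtonClicked() {  // hypothetical UI hook
  if (!deferredPrompt) {
    return;
  }
  deferredPrompt.prompt();  // show the browser's install dialog
  deferredPrompt.userChoice.then(function(choice) {
    console.log('Install prompt outcome:', choice.outcome);
    deferredPrompt = null;
  });
}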

Offline capabilities

Developers often have issues getting desktop games (or mobile-friendly games showcased on a PC with a monitor) to work offline. This is especially challenging when demoing a game at a conference with unreliable wifi! Best practice is to plan ahead and have all the files of the game available locally, so that you can launch them offline.

Screenshot of Hungry Fridge web game on the desktop

Offline builds can be tricky, as you’ll have to manage the files yourself: remember which versions you have, whether you’ve applied the latest patch or fixed that bug from previous conferences, work out the hardware setup, etc. This takes time and extra preparation.

Web games are easier to handle online when you have reliable connectivity: You point the browser to a URL and you have the latest version of your game running in no time. The network connection is the problem. It would be nice to have an offline solution.

Image of HTML5 Games demo station with no offline gameplay

The good news is that Progressive Web Apps can help — Service Workers cache and serve assets offline, so an unstable network connection is not the problem it used to be.

The Service Worker file in the Enclave Phaser Template contains everything we need. It starts with the list of files to be cached:

var cacheName = 'EPT-v1';
var appShellFiles = [
  './',
  './index.html',
  // ...
  './img/overlay.png',
  './img/particle.png',
  './img/title.png'
];

Then the install event is handled, which adds all the files to the cache:

self.addEventListener('install', function(e) {
  e.waitUntil(
    caches.open(cacheName).then(function(cache) {
      return cache.addAll(appShellFiles);
    })
  );
});

Next comes the fetch event, which will serve content from the cache and add new responses to it, if needed:

self.addEventListener('fetch', function(e) {
  e.respondWith(
    caches.match(e.request).then(function(r) {
      return r || fetch(e.request).then(function(response) {
        return caches.open(cacheName).then(function(cache) {
          cache.put(e.request, response.clone());
          return response;
        });
      });
    })
  );
});

Be sure to check the Service Worker article for a detailed explanation.
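
One piece the snippets above don't show is the page-side registration that loads the Service Worker in the first place. Here is a minimal sketch, assuming the worker lives in a file named sw.js next to index.html (the actual filename in the template may differ):

// Register the Service Worker once the page has loaded; the install and
// fetch handlers shown above live inside the registered file.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', function() {
    navigator.serviceWorker.register('./sw.js').then(function(registration) {
      console.log('Service Worker registered with scope:', registration.scope);
    }).catch(function(error) {
      console.log('Service Worker registration failed:', error);
    });
  });
}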

Progressive loading

Progressive loading is an interesting concept that can provide many benefits for web game development. It is basically “lazy loading” in the background. It’s not dependent on a specific API, but it follows the PWA approach and uses several of the key features we’ve just described, focused on games and their specific requirements.

Games are heavier than apps in terms of resources — even for small and casual ones, you usually have to download 5-15 MB of assets, from images to sounds. This is supposed to be instantaneous, but you still have to wait through the loading screen while everything is downloaded. And it can be problematic if the player has a poor connection: the longer the download time, the bigger the chance that gameplay will be abandoned and the tab will be closed.

But what if instead of downloading everything, you loaded only what’s really needed first, and then downloaded the rest in the background? This way the player would see the main menu of your game way faster than with the traditional approach. They would spend at least a few seconds looking around while the files for the gameplay are retrieved in the background invisibly. And even if they clicked the play button really quickly, we could show a loading animation while everything else is loaded.

Instant Games are gaining in popularity, and a game developer building casual mobile HTML5 games should probably consider putting them on the Facebook or Google platforms. There are some requirements to meet, especially concerning the initial file size and download time, as the games are supposed to be instantly available for play.

Using the lazy loading technique, the game will feel faster than it would otherwise, given the amount of data required for it to be playable. You can achieve these results using the Phaser framework, which is pre-built to load most of the assets after the main menu resources arrive. You can also implement this yourself in pure JavaScript, using the link prefetch/defer mechanism. There’s more than one way to achieve progressive loading – it’s a powerful idea that’s independent of the specific approach or tools you choose, following the principles of Progressive Web Apps.
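
To make the two-phase idea concrete, here is a rough, framework-agnostic sketch. The asset lists, the loadImage() helper and the showMainMenu() call are all invented for illustration; Phaser's own loader or the link prefetch mechanism mentioned above can play the same role.

// Load only what the main menu needs, then prefetch the rest in the background.
var menuAssets = ['img/title.png', 'img/button.png'];
var gameplayAssets = ['img/level1.png', 'sounds/music.mp3'];

function loadImage(src) {
  return new Promise(function(resolve, reject) {
    var img = new Image();
    img.onload = function() { resolve(img); };
    img.onerror = reject;
    img.src = src;
  });
}

Promise.all(menuAssets.map(loadImage)).then(function() {
  showMainMenu(); // hypothetical: the player can start looking around now

  // Hint the browser to fetch the remaining assets while the menu is shown;
  // by the time the player hits "play", most of them are already cached.
  gameplayAssets.forEach(function(src) {
    var link = document.createElement('link');
    link.rel = 'prefetch';
    link.href = src;
    document.head.appendChild(link);
  });
});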

Conclusion

Do you have any more ideas on how to enhance the gaming experience? Feel free to play and experiment, and shape the future of web games. Drop a comment here if you’ve got ideas to share.

Planet Mozilla: Mozilla Open Leaders - Round 5 Final Demos (Open Internet Ninja Foxes)

Etherpad: http://public.etherpad-mozilla.org/p/ol5-demos-a
Meet our open leadership grads: https://medium.com/read-write-participate/meet-our-open-leadership-grads-232800db1e21
Timestamps: 3:20 - Observed City // Fiona @observedcity; 7:18 - MBac - Taking a closer look on how bacteria move!...

Planet Mozilla: This Week in Rust 235

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is Thunder, a crate for creating simple command-line programs. Thanks to Bujiraso for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

140 pull requests were merged in the last week

New Contributors

  • bstrie
  • Daniel Mueller
  • George Burton
  • Jane Lusby
  • Kyle Stachowicz
  • Mikela
  • Robin Krahl
  • SHA Miao

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

The community team is trying to improve outreach to meetup organisers. Please fill out their call for contact info if you are running or used to run a meetup.

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Planet Mozilla: OSCAL'18 Debian, Ham, SDR and GSoC activities

Over the weekend I was in Tirana, Albania, for OSCAL 2018.

Crowdfunding report

The crowdfunding campaign to buy hardware for the radio demo was successful. The gross sum received was GBP 110.00, Paypal fees were GBP 6.48, and the net amount after currency conversion was EUR 118.29. Here is a complete list of transaction IDs for transparency, so you can see that if you donated, your contribution was included in the total I have reported in this blog. Thank you to everybody who made this a success.

The funds were used to purchase an Ultracell UCG45-12 sealed lead-acid battery from Tashi in Tirana, here is the receipt. After OSCAL, the battery is being used at a joint meeting of the Prishtina hackerspace and SHRAK, the amateur radio club of Kosovo on 24 May. The battery will remain in the region to support any members of the ham community who want to visit the hackerspaces and events.

Debian and Ham radio booth

Local volunteers from Albania and Kosovo helped run a Debian and ham radio/SDR booth on Saturday, 19 May.

The antenna was erected as a folded dipole with one end joined to the Tirana Pyramid and the other end attached to the marquee sheltering the booths. We operated on the twenty meter band using an RTL-SDR dongle and upconverter for reception and a Yaesu FT-857D for transmission. An MFJ-1708 RF Sense Switch was used for automatically switching between the SDR and transceiver on PTT and an MFJ-971 ATU for tuning the antenna.

I successfully made contact with 9A1D, a station in Croatia. Enkelena Haxhiu, one of our GSoC students, made contact with Z68AA in her own country, Kosovo.

Anybody hoping that Albania was a suitably remote place to hide from media coverage of the British royal wedding would have been disappointed as we tuned in to GR9RW from London and tried unsuccessfully to make contact with them. Communism and royalty mix like oil and water: if a deceased dictator was already feeling bruised about an antenna on his pyramid, he would probably enjoy water torture more than a radio transmission celebrating one of the world's most successful hereditary monarchies.

A versatile venue and the dictator's revenge

It isn't hard to imagine communist dictator Enver Hoxha turning in his grave at the thought of his pyramid being used for an antenna for communication that would have attracted severe punishment under his totalitarian regime. Perhaps Hoxha had imagined the possibility that people may gather freely in the streets: as the sun moved overhead, the glass facade above the entrance to the pyramid reflected the sun under the shelter of the marquees, giving everybody a tan, a low-key version of a solar death ray from a sci-fi movie. Must remember to wear sunscreen for my next showdown with a dictator.

The security guard stationed at the pyramid for the day was kept busy chasing away children and more than a few adults who kept arriving to climb the pyramid and slide down the side.

Meeting with Debian's Google Summer of Code students

Debian has three Google Summer of Code students in Kosovo this year. Two of them, Enkelena and Diellza, were able to attend OSCAL. Albania is one of the few countries they can visit easily and OSCAL deserves special commendation for the fact that it brings otherwise isolated citizens of Kosovo into contact with an increasingly large delegation of foreign visitors who come back year after year.

We had some brief discussions about how their projects are starting and things we can do together during my visit to Kosovo.

Workshops and talks

On Sunday, 20 May, I ran a workshop Introduction to Debian and a workshop on Free and open source accounting. At the end of the day Enkelena Haxhiu and I presented the final talk in the Pyramid, Death by a thousand chats, looking at how free software gives us a unique opportunity to disable a lot of unhealthy notifications by default.

Planet Mozilla: PSA: stop using mozilla::PodZero and mozilla::PodArrayZero

I’ve blogged about surprising bits of the C++ object model before, and I’m back with more.

Executive summary: Don’t use mozilla::PodZero or mozilla::PodArrayZero. Modern C++ provides better alternatives that don’t presume that writing all zeroes will always correctly initialize the given type. Use constructors, in-class member initializers, and functions like std::fill to zero member fields.

The briefest recap of salient parts of the C++ object model

C++ as a language really wants to know when objects are created so that compilers can know that this memory contains an object of this type. Compilers then can assume that writing an object of one type won't conflict with reads/writes of incompatible types.

double foo(double* d, int* i, int z)
{
  *d = 3.14;

  // int/double are incompatible, so this write may be
  // assumed not to change the value of *d.
  *i = z;

  // Therefore *d may be assumed to still be 3.14, so this
  // may be compiled as 3.14 * z without rereading *d.
  return *d * z;
}

You can’t use arbitrary memory as your desired type after a cast. An object of that type must have been explicitly created there: e.g. a local variable of that type must be declared there, a field of that type must be defined and the containing object created, the object must be created via new, &c.

Misinterpreting an object using an incompatible type violates the strict aliasing rules in [basic.lval]p11.

memsetting an object

memset lets you write characters over memory. C code routinely used this to fill an array or struct with zeroes or null pointers or similar, assuming all-zeroes writes the right value.

C++ code also sometimes uses memset to zero out an object, either after allocating its memory or in the constructor. This doesn’t create a T (you’d need to placement-new), but it often still “works”. But what if T changes to require initialization? Maybe a field in T gains a constructor (T might never be touched!) or a nonzero initializer, making T a non-trivial type. memset could hide that fresh initialization requirement or (depending when the memset happens) overwrite a necessary initialization.

Problem intensifies

Unfortunately, Mozilla code has provided and promoted a PodZero function that misuses memset this way. So when I built with gcc 8.0 recently (I usually use a home-built clang), I discovered a torrent of build warnings about memset misuse on non-trivial types. A taste:

In file included from /home/jwalden/moz/after/js/src/jit/BitSet.h:12,
                 from /home/jwalden/moz/after/js/src/jit/Safepoints.h:10,
                 from /home/jwalden/moz/after/js/src/jit/JitFrames.h:13,
                 from /home/jwalden/moz/after/js/src/jit/BaselineFrame.h:10,
                 from /home/jwalden/moz/after/js/src/vm/Stack-inl.h:15,
                 from /home/jwalden/moz/after/js/src/vm/Debugger-inl.h:12,
                 from /home/jwalden/moz/after/js/src/vm/DebuggerMemory.cpp:29,
                 from /home/jwalden/moz/after/js/src/dbg/js/src/Unified_cpp_js_src32.cpp:2:
/home/jwalden/moz/after/js/src/jit/JitAllocPolicy.h: In instantiation of ‘T* js::jit::JitAllocPolicy::maybe_pod_calloc(size_t) [with T = js::detail::HashTableEntry<js::HashMapEntry<JS::Value, unsigned int> >; size_t = long unsigned int]’:
/home/jwalden/moz/after/js/src/dbg/dist/include/js/HashTable.h:1293:63:   required from ‘static js::detail::HashTable<T, HashPolicy, AllocPolicy>::Entry* js::detail::HashTable<T, HashPolicy, AllocPolicy>::createTable(AllocPolicy&, uint32_t, js::detail::HashTable<T, HashPolicy, AllocPolicy>::FailureBehavior) [with T = js::HashMapEntry<JS::Value, unsigned int>; HashPolicy = js::HashMap<JS::Value, unsigned int, js::jit::LIRGraph::ValueHasher, js::jit::JitAllocPolicy>::MapHashPolicy; AllocPolicy = js::jit::JitAllocPolicy; js::detail::HashTable<T, HashPolicy, AllocPolicy>::Entry = js::detail::HashTableEntry<js::HashMapEntry<JS::Value, unsigned int> >; uint32_t = unsigned int]’
/home/jwalden/moz/after/js/src/dbg/dist/include/js/HashTable.h:1361:28:   required from ‘bool js::detail::HashTable<T, HashPolicy, AllocPolicy>::init(uint32_t) [with T = js::HashMapEntry<JS::Value, unsigned int>; HashPolicy = js::HashMap<JS::Value, unsigned int, js::jit::LIRGraph::ValueHasher, js::jit::JitAllocPolicy>::MapHashPolicy; AllocPolicy = js::jit::JitAllocPolicy; uint32_t = unsigned int]’
/home/jwalden/moz/after/js/src/dbg/dist/include/js/HashTable.h:92:69:   required from ‘bool js::HashMap<Key, Value, HashPolicy, AllocPolicy>::init(uint32_t) [with Key = JS::Value; Value = unsigned int; HashPolicy = js::jit::LIRGraph::ValueHasher; AllocPolicy = js::jit::JitAllocPolicy; uint32_t = unsigned int]’
/home/jwalden/moz/after/js/src/jit/LIR.h:1901:38:   required from here
/home/jwalden/moz/after/js/src/jit/JitAllocPolicy.h:101:19: warning: ‘void* memset(void*, int, size_t)’ clearing an object of type ‘class js::detail::HashTableEntry<js::HashMapEntry<JS::Value, unsigned int> >’ with no trivial copy-assignment [-Wclass-memaccess]
             memset(p, 0, numElems * sizeof(T));
             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/jwalden/moz/after/js/src/dbg/dist/include/js/TracingAPI.h:11,
                 from /home/jwalden/moz/after/js/src/dbg/dist/include/js/GCPolicyAPI.h:47,
                 from /home/jwalden/moz/after/js/src/dbg/dist/include/js/RootingAPI.h:22,
                 from /home/jwalden/moz/after/js/src/dbg/dist/include/js/CallArgs.h:73,
                 from /home/jwalden/moz/after/js/src/jsapi.h:29,
                 from /home/jwalden/moz/after/js/src/vm/DebuggerMemory.h:10,
                 from /home/jwalden/moz/after/js/src/vm/DebuggerMemory.cpp:7,
                 from /home/jwalden/moz/after/js/src/dbg/js/src/Unified_cpp_js_src32.cpp:2:
/home/jwalden/moz/after/js/src/dbg/dist/include/js/HashTable.h:794:7: note: ‘class js::detail::HashTableEntry<js::HashMapEntry<JS::Value, unsigned int> >’ declared here
 class HashTableEntry
       ^~~~~~~~~~~~~~
Unified_cpp_js_src36.o
In file included from /home/jwalden/moz/after/js/src/dbg/dist/include/js/HashTable.h:19,
                 from /home/jwalden/moz/after/js/src/dbg/dist/include/js/TracingAPI.h:11,
                 from /home/jwalden/moz/after/js/src/dbg/dist/include/js/GCPolicyAPI.h:47,
                 from /home/jwalden/moz/after/js/src/dbg/dist/include/js/RootingAPI.h:22,
                 from /home/jwalden/moz/after/js/src/dbg/dist/include/js/CallArgs.h:73,
                 from /home/jwalden/moz/after/js/src/dbg/dist/include/js/CallNonGenericMethod.h:12,
                 from /home/jwalden/moz/after/js/src/NamespaceImports.h:15,
                 from /home/jwalden/moz/after/js/src/gc/Barrier.h:10,
                 from /home/jwalden/moz/after/js/src/vm/ArgumentsObject.h:12,
                 from /home/jwalden/moz/after/js/src/vm/GeneratorObject.h:10,
                 from /home/jwalden/moz/after/js/src/vm/GeneratorObject.cpp:7,
                 from /home/jwalden/moz/after/js/src/dbg/js/src/Unified_cpp_js_src33.cpp:2:
/home/jwalden/moz/after/js/src/dbg/dist/include/mozilla/PodOperations.h: In instantiation of ‘void mozilla::PodZero(T*) [with T = js::NativeIterator]’:
/home/jwalden/moz/after/js/src/vm/Iteration.cpp:578:15:   required from here
/home/jwalden/moz/after/js/src/dbg/dist/include/mozilla/PodOperations.h:32:9: warning: ‘void* memset(void*, int, size_t)’ clearing an object of type ‘struct js::NativeIterator’ with no trivial copy-assignment; use assignment or value-initialization instead [-Wclass-memaccess]
   memset(aT, 0, sizeof(T));
   ~~~~~~^~~~~~~~~~~~~~~~~~
In file included from /home/jwalden/moz/after/js/src/vm/JSCompartment-inl.h:14,
                 from /home/jwalden/moz/after/js/src/vm/JSObject-inl.h:32,
                 from /home/jwalden/moz/after/js/src/vm/ArrayObject-inl.h:15,
                 from /home/jwalden/moz/after/js/src/vm/GeneratorObject.cpp:11,
                 from /home/jwalden/moz/after/js/src/dbg/js/src/Unified_cpp_js_src33.cpp:2:
/home/jwalden/moz/after/js/src/vm/Iteration.h:32:8: note: ‘struct js::NativeIterator’ declared here
 struct NativeIterator
        ^~~~~~~~~~~~~~

Fixing the problem by not using mozilla::PodZero

Historically you'd have to add every single member-initialization to your constructor, duplicating names and risking missing one, but C++11's in-class initializers allow an elegant fix:

// Add " = nullptr" to initialize these function pointers.
struct AsmJSCacheOps
{
    OpenAsmJSCacheEntryForReadOp openEntryForRead = nullptr;
    CloseAsmJSCacheEntryForReadOp closeEntryForRead = nullptr;
    OpenAsmJSCacheEntryForWriteOp openEntryForWrite = nullptr;
    CloseAsmJSCacheEntryForWriteOp closeEntryForWrite = nullptr;
};

As long as you invoke a constructor, the members will be initialized. (Constructors can initialize a member to override in-class initializers.)

List-initialization using {} is also frequently helpful: you can use it to zero trailing (or all) members of an array or struct without naming/providing them:

class PreliminaryObjectArray
{
  public:
    static const uint32_t COUNT = 20;

  private:
    // All objects with the type which have been allocated. The pointers in
    // this array are weak.
    JSObject* objects[COUNT] = {}; // zeroes

  public:
    PreliminaryObjectArray() = default;

    // ...
};

Finally, C++ offers iterative-mutation functions to fill a container:

#include <algorithm>

// mozilla::Array's default constructor doesn't initialize array
// contents unless the element type is a class with a default
// constructor, and no Array overload exists to zero every
// element.  (You could pass 1024 zeroes, but....)
mozilla::Array<uint32_t, 1024> page; // array contents undefined

std::fill(page.begin(), page.end(), 0); // now contains zeroes
std::fill_n(page.begin(), page.end() - page.begin(), 0); // alternatively

After a long run of fixes to sundry bits of SpiderMonkey code to fix every last one of these issues last week, I’ve returned SpiderMonkey to warning-free with gcc (excluding imported ICU code). The only serious trickiness I ran into was a function of very unusual SpiderMonkey needs that shouldn’t affect code generally.

Fixing these issues is generally very doable. As people update to newer and newer gcc to build, the new -Wclass-memaccess warning that told me about these issues will bug more and more people, and I’m confident all these problems triggered by PodZero can be fixed.

mozilla::PodZero and mozilla::PodArrayZero are deprecated

PodZero and its array-zeroing variant PodArrayZero are ill-fitted to modern C++ and modern compilers. C++ now offers clean, type-safe ways to initialize memory to zeroes. You should avoid using PodZero and PodArrayZero in new code, replacing it with the initializer syntaxes mentioned above or with standard C++ algorithms to fill in zeroes.

As PodZero is used in a ton of places right now, it’ll likely stick around for some time. But there’s a good chance I’ll rename it to DeprecatedPodZero to highlight its badness and the desire to remove it. You should replace existing uses of it wherever and whenever you can.

Planet Mozilla: Redeploying Taskcluster: Hosted vs. Shipped Software

The Taskcluster team’s work on redeployability means switching from a hosted service to a shipped application.

A hosted service is one where the authors of the software are also running the main instance of that software. Examples include Github, Facebook, and Mozillians. By contrast, a shipped application is deployed multiple times by people unrelated to the software’s authors. Examples of shipped applications include Gitlab, Joomla, and the Rust toolchain. And, of course, Firefox!

Hosted Services

Operating a hosted service can be liberating. Blog posts describe the joys of continuous deployment – even deploying the service multiple times per day. Bugs can be fixed quickly, either by rolling back to a previous deployment or by deploying a fix.

Deploying new features on a hosted service is pretty easy, too. Even a complex change can be broken down into phases and accomplished without downtime. For example, changing the backend storage for a service can be accomplished by modifying the service to write to both old and new backends, mirroring existing data from old to new, switching reads to the new backend, and finally removing the code to write to the old backend. Each phase is deployed separately, with careful monitoring. If anything goes wrong, rollback to the old backend is quick and easy.
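
A concrete, if simplified, sketch of that phased approach might look like the following; the MigratingStore class and the backends' put/get methods are invented for illustration and are not Taskcluster code:

// Phases 1-2: write to both backends while existing data is mirrored across.
// Phase 3 flips reads to the new backend; phase 4 drops the old-backend write.
class MigratingStore {
  constructor(oldBackend, newBackend) {
    this.oldBackend = oldBackend;
    this.newBackend = newBackend;
  }

  async write(key, value) {
    // Writing to both keeps the new backend in sync while the mirror runs.
    await Promise.all([
      this.oldBackend.put(key, value),
      this.newBackend.put(key, value),
    ]);
  }

  async read(key) {
    // Still reading from the old backend; switching this line (once the
    // mirror completes) is its own carefully monitored deployment.
    return this.oldBackend.get(key);
  }
}

Each phase change is a separate, small deployment that can be monitored and, if necessary, rolled back on its own.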

Hosted service developers are often involved with operation of the service, and operational issues can frequently be diagnosed or even corrected with modifications to the software. For example, if a service is experiencing performance issues due to particular kinds of queries, a quick deployment to identify and reject those queries can keep the service up, followed by a patch to add caching or some other approach to improve performance.

Shipped Applications

A shipped application is sent out into the world to be used by other people. Those users may or may not use the latest version, and certainly will not update several times per day (the heroes running Firefox Nightly being a notable exception). So, many versions of the application will be running simultaneously. Some applications support automatic updates, but many users want to control when – and if – they update. For example, upgrading a website built with a CMS like Joomla is a risky operation, especially if the website has been heavily customized.

Upgrades are important both for new features and for bugfixes, including for security bugs. An instance of an application like Gitlab might require an immediate upgrade when a security issue is discovered. However, especially if the deployment is several versions old, that critical upgrade may carry a great deal of risk. Producers of shipped software sometimes provide backported fixes for just this purpose, at least for long term support (LTS) or extended support release (ESR) versions, but this has a substantial cost for the application developers.

Upgrading services like Gitlab or Joomla is made more difficult because there is lots of user data that must remain accessible after the upgrade. For major upgrades, that often requires some kind of migration as data formats and schemas change. In cases where the upgrade spans several major versions, it may be necessary to apply several migrations in order. Tools like Alembic help with this by maintaining and applying step-by-step database migrations.

Taskcluster

Today, Taskcluster is very much a hosted application. There is only one “instance” of Taskcluster in the world, at taskcluster.net. The Taskcluster team is responsible for both development and operation of the service, and also works closely with the Firefox build team as a user of the service.

We want to make Taskcluster a shipped application. As the descriptions above suggest, this is not a simple process. The following sections highlight some of the challenges we are facing.

Releases and Deployment

We currently deploy Taskcluster microservices independently. That is, when we make a change to a service like taskcluster-hooks, we deploy an upgrade to that service without modifying the other services. We often sequence these changes carefully to ensure continued compatibility: we expect only specific combinations of services to run together.

This is a far more intricate process than we can expect users to follow. Instead, we will ship Taskcluster releases comprised of a set of built Docker images and a spec file identifying those images and how they should be deployed. We will test that this particular combination of versions works well together.

Deploying a release involves combining that spec file with some deployment-specific configuration and some infrastructure information (implemented via Terraform) to produce a set of Kubernetes resources for deployment with kubectl. Kubernetes and Terraform both have limited support for migration from one release to another: Terraform will only create or modify changed resources, and Kubernetes will perform a phased roll-out of any modified resources.

By the way, all of this build-and-release functionality is implemented in the new taskcluster-installer.

Service Discovery

The string taskcluster.net appears quite frequently in the Taskcluster source code. For any other deployment, that hostname is not valid – but how will the service find the correct hostname? The question extends to determining pulse exchange names, task artifact hostnames, and so on. There are also security issues to consider: misconfiguration of URLs might enable XSS and CSRF attacks from untrusted content such as task artifacts.

The approach we are taking is to define a rootUrl from which all other URLs and service identities can be determined. Some are determined by simple transformations encapsulated in a new taskcluster-lib-urls library. Others are fetched at runtime from other services: pulse exchanges from the taskcluster-pulse service, artifact URLs from the taskcluster-queue service, and so on.

The rootUrl is a single domain, with all Taskcluster services available at sub-paths such as /api/queue. Users of the current Taskcluster installation will note that this is a change: queue is currently at https://queue.taskcluster.net, not https://taskcluster.net/queue. We have solved this issue by special-casing the rootUrl https://taskcluster.net to generate the old-style URLs. Once we have migrated all users out of the current installation, we will remove that special-case.
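
As a rough illustration of those simple transformations (not the actual taskcluster-lib-urls API, whose function names and signatures may differ), deriving a service URL from a rootUrl might look like this, including the special case for the legacy deployment:

const OLD_ROOT_URL = 'https://taskcluster.net';

// Derive a service API URL from the deployment's rootUrl.
function apiUrl(rootUrl, service, version, path) {
  if (rootUrl === OLD_ROOT_URL) {
    // Legacy deployment: per-service subdomains, e.g. https://queue.taskcluster.net/...
    return `https://${service}.taskcluster.net/${version}/${path}`;
  }
  // New-style deployment: everything under one domain at /api/<service>/...
  return `${rootUrl}/api/${service}/${version}/${path}`;
}

// Example:
// apiUrl('https://tc.example.com', 'queue', 'v1', 'task/abc123')
//   => 'https://tc.example.com/api/queue/v1/task/abc123'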

The single root domain is implemented using routing features supplied by Kubernetes Ingress resources, based on an HTTP proxy. This has the side-effect that when one microservice contacts another (for example, taskcluster-hooks calling queue.createTask), it does so via the same Ingress, a more circuitous journey than is strictly required.

Data Migrations

The first few deployments of Taskcluster will not require great support for migrations. A staging environment, for example, can be completely destroyed and re-created without any adverse impact. But we will soon need to support users upgrading Taskcluster from earlier releases with no (or at least minimal) downtime.

Our Azure tables library (azure-entities) already has rudimentary support for schema updates, so modifying the structure of table rows is not difficult, although refactoring a single table into multiple tables would be difficult.

As we transition to using Postgres instead of Azure, we will need to adopt some of the common migration tools. Ideally we can support downtime-free upgrades like azure-entities does, instead of requiring downtime to run DB migrations synchronously. Bug 1431783 tracks this work.

Customization

As a former maintainer of Buildbot, I’ve had a lot of experience with CI applications as they are used in various organizations. The surprising observation is this: every organization thinks that their approach to CI is the obvious and only way to do things; and every organization does things in a radically different way. Developers gonna develop, and any CI framework will get modified to suit the needs of each user.

Lots of Buildbot installations are heavily customized to meet local needs. That has caused a lot of Buildbot users to get “stuck” at older versions, since upgrades would conflict with the customizations. Part of this difficulty is due to a failure of the Buildbot project to provide strong guidelines for customization. Recent versions of Buildbot have done better by providing clearly documented APIs and marking other interfaces as private and subject to change.

Taskcluster already has strong APIs, so we begin a step ahead. We might consider additional guidelines:

  • Users should not customize existing services, except to make experimental changes that will eventually be merged upstream. This frees the Taskcluster team to make changes to services without concern that those will conflict with users’ modifications.

  • Users are encouraged, instead, to develop their own services, either hosted within the Taskcluster deployment as a site-specific service, or hosted externally but following Taskcluster API conventions. A local example is the tc-coalesce service, developed by the release engineering team to support Mozilla-specific task-superseding needs and hosted outside of the Taskcluster installation. On the other hand, taskcluster-stats-collector is deployed within the Firefox Taskcluster deployment, but is Firefox-specific and not part of a public Taskcluster release.

  • While a Taskcluster release will likely encompass some pre-built worker images for various cloud platforms, sophisticated worker deployment is the responsibility of individual users. That may mean deploying workers to hardware where necessary, perhaps with modifications to the build configurations or even entirely custom-built worker implementations. We will provide cloud-provisioning tools that can be used to dynamically instantiate user-specified images.

Generated Client Libraries

The second point above raises an interesting quandary: Taskcluster uses code generation to create its API client libraries. Historically, we have just pushed the “latest” client to the package repository and carefully choreographed any incompatible changes. For users who have not customized their deployment, this is not too much trouble: any release of Taskcluster will have a client library in the package repository corresponding to it. We don’t have a great way to indicate which version that is, but perhaps we will invent something.

But when Taskcluster installations are customized by adding additional services, progress is no longer linear: each user has a distinct “fork” of the Taskcluster API surface containing the locally-defined services. Development of Taskcluster components poses a similar challenge: if I add a new API method to a service, how do I call that method from another service without pushing a new library to the package repository?

The question is further complicated by the use of compiled languages. While Python and JS clients can simply load a schema reference file at runtime (for example, a file generated at deploy time), the Go and Java clients “bake in” the references at compile time.

Despite much discussion, we have yet to settle on a good solution for this issue.

Everything is Public!

Mozilla is Open by Design, and so is Taskcluster: with the exception of data that must remain private (passwords, encryption keys, and material covered by other companies’ NDAs), everything is publicly accessible. While Taskcluster does have a sophisticated and battle-tested authorization system based on scopes, most read-only API calls do not require any scopes and thus can be made with a simple, un-authenticated HTTP request.

We take advantage of the public availability of most data by passing around simple, authentication-free URLs. For example, the action specification describes downloading a decision task’s public/action.json artifact. Nowhere does it mention providing any credentials to fetch the decision task, nor to fetch the artifact itself.
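
As a minimal sketch of what that looks like in practice, the snippet below fetches a decision task’s public/action.json with no credentials at all. The taskId is a made-up placeholder, and the URL shape follows my understanding of the queue’s public artifact convention; treat both as assumptions.

  // A minimal sketch: fetch a public artifact with no Authorization header.
  // The taskId is a hypothetical placeholder.
  const taskId = 'abc123DEFexampleTask';
  const url = `https://queue.taskcluster.net/v1/task/${taskId}/artifacts/public/action.json`;

  fetch(url)
    .then(response => response.json())
    .then(actions => console.log(actions))
    .catch(err => console.error('fetch failed', err));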

This is a rather fundamental design decision, and changing it would be difficult. We might embark on that process, but we might also declare Taskcluster an open-by-design system, and require non-OSS users to invent other methods of hiding their data, such as firewalls and VPNs.

Transitioning from taskcluster.net

Firefox build, test, and release processes run at massive scale on the existing Taskcluster instance at https://taskcluster.net, along with a number of smaller Mozilla-associated projects. As we work on this “redeployability” project, we must continue to deploy from master to that service as well – the rootUrl special-case mentioned above is a critical part of this compatibility. We will not be running either new or old instances from long-living Git branches.

Some day, we will need to move all of these projects to a newly redeployed cluster and delete the old. That day is still in the distant future. It will likely involve some running of tasks in parallel to expunge any leftover references to taskcluster.net, then a planned downtime to migrate everything over (we will want to maintain task and artifact history, for example). We will likely finish up by redeploying a bunch of permanent redirects from taskcluster.net domains.

Conclusion

That’s just a short list of some of the challenges we face in transmuting a hosted service into a shipped application.

All the while, of course, we must “keep the lights on” for the existing deployment, and continue to meet Firefox’s needs. At the moment that includes a project to deploy Taskcluster workers on arm64 hardware in https://packet.net, development of the docker-engine to replace the aging docker worker, using hooks for actions to reduce the scopes afforded to level-3 users, improving taskcluster-github to support defining decision tasks, and the usual assortment of contributed pull requests, issue debugging, and service requests.

Planet MozillaFirefox 61 Beta 6 Testday Results

Hello Mozillians!

As you may already know, last Friday – May 18th – we held a new Testday event, for Firefox 61 Beta 6.

Thank you all for helping us make Mozilla a better place: gaby2300, Michal, micde, Jarrod Michell, Petri Pollanen, Thomas Brooks.

From India team: Aishwarya Narasimhan, Mohamed Bawas, Surentharan and Suren, amirthavenkat, krish.

Results:

– several test cases executed for Accessibility Inspector: Developer Tools, Audio Context using sampleRate and Web Compatibility.

– 1 bug verified: 1449200

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Planet MozillaMore Roads And Faster Browsers

In the paper, The Fundamental Law of Road Congestion: Evidence from US cities, Turner and Duranton decipher this rule:

New roads will create new drivers, resulting in the intensity of traffic staying the same.

Basically, adding more roads or more lanes usually does not improve the state of traffic into a city. By making it easier to reach the city, we just increase its capacity to become even more cluttered.

And that’s exactly what is happening with our Web pages. Browsers become more performant. But instead of using this extra performance to make pages extra-blazingly fast, we developers use it to pack in more DOM nodes, CSS animations and JavaScript-driven user experiences.

Why? Because we can. That's the sad part of it.

Otsukare!

Planet MozillaA little Talos of your very own

I haven't had as much time to work on getting QEMU and Firefox functional/useable on the Talos II over the last few days because of work complications (I'll be reporting on that in a few weeks), but Raptor has heard those of you who are still suffering sticker PTSD from the Talos II and announced the Talos II Lite.

Yes, think of it as the Mac mini G4 to the Talos II's Quad G5. This comparison is not completely inappropriate because the T2L has only one CPU socket (the T2 has two) and thus only 24 PCIe lanes, split amongst an x16 and an x8 (the T2 fully loaded has two x8s and three x16s), and "only" 8 DDR4 slots (the T2 has 16). You can still cram one of the 22-core demons into one of those, though. Starting price is "just" $1399.99, though as with the Talos II the CPU is extra ($375 for 4-core to $2575 for 22-core), the RAM is extra ($255 for 16GB to $2950 for 128GB), and the storage is extra (Microsemi SAS starts at $300 plus drives, or a Samsung 960 EVO NVMe 500GB for $350, or a four-port SATA controller for $50 plus drives). You can also add the same Radeon WX 7100 workstation card that's in the big T2 ($800), too, or just use the same onboard VGA controller that comes with the T2 (built-in). It has USB 3.0 and dual Gig Ethernet, just like the big fella, though it doesn't seem to come with a BD-ROM.

However, the mini:Quad analogy falls down when you look at the actual size of the Lite. It, too, is an EATX behemoth, despite the leaner spec. Personally I would have hoped for something a little more manageably dimensioned. Raptor is talking about offering a smaller board but that would require a redesign and this was probably an artifact of getting it launched cheap(er)ly.

So would I have saved money with my T2 going Lite? Let's price it out: $1400 for the system (includes 500W PSU and EATX case), $595 for the octocore POWER9 (my T2 has two 4-core chips), $535 for 32GB ECC DDR4 RAM, $350 for the SAS card, $800 for the AMD Radeon WX 7100, $50 for the 4-port SATA card (this came installed "free" in my T2) and $350 for the 500GB Samsung NVMe SSD. Sticker price for that configuration is $4080 plus applicable tax and shipping; I repriced the same configuration for the Talos II and got a sticker cost of $7360, about $250 more than what I paid personally (the benefit of being an early adopter), so let's say a cost difference of $3300. That's substantial and a whole lot more palatable. $4080 is actually within Quad G5 range -- I paid not much less than that for my Quad G5 back in the day with the 7800GT and 8GB of RAM. A cheap SATA DVD-RW or something wouldn't add much more to the price if you want an optical drive.

There's a small problem here though: the Lite can't actually accommodate that loadout because there's not enough PCIe slots to get it all in there. In fact, I've got another 1GB NVMe drive to install in my T2, and I'm probably going to pull the now unused Sonnet FireWire/USB PCIe card (I prefer FireWire hubs) from the G5 to install in it too, which may mean temporarily pulling the SAS card until I'm ready to populate the front bays. Also, the Talos II out of the box doesn't support PCIe bifurcation, so I really do need both those slots for my SSDs. Per Raptor it can: with changes to the machine XML definition it could be made to "trifurcate" the x16 endpoint on slot 3 (CPU 2, PHB2) into an x8 and two x4, but that would mean that the available 4-way M.2 NVMe multicards would only have at most three slots available, and the system doesn't ship that way anyhow. Besides, even if you did get bifurcation working on the Lite, you'd only have the remaining x8 for anything else which couldn't be used for an x16 workstation video card. UPDATE: Per Raptor, the Lite's x16 can't be bifurcated due to a hardware limitation, so that is only an option for the big system.

But let's say you're not a maniac like me and you want a basic "budget" config. Let's drop the workstation card and the SAS card, and drop to a 4-core with 16GB, and we have a $2430 system. Wow! Not bad! You've still got the NVMe card and storage expansion over SATA, and you've still got USB ports for audio and the onboard VGA. But you've used up all your PCIe slots, so let's hope you don't need anything else to go internal (let alone 3D acceleration). If you really want that x16 slot back, drop the NVMe card and add some SATA drives ($2080 + devices), but now you're starting to strip this system down more than you might like to, and it doesn't get much cheaper that way.

Overall, that $3300 really does translate into greatly improved expandability in addition to the beefier power supplies, and thus it was never really an option for my needs personally. Maybe my mini:Quad analogy wasn’t so off base. But if you want to join the POWER9 revolution on a budget and give Chipzilla the finger, as all right-thinking nerds should, you’ve now got an option that only requires passing a kidney stone of just half the size or less. It ships starting in July.

Another interesting thing Raptor pointed out: in the Phoronix performance tests, the Talos was running with full Spectre and Meltdown protections, but the x86 wasn't! Boooo! And if you really want to turn Spectre protections off on the Talos for even more grunty, you can do that. Meanwhile, as we speak, Intel is making people take down their firmware documentation and trying to stymie efforts to reverse engineer them. What system would you rather support?

Planet MozillaReality Redrawn Opens At The Tech

The Tech Museum of Innovation in San Jose was filled on Thursday with visitors experiencing new takes on the issue of fake news by artists using mixed reality, card games and even scratch and sniff cards. These installations were the results of Mozilla’s Reality Redrawn challenge. We launched the competition last December to make the power of misinformation and its potential impacts visible and visceral. Winners were announced in February.

One contributor, Australian artist Sutu was previously commissioned by Marvel and Google to create Tilt Brush Virtual Reality paintings and was the feature subject of the 2014 ABC documentary, ‘Cyber Dreaming’. For Breaking News at the Tech, he used AR animation to show the reconstruction of an article in real time and illustrate the thought process behind creating a fake news story. Using the AR app EyeJack, you can see the front page of the New York Times come to life with animation and sound as the stories are deconstructed and multiple viewpoints are presented simultaneously:

Breaking News, by Sutu
(Photography by Nick Leoni)

Visitors on opening night of this limited run exhibition also enjoyed conversation on stage around the topic from Marketplace Tech Host Molly Wood, Wired Contributing Editor Fred Vogelstein, BBC North America Technology Correspondent Dave Lee and our own Fellow on Media, Misinformation and Trust, Renée DiResta. There was a powerful message by video from the Miami Herald’s reporter Alex Harris. She found herself the target of a misinformation campaign while reporting on the tragedy at Marjory Stoneman Douglas High School in Parkland, Florida.

Reality Redrawn is open until June 2 at the Tech and admission is included with entry to the museum. Follow the link to find out more about ticket prices for the Tech. If you’re visiting the Bay Area soon I hope you’ll make time to see how it’s possible to make some sense of the strange journeys our minds take when attacked by fake news and other misinformation.

The post Reality Redrawn Opens At The Tech appeared first on The Mozilla Blog.

Planet MozillaAnnouncing git-cinnabar 0.5.0 beta 3

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.0 beta 2?

  • Fixed incompatibilities with Mercurial >= 4.4.
  • Miscellaneous metadata format changes.
  • Move more operations to the helper, hopefully making things faster.
  • Updated git to 2.17.0 for the helper.
  • Properly handle clones with bundles when the repository doesn’t contain anything newer than the bundle.
  • Fixed tag cache, which could lead to missing tags.

Planet MozillaRyzom falling: Remote code execution via the in-game browser

The short version

Ryzom is an online role-playing game. If you happen to be playing it, using the in-game browser is a significant risk. When you do that, there is a chance that somebody will run their Lua code in your client and bad things will happen.

Explaining Ryzom’s in-game browser

Ryzom’s in-game browser is there so that you can open links sent to you without leaving the game. It is also used to display the game’s forum as well as various other web apps. The game even allows installing web apps that are created by third parties. This web browser is very rudimentary: it supports only a bunch of HTML tags and nothing fancy like JavaScript. But it compensates for that lack of functionality by running Lua code.

You have to consider that the Lua programming language is what powers the game’s user interface. So letting the browser download and run Lua code allows for perfect integration between websites and the user interface, in many cases users won’t even be able to tell the difference. The game even uses this functionality to hot-patch the user interface and add missing features to older clients.

The flaws

The developers realize of course that letting arbitrary websites run Lua code in their game client is dangerous. So they created a whitelist of trusted websites that would be allowed to do it, currently that’s app.ryzom.com and api.ryzom.com. And that solution would have been mostly fine if these sites weren’t full of Cross-Site Scripting (XSS) vulnerabilities.

Having an XSS vulnerability in your website normally is bad enough on its own. In this case however, these vulnerabilities allow anybody to create a link to a trusted website that would contain malicious Lua code. No need to make things too obvious, that link can be hidden behind a URL shortener. Send this link to your target, add some text that will make them want to open it — you are done.

To add insult to injury, the game won’t use HTTPS when establishing connections to trusted websites because the developers haven’t figured out SSL support yet. So if somebody can manipulate your traffic, e.g. if you are connected to an open WiFi, then they will be able to inject malicious Lua code when your Ryzom client starts up.

How bad is it?

What’s the worst thing that could happen? Given that Lua code controls the game’s user interface, some very competitive player could scramble the interface for an adversary to achieve an advantage over them, clearly a rather extreme action. The more likely exploit would involve tricking a game admin into running an admin command, e.g. one that gives you tons of skill points.

But the issue here extends far beyond the game itself. Lua code can read and write arbitrary files, and it can also run external applications. So the risk here is really getting your machine infested with malware, just by clicking a link in the game or by playing on an open WiFi network.

The resolution

Notifying Ryzom developers turned out to be rather non-trivial, which is surprising for an open-source project. Initially, I asked a gamemaster who told me to write a support mail. Supposedly, my mail would be forwarded to the developers. Nine days later, I still hadn’t received any response and decided to create a Bitbucket issue asking whether the developers got the info — they didn’t. The issue was deemed “resolved” on the same day, by means of fixing a bunch of server-side XSS vulnerabilities.

It’s impossible to tell how complete this resolution is, with the Ryzom server-side web apps not being open source. Given the obvious lack of good security practices, I wouldn’t rely too much on it. Also, the issue about adding SSL support is still just sitting there; the last activity was six months ago. So if you are playing Ryzom, I’d recommend disabling execution of remote Lua code altogether by removing trusted domains from Ryzom’s configuration. For that, you need to edit the client.cfg file while Ryzom isn’t running and add the following line:

WebIgTrustedDomains  = {};

Some game features will no longer work then, such as the Info window. Also, using apps will be more complicated or even impossible. But at least you can enjoy the game without worrying about your computer getting p0wned.

Planet MozillaTangerine UI problems

I've been a big fan of Tangerine for a while, it's a bank that doesn't charge fees and does what I need to do. They used to have a great app and website and then it all went a bit wrong.

It’s now an HTML app for desktop and mobile. This isn’t the fault of the tools used, but there are some terrible choices in the app across both.

Notifications

On my phone I get the notification number on my app screen. So, I open up the app and I get this little message:

But you can’t click on it. It’s not a link; to find your notifications you have to go to Profile & Settings, scroll down to Inbox, and then you can access them. If notifications are that important, how about putting a way to access them somewhere obvious?

Here's a notification:

Space

When you open the app, a full 1/3 of the screen is an advert:

Let's dismiss that:

Oh come on Tangerine. I'm not logging into my phone to get "Insights", otherwise known as "Advertising". Stop taking up space with this crap.

Cancelling

Pop quiz. You are cancelling this transaction. What does the Cancel button do?

The Cancel button cancels the cancelling. The highlighted option Confirm actually continues the cancelling. You know what would be clearer? Yes or No.

Cluttered

Supposing I wanted to see my transactions on an account. There's about one half of the screen to scroll down. The black text Posted Transactions doesn't actually do anything. The transaction list is an infinite scroll. So instead they've put everything at the top of the page, such as Search, Transaction Breakdown and so on.

Then there's another title Transactions. Do you get the idea that in those 5 boxes saying Transactions, this might be about...

Overall

The overall feel of the app is that it’s full of spinners, far too cluttered and just too confusing. Hey, not everything I’ve built is perfect, but even I can spot some real problems with this app. I’m pretty sure Tangerine can do better than this.

And yes, I'm writing this while drinking a beer I recently bought, as shown on my transaction page.

I'd still recommend Tangerine and their credit card. If you want to open an account, use my key: 20790922S1 to give get yourself a bonus.

Planet MozillaPyCon US 2018 Wrapup

I attended PyCon US in Cleveland over the last week. Here’s a quick summary of the conference.

Aside from my usual “you should go to PyCon” admonition, I’d like to suggest writing a summary like this every time you visit a conference. It’s a nice way to share what you found valuable with others, and also to evaluate the utility of attending the conference.

I barely write a lick of Python anymore, so I mostly attend PyCon for the people and for the ideas. Some themes are common to PyCon: data science, machine learning, education, and core language. Of course, there’s always a smattering of other topics, too.

During the poster session, I saw a poster on the Python Developers Survey 2017 from JetBrains. One statistic that surprised me: 50% of respondents use Python primarily for data analysis.

Talks

There were a lot of good talks this year, although few that will be remembered forever. Here are a few highlights from the talks I attended. Sadly, PyVideo does not have the videos up yet, but I’m sure they will soon be available at http://pyvideo.org/events/pycon-us-2018.html.

I’m trying to get more comfortable with the ideas around machine learning, without actually doing any of the work myself.

  • Deconstructing the US Patent Database - some technical issues cut the talk short, but Van went into lots of interesting details about analysis of the US patent database, both the language and the attached images. It seems the project was leading toward a way to find prior art quickly. One particularly neat tool was a concept unique identifier - CUI - that replaces technical terms with an arbitrary identifier. It comes from the medical field, and allows disambiguating similar terms and combining multiple terms for the same concept.

  • Birding with Python and Machine Learning was a much lighter approach to ML. Kirk set up a webcam in his backyard and used ML to identify the presence of birds in-frame, and then to try to identify the type of bird.

  • Listen, Attend, and Walk was a more research-focused talk about interpreting natural-language navigational instructions. Padmaja talked in detail about the configuration of an RNN to parse simple English sentences and use them to navigate a DOOM-like environment. While the result wasn’t exactly magical, I appreciated the deep but math-light explanation of the design of the system.

On the core language, I listened to Dataclasses: The code generator to end all code generators and Get your resources faster, with importlib.resources.

Maybe quantum computing is the next big thing? I sat in on Python for the quantum computing age, where Ravi gave a nice overview of what quantum computing is. He also gave some examples of controlling (real, cloud-based) quantum computers using Python. Quantum computers still have 10-20 gates, so they can’t exactly “run Python”, but you can build a basic quantum logic circuit with Python and execute it to get the result.

Sometimes the best talks are those that tell a great story. Don’t Look Back in Anger was one of those - Lilly told the story of Edward Orange Wildman Whitehouse and the failure of the first trans-Atlantic telegraph cable. Besides being funny and an interesting piece of history, she compared the experience to modern “go-live” events and helped illustrate the need for care and planning. Reinventing the Parser Generator was also a fun story. Dave described, using his typical live-coding style, what a parser generator is, how PLY worked back in the 90’s, and how SLY uses new Python magic to do similarly expressive, cool things. Dave is a fantastic teacher, from whom I have learned a great deal, and it’s worth noting you can take private classes with him. They are well worth your time.

Yi Ling gave a great keynote on web application vulnerabilities, told in the style of a children’s book. I found the content useful - basically, how not to be stupid when building a website - but the presentation was quite engaging.

I found What Is This Mess amusing and informative, too. Justin talked about writing tests for untested code – a common situation in my day-to-day work. His advice was good and illustrated with simple but clear examples. I think I liked the talk more for the “yes, someone understands me!” factor than anything I learned from it!

“Hallway Track”

The hallway track – conference lingo for the interesting stuff that happens outside of the talks – is strong at PyCon. During the Expo (filled with vendors and swag I don’t really want) I made it a point to sit down at diverse-looking tables and chat with people. I met people from finance, college students, data scientists, googlers, and a whole host of interesting people. Working for Mozilla is, of course, a nice conversation starter.

Because I’m staying with family here in Cleveland, I did not participate in any of the evening activities. That’s been a bit of a disappointment – the dinners are always engaging – but probably best for family harmony.

On Sunday, there are simultaneous job fairs and poster sessions in the expo hall. I’m not looking for a job (although the Java recruiters remain hopeful), so I perused the posters. It’s a mix of topics, from genomic and ML research to cool new tools through programming education and civic data projects. A few were particularly interesting to me.

A poster on the Pulp project attracted my attention since it seems to solve a recurrent problem at Mozilla: mirroring large binary repositories in a consistent fashion. The system supports docker images as well as JS and Python packages, and can release repositories that are internally consistent: the packages are all known to work with each other. This may be useful for deploying Taskcluster, and is also useful for the Firefox CI system to ensure that it can reliably reproduce Firefox builds even if the sources for the build tools fail or disappear.

I talked for some time with some people from the Fedora CommOps team. They work on operational support for building and supporting the Fedora community. Since we have an ongoing Outreachy project to build a new version of Bugsahoy, I was interested in how Fedora connects new contributors to their first contribution. Their tool, easyfix, seems a little overwhelming to me, but can offer some inspiration for our effort. More interesting, Fedora uses an archived message bus (fedmsg) to track events in the Fedora ecosystem over time. This allows creation of leaderboards and other interesting, motivational statistics on new contributions.

Sprints

PyCon’s sprints start after the main conference ends. The idea is to give projects time to gather together while everyone is in the same location. PyCon supplies space, power, wifi, and occasionally food. This year, the wifi and power were strong and the food somewhat disappointing. The spaces were small windowless conference rooms, and somehow I found them stifling - I guess I’ve gotten used to working at home in a room full of windows.

I spent the day hacking on Project Mesa, like I did last year. I have no real connection to this project, but the people who work on it are interesting and smart, and I can make a useful contribution in a short amount of time.

I had hoped to meet up with other Outreachy folks, but plans fell through, so I only stuck around for the first day of sprints. I suspect that if I were more engaged with Python software on a day-to-day basis, I would have found more to hack on. For example, the Pallets project (the new umbrella for lots of Python utilities like Click and Werkzeug) had a big crowd and seemed to be quite productive. We could also hear the Django room, where a round of applause went up every time a contribution was merged.

Come To PyCon

Plan to come next year!

PyCon is an easy conference to attend. It’s in Cleveland again next year, right on the waterfront, near the science museum and the rock-and-roll hall of fame, so if you bring family they will have ample activities. The conference provides childcare if your family is of the younger persuasion. Breakfast and lunch are included, and dinners are optional. Every talk is live-captioned on a big screen beside the presenter, so if you have difficulty hearing or understanding spoken English PyCon has your back. Financial aid is available. There’s really no reason not to attend!

Registration starts in early spring.

Planet MozillaReps Weekly Meeting, 17 May 2018

Reps Weekly Meeting This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet MozillaReps Weekly Meeting, 17 May 2018

Reps Weekly Meeting This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet MozillaWhat’s Your Open Source Strategy? Here Are 10 Answers…

<figure></figure>

A research report from Mozilla and Open Tech Strategies provides new perspectives on framing open source strategy. The report builds on Mozilla’s “Open by Design” strategy, which aims to increase the intent and impact of collaborative technology projects.

Mozilla is a radically open and participatory project. As part of the research we compiled into turning openness into a consistent competitive advantage, we identified that the application of open practices should always be paired with well-researched strategic intent. Without clarity of purpose, organizations will not (nor should they) maintain a long-term commitment to working with community. Indeed, we were not the first to observe this.

Mozilla benefits from many open practices, but open sourcing software is the foundation on which we build. Open source takes many forms at Mozilla. We enjoy a great diversity among the community structures of different Mozilla-driven open source projects, from Rust to Coral to Firefox (there are actually multiple distinct Firefox communities) and to others.

The basic freedoms offered by Mozilla’s open source projects — the famous “Four Freedoms” originally defined by the FSF — are unambiguous. But they only define the rights conveyed by the software’s license. People often have expectations that go well beyond that strict definition: expectations about development models, business models, community structure, even tool chains. It is not even uncommon for open source projects to be criticised for failing to comply with those unspoken expectations.

We recognise that there is no one true model. As Mozilla evolves more and more into a multi-product organization, there will be different models that suit different products and different environments. Structure, governance, and licensing policies should all be explicit choices based on the strategic goals of an open source project. A challenge for any organisation is how to articulate these choices, or to put it simply, how do you answer the question, “what kind of open source project is this?”.

To answer the question, we wanted to develop a set of basic models — “archetypes” — that projects could aim for, modifying them as needed, but providing a shared vocabulary for discussing how to think about any given project. We were delighted to be able to partner with one of the leading authorities in open source, Open Tech Strategies, in defining these archetypes. Their depth of knowledge and fresh perspective has created something we believe offers unique value.

The resulting framework consists of 10 common archetypes, covering things from business objectives to licensing, community standards, component coupling and project governance. It also contains some practical advice on how to use the framework and on how to set up your project.

20 years after the Open Source Initiative was founded, open source is widespread (and has inspired methods of peer production beyond the realm of software). Although this report was tailored to advance open source strategies and project design within Mozilla, and with the organizations and communities we work with, we also believe that this challenge is not unique to us. We suspect there will be many other organizations, both commercial and non-commercial, who will benefit from the model.

You can download the report here. Like so many things, it will never be “done”. After more hands-on-use with Mozilla projects, we intend to work with Open Tech Strategies on a version that expands its sights beyond Mozilla’s borders.

If you’re interested in collaborating, you can get in touch here: archetypes@opentechstrategies.com. The Github repository is up at https://github.com/OpenTechStrategies/open-source-archetypes.


What’s Your Open Source Strategy? Here Are 10 Answers… was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Planet MozillaExtensions in Firefox 61

Firefox 60 is now in the Release channel, which means that Firefox 61 has moved from Nightly to the Beta channel. As usual, Mozilla engineers and volunteer contributors have been hard at work, landing a number of new and improved WebExtensions API in this Beta release.

Before getting to the details, though, I’d like to note that the Firefox Quantum Extensions Challenge has come to an end.  The contest was a huge success and the judges (myself included) were overwhelmed with both the creativity and quality of the entrants.  A huge thank you to everyone who submitted an extension to the contest and congratulations to the winners.

The Case of the Vanishing Tabs

In November of 2017, we made a commitment to enriching the WebExtensions API with additional tab management features, with tab hiding being a top priority.  Since that time, several new and enhanced tab-related API have been added and today, with the release of Firefox 61 to the Beta channel, tab hiding is officially a WebExtensions API.

The usage of the tabs.hide() API was covered in the post on Extensions in Firefox 59.  The main change now is that it is no longer necessary to set the extensions.webextensions.tabhide.enabled preference to true; tab hiding can be used by extensions without setting that preference. In fact, that preference will soon be going away in an upcoming release.
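
For anyone who has not used it yet, a minimal sketch looks something like the following; it assumes an extension that declares the “tabs” and “tabHide” permissions in its manifest.

  // A minimal sketch: hide every tab in the current window except the active one.
  // Assumes the "tabs" and "tabHide" permissions.
  async function hideInactiveTabs() {
    const tabs = await browser.tabs.query({ currentWindow: true, active: false });
    const hiddenIds = await browser.tabs.hide(tabs.map(tab => tab.id));
    console.log(`hid ${hiddenIds.length} tabs`);
  }

  // Calling browser.tabs.show() with the same ids makes them visible again.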

In the user interface, when tabs are hidden, a down-arrow is added to the end of the tab strip. When clicked, this icon shows all of your tabs, hidden and visible, and provides a quick and easy way to keep from losing things (see animation below, and thanks to Afnan Khan for the great Hide Tabs extension).

Hidden Tabs Arrow

In addition, if a hidden tab is playing audio, the audio icon is shown on top of the down-arrow icon.  If you click on the down-arrow, the hidden tab with audio is pulled out so that it is easier to find (see animation below).

Audio Icon For Hidden Tabs

You can expect to see this user interface get a small update in Firefox 62. In particular, it will be modified so that it conforms to the Firefox Photon Design System.

Making Hidden Tabs Visible

As noted above, when an extension hides a tab, Firefox will display a down-arrow in the tab strip that gives users access to all of their tabs, both visible and hidden. This is a simple and easy way to manage hidden tabs, but it is also subtle.  To make sure users are completely aware of hidden tabs, and to discourage malicious and deceitful use of them, Firefox will always show a door hanger when the first tab is hidden.

Hidden Tab Door Hanger

The door hanger informs the user that one or more tabs was hidden by an extension, explains the down-arrow on the tab strip (pointing to it), and gives the user the option to disable the extension.

Even More Tab Stuff

While tab hiding is the biggest feature to land in Firefox 61, a few other highly requested tab features also made it into this release.

A new browserSettings API, openUrlbarResultsInNewTabs, allows extensions to specify if search results from the URL bar should open in the current tab or a new tab.  This complements the existing browserSettings.openBookmarksInNewTabs and browserSettings.openSearchResultsInNewTabs settings.

Also, fine-grained control of the opening position for new tabs is now provided via the browserSettings.newTabPosition API (a short sketch follows the list below). This new API can take three different values:

  • afterCurrent – open all new tabs next to the current tab
  • relatedAfterCurrent – open child tabs (tabs opened from a link in the current tab) after the current tab
  • atEnd – open all new tabs at the end of the tab strip
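
Here is the sketch mentioned above. It assumes the “browserSettings” permission and uses the standard BrowserSetting get()/set() pattern; the chosen values are just examples.

  // A rough sketch of using the new settings; assumes the "browserSettings" permission.
  async function tweakTabOpeningBehavior() {
    await browser.browserSettings.newTabPosition.set({ value: 'afterCurrent' });
    await browser.browserSettings.openUrlbarResultsInNewTabs.set({ value: true });

    const position = await browser.browserSettings.newTabPosition.get({});
    console.log('new tab position is now', position.value);
  }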

Finally, listeners to the tabs.onUpdated event can now supply a filter to avoid the overhead of unwanted events. The filter supports URLs, window and tab ids, and various properties such as “audible”, “discarded”,  “hidden”, “isarticle” and more. Not only does this simplify code inside the listener for developers, it significantly improves browser performance by keeping Firefox from dispatching so many events. Extension developers are strongly encouraged to use this new feature to make their extension and the browser more performant for users.
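
For illustration, a filtered listener might look roughly like this; the URL pattern and the particular properties are just example choices.

  // Only fire for tabs on mozilla.org, and only for audible/hidden changes.
  function handleUpdated(tabId, changeInfo, tab) {
    console.log(`tab ${tabId} changed:`, changeInfo);
  }

  browser.tabs.onUpdated.addListener(handleUpdated, {
    urls: ['*://*.mozilla.org/*'],
    properties: ['audible', 'hidden'],
  });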

Themes

It seems we can’t let a release go by without adding to themes, and Firefox 61 is no exception. Check out the new improvements (and the improved documentation on MDN that shows examples of what each theme property modifies).

  • Themes support default_locale; not a theme property, but a standard manifest key
  • Fixed tab_selected so it works when headerURL is not set (uplifted to 60)
  • Fully transparent values are once again allowed in themes
  • All of the toolbar properties now apply to the find bar as well
  • Themes can now set the hover, active and focus colors for buttons and toolbars
  • Autocomplete popups now honor any popup_* theme properties that are set

Cleaning the WebExtensions House

I usually end each blog post with a small section that notes some of the general improvements and/or bug fixes in the release. For Firefox 61, though, there was a concerted effort to improve the WebExtensions ecosystem. A substantial number of bug fixes and optimizations landed, enough that I wanted to make sure they were more than just a footnote.

Keeping The User Informed

In addition to making sure users are fully aware of extensions using tab hiding, Firefox 61 also highlights when an extension modifies a user’s Home page by displaying a door hanger.

And to help users better understand when an extension is controlling the New Tab page or Home page, or using the tab hiding feature, the door hanger now shows the name of the controlling extension and includes a “Learn More” link that takes the user to a more detailed explanation on Mozilla’s support site.

New Home Page Door Hanger

Important Proxy Changes

After the release of Firefox 59, we discovered that the implementation of the proxyConfig API was not handling the settings for hostnames correctly, causing non-socks proxy settings to fail. This has been corrected in Firefox 61 and uplifted to Firefox 60.

The discovery and resolution of this bug, however, caused us to reevaluate how WebExtensions exposed proxy settings.  In particular, we asked ourselves why these settings weren’t part of the browser.proxy.* namespace. It is true that the underlying implementation depends upon Firefox browser settings, which is how it ended up as part of the browser.browserSettings.* namespace, but that’s just an internal detail. Every major browser vendor supports proxy settings, and they all support basically the exact same set of settings.

Given that, and the fact that Mozilla should be championing web standards, we decided it made more sense to have the proxy settings live inside the browser.proxy.* namespace. So starting with Firefox 60, the browser.browserSettings.proxyConfig API is now the browser.proxy.settings API. Extension developers who want to use this API should request the much more intuitive “proxy” permission instead of the “browserSettings” permission.
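
As a rough sketch (not a complete configuration), reading and updating the setting through the renamed API looks something like the following. It assumes the “proxy” permission, the value fields shown are simplified, and the proxy host is a hypothetical placeholder.

  // A rough sketch only: the value object is simplified and not exhaustive.
  // Assumes the "proxy" permission.
  async function useManualProxy() {
    const current = await browser.proxy.settings.get({});
    console.log('current proxy config:', current.value);

    await browser.proxy.settings.set({
      value: {
        proxyType: 'manual',
        http: 'proxy.example.com:8080',  // hypothetical proxy host:port
      },
    });
  }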

Finally, another patch landed in Firefox 61 so that proxy extensions start up before any requests are made, preventing early requests from bypassing them.

Browser Settings

A few new browser settings are now available to extension developers in Firefox 61 (a short sketch follows the list):

  • Extensions can now toggle the setting that decides if pages should be rendered with document fonts or not via browserSettings.useDocumentFonts.
  • Along similar lines, extensions can set the preference that decides if a page should be rendered with the document colors, user-selected colors, or user-selected only with high-contrast themes via the browserSettings.overrideDocumentColors API.
  • Extensions can offer users the ability to close tabs by double-clicking them via the browserSettings.closeTabsByDoubleClick API.
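
Here is the sketch mentioned above, exercising two of these settings; it assumes the “browserSettings” permission.

  // A minimal sketch; assumes the "browserSettings" permission.
  async function applyMySettings() {
    // Render pages with the user's preferred fonts instead of document fonts.
    await browser.browserSettings.useDocumentFonts.set({ value: false });

    // Let the user close a tab by double-clicking it.
    await browser.browserSettings.closeTabsByDoubleClick.set({ value: true });
  }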

Thank You

This was a busy release for the WebExtensions API with a total of 95 features and improvements landed as part of Firefox 61. A sincere thank you goes to our many contributors for this release, especially our community volunteers including: Tim Nguyen, Oriol Brufau, Vivek Dhingra, Tomislav Jovanovic, Bharat Raghunathan, Zhengyi Lian, Bogdan Podzerca, Dylan Stokes, Satish Pasupuleti, and Sören Hentzschel. It is only through the combined efforts of Mozilla and our amazing community that we can ensure continued access to the open web. If you are interested in contributing to the WebExtensions ecosystem, please take a look at our wiki.

The post Extensions in Firefox 61 appeared first on Mozilla Add-ons Blog.

Planet MozillaSQL Style Guide

I'm happy to announce, we now have a SQL style guide. Check it out!

If you have any suggestions, feel free to file a PR or issue in the docs repository.

Many thanks to all who participated in the St. Mocli conversation and @mreid for the review!

Planet MozillaThe Rust compiler is getting faster

TL;DR: The Rust compiler has gotten 1.06x–4x faster over the past month.

As changes are made to the Rust compiler, a suite of benchmarks measuring compile time is run regularly on the development version. The data is viewable at http://perf.rust-lang.org. The default view is graphical, showing data from the past month.

Screenshot of perf.rust-lang.org showing measurements of the html5ever benchmark

The screenshot above shows the graphs for a single benchmark called “html5ever”, which consists of an old version of the project of the same name. Each one shows measurements for a different kind of build: a debug build, a “check” build (which detects errors but doesn’t generate code), and an optimized build. Within each graph there are the following three data series.

  • Clean: a normal build.
  • Baseline incremental: an incremental build with no prior incremental runs. Such a build is a little slower than a normal build, because it does normal compilation and also gathers information to guide subsequent incremental builds.
  • Clean incremental: an incremental build run immediately after a baseline incremental build. This is the best-case scenario for incremental compilation in which the minimal amount of work is done.

If you visit the site yourself you’ll see that most of the benchmarks have more than three data series, including ones for incremental builds done after small code changes (a more realistic use case), and one for builds with non-lexical lifetimes enabled.

The x-axis shows time and the y-axis shows instruction counts. Other units of measurements are available, including cycles, time, and memory usage. Instruction counts are shown as the default; this isn’t ideal because it’s only a proxy for the measurement that really matters (time)… but it’s a pretty good proxy, and it has a lot lower variation than the time measurements, which is important when detecting changes.

This graphical view is particularly useful for detecting major changes. For example, you can see that in early May there was a major regression for “clean” and “baseline incremental” builds, which Alex Crichton fixed a few days later.

As well as the graphical view, the site also provides a textual “compare” view, which can be reached via the link at the top left of each page. This view compares measurements from two revisions of the compiler; by default it compares the most recently measured revision with one from a month ago. (It can also be used locally, which is very useful to evaluate changes that speed up the compiler.)

The screenshot above is of the “compare” view at the time of writing. Each line corresponds to a single graph from the graphical view. (If you visit the site and click on an individual entry it will expand and show all of the measurements. The resemblance between those measurements and this screenshot will of course diminish over time.) The “avg” column shows the average change across all the data series. The “min” and “max” columns show the minimum and maximum changes for any of the data series. The “serde” and “script-servo” lines are empty because those benchmarks were added to the suite less than a month ago, so no comparison can be made.

The table has many numbers, but the thing to take away is that they are almost all significantly negative, meaning that compile time has reduced. The “avg” numbers range from 6% to 38%; the “min” numbers (i.e. best result) go as high as 75%; the “max” numbers (i.e. worst result) go as high as 36%.

In conclusion: the Rust compiler has gotten significantly faster in the past month. Across a wide range of programs, and a wide range of build configurations, compile times have reduced by between 6% and 75%. To put it another way, the compiler has gotten between 1.06x and 4x faster.
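
For anyone wondering how a percentage reduction converts into a “times faster” figure, the arithmetic is simply:

  \text{speedup} = \frac{t_{\text{old}}}{t_{\text{new}}} = \frac{1}{1 - \text{reduction}}, \qquad \frac{1}{1 - 0.06} \approx 1.06, \qquad \frac{1}{1 - 0.75} = 4.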

These benefits are available right now to users of the Nightly channel. Users of the Release channel will see them more gradually, spread across one or two versions released over the next few months.

Planet MozillaThese Weeks in Firefox: Issue 38

Highlights

Tab hiding API in action

Friends of the Firefox team

Introductions

  • :Prathiksha Guruprasad will be working on the PaymentRequest MVP during her internship.
  • Jay Lim (:imjching) will be working on Activity Stream performance bugs for the Performance team
  • Emily Hou will be working on the Firefox Send experiment on the Test Pilot team this summer.
  • Shruti Singh will be working on the Firefox Color experiment on the Test Pilot team this summer.

Resolved bugs (excluding employees)

Project Updates

Add-ons / Web Extensions

Activity Stream

Browser Architecture

  • A preference landed to allow opening the browser console as a top-level html document. Still some things to fix.
  • Landed our first XBL -> custom element conversion. May be backed out 🙁
  • Overlay code removal is ready to go
  • RKV proof of concept for XULStore is almost complete

Lint

Fluent

Mobile

  • Firefox for Fire TV will soon release v3.0, which will include our Pocket Video Feed

Performance

Policy Engine

  • No code updates. Getting marketing ready for launch. 👏
  • Outreachy intern starting next week to work on new policies.

Privacy/Security

Search and Navigation

Address Bar & Search
Places

Sync / Firefox Accounts

Test Pilot

  • New experiments launching in early June!
  • Interns starting later this month!
  • Screenshots product updates:
    • Annotation tools coming soon: undo, redo, and text/emoji
    • We’re thinking about what a mobile flow would feel like for Screenshots; PRD in flight. Ping jgruen with all your feature requests.
  • Screenshots engineering:
    • Bug fix release 32.1.0 landed recently (1454029)
    • A security fix regressed Screenshots working on PDF pages.  This is being tracked for Firefox 60 in bug 1456485.
    • We have a working Chrome add-on with a few substantial bugs. Work is continuing.
    • We would like to take over the Ctrl + Shift + S keyboard shortcut from devtools. They will be investigating usage frequency of this and other shortcuts in bug 1456984.

Web Payments

  • Team has completed 66% of the Milestone 1 – 3 Backlog.
  • Planning telemetry requirements
  • Code moved to browser/ and component moved to Firefox (from Toolkit)
  • Can now add and edit shipping addresses from the PaymentRequest dialog
    • Jared is working on adding/editing billing addresses
  • Sam is working on support for one-time-use addresses
  • Matt is working with UX on options for rich dropdown menus

Planet MozillaUpdate on Fight for Net Neutrality in U.S. – Senate votes to save net neutrality, now it’s up to the House

Today, the U.S. Senate passed a Congressional Review Act (CRA) resolution to save net neutrality and overturn the FCC’s disastrous order to end net neutrality protections.

We’re pleased this resolution passed – it’s a huge step, but the battle to protect net neutrality and reinstate the 2015 rules isn’t over. The next step is for the motion to go to the House of Representatives for a vote before the order is supposed to go into effect on June 11. Unfortunately, the rules in the House will make passage much harder than in the Senate; at this point, it’s not clear when, or if, there will be a vote there.

We will continue to fight for net neutrality in every way possible as we try to protect against erosion into a discriminatory internet, with ultimately a far worse experience for any users and businesses who don’t pay more for special treatment.

We are leading the legal battle in Mozilla v. FCC, working closely with policymakers, and educating consumers through advocacy for an open, equal, accessible internet.

And, we’re not alone – last week we partnered with organizations like Consumer Reports and the Electronic Frontier Foundation in the Red Alert protest to encourage Americans to call Congress in support of net neutrality. Consumers also share their support for the net neutrality fight – we recently conducted a poll that shows 91% of Americans believe consumers should be able to freely and quickly access their preferred content on the internet.

As I said in December – the FCC decision to obliterate the 2015 net neutrality protections is the result of broken processes, broken politics, and broken policies. The end of net neutrality would only benefit Internet Service Providers (ISPs) and it would end the internet as we know it, harming everyday users and small businesses, eroding free speech, competition, innovation and user choice in the process.

Net neutrality is a core characteristic of the internet, and crucial for the economy and everyday lives. It is imperative that all internet traffic be treated equally, without discrimination against content or type of traffic — that’s how the internet was built and what has made it one of the greatest inventions of all time.

We’ll keep fighting for the open internet, and we ask you to call your members of Congress to make sure that politicians decide to protect their constituents rather than increase the power of ISPs.

The post Update on Fight for Net Neutrality in U.S. – Senate votes to save net neutrality, now it’s up to the House appeared first on The Mozilla Blog.

Planet WebKitRelease Notes for Safari Technology Preview 56

Safari Technology Preview Release 56 is now available for download for macOS Sierra and macOS High Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 230913-231553.

This is the last release of Safari Technology Preview that will install and run on macOS Sierra. To continue testing or living on the latest enhancements to Safari and WebKit, please upgrade to macOS High Sierra.

JavaScript

  • Implemented Intl.PluralRules (r231047)

WebAssembly

  • Added support for stream APIs (r231194)

Web API

  • Fixed document.open() event listener removal to be immediate (r231248)
  • Fixed DHTML drag operations to report the number of files in the operation (r231003)
  • Fixed window.postMessage(), window.focus(), and window.blur() unexpectedly throwing a TypeError (r231037)
  • Serialized font-variation-settings with double-quotes to match standards (r231165)
  • Stopped using the id of an <iframe> as fallback if its name attribute is not set (r231456)

Security

  • Added support for the WHATWG proposed From-Origin:same and From-Origin:same-site response headers with nested frame origin checking as an off by default experimental feature (r230968)
  • Fixed CSP referrer for a document blocked due to a violation of its frame-ancestors directive (r231461)
  • Fixed CSP status-code for a document blocked due to a violation of its frame-ancestors directive (r231464)
  • Fixed CSP to pass the document’s referrer (r231445)
  • Fixed CSP to only notify Web Inspector to pause the debugger on the first policy to violate a directive (r231443)
  • Fixed a bug causing first-party cookies to be blocked on redirects (r231008)

CSS

  • Fixed CSS filters which reference SVG filters to respect the color-interpolation-filters of the filter (r231473)
  • Fixed feTurbulence to render correctly on a Retina display (r231485)
  • Fixed shape-outside and filter styles occurring twice in the result of getComputedStyle (r230976)

Rendering

  • Changed font collection fragment identifiers to use PostScript names (r231259)
  • Fixed selecting text on a webpage causing the text vanish (r231178)
  • Fixed hiding then showing an <object> of type image to ensure the underlying image is displayed (r231292)

Media

  • Changed MediaStreams that are playing to allow removing some of its tracks (r231304)
  • Updated text track cue logging to include cue text (r231490)

Web Inspector

  • Improved the user experience in Canvas tab to show progress bars while processing actions in a new recording (r231218)
  • Ensured that tabbing through the last section of rules in the Styles editor wraps back to the first section of rules (r231372)
  • Fixed Console drawer resizing when the console prompt has more than one line of code (r231527)
  • Fixed unsupported properties that sometimes don’t get warnings just after adding them (r231377)
  • Updated the Canvas tab to determine functions by looking at the prototype (r231368)

Planet MozillaThe Joy of Coding - Episode 139

The Joy of Coding - Episode 139 mconley livehacks on real Firefox bugs while thinking aloud.

Planet MozillaThe Joy of Coding - Episode 139

The Joy of Coding - Episode 139 mconley livehacks on real Firefox bugs while thinking aloud.

Planet MozillaScaling Firefox Development Workflows

One of the central themes of my time at Mozilla has been my pursuit of making it easier to contribute to and hack on Firefox.

I vividly remember my first day at Mozilla in 2011 when I went to build Firefox for the first time. I thought the entire experience - from obtaining the source code, installing build dependencies, building, running tests, submitting patches for review, etc. - was quite... lacking. When I asked others if they thought this was an issue, many rightfully identified problems (like the build system being slow). But there was a significant population who seemed to be naive and/or apathetic to the breadth of the user experience shortcomings. This is totally understandable: the scope of the problem is immense and various people don't have the perspective, are blinded/biased by personal experience, and/or don't have the product design or UX experience necessary to comprehend the problem.

When it comes to contributing to Firefox, I think the problems have as much to do with user experience (UX) as they do with technical matters. As I wrote in 2012, user experience matters and developers are people too. You can have a technically superior product, but if the UX is bad, you will have a hard time attracting and retaining new users. And existing users won't be as happy. These are the kinds of problems that a product manager or designer deals with. A difference is that in the case of Firefox development, the target audience is a very narrow and highly technically-minded subset of the larger population - much smaller than what your typical product targets. The total addressable population is (realistically) in the thousands instead of millions. But this doesn't mean you ignore the principles of good product design when designing developer tooling. When it comes to developer tooling and workflows, I think it is important to have a product manager mindset and treat it not as a collection of tools for technically-minded individuals, but as a product having an overall experience. You only have to look as far as the Firefox Developer Tools to see this approach applied and the positive results it has achieved.

Historically, Mozilla has lacked a formal team with the domain expertise and mandate to treat Firefox contribution as a product. We didn't have anything close to this until a few years ago. Before we had such a team, I took on some of these problems individually. In 2012, I wrote mach - a generic CLI command dispatch tool - to provide a central, convenient, and easy-to-use command to discover development actions and to run them. (Read the announcement blog post for some historical context.) I also introduced a one-line bootstrap tool (now mach bootstrap) to make it easier to configure your machine for building Firefox. A few months later, I was responsible for introducing moz.build files, which paved the way for countless optimizations and for rearchitecting the Firefox build system to use modern tools - a project that is still ongoing (digging out from ~two decades of technical debt is a massive effort). And a few months after that, I started going down the version control rabbit hole and improving matters there. And I was also heavily involved with MozReview for improving the code review experience.

Looking back, I was responsible for and participated in a ton of foundational changes to how Firefox is developed. Of course, dozens of others have contributed to getting us to where we are today and I can't and won't take credit for the hard work of others. Nor will I claim I was the only person coming up with good ideas or transforming them into reality. I can name several projects (like Taskcluster and Treeherder) that have been just as or more transformational than the changes I can take credit for. It would be vain and naive of me to elevate my contributions on a taller pedestal and I hope nobody reads this and thinks I'm doing that.

(On a personal note, numerous people have told me that things like mach and the bootstrap tool have transformed the Firefox contribution experience for the better. I've also had very senior people tell me that they don't understand why these tools are important and/or are skeptical of the need for investments in this space. I've found this dichotomy perplexing and troubling. Because some of the detractors (for lack of a better word) are highly influential and respected, their apparent skepticism sows seeds of doubt and causes me to second-guess my contributions and world view. This feels like a form or variation of imposter syndrome and it is something I have struggled with during my time at Mozilla.)

From my perspective, the previous five or so years in Firefox development workflows have been about initiating foundational changes and executing on them. When it was introduced, mach was radical. It didn't do much and its use was optional. Now almost everyone uses it. Similar stories have unfolded for Taskcluster, MozReview, and various other tools and platforms. In other words, we laid a foundation and have been steadily building upon it for the past several years. That's not to say other foundational changes haven't occurred since (they have - the imminent switch to Phabricator is a terrific example). But the volume of foundational changes has slowed since 2012-2014. (I think this is due to Mozilla deciding to invest more in tools as a result of growing pains from significant company expansion that began in 2010. With that investment, we tackled the bigger-ticket, long-standing workflow pain points, such as CI (Taskcluster), the Firefox build system, Treeherder, and code review.)

Workflows Today and in the Future

Over the past several years, the size, scope, and complexity of Firefox development activities has increased.

One way to see this is at the source code level. The following chart shows the size of the mozilla-central version control repository over time.

mozilla-central size over time

The size increases are obvious. The increases cumulatively represent new features, technologies, and workflows. For example, the repository contains thousands of Web Platform Tests (WPT) files, a shared test suite for web platform implementations, like Gecko and Blink. WPT didn't exist a few years ago. Now we have files under source control, tools for running those tests, and workflows revolving around changing those tests. The incorporation of Rust and components of Servo into Firefox is also responsible for significant changes. Firefox features such as Developer Tools have been introduced or ballooned in size in recent years. The Go Faster project and the move to system add-ons has introduced various new workflows and challenges for testing Firefox.

Many of these changes are building upon the user-facing foundational workflow infrastructure that was last significantly changed in 2012-2014. This has definitely contributed to some growing pains. For example, there are now 92 mach commands instead of like 5. mach help - intended to answer what can I do and how should I do it - is overwhelming, especially to new users. The repository is around 2 gigabytes of data to clone instead of around 500 megabytes. We have 240,000 files in a full checkout instead of 70,000 files. There's a ton of new pieces floating around. Any product manager tasked with user acquisition and retention will tell you that increasing the barrier to entry and use will jeopardize these outcomes. But with the growth of Firefox's technical underbelly in the previous years, we've made it harder to contribute by requiring users to download and see a lot more files (version control) and be overwhelmed by all the options for actions to take (mach having 92 commands). And as the sheer number of components constituting Firefox increases, it becomes harder and harder for everyone - not just new contributors - to reason about how everything fits together.

I've been framing this general problem as scaling Firefox development workflows and every time I think about the high-level challenges facing Firefox contribution today and in the years ahead, this problem floats to the top of my list of concerns. Yes, we have pressing issues like improving the code review experience and making the Firefox build system and Taskcluster-based CI fast, efficient, and reliable. But even if you make these individual pieces great, there is still a cross-domain problem of how all these components weave together. This is why I think it is important to take a holistic view and treat developer workflow as a product.

When I look at this the way a product manager or designer would, I see a few fundamental problems that need addressing.

First, we're not optimizing for comprehensive end-to-end workflows. By and large, we're designing our tools in isolation. We focus more on maximizing the individual components instead of maximizing the interaction between them. For example, Taskcluster and Treeherder are pretty good in isolation. But we're missing features like Treeherder being able to tell me the command to run locally to reproduce a failure: I want to see a failure on Treeherder and be able to copy and paste commands into my terminal to debug the failure. In the case of code review, we've designed two good code review tools (MozReview and Phabricator) but we haven't invested in making submitting code reviews turnkey (the initial system configuration is difficult and we still don't have things like automatic bug filing or reviewer selection). We are leaving many workflow optimizations on the table by not implementing thoughtful tie-ins and transitions between various tools.

Second, by-and-large we're still optimizing for a single, monolithic user segment instead of recognizing and optimizing for different users and their workflow requirements. For example, mach help lists 92 commands. I don't think any single person cares about all 92 of those commands. The average person may only care about 10 or even 20. In terms of user interface design, the features and workflow requirements of small user segments are polluting the interface for all users and making the entire experience complicated and difficult to reason about. As a concrete example, why should a system add-on developer or a Firefox Developer Tools developer (these people tend to care about testing a standalone Firefox add-on) care about Gecko's build system or tests? If you aren't touching Gecko or Firefox's chrome code, why should you be exposed to workflows and requirements that don't have a major impact on you? Or something more extreme, if you are developing a standalone Rust module or Python package in mozilla-central, why do you need to care about Firefox at all? (Yes, Firefox or another downstream consumer may care about changes to that standalone component and you can't ignore those dependencies. But it should at least be possible to hide those dependencies.)

Waving my hands, the solution to these problems is to treat Firefox development workflow as a product and to apply the same rigor that we use for actual Firefox product development. Give people with a vision for the entire workflow the ability to prioritize investment across tools and platforms. Give them a process for defining features that work across tools. Perform formal user studies. See how people are actually using the tools you build. Bring in design and user experience experts to help formulate better workflows. Perform user typing so different, segmentable workflows can be optimized for. Treat developers as you treat users of real products: listen to them. Give developers a voice to express frustrations. Let them tell you what they are trying to do and what they wish they could do. Integrate this feedback into a feature roadmap. Turn common feedback into action items for new features.

If you think these ideas are silly and it doesn't make sense to apply a product mindset to developer workflows and tooling, then you should be asking whether product management and all that it entails is also a silly idea. If you believe that aspects of product management have beneficial outcomes (which most companies do because otherwise there wouldn't be product managers), then why wouldn't you want to apply the methods of that discipline to developers and development workflows? Developers are users too and the fact that they work for the same company that is creating the product shouldn't make them immune from the benefits of product management.

If we want to make contributing to Firefox an even better experience for Mozilla employees and community contributors, I think we need to take a step back and assess the situation as a product manager would. The improvements that have been made to the individual pieces constituting Firefox's development workflow during my nearly seven years at Mozilla have been incredible. But I think in order to achieve the next round of major advancements in workflow productivity, we'll need to focus on how all of the pieces fit together. And that requires treating the entire workflow as a cohesive product.

Planet MozillaNew in Firefox 61: Developer Edition

Firefox 61: Developer Edition is available now, and contains a ton of great new features and under-the-hood improvements.

A Darker Dark Theme

Taking inspiration from Spinal Tap, Developer Edition’s dark theme now darkens more parts of the browser, including the new tab page.

Screenshot comparing the new darker Dark Theme with the older version

Searchable websites can now be added to Firefox via the “Add Search Engine” item inside the Page Action menu. The sites must describe their search APIs using OpenSearch metadata.

Screenshot of the OpenSearch "Add Search Engine" item in the page menu

And yes, the Page Action menu is also dark, if you’re using a dark theme.

More Powerful Developer Tools

More than just source maps, Firefox 61 understands how tools like Babel and Webpack work, making it possible to seamlessly inspect and interact with your original code right within the Debugger, as if it had never been bundled and minified in the first place. We’re also working to add native support for inspecting components and scopes in modern frameworks like React.

To learn more, see our separate, in-depth blog post: Debugging Modern Web Applications.

Nicer Developer Tools

The Developer Tools have seen numerous quality-of-life improvements.

You can now rearrange tools to suit your individual workflow, and any tabs that don’t fit in the current window remain readily accessible in an overflow menu.

Animation of DevTools tabs rearranging

The Network panel also gained prominent drop-down menus for controlling network throttling and importing/exporting HTTP Archive (“HAR”) files.

Screenshot of HAR import and export menu in DevTools

We’ve also sped up the DevTools across the board, and are measuring and tracking performance as an explicit goal for the team. Even more improvements are on the way.

In Firefox Quantum, we re-implemented many of the DevTools using basic web technologies: HTML, JavaScript, and CSS. We’re even using React inside the DevTools themselves! This means that if you know how to build for the web, you know how to hack on the DevTools. If you’d like to get involved, we have a great getting started guide with pointers to good first bugs to tackle.

The Accessibility Inspector

There’s also an entirely new tool available, the Accessibility Inspector, which reveals the logical structure of your page, as it might appear to a screen reader or other assistive software.

A screenshot of the new Accessibility Inspector that helps you assess what the browser and screen reader can 'see'

This is a low-level tool meant to help you understand how Firefox and screen readers “see” your content. To learn more, including how to enable and use this new tool, check out Marco Zehe’s article Introducing the Accessibility Inspector. If you’re looking for more opinionated tools to help audit and improve your site’s accessibility, consider add-ons like the aXe Developer Tools or the WAVE Accessibility Extension.

Behind the Scenes

Lastly, we landed a number of improvements and refactorings that should make Firefox a better all-around browser.

  • Firefox now parses CSS stylesheets in multiple parallel threads. This can significantly improve the time to first paint for websites, especially when there are many stylesheets on a single page.
  • The multi-tier WebAssembly compiler has been implemented for the AArch64 CPU architecture common in smartphones and tablets. You can read more about the benefits of this compiler design in Lin Clark’s article, Making WebAssembly Even Faster.
  • On macOS, like on Windows, browser add-ons now run in a separate, dedicated process. A continuation of our work with multi-process Firefox, this helps Firefox itself stay responsive, even when an add-on is busy doing work.

Firefox 61 is currently available in Beta and Developer Edition, and it will become the stable version of Firefox on June 26th. If you’d like to keep up with Firefox development as it happens, we recommend reading the Firefox Nightly Blog, or following @FirefoxNightly on Twitter.

Planet MozillaRevisiting Using Docker

When Docker was taking off like wildfire in 2013, I was caught up in the excitement like everyone else. I remember knowing of the existence of LXC and container technologies in Linux at the time. But Docker seemed to be the first open source tool to actually make that technology usable (a terrific example of how user experience matters).

At Mozilla, Docker was adopted all around me and by me for various utilities. Taskcluster - Mozilla's task execution framework geared for running complex CI systems - adopted Docker as a mechanism to run processes in self-contained images. Various groups in Mozilla adopted Docker for running services in production. I adopted Docker for integration testing of complex systems.

Having seen various groups use Docker and having spent a lot of time in the trenches battling technical problems, my conclusion is Docker is unsuitable as a general purpose container runtime. Instead, Docker has its niche for hosting complex network services. Other uses of Docker should be highly scrutinized and potentially discouraged.

When Docker first hit the market, it was arguably the only game in town. Using Docker to achieve containerization was defensible because there weren't exactly many (any?) practical alternatives. So if you wanted to use containers, you used Docker.

Fast forward a few years. We now have the Open Container Initiative (OCI). There are specifications describing common container formats. So you can produce a container once and take it to any number of OCI-compatible container runtimes for execution. And in 2018, there are a ton of players in this space. runc, rkt, and gVisor are just some. So Docker is no longer the only viable tool for executing a container. If you are just getting started with the container space, you would be wise to research the available options and their pros and cons.

When you look at all the options for running containers in 2018, I think it is obvious that Docker - usable though it may be - is not ideal for a significant number of container use cases. If you divide use cases into a spectrum where one end is run a process in a sandbox and the other is run a complex system of orchestrated services in production, Docker appears to be focusing on the latter. Take it from Docker themselves:

Docker is the company driving the container movement and the only container platform provider to address every application across the hybrid cloud. Today's businesses are under pressure to digitally transform but are constrained by existing applications and infrastructure while rationalizing an increasingly diverse portfolio of clouds, datacenters and application architectures. Docker enables true independence between applications and infrastructure and developers and IT ops to unlock their potential and creates a model for better collaboration and innovation.

That description of Docker (the company) does a pretty good job of describing what Docker (the technology) has become: a constellation of software components providing the underbelly for managing complex applications in complex infrastructures. That's pretty far detached on the spectrum from run a process in a sandbox.

Just because Docker (the company) is focused on a complex space doesn't mean they are incapable of excelling at solving simple problems. However, I believe that in this particular case, the complexity of what Docker (the company) is focusing on has inhibited its Docker products from adequately addressing simple problems.

Let's dive into some technical specifics.

At its most primitive, Docker is a glorified tool to run a process in a sandbox. On Linux, this is accomplished by using the clone(2) function with specific flags and combined with various other techniques (filesystem remounting, capabilities, cgroups, chroot, seccomp, etc) to sandbox the process from the main operating system environment and kernel. There are a host of tools living at this not-quite-containers level that make it easy to run a sandboxed process. The bubblewrap tool is one of them.

Strictly speaking, you don't need anything fancy to create a process sandbox: just an executable you want to invoke and an executable that makes a set of system calls (like bubblewrap) to run that executable.

When you install Docker on a machine, it starts a daemon running as root. That daemon listens for HTTP requests on a network port and/or UNIX socket. When you run docker run from the command line, that command establishes a connection to the Docker daemon and sends any number of HTTP requests to instruct the daemon to take actions.

A daemon with a remote control protocol is useful. But it shouldn't be the only way to spawn containers with Docker. If all I want to do is spawn a temporary container that is destroyed afterwards, I should be able to do that from a local command without touching a network service. Something like bubblewrap. The daemon adds all kinds of complexity and overhead. Especially if I just want to run a simple, short-lived command.

Docker at this point is already pretty far detached from a tool like bubblewrap. And the disparity gets worse.

Docker adds another abstraction on top of basic process sandboxing in the form of storage / filesystem management. Docker insists that processes execute in self-contained, chroot()'d filesystem environments and that these environments (Docker images) be managed by Docker itself. When Docker images are imported into Docker, Docker manages them using one of a handful of storage drivers. You can choose from devicemapper, overlayfs, zfs, btrfs, and aufs and employ various configurations of all these. Docker images are composed of layers, with one layer stacked on top of the prior. This allows you to have an immutable base layer (that can be shared across containers) where run-time file changes can be isolated to a specific container instance.

Docker's ability to manage storage is pretty cool. And I dare say Docker's killer feature in the very beginning of Docker was the ability to easily produce and exchange self-contained Docker images across machines.

But this utility comes at a very steep price. Again, if our use case is run a process in a sandbox, do we really care about all this advanced storage functionality? Yes, if you are running hundreds of containers on a single system, a storage model built on top of copy-on-write is perhaps necessary for scaling. But for simple cases where you just want to run a single or small number of processes, it is extremely overkill and adds many more problems than it solves.

I cannot stress this enough, but I have spent hours debugging and working around problems due to how filesystems/storage works in Docker.

When Docker was initially released, aufs was widely used. As I previously wrote, aufs has abysmal performance as you scale up the number of concurrent I/O operations. We shaved minutes off tasks in Firefox CI by ditching aufs for overlayfs.

But overlayfs is far from a panacea. File metadata-only updates are apparently very slow in overlayfs. We're talking ~100ms to call fchownat() or utimensat(). If you perform an rsync -a or chown -R on a directory with just a hundred files that were defined in a base image layer, you can have delays of seconds.

The Docker storage drivers backed by real filesystems like zfs and btrfs are a bit better. But they have their quirks too. For example, creating layers in images is comparatively very slow compared to overlayfs (which are practically instantaneous). This matters when you are iterating on a Dockerfile for example and want to quickly test changes. Your edit-compile cycle grows frustratingly long very quickly.

And I could opine on a handful of other problems I've encountered over the years.

Having spent hours of my life debugging and working around issues with Docker's storage, my current attitude is enough of this complexity, just let me use a directory backed by the local filesystem, dammit.

For many use cases, you don't need the storage complexity that Docker forces upon you. Pointing Docker at a directory on a local filesystem to chroot into is good enough. I know the behavior and performance properties of common Linux filesystems. ext4 isn't going to start making fchownat() or utimensat() calls take ~100ms. It isn't going to complain when a hard link spans multiple layers in an image. Or slow down to a crawl when multiple threads are performing concurrent read I/O. There aren't going to be intrinsically complicated algorithms and caching to walk N image layers to find the most recent version of a file (or if there is, it will be so far down the stack in kernel land that I likely won't ever have to deal with it as a normal user). Docker images with their multiple layers add complexity and overhead. For many use cases, the pain it inflicts offsets the initial convenience it saves.

Docker's optimized-for-complex-use-cases architecture demonstrates its inefficiency in simple benchmarks.

On my machine, docker run -i --rm debian:stretch /bin/ls / takes ~850ms. Almost a second to perform a directory listing (sometimes it does take over 1 second - I was being generous and quoting a quicker time). This command takes ~1ms when run in a local shell. So we're at 2.5-3 magnitudes of overhead. The time here does include time to initially create the container and destroy it afterwards. We can isolate that overhead by starting a persistent container and running docker exec -i <cid> /bin/ls / to spawn a new process in an existing container. This takes ~85ms. So, ~2 magnitudes of overhead to spawn a process in a Docker container versus spawning it natively. What's adding so much overhead, I'm not sure. Yes, there are HTTP requests under the hood. But HTTP to a local daemon shouldn't be that slow. I'm not sure what's going on.
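
If you want to reproduce that comparison on your own machine, here is a minimal timing sketch (not from the post; the time_command helper is hypothetical, and it assumes Docker is installed and the debian:stretch image has already been pulled):

    import statistics
    import subprocess
    import time

    def time_command(argv, runs=5):
        # Run argv several times and return the median wall-clock time in milliseconds.
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(argv, stdout=subprocess.DEVNULL, check=True)
            samples.append((time.perf_counter() - start) * 1000.0)
        return statistics.median(samples)

    # Spawning /bin/ls natively versus inside a fresh, throwaway Docker container.
    native_ms = time_command(['/bin/ls', '/'])
    docker_ms = time_command(['docker', 'run', '-i', '--rm', 'debian:stretch', '/bin/ls', '/'])
    print('native: %.1f ms, docker: %.1f ms (%.0fx slower)' % (native_ms, docker_ms, docker_ms / native_ms))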

If we docker export that image to the local filesystem and use runc state to configure it so we can run it with runc, runc run takes ~85ms to run /bin/ls /. If we runc exec <cid> /bin/ls / to start a process in an existing container, that completes in ~10ms. runc appears to be executing these simple tasks ~10x faster than Docker.

But to even get to that point, we had to make a filesystem available to spawn the container in. With Docker, you need to load an image into Docker. Using docker save to produce a 105,523,712 byte tar file, docker load -i image.tar takes ~1200ms to complete. tar xf image.tar takes ~65ms to extract that image to the local filesystem. Granted, Docker is computing the SHA-256 of the image as part of import. But SHA-256 runs at ~250MB/s on my machine and on that ~105MB input takes ~400ms. Where is that extra ~750ms of overhead in Docker coming from?
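
As a sanity check on that SHA-256 estimate, a short sketch like the following (standard library only; image.tar is whatever docker save produced) measures raw hashing throughput on your own hardware:

    import hashlib
    import os
    import time

    path = 'image.tar'  # tarball produced by `docker save`

    start = time.perf_counter()
    digest = hashlib.sha256()
    with open(path, 'rb') as fh:
        # Hash the file in 1 MiB chunks to keep memory usage flat.
        for chunk in iter(lambda: fh.read(1 << 20), b''):
            digest.update(chunk)
    elapsed = time.perf_counter() - start

    size_mb = os.path.getsize(path) / 1e6
    print('sha256 of %.0f MB took %.2fs (%.0f MB/s)' % (size_mb, elapsed, size_mb / elapsed))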

The Docker image loading overhead is still present on large images. With a 4,336,605,184 byte image, docker load was taking ~32s and tar x was taking ~2s. Obviously the filesystem was buffering writes in the tar case. And the ~2s is ignoring the ~17s to obtain the SHA-256 of the entire input. But there's still a substantial disparity here. (I suspect a lot of it is overlayfs not being as optimal as ext4.)

Several years ago there weren't many good choices for tools to execute containers. But today, there are good tools readily available. And thanks to OCI standards, you can often swap in alternate container runtimes. Docker (the tool) has an architecture that is optimized for solving complex use cases (coincidentally use cases that Docker the company makes money from). Because of this, my conclusion - drawn from using Docker for several years - is that Docker is unsuitable for many common use cases. If you care about low container startup/teardown overhead, low latency when interacting with containers (including spawning processes from outside of them), and for workloads where Docker's storage model interferes with understanding or performance, I think Docker should be avoided. A simpler tool (such as runc or even bubblewrap) should be used instead.

Call me a curmudgeon, but having seen all the problems that Docker's complexity causes, I'd rather see my containers resemble a tarball that can easily be chroot()'d into. I will likely be revisiting projects that use Docker and replacing Docker with something lighter weight and architecturally simpler. As for the use of Docker in the more complex environments it seems to be designed for, I don't have a meaningful opinion as I've never really used it in that capacity. But given my negative experiences with Docker over the years, I am definitely biased against Docker and will lean towards simpler products, especially if their storage/filesystem management model is simpler. Docker introduced containers to the masses and they should be commended for that. But for my day-to-day use cases for containers, Docker is simply not the right tool for the job.

I'm not sure exactly what I'll replace Docker with for my simpler use cases. If you have experiences you'd like to share, sharing them in the comments will be greatly appreciated.

Planet MozillaRelease of python-zstandard 0.9

I have just released python-zstandard 0.9.0. You can install the latest release by running pip install zstandard==0.9.0.

Zstandard is a highly tunable and therefore flexible compression algorithm with support for modern features such as multi-threaded compression and dictionaries. Its performance is remarkable and if you use it as a drop-in replacement for zlib, bzip2, or other common algorithms, you'll frequently see more than a doubling in performance.

python-zstandard provides rich bindings to the zstandard C library without sacrificing performance, safety, features, or a Pythonic feel. The bindings run on Python 2.7, 3.4, 3.5, 3.6, 3.7 using either a C extension or CFFI bindings, so it works with CPython and PyPy.

I can make a compelling argument that python-zstandard is one of the richest compression packages available to Python programmers. Using it, you will be able to leverage compression in ways you couldn't with other packages (especially those in the standard library) all while achieving ridiculous performance. Due to my focus on performance, python-zstandard is able to outperform Python bindings to other compression libraries that should be faster. This is because python-zstandard is very diligent about minimizing memory allocations and copying, minimizing Python object creation, reusing state, etc.

While python-zstandard is formally marked as a beta-level project and hasn't yet reached a 1.0 release, it is suitable for production usage. python-zstandard 0.8 shipped with Mercurial and is in active production use there. I'm also aware of other consumers using it in production, including at Facebook and Mozilla.

The sections below document some of the new features of python-zstandard 0.9.

File Object Interface for Reading

The 0.9 release contains a stream_reader() API on the compressor and decompressor objects that allows you to treat those objects as readable file objects. This means that you can pass a ZstdCompressor or ZstdDecompressor around to things that accept file objects and things generally just work. e.g.:

    import zstandard as zstd

    # compressed_file is the path to an existing zstandard-compressed file.
    with open(compressed_file, 'rb') as ifh:
        dctx = zstd.ZstdDecompressor()
        with dctx.stream_reader(ifh) as reader:
            while True:
                chunk = reader.read(32768)
                if not chunk:
                    break
                # Process the decompressed chunk here.

This is probably the most requested python-zstandard feature.
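
The compressor side mirrors this: reading from the object returned by ZstdCompressor.stream_reader() yields compressed bytes. A minimal sketch (source_file and dest_file are placeholder paths):

    import zstandard as zstd

    cctx = zstd.ZstdCompressor(level=3)
    with open(source_file, 'rb') as ifh, open(dest_file, 'wb') as ofh:
        with cctx.stream_reader(ifh) as reader:
            while True:
                chunk = reader.read(32768)
                if not chunk:
                    break
                # Each chunk is already zstandard-compressed data.
                ofh.write(chunk)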

While the feature is usable, it isn't complete. Support for readline(), readinto(), and a few other APIs is not yet implemented. In addition, you can't use these reader objects for opening zstandard compressed tarball files because Python's tarfile package insists on doing backward seeks when reading. The current implementation doesn't support backwards seeking because that requires buffering decompressed output and that is not trivial to implement. I recognize that all these features are useful and I will try to work them into a subsequent release of 0.9.

Negative Compression Levels

The 1.3.4 release of zstandard (which python-zstandard 0.9 bundles) supports negative compression levels. I won't go into details, but negative compression levels disable extra compression features and allow you to trade compression ratio for more speed.

When compressing a 6,472,921,921 byte uncompressed bundle of the Firefox Mercurial repository, the previous fastest we could go with level 1 was ~510 MB/s (measured on the input side) yielding a 1,675,227,803 byte file (25.88% of original).

With level -1, we compress to 1,934,253,955 bytes (29.88% of original) at ~590 MB/s. With level -5, we compress to 2,339,110,873 bytes (36.14% of original) at ~720 MB/s.

On the decompress side, level 1 decompresses at ~1,150 MB/s (measured at the output side), -1 at ~1,320 MB/s, and -5 at ~1,350 MB/s (generally speaking, zstandard's decompression speeds are relatively similar - and fast - across compression levels).

And that's just with a single thread. zstandard supports using multiple threads to compress a single input and python-zstandard makes this feature easy to use. Using 8 threads on my 4+4 core i7-6700K, level 1 compresses at ~2,000 MB/s (3.9x speedup), -1 at ~2,300 MB/s (3.9x speedup), and -5 at ~2,700 MB/s (3.75x speedup).
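
As a rough sketch of what that looks like with python-zstandard (assuming the level argument accepts negative values in 0.9, and with a placeholder input file):

    import zstandard as zstd

    with open('firefox.bundle', 'rb') as fh:  # placeholder uncompressed input
        data = fh.read()

    # Trade compression ratio for speed: a negative level plus multiple threads.
    cctx = zstd.ZstdCompressor(level=-5, threads=8)
    compressed = cctx.compress(data)
    print('%.2f%% of original size' % (100.0 * len(compressed) / len(data)))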

That's with a large input. What about small inputs?

If we take 456,599 Mercurial commit objects spanning 298,609,254 bytes from the Firefox repository and compress them individually, at level 1 we yield a total of 133,457,198 bytes (44.7% of original) at ~112 MB/s. At level -1, we compress to 161,241,797 bytes (54.0% of original) at ~215 MB/s. And at level -5, we compress to 185,885,545 bytes (62.3% of original) at ~395 MB/s.

On the decompression side, level 1 decompresses at ~260 MB/s, -1 at ~1,000 MB/s, and -5 at ~1,150 MB/s.

Again, that's 456,599 operations on a single thread with Python.

python-zstandard has an experimental API where you can pass in a collection of inputs and it batch compresses or decompresses them in a single operation. It releases the GIL and uses multiple threads. It puts the results in shared buffers in order to minimize the overhead of memory allocations and Python object creation and garbage collection. Using this mode with 8 threads on my 4+4 core i7-6700K, level 1 compresses at ~525 MB/s, -1 at ~1,070 MB/s, and -5 at ~1,930 MB/s. On the decompression side, level 1 is ~1,320 MB/s, -1 at ~3,800 MB/s, and -5 at ~4,430 MB/s.

So, my consumer grade desktop i7-6700K is capable of emitting decompressed data at over 4 GB/s with Python. That's pretty good if you ask me. (Full disclosure: the timings were taken just around the compression operation itself: overhead of loading data into memory was not taken into account. See the bench.py script in the source repository for more.)
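
The post doesn't name the entry points, so treat this as a hedged sketch: in the releases I've used, the experimental batch API is exposed as multi_compress_to_buffer() (with a matching multi_decompress_to_buffer()), taking a list of inputs and a thread count:

    import zstandard as zstd

    # Stand-ins for the Mercurial commit objects mentioned above.
    chunks = [b'changeset %d' % i for i in range(100000)]

    cctx = zstd.ZstdCompressor(level=1)
    # Compresses every input on a thread pool and stores the results in shared buffers.
    result = cctx.multi_compress_to_buffer(chunks, threads=8)
    print('compressed %d inputs into %d segments' % (len(chunks), len(result)))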

Long Distance Matching Mode

Negative compression levels take zstandard into performance territory that has historically been reserved for compression formats like lz4 that are optimized for that domain. Long distance matching takes zstandard in the other direction, towards compression formats that aim to achieve optimal compression ratios at the expense of time and memory usage.

python-zstandard 0.9 supports long distance matching and all the configurable parameters exposed by the zstandard API.

I'm not going to capture many performance numbers here because python-zstandard performs about the same as the C implementation because LDM mode spends most of its time in zstandard C code. If you are interested in numbers, I recommend reading the zstandard 1.3.2 and 1.3.4 release notes.

I will, however, underscore that zstandard can achieve close to lzma's compression ratios (what the xz utility uses) while completely smoking lzma on decompression speed. For a bundle of the Firefox Mercurial repository, zstandard level 19 with a long distance window size of 512 MB using 8 threads compresses to 1,033,633,309 bytes (16.0%) in ~260s wall, 1,730s CPU. xz -T8 -8 compresses to 1,009,233,160 (15.6%) in ~367s wall, ~2,790s CPU.

On the decompression side, zstandard takes ~4.8s and runs at ~1,350 MB/s as measured on the output side while xz takes ~54s and runs at ~114 MB/s. Zstandard, however, does use a lot more memory than xz for decompression, so that performance comes with a cost (512 MB versus 32 MB for this configuration).
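
For comparison at home, this is roughly the same configuration driven through the standalone zstd command-line tool (1.3.2 or later) rather than the Python bindings, as a hedged sketch: --long=29 requests a 2^29-byte (512 MB) long-distance-matching window, and the same flag has to be repeated when decompressing.

    import subprocess

    # Compress: level 19, 512 MB (2**29 byte) long-distance-matching window, 8 threads.
    subprocess.run(['zstd', '-19', '--long=29', '-T8', 'firefox.bundle',
                    '-o', 'firefox.bundle.zst'], check=True)

    # The enlarged window must be requested again at decompression time.
    subprocess.run(['zstd', '-d', '--long=29', 'firefox.bundle.zst',
                    '-o', 'firefox.bundle.out'], check=True)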

Other Notable Changes

python-zstandard now uses the advanced compression and decompression APIs everywhere. All tunable compression and decompression parameters are available to python-zstandard. This includes support for disabling magic headers in frames (saves 4 bytes per frame - this can matter for very small inputs, especially when using dictionary compression).

The full dictionary training API is exposed. Dictionary training can now use multiple threads.

There are a handful of utility functions for inspecting zstandard frames, querying the state of compressors, etc.

Lots of work has gone into shoring up the code base. We now build with warnings as errors in CI. I performed a number of focused auditing passes to fix various classes of deficiencies in the C code. This includes use of the buffer protocol: python-zstandard is now able to accept any Python object that provides a view into its underlying raw data.

Decompression contexts can now be constructed with a max memory threshold so attempts to decompress something that would require more memory will result in error.

See the full release notes for more.

Conclusion

Since I last released a major version of python-zstandard, a lot has changed in the zstandard world. As I blogged last year, zstandard circa early 2017 was a very compelling compression format: it already outperformed popular compression formats like zlib and bzip2 across the board. As a general purpose compression format, it made a compelling case for itself. In my mind, brotli was its only real challenger.

As I wrote then, zstandard isn't perfect. (Nothing is.) But a year later, it is refreshing to see advancements.

A criticism one year ago was zstandard was pretty good as a general purpose compression format but it wasn't great if you live at the fringes. If you were a speed freak, you'd probably use lz4. If you cared about compression ratios, you'd probably use lzma. But recent releases of zstandard have made huge strides into the territory of these niche formats. Negative compression levels allow zstandard to flirt with lz4's performance. Long distance matching allows zstandard to achieve close to lzma's compression ratios. This is a big friggin deal because it makes it much, much harder to justify a domain-specific compression format over zstandard. I think lzma still has a significant edge for ultra compression ratios when memory utilization is a concern. But for many consumers, memory is readily available and it is easy to justify trading potentially hundreds of megabytes of memory to achieve a 10x speedup for decompression. Even if you aren't willing to sacrifice more memory, the ability to tweak compression parameters is huge. You can do things like store multiple versions of a compressed document and conditionally serve the one most appropriate for the client, all while running the same zstandard-only code on the client. That's huge.

A year later, zstandard continues to impress me for its set of features and its versatility. The library is continuing to evolve - all while maintaining backwards compatibility on the decoding side. (That's a sign of a good format design if you ask me.) I was honestly shocked to see that zstandard was able to change its compression settings in a way that allowed it to compete with lz4 and lzma without requiring a format change.

The more I use zstandard, the more I think that everyone should use this and that popular compression formats just aren't cut out for modern computing any more. Every time I download a zlib/gz or bzip2 compressed archive, I'm thinking if only they used zstandard this archive would be smaller, it would have decompressed already, and I wouldn't be thinking about how annoying it is to wait for compression operations to complete. In my mind, zstandard is such an obvious advancement over the status quo and is such a versatile format - now covering the gamut of super fast compression to ultra ratios - that it is bordering on negligent to not use zstandard. With the removal of the controversial patent rights grant license clause in zstandard 1.3.1, that justifiable resistance to widespread adoption of zstandard has been eliminated. Zstandard is objectively superior for many workloads and I heavily encourage its use. I believe python-zstandard provides a high-quality interface to zstandard and I encourage you to give it and zstandard a try the next time you compress data.

If you run into any problems or want to get involved with development, python-zstandard lives at indygreg/python-zstandard on GitHub.

(I updated the post on 2018-05-16 to remove a paragraph about zstandard competition. In the original post, I unfairly compared zstandard to Snappy instead of Brotli and made some inaccurate statements around that comparison.)

Planet MozillaThe curl 7 series reaches 60

curl 7.60.0 is released. Remember 7.59.0? This latest release cycle was a week longer than normal since the last was one week shorter and we had this particular release date adapted to my traveling last week. It gave us 63 days to cram things in, instead of the regular 56 days.

7.60.0 is a crazy version number in many ways. We've been working on the version 7 series since virtually forever (the year 2000) and there's no version 8 in sight any time soon. This is the 174th curl release ever.

I believe we shouldn't allow the minor number to go above 99 (because I think it will cause serious confusion among users) so we should come up with a scheme to switch to version 8 before 7.99.0 gets old. If we keep doing a new minor version every eight weeks, which seems like the fastest route, math tells us that's a mere 6 years away.

Numbers

In the 63 days since the previous release, we have done and had...

3 changes
111 bug fixes (total: 4,450)
166 commits (total: 23,119)
2 new curl_easy_setopt() options (total: 255)
1 new curl command line option (total: 214)
64 contributors, 36 new (total: 1,741)
42 authors (total: 577)
2 security fixes (total: 80)

What good does 7.60.0 bring?

Our tireless and fierce army of security researchers keep hammering away at every angle of our code and this has again unveiled vulnerabilities in previously released curl code:

  1. FTP shutdown response buffer overflow: CVE-2018-1000300

When you tell libcurl to use a larger buffer size, that larger buffer size is not used for the shut down of an FTP connection, so if the server then sends back a huge response during that sequence, it would buffer-overflow a heap-based buffer.

  2. RTSP bad headers buffer over-read: CVE-2018-1000301

The header parser function would sometimes not restore a pointer back to the beginning of the buffer, which could lead to a subsequent function reading out of buffer and causing a crash or potential information leak.

There are also two new features introduced in this version:

HAProxy protocol support

HAProxy has pioneered this simple protocol for clients to pass on meta-data to the server about where it comes from; designed to allow systems to chain proxies / reverse-proxies without losing information about the originating client. Now you can make your libcurl-using application switch this on with CURLOPT_HAPROXYPROTOCOL and from the command line with curl's new --haproxy-protocol option.
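
As a small illustration (not from the post; the URL is a placeholder), switching the feature on from a script using the new command-line option looks like this:

    import subprocess

    # Send the HAProxy PROXY protocol header ahead of the request.
    # The libcurl equivalent is setting CURLOPT_HAPROXYPROTOCOL on the handle.
    subprocess.run(['curl', '--haproxy-protocol', 'http://backend.example/'], check=True)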

Shuffling DNS addresses

Over six years ago, I blogged on how round robin DNS doesn't really work these days. Once upon a time the gethostbyname() family of functions actually returned addresses in a sort of random fashion, which made clients use them in an almost random fashion and therefore they were spread out on the different addresses. When getaddrinfo() took over as the name resolving function, it also introduced address sorting and prioritizing, in a way that effectively breaks the round robin approach.

Now, you can get this feature back with libcurl. Set CURLOPT_DNS_SHUFFLE_ADDRESSES to have the list of addresses shuffled after they are resolved, before they're used. If you're connecting to a service that offers several IP addresses and you want to connect to one of those addresses in a semi-random fashion, this option is for you.

There's no command line option to switch this on. Yet.

Bug fixes

We did many bug fixes for this release as usual, but some of my favorite ones this time around are...

improved pending transfers for HTTP/2

libcurl-using applications that add more transfers than what can be sent over the wire immediately (usually because the application has set some limit on the parallelism libcurl will do) can be held "pending" by libcurl. They're basically kept in a separate queue until there's a chance to send them off. libcurl will then attempt to start them when the streams that are in progress end.

The algorithm for retrying the pending transfers was quite naive and "brute-force", which made it terribly slow and ineffective when there are many transfers waiting in the pending queue. This slowed down the transfers unnecessarily.

With the fixes we've landed in 7.60.0, the algorithm is less stupid, which leads to much less overhead and, for this setup, much faster transfers.

curl_multi_timeout values with threaded resolver

When using a libcurl version that is built to use a threaded resolver, there's no socket to wait for during the name resolving phase so we've often recommended users to just wait "a short while" during this interval. That has always been a weakness and an unfortunate situation.

Starting now, curl_multi_timeout() will return suitable timeout values during this period so that users will no longer have to re-implement that logic themselves. The timeouts will be slowly increasing to make sure fast resolves are detected quickly but slow resolves don't consume too much CPU.

much faster cookies

The cookie code in libcurl was keeping them all in a linear linked list. That's fine for small amounts of cookies or perhaps if you don't manipulate them much.

Users with several hundred cookies, or even thousands, will in 7.60.0 notice a speed increase that in some situations is in the order of several magnitudes, now that the internal representation has changed to use hash tables and some good cleanups were made.

HTTP/2 GOAWAY-handling

We figured out some problems in libcurl's handling of GOAWAY, like when an application wants to do a bunch of transfers over a connection that suddenly gets a GOAWAY so that libcurl needs to create a new connection to do the rest of the pending transfers over.

Turns out nginx ships with a config option named http2_max_requests that sets the maximum number of requests it allows over the same connection before it sends GOAWAY over it (and it defaults to 1000). This option isn't very well explained in their docs and it seems users won't really know what good values to set it to, so this is probably the primary reason clients see GOAWAYs where there's no apparent good reason for them.

Setting the value to a ridiculously low value at least helped me debug this problem and improve how libcurl deals with it!

Repair non-ASCII support

We've supported transfers with libcurl on non-ASCII platforms since early 2007. Non-ASCII here basically means EBCDIC, but the code hasn't been limited to those.

However, due to this being used by only a small number of users and the fact that our test infrastructure doesn't test this feature well enough, we slipped recently and broke libcurl for the non-ASCII users. Work was put in and changes were landed to make sure that libcurl works again on these systems!

Enjoy 7.60.0! In 56 days there should be another release to play with...

Planet MozillaEFail and Thunderbird, What You Need To Know

Yesterday, researchers and the press shared information describing security vulnerabilities that would enable an attacker to gain access to the plaintext of encrypted Emails. To understand how this happens, the researchers who uncovered EFail provide a good description on their website:

In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs. To create these exfiltration channels, the attacker first needs access to the encrypted emails, for example, by eavesdropping on network traffic, compromising email accounts, email servers, backup systems or client computers. The emails could even have been collected years ago.

The attacker changes an encrypted email in a particular way and sends this changed encrypted email to the victim. The victim’s email client decrypts the email and loads any external content, thus exfiltrating the plaintext to the attacker.

How to know if you’re affected

You’re affected only if you:

  • Are using S/MIME encryption or PGP encryption (through the Enigmail add-on)
  • And the attacker has access to encrypted Emails of yours

How to protect yourself


DO NOT DISABLE ENCRYPTION. 
We’ve seen recommendations from some outlets to stop using encrypted Email altogether. If you are sending sensitive data via Email, Thunderbird still recommends using encryption to keep those messages safe. You should, however, check the configuration of the applications you use to view encrypted EMail. For Thunderbird, follow our guidelines below to protect yourself.

Until Thunderbird 52.8 and 52.8.1 are released with fixes:

  • Keeping remote content disabled in Thunderbird (the default) is advisable, as it should mitigate the described attack vector.
  • Do not use the “allow now” option that pops up when remote content is encountered in your encrypted Emails.

Most of the EFail bugs require a back-channel and require the attacker to send a manipulated Email to you, which contains part of a previously obtained encrypted message. It is also worth noting that clicking content in the Email can also allow for a back-channel (until the fixes are live).

Enigmail version 2.0.3 also shows a warning now, which should help you be aware if you are affected.

 

Planet MozillaFirefox Performance Update #8

Howdy folks! Another Firefox Performance Update coming at you. Buckle up.

But first a word from our sponsor: Talos!

Talos is a framework that we use to measure various aspects of Firefox performance as part of our continuous integration pipeline.

There are a number of Talos “suites”, where each suite contains some number of tests. These tests, in turn, report some set of numbers that are then stored and graphable via our graph viewer here.

Here’s a full list of the Talos tests, including their purpose, the sorts of measurements they take, and who’s currently a good person to ask about them if you have questions.

A lot of work has been done to reduce the amount of noise in our Talos tests, but they’re still quite sensitive and noisy. This is why it’s often necessary to do 5-10 retriggers of Talos test runs in order to do meaningful comparisons.

Sometimes Talos detects regressions that aren't actually real regressions [1], and that can be a pain. However, for the times where real regressions are caught, Talos usually lets us know much faster than Telemetry or user reports.

Did you know that you can get profiles from Try for Talos runs? This makes it much simpler to diagnose Talos regressions. Also, we now have Talos profiles being generated on our Nightly builds for added convenience!

And now for some Performance Project updates!

Early first blank paint (lead by Florian Quèze)

No new bugs have been filed against the feature yet from our beta population, and we are seeing an unsurprising drop in the time-to-first-paint probe on that channel. User Research is in the process of getting a (very!) quick study launched to verify our assumption that users will perceive the first blank paint as the browser having started more quickly.

Faster content process start-up time (lead by Felipe Gomes)

Felipe has some patches up for review to make our frame scripts as lazy as possible. To support that, he’s added some neat infrastructure using Proxy and Reflect to make it possible to create an object that can be registered as an event handler or observer, and only load the associated script when the events / observer notifications actually fire.

We’re excited to see how this work impacts our memory and content process start-up graphs!

LRU cache for tab layers (lead by Doug Thayer)

The patch to introduce the LRU cache landed and bounced a few times. There appears to be an invalidation bug with the approach that needs to be ironed out first. dthayer has a plan to address this (forcing re-paints when switching to a tab that’s already rendered in the background), and is just waiting for review.

ClientStorageTextureSource for macOS (lead by Doug Thayer)

Doug is working on finishing a project that should allow us to be more efficient when uploading things to the compositor on macOS (by handing memory over to the GPU rather than copying it). He’s currently dealing with strange crashes that he can only reproduce on Try. Somehow, Doug seems to always run into the weird bugs that only appear in automation, and the whole team is crossing our fingers for him on this one.

Swapping DataURLs for Blobs in Activity Stream (lead by Jay Lim)

Our new intern Jay Lim is diving right into performance work, and already has his first patch up. This patch makes it so that Activity Stream no longer uses DataURLs to serialize images down to the content process, and instead uses Blobs and Blob URLs. This should allow the underlying infrastructure to make better use of memory, as well as avoiding the cost of converting images to and from DataURLs.

Caching Activity Stream JS in the JS Bytecode Cache (lead by Jay Lim)

This project is still in the research phase. Jay is trying to determine if it’s possible to stash the parsed Activity Stream JS code in the JS bytecode cache that we normally use for webpages. We’re still evaluating how much this would save us on page load, and we’re also still evaluating the cost of modifying the underlying infrastructure to allow this. Stay tuned for updates.

AwesomeBar improvements (led by Gijs Kruitbosch)

Gijs has started this work by making it much cheaper to display long URLs in the AwesomeBar. This is particularly useful for DataURLs that might happen to be in your browsing history for some reason!

This is a long-pull effort, so expect this work to be spread out over a bunch of bugs.

Tab warming (lead by Mike Conley)

I’ve been focusing on determining why warming tabs seems to result in two consecutive paints. My findings are here, and I suspect that in the warming case, the second paint is avoidable. I suspect that this, coupled with dthayer’s work on ClientStorageTextureSource will greatly improve tab warming’s performance on macOS, and allow us to ship on that OS.

Firefox’s Most Wanted: Performance Wins (lead by YOU!)

Before we go into the grab-bag list of performance-related fixes – have you seen any patches landing that should positively impact Firefox’s performance? Let me know about it so I can include it in the list, and give appropriate shout-outs to all of the great work going on! That link again!

Grab-bag time

And now, without further ado, a list of performance work that took place in the tree:

(🌟 indicates a volunteer contributor)

Thanks, folks!


  [1] Sometimes, for example, the test is just measuring the wrong thing.

Planet MozillaHot Topics in Digital Rights: A Global Look at the Future of Internet Health

Hot Topics in Digital Rights: A Global Look at the Future of Internet Health In June 2017, Mozilla launched a new fellowship track focused on tech policy that brought together experts from around the world who are advancing policy...

Planet MozillaWhat’s Your Open Source Strategy? Here Are 10 Answers…

A research report from Mozilla and Open Tech Strategies provides new perspectives on framing open source strategy. The report builds on Mozilla’s “Open by Design” strategy, which aims to increase the intent and impact of collaborative technology projects.

Mozilla is a radically open and participatory project. As part of our research into turning openness into a consistent competitive advantage, we identified that the application of open practices should always be paired with well-researched strategic intent. Without clarity of purpose, organizations will not (nor should they) maintain a long-term commitment to working with community. Indeed, we were not the first to observe this.

Mozilla benefits from many open practices, but open sourcing software is the foundation on which we build. Open source takes many forms at Mozilla. We enjoy a great diversity of community structures across Mozilla-driven open source projects, from Rust to Coral to Firefox (there are actually multiple distinct Firefox communities) and beyond.

The basic freedoms offered by Mozilla’s open source projects — the famous “Four Freedoms” originally defined by the FSF — are unambiguous. But they only define the rights conveyed by the software’s license. People often have expectations that go well beyond that strict definition: expectations about development models, business models, community structure, even tool chains. It is not even uncommon for open source projects to be criticised for failing to comply with those unspoken expectations.

We recognize that there is no one true model. As Mozilla evolves more and more into a multi-product organization, there will be different models that suit different products and different environments. Structure, governance, and licensing policies should all be explicit choices based on the strategic goals of an open source project. A challenge for any organization is how to articulate these choices; to put it simply, how do you answer the question, “what kind of open source project is this?”

To answer the question, we wanted to develop a set of basic models — “archetypes” — that projects could aim for, modifying them as needed, but providing a shared vocabulary for discussing how to think about any given project. We were delighted to be able to partner with one of the leading authorities in open source, Open Tech Strategies, in defining these archetypes. Their depth of knowledge and fresh perspective has created something we believe offers unique value.

The resulting framework consists of 10 common archetypes, covering things from business objectives to licensing, community standards, component coupling and project governance. It also contains some practical advice on how to use the framework and on how to set up your project.

20 years after the Open Source Initiative was founded, open source is widespread (and has inspired methods of peer production beyond the realm of software). Although this report was tailored to advance open source strategies and project design within Mozilla, and with the organizations and communities we work with, we also believe that this challenge is not unique to us. We suspect there will be many other organizations, both commercial and non-commercial, who will benefit from the model.

You can download the report here. Like so many things, it will never be “done”. After more hands-on-use with Mozilla projects, we intend to work with Open Tech Strategies on a version that expands its sights beyond Mozilla’s borders.

If you’re interested in collaborating, you can get in touch here: archetypes@opentechstrategies.com. The Github repository is up at https://github.com/OpenTechStrategies/open-source-archetypes.

The post What’s Your Open Source Strategy? Here Are 10 Answers… appeared first on The Mozilla Blog.

Planet MozillaDebugging Modern Web Applications

Building and debugging modern JavaScript applications in Firefox DevTools just took a quantum leap forward. In collaboration with Logan Smyth, Tech Lead for Babel, we leveled up the debugger’s source map support to let you inspect the code that you actually wrote. Combined with the ongoing initiative to offer first-class JS framework support across all our devtools, this will boost productivity for modern web app developers.

Modern JS frameworks and build tools play a critical role today. Frameworks like React, Angular, and Ember let developers build declarative user interfaces with JSX, directives, and templates. Tools like Webpack, Babel, and PostCSS let developers use new JS and CSS features before they are supported by browser vendors. These tools help developers write simpler code, but generate more complicated code to debug.

In the example below, we use Webpack and Babel to compile ES Modules and async functions into vanilla JS, which can run in any browser. The original code on the left is pretty simple. The generated, browser-compatible code on the right is much more complicated.

Figure 1. Original file on the left, generated file on the right.

When the debugger pauses, it uses source maps to navigate from line 13 in the generated code to line 4 in the original code. Unfortunately, because pausing actually happens on line 13, it can be difficult for the user to figure out what the value of dancer is at that time. Hovering over the variable dancer shows undefined, and the only way to find the scope of dancer is to open all six of the available scopes in the Scopes pane and then expand the _emojis object! This complicated and frustrating process is why many people opt to disable source maps.
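
Since the screenshots don’t carry over into this text, here is a minimal sketch of the kind of original source involved; the dancer binding and the emojis module come from the figure descriptions, while the file names and everything else are assumptions:

  // emojis.ts: a hypothetical helper module.
  export const emojis: Record<string, string> = { dancer: "💃" };

  // main.ts: hypothetical original source using ES modules and async/await,
  // which Babel and Webpack expand into much longer browser-compatible code.
  import { emojis } from "./emojis";

  export async function pickDancer(): Promise<string> {
    const dancer = emojis["dancer"]; // the debugger pauses inside the compiled output here
    await Promise.resolve();         // any await triggers Babel's async transform
    return dancer;
  }

Source maps record how each line of the generated bundle corresponds to lines in files like these, which is what the debugger relies on when it pauses.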

Figure 2. Value of dancer is undefined, six separate scopes in the Scopes pane.

To address this problem we teamed up with Logan Smyth to see if it was possible to make the interaction feel more natural, as if you were debugging your original code. The result is a new engine that maps source map data onto Babel’s syntax tree to show the variables you expect to see, the way you want to see them.

Figure 3. Correct value of dancer is displayed, Scopes pane shows one scope.

These improvements were first implemented for Babel and Webpack, and we’re currently adding support for TypeScript, Angular, Vue, Ember, and many others. If your project already generates source maps there is a good chance this feature will work for you out of the box.

To try it out, just head over and download Firefox Developer Edition. You can help us by testing this against your own project and reporting any issues. If you want to follow along, say hello, or contribute, you can also find us in the devtools channel on GitHub or Mozilla Discourse, or in the devtools Slack!

Our 2018 goal is to improve the lives of web developers who are building modern apps using the latest frameworks, build tools and best practices. Fixing variables is just the beginning. The future is bright!

Planet Mozillacurl user survey 2018

The curl user survey 2018 is up. If you ever use curl or libcurl, please donate some of your precious time and provide your answers!

The curl user survey is an annual tradition since 2014 and it is one of our primary ways to get direct feedback from a larger audience about what's good, what's bad and what to focus on next in the curl project. Your input really helps us!

2018 survey

The survey will be up and available to fill in for 14 days, from May 15th until the end of May 28th. Please help us share this and ask your curl-using friends to join in as well.

If you submitted data last year, make sure you didn't miss the analysis of the 2017 survey.

Planet MozillaWebRender newsletter #19

I skipped a newsletter again (I’m trying to publish one every two weeks or so), sorry! As usual, a lot of fixes and a few performance improvements, and sometimes both at the same time. For example, the changes around image and gradient repetition were primarily motivated by bugs we were encountering when dealing with repeated backgrounds containing very large numbers of repetitions. We decided to solve these issues by moving all images to the “brush” infrastructure (bringing better batching, a faster fragment shader and the ability to move more pixels out of the alpha pass), and to optimize the common cases by letting the CPU generate a single primitive that is repeated in the shader. I don’t always properly highlight fixes that benefit performance, but they are here.
The most exciting (in my humble opinion) advancement lately is Kats’ work on integrating asynchronous panning and zooming (which we refer to as APZ) with WebRender’s asynchronous scene building infrastructure. This lets us perform some potentially expensive operations (such as scene building) asynchronously and allows the critical path to prioritize scrolling, animations and video playback. It is a lot more complicated than it might sound (in part due to how it works on the Gecko side) and I am very impressed with how quickly it is taking shape.

Enough with my ramblings, let’s have a look at the highlights of the last month.

Notable WebRender changes

  • Kats integrated APZ with async scene building (spread over many pull requests).
  • Nical implemented and enabled repeating images and gradients directly in the shader when there is no tile spacing.
  • Nical implemented decomposing repeated linear and radial gradients during frame building when it can’t be done in the shader.
  • Nical and Jeff implemented decomposing tiled images during frame building when it can’t be done in the shader.
  • Nical implemented baking repeated image spacing into a texture to allow repeating them in the shader in more cases.
  • Gankro fixed serialization of gradient stops for the frame capture tool.
  • Kvark implemented clipping against the near plane for bounds calculus (fixes a few correctness bugs and improves performance in some cases).
  • Jeff fixed blob images incorrectly marked as opaque.
  • Martin implemented scrolling using the hit tester.
  • fschutt fixed some build issues with pathfinder, and some other build issues.
  • Martin prevented unnecessary clipping computations in some cases.
  • Jeff simplified the way we clip blob image tiles.
  • Gankro fixed an infinite loop.
  • Martin fixed another infinite loop.
  • Martin made the border image API more general.
  • Kvark reduced the size of the aColorTexCoord shader vertex attribute.
  • Kvark implemented taking the clip chains clip rect into account for local_rect computation.
  • Glenn added the initial implementation for brush border primitives.
  • Kvark gracefully handled invalid gradient stops.
  • Glenn implemented sub-pixel aa with specified background color in off-screen targets.
  • Martin fixed an issue with optimized away clip segments.
  • Gankro worked around a rustc bug.
  • Glenn implemented storing non-rectangular UVs in texture cache entries.
  • Glenn implemented non-square slab allocations in the texture cache.
  • Martin improved the performance of rendering dotted/dashed borders.
  • Glenn implemented collapsing opacity filters when they contain a single item.
  • Glenn fixed the clipping of elements which bounds depend on an animated property.
  • Glenn made various changes in preparation for text run caching.
  • Kvark fixed a GPU cache invalidation bug.
  • Gankro fixed an issue with sub-pixel aa and masks.

Notable Gecko changes

  • Kats made a ton of progress on integrating APZ with async scene building (spread over many bugzilla entries).
  • Kats implemented paint skipping with WebRender.
  • Jeff enabled blob image invalidation by default.
  • Jeff fixed a ton of blob image related bugs (spread over many bugzilla entries).
  • Kats enabled some linux64-qr mochitests in the CI.
  • Hiro fixed tab throbbers that weren’t throbbing.
  • Hiro fixed another animation related bug.
  • Hiro fixed yet another animation issue.
  • Hiro fixed more animation stuff.
  • Jeff fixed a blob image assertion, and another one.
  • Kats fixed a bug with scrollbar not being rendered properly in some cases.
  • Gankro fixed some complicated text decoration cases.
  • Gankro fixed a bug with table and border-collapse.
  • Gankro fixed janky animations of text caused by pixel-snapping under certain types of transforms.
  • Gankro implemented proper hit-testing support within blob images.
  • Andrew reduced the amount of copying happening with blob images.
  • Andrew fixed an assertion in the image sharing code.
  • Kats fixed a crash.
  • Kats batched the removal of async pipelines which were causing the scene to be rebuilt a lot on some sites.
  • Kats reduced the amount of allocations happening when updating animated properties.
  • Sotaro improved the way logging is integrated with WebRender (2), (3).
  • Kats avoided the use of static initializers for APZ.
  • Hiro fixed the way we sample time stamps for tests.
  • Hiro enabled some animation tests with WebRender.
  • Lee fixed an issue with the lifetime of blob image resources.
  • Lee fixed emoji transparency.

Enabling WebRender in Firefox Nightly

In about:config, just set “gfx.webrender.all” to true and restart the browser. No need to toggle any other pref.

Reporting bugs

The best place to report bugs related to WebRender in Gecko is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.

Planet MozillaNow at 1000 mbit

It has been a little over six years since I got the fiber connection installed to my house. Back then, in response to a direct question, my provider could only offer 100/100 mbit/sec, so that's what I went with, using Telia Öppen Fiber with Tyfon (subsequently bought by Bahnhof) as my internet provider.

In the spring of 2017 I bumped the speed to 250/100 mbit/sec to see if I would notice and actually take advantage of the extra speed. Lo and behold, I actually feel and experience the difference - frequently. When I upgrade my Linux machines or download larger images over the Internet, I frequently do that at higher speeds than 10MB/sec now and thus my higher speed saves me time and offers improved convenience.

However, "Öppen Fiber" is a relatively expensive provider for little gain for me. The "openness" that allows me to switch between providers isn't really something that gives much benefit once you've picked a provider you like, it's then mostly a way for a middle man to get an extra cut. 250mbit/sec from Bahnhof cost me 459 SEK/month (55 USD) there.

Switching to Bahnhof to handle both the fiber and the Internet connection is a much better deal for me, price wise. I get an upgraded 1000/1000 mbit/sec connection for a lower monthly fee: I'll now end up paying 399 SEK/month (48 USD), and only 299 SEK/month for the first 24 months. So slightly cheaper for much more speed!

My household typically consists of the following devices that are used for accessing the web regularly:

  • 4 smart phones
  • 1 iPad
  • 4 laptops
  • 3 desktop computers
  • 1 TV computer

Our family of 4 consumes around 120GB in an average week. Out of this, Youtube is the single biggest hogger with almost 30% of our total bandwidth. I suppose this says something about the habits of my kids...

Out of these 13 most frequently used devices in our local network only 5 are RJ45-connected, the rest are WiFi.

Switch-over

I was told the switch-over day was May 15th, and at 08:28 in the morning my existing connection went away. I took that as the start signal. I had already gotten a box from Bahnhof with the new media converter to use.

I went downstairs and started off by taking a photo of the existing installation...

So I unscrewed that old big thing from the wall, and now my installation instead looks like this:

You can also see the Ethernet cable already jacked in.

Once connected, I got a link at once. I then spent another few minutes trying to "register" with my user name and password, until I figured out that my router had 1.1.1.1 hardcoded as its DNS server. Once I cleared that, the login worked as it should, I could tell Bahnhof that I'm a legitimate user and, woof, my mosh session magically reconnected again.

All in all, I was offline for shorter than 30 minutes.

Speeds and round-trips

These days a short round-trip is all the rage and is often more important than high bandwidth when browsing the web. I'm apparently pretty close to the Stockholm hub for many major services and I was a bit curious how my new operator would compare.

To my amazement, it's notably faster. google.com went from 2.3ms to 1.3ms ping time, 1.1.1.1 is at 1.3ms, and facebook.com is 1.0ms away. My own server is 1.2ms away, and amusingly, even though I'm this close to the main server hosting the curl web site, the fastly CDN still outperforms it, so curl.haxx.se is an average 1.0ms from me.

So, the ping times were notably reduced. The bandwidth is truly at gigabit speeds in both directions according to bredbandskollen.se, which is probably the most suitable speed check site in Sweden.

A rather smooth change so far. Let's hope it stays this way.

Planet MozillaThoughts on retiring from a team

Thoughts on retiring from a team

The Rust Community Team has recently been having a conversation about what a team member’s “retirement” can or should look like. I used to be quite active on the team but now find myself without the time to contribute much, so I’m helping pioneer the “retirement” process. I’ve been talking with our subteam lead extensively about how to best do this, in a way that sets the right expectations and keeps the team membership experience great for everyone.

Nota bene: This post talks about feelings and opinions. They are mine and not meant to represent anybody else’s.

Why join a team?

When I joined the Rust community subteam, its purpose was defined vaguely. It was a small group of “people who do community stuff”, and needed all the extra hands it could get. A lot of my time was devoted explicitly to Rust and Rust-related tasks. The tasks that I was doing anyways seemed closely aligned with the community team’s work, so stepping up as a team contributor made a lot of sense. Additionally, the team was so new that the only real story for “how to work with this team and contribute to its work” was “join the team”. We hadn’t yet pioneered the subteams and collaboration with community organizers outside the official community team which are now multiplying the team’s impact.

Why leave?

I’m grateful to the people who have the bandwidth and interest to put consistent work into participating on Rust’s community team today. As the team has grown and matured, its role has transitioned from “do community tasks” to “support and coordinate the many people doing those tasks”. I neither enjoy nor excel at such coordination tasks. Not only do I have less time to devote to Rust stuff, but the community team’s work has naturally grown into higher-impact categories that I personally find less fulfilling and more exhausting to work on.

Teams and people change

In a way, the team’s growth and refinement over the years looks like a microcosm of what I saw while working at a former startup as it built up into an enterprise company. Some people’s working style had been excellently suited to the 5-person company they originally joined, but clashed with the 50-person company into which that startup grew. Others who would never have thrived in a company of only 10 people hired on and had a fantastic impact scaling the company up to 1,000. And some were fine when the company was small and didn’t mind being part of a larger organization either. That experience reminds me that the fit between a person and an organization at some point in the past does not guarantee that they’ll remain a good fit for each other over time, and neither is necessarily to blame for the eventual mismatch as both grow and change.

Does leaving harm anyone?

When you’re appreciated and valued for the work you do on a team, it’s easy to get the idea that the team would be harmed if you left. The tyres on my bike are a Very Important Part of the bike, and if I took them off, the bike wouldn’t be rideable. But a team isn’t just a machine – a team’s impact is an emergent phenomenon that comes out of many factors, not a static item. If a sports team has a really excellent coach, they’ll retain the lessons they learned from that coach’s mentorship even after the coach moves away. Older players will pass along the coach’s lessons to younger ones, and their ideas will stick around and improve the group even long after the original players’ retirement. When a team is coordinated well, one member leaving doesn’t hurt it. And if I leave on good terms rather than sticking around till I burn out or burn bridges, I can always be available for remaining members to consult if they need advice that only I can provide.

Would staying harm anyone?

I think that in the case of the Rust community team, it would reflect poorly on the community as a whole if the exact same people comprised the community team for the entire life of the language.

If nobody new ever joins the team, we wouldn’t get new ideas and tactics, nor the priceless infusion of fresh patience and optimism that new team members bring to our perennial challenges and frustrations. So, new team members are essential. If new people joined on a regular basis but nobody ever left, the team would grow unboundedly large as time went on, and have you ever tried to get anything done with a hundred- or thousand-person committee? In my opinion, having established team members retire every now and then is an essential factor in preventing either of those undesirable hypotheticals.

The team selects for members who’ll step up and accomplish tasks when they need to. I think establishing turnover in a healthy and sustainable way is one of the most essential tasks for the team to build its skills at. The best way to get a healthy amount of turnover – not too much, but not too little either – is for every team member to step up to the personal challenge of identifying the best time to retire from active involvement. And for me, that happens to look like right now.

Aspirational Clutter

Do you have stuff in your house that you don’t use, and it’s taking up space, and you’re kind of annoyed at it for taking up space, but you don’t feel like you can get rid of it because you think you really should use it, or you’re sure you’re just going to make some personal change that will cause you to use it someday? I call that stuff aspirational clutter: It doesn’t belong to you, it belongs to some imaginary person who doesn’t exist but you aspire to become them someday.

A team meeting every week on your agenda can be aspirational clutter in the same way as a jumbled shelf of planners or a pile of sports gear covering a treadmill: It not only isn’t a good fit for who you are right now, but by wasting time or space it actually gets in the way of the habits and changes that would make you more like that person you aspire to be.

I find few experiences more existentially miserable than feeling obliged to promise work that I know I’ll lack the resources of time or energy to deliver. Sticking around on a team that I’m no longer a good fit for puts me in a situation where I get to choose between feeling guilty if I don’t promise to get any work done, or feeling like a disappointment for letting others down if I commit to more than I’m able to deliver. Those aren’t feelings I want to experience, and I can avoid them easily by being honest with myself about the amount of time and energy I have available to commit to the team.

The benefits of contributing from a non-team-member role

One scary idea that comes up when leaving a team is the question: “if I’m not on the team, how can I help with the team’s work?”.

In my opinion, it builds a healthier community if people who are good at a given team’s work spend some time interfacing with the team from the perspective of non-team-members. If I know how the community team gets stuff done and I go “undercover” as a non-team-member coming to them for help, I can give them essential feedback to improve the experience and processes that non-team-members encounter.

When I wear my non-team-member hat and try to get stuff done, I learn what it’s like for everyone else who tries to interface with the team. I can then use the skills that I built by participating on the team to remedy any challenges that a non-team-member encounters. Those changes create a better experience for every community member who interacts with the team afterwards.

What next?

As a community team alum, I’ll keep doing the Rust outreach – the meetup organizing, the conference talks, the cute swag, the stickers – that I’ve been doing all along. Stepping down from the official team member list just formalizes the state that my involvement has been in for the past year or so: Although I get the community team’s support for my endeavors when I need it, I’m not invested in the challenges of supporting others’ work which the team is now tackling.

I’m proud of the impact that the team has had while I’ve been a part of it, and I look forward to seeing what it will continue to accomplish. I’m grateful for all the leadership and hard work that have gone into making the Rust community subteam an organization from which I can step back while remaining confident that it will keep excelling and evolving.

Why blog all that?

I’m publishing my thoughts on leaving in the hopes that they can help you, dear reader, gain some perspective on your own commitments and curate them in whatever way is best for you.

If you read this and feel pressured to leave something you love and find fulfilling, please try to forget you ever saw this post.

If you read this hoping it would give you some excuse to quit a burdensome commitment and feel disappointed that I didn’t provide one, here it is now: You don’t need a fancy eloquent excuse to stop doing something if you don’t want to any more. Replace unfulfilling pursuits with better ones.

Planet MozillaSecure mail on Power Macs is not a good idea

Arguably it hasn't been a good idea for a while, but the EFAIL attack now makes it possible to decrypt even previously encrypted messages as well as current ones. All known mail clients for PowerPC OS X that can render HTML are vulnerable, including Apple Mail, Thunderbird and Tenfourbird. Earlier clients that lack this functionality are not vulnerable to this specific exploit, but their encryption capabilities are likely insufficient or not otherwise current, so they should not be considered secure either.

The EFAIL vulnerability is not as severe as it might sound because a key requirement is that an attacker already have access to the encrypted messages. If you used the tips in our security recommendations for PowerPC OS X to improve the security of your computer and your network connection, the odds of this occurring are not zero because the attacker may have already collected them in the past through other means, but are likely to be fairly low with the holes that remain. The risk can be mitigated further by disabling HTML rendering of E-mail (that means all E-mail, however, which might be a dealbreaker), and/or disabling automatic decryption of such messages (for example, I already cut and paste encrypted messages I receive into GPG directly in a Terminal window; my E-mail client never decrypts them automatically). A tool like Little Snitch could also be employed to block unexpected accesses to external servers, though this requires you to know what kinds of access would be unexpected for such messages.

Even with these recommendations, however, there may be other potential edge cases such that until someone(tm) updates Thunderbird or another mailer on Power Macs, secure encrypted mail on our systems should be handled with extreme caution and treated as if it were potentially exposed. If you require this kind of security from your E-mail and you must use a Power Mac, you're probably better off finding a webmail service with appropriate security and using TenFourFox (the webmail service then handles this), or building and using an E-mail client on some other system that is more up to date that you can access remotely and securely (which is what I do myself).

Planet MozillaThis Week in Rust 234

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is Askama, a Jinja-like type-safe compiled templating engine. Thanks to Icefoxen for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

153 pull requests were merged in the last week

New Contributors

  • Aaron DeVore
  • C Jones
  • Collins Abitekaniza
  • Isaac Whitfield
  • Katrin Leinweber
  • Martin Husemann
  • Roman Stoliar
  • Sebastian Köln
  • Tim Allen
  • Tomas Gavenciak

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

The community team is trying to improve outreach to meetup organisers. Please fill out their call for contact info if you are running or used to run a meetup.

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Planet MozillaRust turns three

Three years ago today, the Rust community released Rust 1.0 to the world, with our initial vision of fearless systems programming. As per tradition, we’ll celebrate Rust’s birthday by taking stock of the people and the product, and especially of what’s happened in the last year.

The People

Rust is a people-centric, consensus-driven project. Some of the most exciting developments over the last year have to do with how the project itself has grown, and how its processes have scaled.

The official teams that oversee the project doubled in size in the last year; there are now over a hundred individuals associated with one or more of the teams. To accommodate this scale, the team structure itself has evolved. We have top-level teams covering the language, library ecosystem, developer tooling, documentation, community, and project operations. Nested within these are dozens of subteams and working groups focused on specific topics.

Rust is now used in a huge variety of companies, including both newcomers and big names like Google, Facebook, Twitter, Dropbox, Microsoft, Red Hat, npm and, of course, Mozilla; it’s also in the top 15 languages this year on GitHub. As a byproduct, more and more developers are being paid to contribute back to Rust, many of them full time. As of today, Mozilla employees make up only 11% of the official Rust teams, and just under half of the total number of people paid to work on Rust. (You can read detailed whitepapers about putting Rust into production here.)

Graphs of Rust team growth

Finally, the Rust community continues to work on inclusivity, through outreach programs like Rust Reach and RustBridge, as well as structured mentoring and investments in documentation to ease contribution. For 2018, a major goal is to connect and empower Rust’s global community, which we’re doing both through conference launches in multiple new continents, as well as work toward internationalization throughout the project.

The Product

If you spend much time reading this blog, you’ll know that the major theme of our work over the past year has been productivity. As we said in last year’s roadmap:

From tooling to libraries to documentation to the core language, we want to make it easier to get things done with Rust.

This work will culminate in a major release later this year: Rust 2018 Edition. The release will bring together improvements in every area of the project, polished into a new “edition” that bundles the changes together with updated documentation and onboarding. The roadmap has some details about what to expect.

The components that make up Rust 2018 will be shipped as they become ready on the stable compiler. Recent releases include:

The next couple of releases will include stable SIMD support, procedural macros, custom allocators, and more. The final big features — lifetime system improvements and async/await — should both reach feature complete status on nightly within weeks. Vital tools like the RLS and rustfmt are also being polished for the new edition, including RFCs for finalizing the style and stability stories.

To help tie all this work to real-world use-cases, we’ve also targeted four domains for which Rust provides a compelling end-to-end story that we want to show the world as part of Rust 2018. Each domain has a dedicated working group and is very much open for new contributors:

As Rust 2018 comes into focus, we plan to provide a “preview” of the new edition for cutting-edge community members to try out. Over the past couple of weeks we kicked off a sprint to get the basics nailed down, but we need more help to get it ready for testing. If you’re interested, you can dive into:

The Postscript

Rust’s growth continues to accelerate at a staggering rate. It has been voted the Most Loved Language on StackOverflow for all three years since it shipped. Its community has never been healthier or more welcoming. If you’re curious about using or contributing to Rust, there’s never been a better time to get involved.

Happy 3rd birthday, Rust.

Planet MozillaWebby Lifetime Achievement Award to Mitchell Baker

Webby Lifetime Achievement Award to Mitchell Baker Laurie Segall presents the 22nd Annual Webby Lifetime Achievement Award to Mitchell Baker
