Planet Mozilla: WebRender newsletter #4

We skipped the newsletter for a few weeks (sorry about that!), but we are back. I don’t have a lot to report today, partly because I don’t yet have a good workflow for tracking the interesting changes (especially in Gecko), so I am most likely missing many of them, and partly because a lot of us are working on big pieces of the project that are taking time to come together; I am waiting for those to be completed before they make it into the newsletter.

Notable WebRender changes

  • Glenn started reorganizing the shader sources to make them compile faster (important for startup time).
  • Morris implemented the backface-visibility property.
  • Glenn added some optimizations to the clipping code.
  • Glenn improved the scheduling/batching of alpha passes to reduce the number of render target switches.
  • Sotaro improved error handling.
  • Glenn improved the transfer of the primitive data to the GPU by using pixel buffer objects instead of texture uploads.
  • Glenn added a web-based debugger UI to WebRender. It can inspect display lists, batches and can control various other debugging options.

Notable Gecko changes

  • Kats enabled layers-free mode for async scrolling reftests.
  • Kats and Morris enabled rendering tables in WebRender.
  • Gankro fixed a bug with invisible text not casting shadows.
  • Gankro improved the performance of generating text display items.

Planet Mozilla: Porting a legacy add-on to WebExtensions

tl;dr: Search Keys has been ported successfully and is now known as Add Search Number. Please try it! It works with Google, Yahoo (HK/TW/US), Bing, DuckDuckGo and even Wikipedia’s search page.


Add Search Number

I have been using the excellent Search Keys add-on (original page) for a long time. It allows one to “go to search results by pressing the number of the search”. However, it hasn’t been updated for the better part of a decade, and most features (e.g. support for Yahoo! and Bing) had broken; only the numbers for Google Search still worked.

Recently, there has been a push to move to the WebExtensions API, especially since Firefox 57 will stop supporting legacy XUL add-ons. Hence, I set out to see what it would take to port Search Keys away from XUL, and I kept the author updated throughout.

Discoveries:

  • Using GitHub with Travis and ESLint integration was crucial for saving time by catching silly syntax errors early. The same setup should work on your favourite repository-hosting alternative (Bitbucket, GitLab, etc.).
  • Getting web-ext via npm also proved essential, along with the WebExtension examples.
  • You need to check if the old APIs have equivalents.
    • Search Keys was using nsIIOService, which has no equivalent, but it was only used to ensure that a URL is indeed a URL, so I just did it another way (new URL("<url>")). Thanks :MattN for the tip.
    • Another usage was for openUILinkIn, and there are similar-enough WebExtensions equivalents for this (tabs, windows).
    • (File a bug if an equivalent isn’t available, but first check for dupes)
  • Migrating an old project by another person is hard. Over several commits I removed features, whittled the code down to the bare minimum (my objective was just to add numbers to Google Search results), got it to work, tested it, and then re-added support for Yahoo!, Bing, and even DuckDuckGo and Wikipedia.
  • In the absence of documentation, comments proved extremely helpful.
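The nsIIOService replacement above relies on the URL constructor throwing on invalid input: if construction succeeds, the string is a URL. The same validate-by-parsing idea, sketched in Python purely for illustration (the actual fix was the JavaScript `new URL()` call; the function name here is mine):

```python
from urllib.parse import urlparse

def looks_like_url(s):
    """Rough validity check: parse the string, then require a scheme and a host."""
    parts = urlparse(s)
    return bool(parts.scheme) and bool(parts.netloc)

print(looks_like_url("https://example.com/search?q=1"))  # True
print(looks_like_url("not a url"))                       # False
```

The JavaScript version is stricter (it throws rather than returning a falsy result), but the design choice is the same: let the platform's URL parser be the validator instead of hand-rolling a regex.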

It took me about 2 days to port the add-on. Add-on development has come a long way and is much easier now thanks to these tools (GitHub, devtools, etc.), as well as AMO being much improved since I wrote my first add-on (ViewAbout) almost a decade ago. Sadly, ViewAbout is unlikely to ever be ported to WebExtensions. (Reasons are at that link.)

This was tested on Firefox 55, so I experienced exactly the difficulties that an add-on developer would face right now.

The only major caveat?

There were times when I set a breakpoint on a content script using Firefox’s Developer Tools (instantiated via web-ext). After refreshing the page, the extension would occasionally “disappear” from the devtools. I would then have to close Firefox, restart it via web-ext, re-set the breakpoint, then cross my fingers and hope that the devtools would stop at the required breakpoint.

In my experience, the port was straightforward, as the original add-on was fairly simple. I understand that for more complex add-ons, the porting process is much more complicated and takes much longer.

What has your experience been like?

(Please note that this post does not discuss the pros and cons of whether Firefox 57 and later should use WebExtensions and cut off legacy support; any comments on this will be removed.)


Planet Mozilla: Thirty-Five Minutes Ago

Well, that’s done.

mhoye@ANGLACHEL:~/src/planet-content/branches/planet$ git diff | grep "^-name" | wc -l
401
mhoye@ANGLACHEL:~/src/planet-content/branches/planet$ git commit -am "The Great Purge of 2017"

Purging the Planet blogroll felt a lot like being sent to exorcise the ghosts of my own family estate. There were a lot of old names, old memories and more than a few recent ones on the business end of the delete key today.

I’ve pulled out all the feeds that errored out, everyone who isn’t currently involved in some reasonably direct capacity in the Mozilla project and a bunch of maybes that hadn’t put anything up since 2014, and I’d like the record to show that I didn’t enjoy doing any of that.

If you believe your feed was pulled in error, please file a bug so I can reinstate it directly.

Planet Mozilla: Ad Blocker Roundup: 5 Ad Blockers That Improve Your Internet Experience

Ad blockers are a specific kind of extension: a small piece of software that adds new features or functionality to Firefox. Using ad blockers, you can eliminate … Read more

The post Ad Blocker Roundup: 5 Ad Blockers That Improve Your Internet Experience appeared first on The Firefox Frontier.

Planet Mozilla: Fixing a bug in TensorBoard

This week I'm talking with my open source students about bugs. Above all, I want them to learn how The Bug is the unit of work of open source. Learning to orient your software practice around the idea that we incrementally improve an existing piece of code (or a new one) by filing, discussing, fixing, and landing bugs is an important step. Doing so makes a number of things possible:

  • it socializes us to the fact that software is inherently buggy: all code has bugs, whether we are aware of them yet or not. Ideally this leads to an increased level of humility.
  • it allows us to ship something now that's good enough, and improve it as we go forward. This is in contrast to the idea that we'll wait until things are done or "correct."
  • it provides an interface between the users and creators of software, where we can interact outside purely economic relationships (e.g., buying/selling).
  • connected with the above, it enables a culture of participation. Understanding how this culture works provides opportunities to become involved.

One of the ways that new people can participate in open source projects is through Triaging existing bugs: testing if a bug (still) exists or not, connecting the right people to it, providing more context, etc.

As I teach these ideas this week, I thought I'd work on triaging a bug in a project I haven't touched before. When you're first starting out in open source, the process can be very intimidating and mysterious. Often I find my students look at what goes on in these projects as something they do vs. something I could do. It almost never feels like you have enough knowledge or skill to jump in and join the current developers, who all seem to know so much.

The reality is much more mundane. The magic you see other developers doing turns out to be indistinguishable from trial and error, copy/pasting, asking questions, and failing more than you succeed. It's easy to confuse the end result of what someone else does with the process you'd need to undergo if you wanted to do the same.

Let me prove it to you: let's go triage a bug.

TensorFlow and TensorBoard

One of the projects that's trending right now on GitHub is Google's open source AI and Machine Learning framework, TensorFlow. I've been using TensorFlow in a personal project this year to do real-time image classification from video feeds, and it's been amazing to work with and learn. There's a great overview video of the kinds of things Google and others are doing with TensorFlow to automate all kinds of things on the tensorflow.org web site, along with API docs, tutorials, etc.

TensorFlow is just under 1 million lines of C++ and Python, and has over 1,100 contributors. I've found the quality of the docs and tools to be first class, especially for someone new to AI/ML like myself.

One of those high quality tools is TensorBoard.

TensorBoard

TensorBoard is a Python-based web app that reads log data generated by TensorFlow as it trains a network. With TensorBoard you can visualize your network, understand what's happening with learning and error rates, and gain lots of insight into what's actually going on with your training runs. There's an excellent video from this year's TensorFlow Dev Summit (more videos at that link) showing a lot of the cool things that are possible.

A Bug in TensorBoard

When I started using TensorFlow and TensorBoard this spring, I immediately hit a bug. My default browser is Firefox, and here's what I saw when I tried to view TensorBoard locally:

Firefox running TensorBoard

Notice all the errors in the console related to Polymer and document.registerElement not being a function. It looks like an issue with missing support for Custom Elements. In Chrome, everything worked fine, so I used that while I was iterating on my neural network training.

Now, since I have some time, I thought I'd go back and see if this was fixable. The value of having the TensorBoard UI be web based is that you should be able to use it in all sorts of contexts, and in all sorts of browsers.

Finding/Filing the Bug

My first step was to see if this bug was known. If someone has already filed it, then I won't need to; it may even be that someone is already fixing it, or that it's fixed in an updated version.

I begin by looking at the TensorBoard repo's list of Issues. As I said above, one of the amazing things about true open source projects is that more than just the code is open: so too is the process by which the code evolves in the form of bugs being filed/fixed. Sometimes we can obtain a copy of the source for a piece of software, but we can't participate in its development and maintenance. It's great that Google has put both the code and entire project on GitHub.

At the time of writing, there are only 120 open issues, so one strategy would be to just look through them all for my issue. This often won't be possible, though, and a better approach is to search the repo for some unique string. In this case, I have a bunch of error messages that I can use for my search.

I search for document.registerElement and find 1 issue, which is a lovely outcome:

Searching GitHub for my issue

Issue #236: tensor board does not load in safari is basically what I'm looking for, and discusses the same sorts of errors I saw in Firefox, but in the context of Safari.

Lesson: often a bug similar to your own is already filed, but may be hiding behind details different from the one you want to file. In this case, you might unknowingly file a duplicate (dupe), or add your information to an existing bug. Don't be afraid to file the bug: it's better to have it filed in duplicate than for it to go unreported.

Forking and Cloning the repo

Now that I've found Issue #236, I have a few options. First, I might decide that having this bug filed is enough: someone on the team can fix it when they have time. Another possibility is that I might have found that someone was already working on a fix, and a Pull Request was open for this Issue, with code to address the problem. A third option is for you to fix the bug yourself, and this is the route I want to go now.

My first step is to Fork the TensorBoard repo into my own GitHub account. I need a version of the code that I can modify vs. just read.

Forking the TensorBoard Repo

Once that completes, I'll have an exact copy of the TensorBoard repo that I control, and which I can modify. This copy lives on GitHub. To work with it on my laptop, I'll need to Clone it to my local computer as well, so that I can make and test changes:

Clone my fork

Setting up TensorBoard locally

I have no idea how to run TensorBoard from source vs. as part of my TensorFlow installation. I begin by reading their README.md file. In it I notice a useful discussion within the Usage section, which talks about how to proceed. First I'll need to install Bazel.

Lesson: in almost every case where you'll work on a bug in a new project, you'll be asked to install and set up a development environment different from what you already have/know. Take your time with this, and don't give up too easily if things don't go as smoothly as you expect: far fewer people test this setup than test the project it is meant to build.

Bazel is a build/test automation tool built and maintained by Google. It's available for many platforms, and there are good instructions for installing it on your particular OS. I'm on macOS, so I opt for the Homebrew installation. This requires Java, which I also install.

Now I'm able to attempt the build. I follow the instructions in the README, and within a few seconds get an error:

$ cd tensorboard
$ bazel build tensorboard:tensorboard
Extracting Bazel installation...  
.............
ERROR: /private/var/tmp/_bazel_humphd/d51239168182c03bedef29cd50a9c703/external/local_config_cc/BUILD:49:5: in apple_cc_toolchain rule @local_config_cc//:cc-compiler-darwin_x86_64: Xcode version must be specified to use an Apple CROSSTOOL.  
ERROR: Analysis of target '//tensorboard:tensorboard' failed; build aborted.  
INFO: Elapsed time: 8.965s  

This error is a typical example of the kind of problem one encounters working on a new project. Specifically, it's OS specific, and relates to a first-time setup issue--I don't have Xcode set up properly.

I spend a few minutes searching for a solution. I look to see if anyone has filed an issue with TensorBoard on GitHub specifically about this build error--maybe someone has had this problem before, and it got solved? I also Google to see if anyone has blogged about it or asked on StackOverflow: you are almost never the only person who has hit a problem.

I find some help on StackOverflow, which suggests that I don't have XCode properly configured (I know it's installed). It suggests some commands I can try to fully configure things, none of which solve my issue.

It looks like it wants the full version of Xcode vs. just the command-line tools. The full Xcode is massive to download, and I don't really want to wait, so I do a bit more digging to see if there is any other workaround. This may turn out to be a mistake, and it might be better to just do the obvious thing instead of hunting for a workaround. However, I'm willing to spend an additional 20 minutes of research to save hours of downloading.

Some more searching reveals an interesting issue on the Bazel GitHub repo. Reading through the comments on this issue, it's clear that lots of other people have hit this--it's not just me. Eventually I read this comment, with 6 thumbs-up reactions (i.e., some agreement that it works):

just for future people. sudo xcode-select -s /Applications/Xcode.app/Contents/Developer could do the the trick if you install Xcode and bazel still failing.

This allows Bazel to find my compiler and the build to proceed further...before stopping again with a new error: clang: error: unknown argument: '-fno-canonical-system-headers'.

This still sounds like a setup issue on my side vs. something in the TensorBoard code, so I keep reading. This discussion on the Bazel Google Group seems useful: it sounds like I need to clean my build and regenerate things, now that my Xcode toolchain is properly set up. I do that, and my build completes without issue.

Lesson: getting this code to build locally required me to consult GitHub, StackOverflow, and Google Groups. In other words, I needed the community to guide me via asking and answering questions online. Don't be afraid to ask questions in public spaces, since doing so leaves traces for those who will follow in your footsteps.

Running TensorBoard

Now that I've built the source, I'm ready to try running it. TensorBoard is meant to be used in conjunction with TensorFlow. In this case, however, I'm interested in using it on its own, purely to reproduce my bug and test a fix; I don't actually care about having TensorFlow and real training data to visualize. I notice that the DEVELOPMENT.md file seems to indicate that it's possible to fake some training data and use that in the absence of a real TensorFlow project. I try what it suggests, which fails:

...
line 40, in create_summary_metadata  
   metadata = tf.SummaryMetadata(
AttributeError: 'module' object has no attribute 'SummaryMetadata'  
ERROR: Non-zero return code '1' from command: Process exited with status 1.  

From having programmed with TensorFlow before, I assume here that tf (i.e. the TensorFlow Python module) is missing an expected attribute, namely, SummaryMetadata. I've never heard of it, but Google helps me locate the necessary API docs.

This leads me to conclude that my installed version of TensorFlow (I installed it 4 months earlier) might not have this new API, and the code in TensorBoard now expects it to exist. The API docs I'm consulting are for version 1.3 of the TensorFlow API. What do I have installed?

$ pip search tensorflow
...
 INSTALLED: 1.2.1
 LATEST:    1.3.0

Maybe upgrading from 1.2.1 to 1.3.0 will solve this? I update my laptop to TensorFlow 1.3.0 and am now able to generate the fake data for TensorBoard.

Lesson: running portions of a larger project in isolation often means dealing with version issues and manually installing dependencies. Also, sometimes dependencies are assumed, as was TensorFlow 1.3 in this case. Likely the TensorBoard developers all have TensorFlow installed and/or are developing it at the same time. In cases like this a README may not mention all the implied dependencies.
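Version skew like this can be caught up front with a quick attribute probe rather than a crash deep inside a run. A minimal sketch of the idea (the function name is mine; `json`/`dumps` is used as an always-available stand-in so the example runs without TensorFlow installed):

```python
import importlib

def module_has_attr(module_name, attr):
    """True if the module imports cleanly and exposes the given attribute."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(mod, attr)

# The TensorBoard crash boiled down to this kind of check failing:
#   module_has_attr("tensorflow", "SummaryMetadata")  -> False on TF 1.2.1
# Demonstrated here with a stdlib module:
print(module_has_attr("json", "dumps"))   # True
print(module_has_attr("json", "nope"))    # False
```

A probe like this at startup lets a tool print "please upgrade dependency X" instead of an opaque AttributeError from the middle of a traceback.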

Using this newly faked data, I try running my version of TensorBoard...which again fails with a new error:

...
   from tensorflow.python.debug.lib import grpc_debug_server
ImportError: cannot import name grpc_debug_server  
ERROR: Non-zero return code '1' from command: Process exited with status 1.  

After some more searching, I find a 10-day-old open bug in TensorBoard itself. This particular bug seems to be another version-skew issue between dependencies, TensorFlow, and TensorBoard. The module in question, grpc_debug_server, seems to come from TensorFlow. Looking at the history of this file, the code is pretty new, making me wonder whether, once again, I'm running something with an older API. A comment in this issue gives a clue as to a possible fix:

FYI, I ran into the same problem, and I did pip install grpc which seemed to fix the problem.

I give this a try, but TensorBoard still won't run. Further on in this issue I read another comment indicating I need the "nightly version of TensorFlow." I've never worked with the nightly version of TensorFlow before (didn't know such a thing existed), and I have no idea how to install that (the comment assumes one knows how to do this).
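The ImportError here is the classic symptom of a too-old dependency: the newer module path simply doesn't exist in the installed package. A generic guarded-import sketch of how such a requirement can be surfaced explicitly (the helper name is mine, not TensorBoard's actual fallback logic):

```python
import importlib

def try_import(dotted_name):
    """Import a (possibly nested) module, returning None if it's unavailable."""
    try:
        return importlib.import_module(dotted_name)
    except ImportError:
        return None

# The failing TensorBoard import was roughly:
#   from tensorflow.python.debug.lib import grpc_debug_server
# which only exists in newer TensorFlow builds. Probing for it makes the
# version requirement visible instead of crashing at startup:
if try_import("tensorflow.python.debug.lib.grpc_debug_server") is None:
    print("grpc_debug_server missing; a newer TensorFlow (e.g. tf-nightly) is required")
```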

A bit more searching reveals the answer, and I install the nightly version:

$ pip install tf-nightly

Once again I try running my TensorBoard, and this time, it finally works.

Lesson: start by assuming that an error you're seeing has been encountered before, and go looking for an existing issue. If you don't find anything, maybe you are indeed the first person to hit it, in which case you should file a new issue yourself so you can start a discussion and work toward a fix. Everyone hits these issues. Everyone needs help.

Reproducing the Bug

With all of the setup now behind us, it's time to get started on our actual goal. My first step in tackling this bug is to make sure I can reproduce it, that is, make sure I can get TensorBoard to fail in Safari and Firefox. I also want to confirm that things work in Chrome, which would give me some assurance that I've got a working source build.

Here's my local TensorBoard running in Chrome:

TensorBoard on Chrome

Next I try Safari:

TensorBoard on Safari

And...it works? I try Firefox too:

TensorBoard on Firefox

And this works too. At this point I have two competing emotions:

  1. I'm pleased to see that the bug is fixed.
  2. I'm frustrated that I've done all this work to accomplish nothing--I was hoping I could fix it.

The Value of Triaging Bugs

It's kind of ironic that I'm upset about this bug being fixed: that's the entire point of my work, right? I would have enjoyed getting to try and fix this myself, to learn more about the code, to get involved in the project. Now I feel like I have nothing to contribute.

Here I need to challenge my own feelings (and yours too if you're agreeing with me). Do I really have nothing to offer after all this work? Was it truly wasted effort?

No, this work has value, and I have a great opportunity to contribute something back to a project that I love. I've been able to discover that a previous bug has been unknowingly fixed, and can now be closed. I've done the difficult work of Confirming and Triaging a bug, and helping the project to close it.

I leave a detailed comment with my findings. This then causes the bug to get closed by a project member with the power to do so.

So the result of my half-day of fighting with TensorBoard is that a bug got closed. That's a great outcome, and someone needed to do this work in order for this to happen. My willingness to put some effort into it was key. It's also paved the way for me to do follow-up work, if I choose: my computer now has a working build/dev environment for this project. Maybe I will work on another bug in the future.

There's more to open source than fixing bugs: people need to file them, comment on them, test them, review fixes, manage them through their lifetime, close them, etc. We can get involved in any/all of these steps, and it's important to realize that your ability to get involved is not limited to your knowledge of how the code works.

Planet MozillaFixing a bug in TensorBoard

This week I'm talking with my open source students about bugs. Above all, I want them to learn how The Bug is the unit of work of open source. Learning to orient your software practice around the idea that we incrementally improve an existing piece of code (or a new one) by filing, discussing, fixing, and landing bugs is an important step. Doing so makes a number of things possible:

  • it socializes us to the fact that software is inherently buggy: all code has bugs, whether we are aware of them yet or not. Ideally this leads to an increased level of humility
  • it allows us to ship something now that's good enough, and improve it as we go forward. This is in contrast to the idea that we'll wait until things are done or "correct."
  • it provides an interface between the users and creators of software, where we can interact outside purely economic relationships (e.g., buying/selling).
  • connected with the above, it enables a culture of participation. Understanding how this culture works provides opportunities to become involved.

One of the ways that new people can participate in open source projects is through Triaging existing bugs: testing if a bug (still) exists or not, connecting the right people to it, providing more context, etc.

As I teach these ideas this week, I thought I'd work on triaging a bug in a project I haven't touched before. When you're first starting out in open source, the process can be very intimidating and mysterious. Often I find my students look at what goes on in these projects as something they do vs. something I could do. It almost never feels like you have enough knowledge or skill to jump in and join the current developers, who all seem to know so much.

The reality is much more mundane. The magic you see other developers doing turns out to be indistinguishable from trial and error, copy/pasting, asking questions, and failing more than you succeed. It's easy to confuse the end result of what someone else does with the process you'd need to undergo if you wanted to do the same.

Let me prove it too you: let's go triage a bug.

TensorFlow and TensorBoard

One of the projects that's trending right now on GitHub is Google's open source AI and Machine Learning framework, TensorFlow. I've been using TensorFlow in a personal project this year to do real-time image classification from video feeds, and it's been amazing to work with and learn. There's a great overview video of the kinds of things Google and others are doing with TensorFlow to automate all kinds of things on the tensorflow.org web site, along with API docs, tutorials, etc.

TensorFlow is just under 1 million lines of C++ and Python, and has over 1,100 contributors. I've found the quality of the docs and tools to be first class, especially for someone new to AI/ML like myself.

One of those high quality tools is TensorBoard.

TensorBoard

TensorBoard is a Python-based web app that reads log data generated by TensorFlow as it trains a network. With TensorBoard you can visualize your network, understand what's happening with learning and error rates, and gain lots of insight into what's actually going on with your training runs. There's an excellent video from this year's TensorFlow Dev Summit (more videos at that link) showing a lot of the cool things that are possible.

A Bug in TensorBoard

When I started using TensorFlow and TensorBoard this spring, I immediately hit a bug. My default browser is Firefox, and here's what I saw when I tried to view TensorBoard locally:

Firefox running TensorBoard

Notice all the errors in the console related to Polymer and document.registerElement not being a function. It looks like an issue with missing support for Custom Elements. In Chrome, everything worked fine, so I used that while I was iterating on my neural network training.

Now, since I have some time, I thought I'd go back and see if this was fixable. The value of having the TensorBoard UI be web based is that you should be able to use it in all sorts of contexts, and in all sorts of browsers.

Finding/Filing the Bug

My first step was to see if this bug was known. If someone has already filed it, then I won't need to; it may even be that someone is already fixing it, or that it's fixed in an updated version.

I begin by looking at the TensorBoard repo's list of Issues. As I said above, one of the amazing things about true open source projects is that more than just the code is open: so too is the process by which the code evolves in the form of bugs being filed/fixed. Sometimes we can obtain a copy of the source for a piece of software, but we can't participate in its development and maintenance. It's great that Google has put both the code and entire project on GitHub.

At the time of writing, there are only 120 open issues, so one strategy would be to just look through them all for my issue. This often won't be possible, though, and a better approach is to search the repo for some unique string. In this case, I have a bunch of error messages that I can use for my search.

I search for document.registerElement and find 1 issue, which is a lovely outcome:

Searching GitHub for my issue

Issue #236: tensor board does not load in safari is basically what I'm looking for, and discusses the same sorts of errors I saw in Firefox, but in the context of Safari.

Lesson: often a bug similar to your own is already filed, but may be hiding behind details different from the one you want to file. In this case, you might unknowingly file a duplicate (dupe), or add your information to an existing bug. Don't be afraid to file the bug: it's better to have it filed in duplicate than for it to go unreported.

Forking and Cloning the repo

Now that I've found Issue #236, I have a few options. First, I might decide that having this bug filed is enough: someone on the team can fix it when they have time. Another possibility is that I might have found that someone was already working on a fix, and a Pull Request was open for this Issue, with code to address the problem. A third option is for you to fix the bug yourself, and this is the route I want to go now.

My first step is to Fork the TensorBoard repo into my own GitHub account. I need a version of the code that I can modify vs. just read.

Forking the TensorBoard Repo

Once that completes, I'll have an exact copy of the TensorBoard repo that I control, and which I can modify. This copy lives on GitHub. To work with it on my laptop, I'll need to Clone it to my local computer as well, so that I can make and test changes:

Clone my fork

Setting up TensorBoard locally

I have no idea how to run TensorBoard from source vs. as part of my TensorFlow installation. I begin by reading their README.md file. In it I notice a useful discussion within the Usage section, which talks about how to proceed. First I'll need to install Bazel.

Lesson: in almost every case where you'll work on a bug in a new project, you'll be asked to install and setup a development environment different from what you already have/know. Take your time with this, and don't give up too easily if things don't go as smoothly as you expect: many fewer people test this setup than do the resulting project it is meant to create.

Bazel is a build/test automation tool built and maintained by Google. It's available for many platforms, and there are good instructions for installing it on your particular OS. I'm on macOS, so I opt for the Homebrew installation. This requires Java, which I also install.

Now I'm able to try and do the build I follow the instructions in the README, and within a few seconds get an error:

$ cd tensorboard
$ bazel build tensorboard:tensorboard
Extracting Bazel installation...  
.............
ERROR: /private/var/tmp/_bazel_humphd/d51239168182c03bedef29cd50a9c703/external/local_config_cc/BUILD:49:5: in apple_cc_toolchain rule @local_config_cc//:cc-compiler-darwin_x86_64: Xcode version must be specified to use an Apple CROSSTOOL.  
ERROR: Analysis of target '//tensorboard:tensorboard' failed; build aborted.  
INFO: Elapsed time: 8.965s  

This error is a typical example of the kind of problem one encounters working on a new project. Specifically, it's OS specific, and relates to a first-time setup issue--I don't have XCode setup properly.

I spend a few minutes searching for a solution. I look to see if anyone has filed an issue with TensorBoard on GitHub specifically about this build error--maybe someone has had this problem before, and it got solved? I also Google to see if anyone has blogged about it or asked on StackOverflow: you are almost never the only person who has hit a problem.

I find some help on StackOverflow, which suggests that I don't have XCode properly configured (I know it's installed). It suggests some commands I can try to fully configure things, none of which solve my issue.

It looks like it wants the full version of XCode vs. just the commandline tools. The full XCode is massive to download, and I don't really want to wait, so I do a bit more digging to see if there is any other workaround. This may turn out to be a mistake, and it might be better to just do the obvious thing instead of trying to find a workaround. However, I'm willing to spend an additional 20 minutes of research to save hours of downloading.

Some more searching reveals an interesting issue on the Bazel GitHub repo. Reading through the comments on this issue, it's clear that lots of other people have hit this--it's not just me. Eventually I read this comment, with 6 thumbs-up reactions (i.e., some agreement that it works):

just for future people. sudo xcode-select -s /Applications/Xcode.app/Contents/Developer could do the the trick if you install Xcode and bazel still failing.

This allows Bazel to find my compiler and the build to proceed further...before stopping again with a new error: clang: error: unknown argument: '-fno-canonical-system-headers'.

This still sounds like a setup issue on my side vs. something in the TensorBoard code, so I keep reading. This discussion on the Bazel Google Group seems useful: it sounds like I need to clean my build and regenerate things, now that my Xcode toolchain is properly set up. I do that, and my build completes without issue.
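For anyone hitting the same wall, the whole sequence looks roughly like this (a sketch: the Xcode path comes from the comment quoted above, and `--expunge` is my guess at the most thorough way to discard Bazel's cached toolchain configuration):

```shell
# Point the command-line tools at the full Xcode toolchain...
sudo xcode-select -s /Applications/Xcode.app/Contents/Developer
# ...throw away Bazel's stale cached toolchain configuration...
bazel clean --expunge
# ...and try the build again.
bazel build tensorboard:tensorboard
```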

Lesson: getting this code to build locally required me to consult GitHub, StackOverflow, and Google Groups. In other words, I needed the community to guide me via asking and answering questions online. Don't be afraid to ask questions in public spaces, since doing so leaves traces for those who will follow in your footsteps.

Running TensorBoard

Now that I've built the source, I'm ready to try running it. TensorBoard is meant to be used in conjunction with TensorFlow. In this case, however, I'm interested in using it on its own, purely for the purpose of reproducing my bug and testing a fix. I don't actually care about having TensorFlow and real training data to visualize. I notice that the DEVELOPMENT.md file seems to indicate that it's possible to fake some training data and use that in the absence of a real TensorFlow project. I try what it suggests, which fails:

...
line 40, in create_summary_metadata  
   metadata = tf.SummaryMetadata(
AttributeError: 'module' object has no attribute 'SummaryMetadata'  
ERROR: Non-zero return code '1' from command: Process exited with status 1.  

From having programmed with TensorFlow before, I assume here that tf (i.e. the TensorFlow Python module) is missing an expected attribute, namely, SummaryMetadata. I've never heard of it, but Google helps me locate the necessary API docs.

This leads me to conclude that my installed version of TensorFlow (I installed it 4 months earlier) might not have this new API, and the code in TensorBoard now expects it to exist. The API docs I'm consulting are for version 1.3 of the TensorFlow API. What do I have installed?

$ pip search tensorflow
...
 INSTALLED: 1.2.1
 LATEST:    1.3.0

Maybe upgrading from 1.2.1 to 1.3.0 will solve this? I update my laptop to TensorFlow 1.3.0 and am now able to generate the fake data for TensorBoard.
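The comparison that told me an upgrade was needed is simple enough to script. A hypothetical sketch (the helper and version strings are mine, not TensorBoard's):

```python
# Compare dotted version strings numerically. String comparison happens to
# work for "1.2.1" < "1.3.0", but would get "1.10.0" vs "1.9.0" wrong.
def version_tuple(v):
    return tuple(int(part) for part in v.split("."))

installed = "1.2.1"   # what `pip search tensorflow` reported as INSTALLED
required = "1.3.0"    # the API docs I was reading target this release

print(version_tuple(installed) < version_tuple(required))  # True: upgrade
```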

Lesson: running portions of a larger project in isolation often means dealing with version issues and manually installing dependencies. Also, sometimes dependencies are assumed, as was TensorFlow 1.3 in this case. Likely the TensorBoard developers all have TensorFlow installed and/or are developing it at the same time. In cases like this a README may not mention all the implied dependencies.

Using this newly faked data, I try running my version of TensorBoard...which again fails with a new error:

...
   from tensorflow.python.debug.lib import grpc_debug_server
ImportError: cannot import name grpc_debug_server  
ERROR: Non-zero return code '1' from command: Process exited with status 1.  

After some more searching, I find a 10-day old open bug in TensorBoard itself. This particular bug seems to be another version skew issue between dependencies, TensorFlow, and TensorBoard. The module in question, grpc_debug_server, seems to come from TensorFlow. Looking at the history of this file, the code is pretty new, making me wonder whether, once again, I'm running something with an older API. A comment in this issue gives a clue as to a possible fix:

FYI, I ran into the same problem, and I did pip install grpc which seemed to fix the problem.

I give this a try, but TensorBoard still won't run. Further on in this issue I read another comment indicating I need the "nightly version of TensorFlow." I've never worked with the nightly version of TensorFlow before (didn't know such a thing existed), and I have no idea how to install that (the comment assumes one knows how to do this).

A bit more searching reveals the answer, and I install the nightly version:

$ pip install tf-nightly

Once again I try running my TensorBoard, and this time, it finally works.
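In hindsight, a guarded import that turns this kind of failure into an actionable message would have saved me a search. A minimal sketch (the `require` helper is my invention, and the stdlib demo at the bottom just makes it runnable without TensorFlow installed):

```python
import importlib

def require(module_path, name, hint):
    """Fetch `name` from `module_path`, or exit with an install hint."""
    try:
        module = importlib.import_module(module_path)
        return getattr(module, name)
    except (ImportError, AttributeError) as err:
        raise SystemExit("%s -- try: %s" % (err, hint))

# Real use would look something like:
#   require("tensorflow.python.debug.lib", "grpc_debug_server",
#           "pip install tf-nightly")
# Stdlib demo so the sketch runs anywhere:
path_module = require("os", "path", "n/a")
print(path_module is not None)  # True
```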

Lesson: start by assuming that an error you're seeing has been encountered before, and go looking for an existing issue. If you don't find anything, maybe you are indeed the first person to hit it, in which case you should file a new issue yourself so you can start a discussion and work toward a fix. Everyone hits these issues. Everyone needs help.

Reproducing the Bug

With all of the setup now behind us, it's time to get started on our actual goal. My first step in tackling this bug is to make sure I can reproduce it, that is, make sure I can get TensorBoard to fail in Safari and Firefox. I also want to confirm that things work in Chrome, which would give me some assurance that I've got a working source build.

Here's my local TensorBoard running in Chrome:

TensorBoard on Chrome

Next I try Safari:

TensorBoard on Safari

And...it works? I try Firefox too:

TensorBoard on Firefox

And this works too. At this point I have two competing emotions:

  1. I'm pleased to see that the bug is fixed.
  2. I'm frustrated that I've done all this work to accomplish nothing--I was hoping I could fix it.

The Value of Triaging Bugs

It's kind of ironic that I'm upset about this bug being fixed: that's the entire point of my work, right? I would have enjoyed getting to try and fix this myself, to learn more about the code, to get involved in the project. Now I feel like I have nothing to contribute.

Here I need to challenge my own feelings (and yours too if you're agreeing with me). Do I really have nothing to offer after all this work? Was it truly wasted effort?

No, this work has value, and I have a great opportunity to contribute something back to a project that I love. I've been able to discover that a previous bug has been unknowingly fixed, and can now be closed. I've done the difficult work of Confirming and Triaging a bug, and helping the project to close it.

I leave a detailed comment with my findings. This then causes the bug to get closed by a project member with the power to do so.

So the result of my half-day of fighting with TensorBoard is that a bug got closed. That's a great outcome, and someone needed to do this work in order for this to happen. My willingness to put some effort into it was key. It's also paved the way for me to do follow-up work, if I choose: my computer now has a working build/dev environment for this project. Maybe I will work on another bug in the future.

There's more to open source than fixing bugs: people need to file them, comment on them, test them, review fixes, manage them through their lifetime, close them, etc. We can get involved in any/all of these steps, and it's important to realize that your ability to get involved is not limited to your knowledge of how the code works.

Planet MozillaFirefox Developer Edition 56 Beta 12 Testday Results

Hello Mozillians!

As you may already know, last Friday – September 15th – we held a new Testday event, for Developer Edition 56 Beta 12.

Thank you all for helping us make Mozilla a better place – Athira Appu.

From India team: Baranitharan & BaraniCool, Abirami & AbiramiSD, Vinothini.K, Surentharan, vishnupriya.v, krishnaveni.B, Nutan Sonawane, Shubhangi Patil, Ankita Lahoti, Sonali Dhurjad, Yadnyesh Mulay, Ankitkumar Singh.

From Bangladesh team: Nazir Ahmed Sabbir, Tanvir Rahman, Maruf Rahman, Saddam Hossain, Iftekher Alam, Pronob Kumar Roy, Md. Raihan Ali, Sontus Chandra Anik, Saheda Reza Antora, Kazi Nuzhat Tasnem, Md. Rahimul Islam, Rahim Iqbal, Md. Almas Hossain, Ali sarif, Md.Majedul islam, JMJ Saquib, Sajedul Islam, Anika Alam, Tanvir Mazharul, Azmina Akter Papeya, sayma alam mow. 

Results:

– several test cases executed for the Preferences Search, CSS Grid Inspector Layout View and Form Autofill features.

– 6 bugs verified: 1219725, 1373935, 1391014, 1382341, 1383720, 1377182

– 1 new bug filed: 1400203

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Planet MozillaLinuxCon China 2017: Trip Report


Linux Foundation held a combination of three events in China as part of their foray into Asia early this year. It was a big move for them since this was supposed to be the first time Linux Foundation would hold an event in Asia.
I was invited to present a talk on Hardening IoT endpoints. The event was held in Beijing, and since I had never been to Beijing before I was pretty excited about the talk. However, it turned out the journey was pretty long and expensive--much more than a student like me could hope to bear. Normally I represent Mozilla in such situations, but the topic of the talk was too security-focused and not aligned with Mozilla's goals at that moment. Fortunately, the Linux Foundation gave me a scholarship to come and speak at LinuxCon China, which enabled me to attend, and the awesome team at Mozilla TechSpeakers, including Michael Ellis and Havi, helped me get ready for the talk.


The event was held at the China National Convention Center, a beautiful and enormous convention center right in the middle of Beijing. One big problem I realized soon after reaching China is that most of the services on my phone were not working. The Great Wall (the firewall, not the actual one) was blocking most of the Google services I use; unfortunately, that included two apps I was relying on heavily: Google Maps and Google Translate. There is, of course, a local alternative to Google Maps, namely Baidu Maps, but since its interface was also in Chinese, it wasn't of much help to me. Fortunately, my VPN setup came to my rescue and was my source of relief for the next two days in China.

Pro tip: if you have to go to China and you rely on some service that might be blocked there, it's better to use a good VPN--one you know will work there--or roll your own. I had rolled my own, since my commercial VPN was also blocked there.

The day started with Linus Torvalds holding an open discussion about which way Linux is moving, with some very interesting aspects and views. One recurring theme in the discussion was how the core Linux maintainer circle works, and why the project relies on only those very few people. The reply was most stimulating.
The other talks were interesting as well. I would really have liked to attend three more talks, namely Greg's on serverless computing on the edge, Swati's on Kubernetes, and Kai Zhang's on container-based virtualization, but they clashed with my own talk.

My talk was on the second day and at a relatively good time, which was especially important for me, as the conference wifi was the only connection over which I could work on my slides.
Lesson Learned: Don't rely on Google Slides in China
Fortunately, thanks to my VPN, I was able to work on them and have a local backup copy ready for the talk.
That room was pretty big, didn't see this coming
What I did not anticipate earlier was how eager people were for the talk. In a nutshell, this is how the room looked when I took the podium.

My first reaction was: Wow that's a lot of people! Guess they are really interested in the talk!
And then: Shit! I hope my talk is as interesting as all of the super industry-relevant talks going on around me in all the other rooms.

Fortunately, the talk went pretty well. I always judge my talks by how many queries and questions I get afterwards, and also by how many reactions they get on Twitter. Judging by the number of queries afterwards, I guessed it at least wasn't that bad. I was, though, super disappointed by the complete radio silence on Twitter regarding my talk--only to realize later that Twitter is also blocked in China.

To Do: Next time come up with better ways to track engagement.

My only complaint here: normally every Linux Foundation conference records your talk. LinuxCon didn't. They did upload all our slides, though, so if you want to go over a textual version of what I presented, have a sneak peek here. I will be all ears for your feedback.

SecurityPI - Hardening your IoT endpoints in Home. from LinuxCon ContainerCon CloudOpen China

This would normally have finished my recount of the event, but this time it didn't. I finally went to a BoF session on Fedora and CentOS, and ended up having a two-hour-long discussion with Brian Exelbierd on the various issues and pain points the Mozilla and Fedora communities face. We temporarily suspended the discussion with no clear path to a solution, but with a notion to touch base with each other again on it.

Conclusion: LinuxCon was a perfect example of how to handle and manage a huge footfall with a multilingual audience and still make the conference good. The quality of the talks was astounding, as were the speakers. I really loved my experience there, made some great friends (I am looking at you, Greg and Swati :D), and had some awesome conversations.

And did I mention that the speakers caught up at the end of the day and decided we needed a memento? It happens to be us discussing everything from Linux to Mozilla to security in the Forbidden City.
Those, in a nutshell, were the speakers.
Like I said, one hell of a conference.

PS: If you want to talk to me about anything related to the talk, don't hesitate to get in touch using either my email or twitter.

Planet Mozillaimpl Future for Rust

The Rust community has been hard at work on our 2017 roadmap, but as we come up on the final quarter of the year, we’re going to kick it into high gear—and we want you to join us!

Our goals for the year are ambitious:

To finish off these goals, we intend to spend the rest of the year focused purely on “implementation” work—which doesn’t just mean code! In particular, we are effectively spinning down the RFC process for 2017, after having merged almost 90 RFCs this year!

So here’s the plan. Each Rust team has put together several working groups focused on a specific sub-area. Each WG has a leader who is responsible for carving out and coordinating work, and a dedicated chat channel for getting involved. We are working hard to divvy up work items into many shapes and sizes, and to couple them with mentoring instructions and hands-on mentors. So if you’ve always wanted to contribute to Rust but weren’t sure how, this is the perfect opportunity for you. Don’t be shy—we want and need your help, and, as per our roadmap, our aim is mentoring at all levels of experience. To get started, say hello in the chat rooms for any of the work groups you’re interested in!

A few points of order

There are a few online venues for keeping in the loop with working group activity:

  • There is a dedicated Gitter community with channels for each working group, as well as a global channel for talking about the process as a whole, or getting help finding your way to a working group.

  • The brand-new findwork site, which provides an entry point to a number of open issues across the Rust project, including those managed by working groups (see the “impl period” tab). Thanks, @nrc, for putting this together!

We also plan two in-person events, paired with upcoming Rust conferences. Each of them is a two-day event populated in part by Rust core developers; come hang out and work together!

As usual, all of these venues abide by the Rust code of conduct. But more than that: this “impl period” is a chance for us all to have fun collaborating and helping each other, and those participating in the official venues are expected to meet the highest standards of behavior.

The working groups

Without further ado, here’s the initial lineup! (A few more working groups are expected to arise over time.)

If you find a group that interests you, please say hello in the corresponding chat room!

Compiler team

  • WG-compiler-errors: Make Rust's error messages even friendlier.
  • WG-compiler-front: Dip your toes in with parsing and syntax sugar.
  • WG-compiler-middle: Implement features that involve typechecking.
  • WG-compiler-traits: Want generic associated types? You know what to do.
  • WG-compiler-incr: Finish incremental compilation; receive undying love.
  • WG-compiler-nll: Delve into the bowels of borrowck to slay the beast: NLL!
  • WG-compiler-const: Const generics. Enough said.

Libs team

  • WG-libs-blitz: Help finish off the Blitz before all the issues are gone!
  • WG-libs-cookbook: Work on bite-sized examples to get folks cooking with Rust.
  • WG-libs-guidelines: Take the wisdom from the Blitz and pass it on.
  • WG-libs-simd: Provide access to hardware parallelism in Rust!
  • WG-libs-openssl: Want better docs for openssl? So do we.
  • WG-libs-rand: Craft a stable, core crate for randomness.

Docs team

  • WG-docs-rustdoc: Help make docs beautiful for everyone!
  • WG-docs-rustdoc2: Get in on a bottom-up revamp of rustdoc!
  • WG-docs-rbe: Teach others Rust in the browser.

Dev tools team

  • WG-dev-tools-rls: Help make Rust's IDE experience first class.
  • WG-dev-tools-vscode: Improve Rust's IDE experience for VSCode.
  • WG-dev-tools-clients: Implement new RLS clients: Atom, Sublime, Visual Studio...
  • WG-dev-tools-IntelliJ: Polish up an already-rich Rust IDE experience.
  • WG-dev-tools-rustfmt: Make Rust's code the prettiest!
  • WG-dev-tools-rustup: Make Rust's first impression even better!
  • WG-dev-tools-clippy: It looks like you're trying to write a linter. Want help?
  • WG-dev-tools-bindgen: Make FFI'ing to C and C++ easy, automatic, and robust!

Cargo team

  • WG-cargo-native: Let's make native dependencies as painless as we can.
  • WG-cargo-registries: Going beyond crates.io to support custom registries.
  • WG-cargo-pub-deps: Teach Cargo which of your dependencies affects your users.
  • WG-cargo-integration: How easy can it be to use Cargo with your build system?

Infrastructure team

  • WG-infra-crates.io: Try your hand at a production Rust web app!
  • WG-infra-perf: Let's make sure Rust gets faster.
  • WG-infra-crater: Regularly testing the compiler against the Rust ecosystem.
  • WG-infra-secure: Help us implement best practices for Rust's infrastructure!
  • WG-infra-host: Managing the services that keep the Rust machine running.
  • WG-infra-rustbuild: Streamline the compiler build process.

Core team

  • WG-core-site: The web site is getting overhauled; help shape the new content!

Planet MozillaWhy Does Firefox Use e4 and e5 Values to Fill Memory?

I was once talking to some colleagues about a Firefox crash bug. As we gazed at the crash report, one leaned over and pointed at the value in one of the CPU registers: 0xe5e5e5e9. “Freed memory,” he sagely indicated: “e5”.

Magic debug numbers

Using special numbers to indicate something in memory is an old trick. Wikipedia even has famous examples of such things! Neato! These numbers are often referred to as “poison” or “junk” in the context of filling memory (because they’re supposed to cause the program to fail, or be meaningless garbage).

Mozilla uses this trick (and the “poison” terminology) in Firefox debug builds to indicate uninitialized memory (e4), as well as freed memory (e5). Thus the presence of these values in a crash report, or other failure report, indicates that something has gone wrong with memory handling. But why e4 and e5?

jemalloc

jemalloc is a general-purpose implementation of malloc. Firefox utilizes a modified version of jemalloc to perform memory allocation. There's a pretty rich history here, and it would take another blog post to cover how and why Mozilla uses jemalloc. So I'm going to hand-wave and say that it is used, and the reasons for doing so are reasonable.

jemalloc can use magic/poison/junk values when performing malloc or free. However, jemalloc will use the value a5 when allocating, and 5a when freeing, so why do we see something different in Firefox?

A different kind of poison

When using poison values, it's possible for the memory with these values to still be used. The hope is that when doing so the program will crash and you can see which memory is poisoned. However, with the 5a value in Firefox there was concern that 1) the program would not crash, and 2) as a result, it could be exploited: see this bug.

As a result of these concerns, it was decided to use the poison values we see today. The code that sets these values has undergone some changes since the above bug, but the same values are used. If you want to take a look at the code responsible here is a good place to start.
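To make the mechanism concrete, here's a toy model of poisoning (Firefox's real implementation lives in its modified jemalloc, in C/C++; the byte values are the real ones, everything else here is illustrative):

```python
POISON_ALLOC = 0xE4  # byte for freshly allocated, uninitialized memory
POISON_FREE = 0xE5   # byte written over memory as it is freed

def fake_malloc(n):
    # A new allocation starts life filled with the "uninitialized" poison.
    return bytearray([POISON_ALLOC] * n)

def fake_free(buf):
    # Poison the block so any later read of it is instantly recognizable.
    for i in range(len(buf)):
        buf[i] = POISON_FREE

buf = fake_malloc(4)
fake_free(buf)
print(hex(int.from_bytes(buf, "big")))  # 0xe5e5e5e5
```

A stale pointer read through such a block yields values like the 0xe5e5e5e9 register value above: the e5 pattern plus a small offset.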

Planet MozillaBusting the myth that net neutrality hampers investment

This week I had the opportunity to share Mozilla’s vision for an Internet that is open and accessible to all with the audience at MWC Americas.

I took this opportunity because we are at a pivotal point in the debate between the FCC, companies, and users over the FCC’s proposal to roll back protections for net neutrality. Net neutrality is a key part of ensuring freedom of choice to access content and services for consumers.

Earlier this week Mozilla’s Heather West wrote a letter to FCC Chairman Ajit Pai highlighting how net neutrality has fueled innovation in Silicon Valley and can do so still across the United States.

The FCC claims these protections hamper investment and are bad for business. And they may vote to end them as early as October. Chairman Pai calls his rule rollback “restoring internet freedom” but that’s really the freedom of the 1% to make decisions that limit the rest of the population.

At Mozilla we believe the current rules provide vital protections to ensure that ISPs don’t act as gatekeepers for online content and services. Millions of people commented on the FCC docket, including those who commented through Mozilla’s portal, arguing that removing these core protections will hurt consumers and small businesses alike.

Mozilla is also very much focused on the issues preventing people coming online beyond the United States. Before addressing the situation in the U.S., journalist Rob Pegoraro asked me what we discovered in the research we recently funded in seven other countries into the impact of zero rating on Internet use:


(Video courtesy: GSMA)

If you happen to be in San Francisco on Monday 18th September please consider joining Mozilla and the Internet Archive for a special night: The Battle to Save Net Neutrality. Tickets are available here.

You’ll be able to watch a discussion featuring former FCC Chairman Tom Wheeler; Representative Ro Khanna; Mozilla Chief Legal and Business Officer Denelle Dixon; Amy Aniobi, Supervising Producer, Insecure (HBO); Luisa Leschin, Co-Executive Producer/Head Writer, Just Add Magic (Amazon); Malkia Cyril, Executive Director of the Center for Media Justice; and Dane Jasper, CEO and Co-Founder of Sonic. The panel will be moderated by Gigi Sohn, Mozilla Tech Policy Fellow and former Counselor to Chairman Wheeler. It will discuss how net neutrality promotes democratic values, social justice and economic opportunity, what the current threats are, and what the public can do to preserve it.

Planet MozillaOn the Remarks of the Late Steve Jobs at the Opening of the Steve Jobs Theater

But one of the ways that I believe people express their appreciation to the rest of humanity is to make something wonderful and put it out there.

—Steve Jobs (date unknown, as played at the opening of the Steve Jobs Theater, September 12, 2017)

When I read this1 the other day, my first thought was of Camino.

We were often asked by outsiders why we worked on Camino, and why we persisted in building Camino for so long after Safari, Firefox, and Chrome were launched. In the minds of many of these people, our time and talents would have been better spent working on anything other than Camino. While we all likely had different reasons, there were many areas of commonality; primarily, and most importantly, we loved or enjoyed working on Camino. Among other reasons, I also liked that I could see that my efforts made a difference; I wasn’t some cog in a giant, faceless machine, but a valued member of a strong, small team and a part of a larger community of our users who relied on Camino for their daily browsing and livelihoods. It was a way to “give back” to the world (and the open-source community) for things that were useful and positive in my life, to show appreciation.

We were making something wonderful, and we put it out there for the world to use.

I ♥ Camino!

        

1 Part of a heretofore publicly-unheard address from Steve Jobs that was played at the opening of the Steve Jobs Theater and the Apple fall 2017 product launches. ↩︎

Planet MozillaWebdev Beer and Tell: September 2017, 15 Sep 2017

Webdev Beer and Tell: September 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Planet MozillaWhy Good-First-Bugs often aren't

Let me start by saying that I'm a huge fan of open source projects putting a Good-First-Bug type label on issues. From my own experience over the past decade trying to get students engaged in open source projects, it's often a great way to quickly find entry points for new people. Right now I've got 40 students learning open source with me at Seneca, and because I want to get them all working on bugs ASAP, you better believe that I'm paying close attention to good-first-bug labels!

Maintainers

There are a few ways that we tend to interact with good-first-bugs. First, we have project maintainers who add the label when they think they see something that might be a good fit for someone new. I've been this person, and I know that it takes some discipline to not fix every bug. To be honest, it's faster and easier to just fix small things yourself. It's also going to mean you end up having to fix everything yourself--you won't grow new contributors this way.

What you need to do instead is to write large amounts of prose of the type "Here's what you need to do if you're interested in fixing this". You need sample code, links to relevant files, screenshots, etc. so that someone who lands on this bug can readily assess whether their current (or aspirational) skill level meets the bug's requirements.

Sometimes maintainers opt not to do this, and instead say, "I'd be willing to mentor this." The problem with this approach, in my experience, is that it becomes a kind of debt with which you saddle your future self. Are you sure you'll want to mentor this bug in 2 years, when it's no longer on your roadmap? You'd be better to "mentor" the bug upfront, and just spell out what has to happen in great detail: "Do this, this, and this." If you can't do that, the reality is it's not a good-first-bug.

New Contributors

The second way we encounter good-first-bugs is as someone looking for an opportunity to contribute. I make a habit of finding/fixing these in various projects so that I can use real examples to show my students the steps. I also tag along with my students as they attempt them, and I've seen it all. It's interesting what you encounter on this side of things. Sometimes it goes exactly as you'd hope: you make the fix and the patch is accepted. However, more often than not you run into snags.

First, before you even get going on a fix, finding a bug that isn't already being worked on can be hard. A lot of people are looking for opportunities to get started, and when you put up a sign saying "Start Here," people will! Read through the comments on many good-first-bugs and you'll find an unending parade of "I'd like to work on this bug!" and "Can you assign this to me?" followed by "Are you still working on this?" and "I'm new, can you help me get started?". That stream of comments often repeats forever, leaving the project maintainers frustrated, the contributors lost, and the bug untouched.

Expiry Dates

Once people do get started on a bug, another common problem I see is that the scope of the bug has shifted such that the problem/fix described no longer makes sense. You see a lot of responses like this: "Thanks for this fix, but we've totally refactored this code, and it's not necessary any more. Closing!" This doesn't feel great, or make you want to put more effort into finding something else to do.

The problem here wasn't that the bug was wrong...when filed. The bug has become obsolete over time. Good-first-bugs really need an expiry date. If a project isn't triaging its good-first-bugs on a somewhat regular basis, it's basically going to end up in this state eventually, with some or all of them being useless, and therefore bad-first-bugs. You're better off closing bugs like this and having no good-first-bugs listed, than to have 50 ancient bugs that no one on the project cares about, wants to review, or has time to discuss.

Good First Experience

This week I've been thinking a lot about ways to address some of the problems above. In their lab this week, I asked my students to build Firefox, and also to make some changes to the code. I had a few goals with this:

  • Build a large open source project to learn about setting up dev environments, obtaining source code, build systems, etc.
  • Gain some experience making a change and rebuilding Firefox, to prove to themselves that they could do it and to remove some of the mystery around how one does this.
  • Learn how to start navigating around in large code, see how things are built (e.g., Firefox uses JS and CSS for its front-end).
  • Have some fun doing something exciting and a bit scary.

I've done this many times in the past, and often I've gone looking for a simple good-first-bug to facilitate these goals. This time I wanted to try something different. Instead of a good-first-bug, I wanted what I'll call a "Good First Experience."

A Good First Experience tries to do the following:

  • It's reproducible by everyone. Where a good-first-bug is destroyed in being fixed, a good-first-experience doesn't lose its potential after someone completes it.
  • It's not tied to the current state of the project, and therefore doesn't become obsolete (as quickly). Where a good-first-bug is always tied to the project's goals, coding practices, and roadmap, a good-first-experience is independent of the current state of the project.
  • It's meant to be fun, exploratory, and safe. Where a good-first-bug is about accomplishing a task, and is therefore defined and limited by that task, a good-first-experience can be the opposite: an unnecessary change producing an outcome whose value is measured by the participant rather than the project.

Toward a Good First Experience with Firefox

I reached out to a half-dozen Mozilla colleagues for ideas on what we could try (thanks to all who replied). I ended up going with some excellent suggestions from Blake Winton (@bwinton). Blake has a history of being whimsical in his approach to his work at Mozilla, and I think he really understood what I was after.

Based on his suggestions, I gave the students some options to try:

  • In browser/base/content/browser.js change the function OpenBrowserWindow to automatically open cat GIFs. You can alter the code like so:
function OpenBrowserWindow(options) {  
  return window.open("http://www.chilloutandwatchsomecatgifs.com");
}
  • Look at the CSS in browser/base/content/browser.css and try changing some of the colours.

  • Modify the way tabs appear by playing with CSS in browser/themes/shared/tabs.inc.css, for example: you could alter things like min-height.

  • You could try adding a background: url(http://i.imgur.com/UkT7jcm.gif); to the #TabsToolbar in browser/themes/windows/browser.css to add something new.

  • Modify the labels for menu items like "New Window" in browser/locales/en-US/chrome/browser/browser.dtd to something else.

None of these changes are necessary, prudent, or solving a problem. Instead, they are fun, exploratory, and simple. Already students are having some success, which is great to see.

Example of what students did

Nature via National Park vs. Wilderness

I was reflecting that the real difference between a good-first-experience and a real bug is a lot like experiencing nature by visiting a National Park vs. setting out in the wilderness. There isn't a right or wrong way to do this, and both have obvious advantages and disadvantages. However, what National Parks do well is to make the experience of nature accessible to everyone: manicured paths, maps with established trails to follow, amenities so you can bring your family, information. It's obviously not the same as cutting a trail into a forest, portaging your canoe between lakes, or hiking on the side of a mountain. But it means that more people can try the experience of doing the real thing in relative safety, and without a massive commitment of time or effort. It's also a mostly self-guided experience vs. something you need a guide (maintainer) to accomplish. In the end, this experience might be enough for many people, and will help bring awareness and an enriching experience. For others, it will be the beginning of bolder outings into the unknown.

I don't think my current attempt represents a definitive good-first-experience in Mozilla, but it's got me thinking more about what one might look like, and I wanted to get you thinking about them too. I know I'm not alone in wanting to bring students into projects like Mozilla and Firefox, and needing a repeatable entry point.

Planet MozillaAdd-ons Update – 2017/09

Here’s your monthly add-ons update.

The Review Queues

In the past month, our team reviewed 2,490 listed add-on submissions:

  • 2,074 in fewer than 5 days (83%).
  • 89 between 5 and 10 days (4%).
  • 327 after more than 10 days (13%).

244 listed add-ons are awaiting review.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Visit our wiki page for more information.

Compatibility Update

We published the blog post for 56 and the bulk validation has been run. This is the last one of these we’ll do, since compatibility is a much smaller problem with the WebExtensions API.

Firefox 57 is now on the Nightly channel and will soon hit Beta, only accepting WebExtension add-ons by default. Here are some changes we’re implementing on AMO to ease the transition to 57.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • Amola Singh
  • yfdyh000
  • bfred-it
  • Tiago Morais Morgado
  • Divya Rani
  • angelsl
  • Tim Nguyen
  • Atique Ahmed Ziad
  • Apoorva Pandey
  • Kevin Jones
  • ljbousfield
  • asamuzaK
  • Rob Wu
  • Tushar Sinai
  • Trishul Goel
  • zombie
  • tmm88
  • Christophe Villeneuve
  • Hemanth Kumar Veeranki

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/09 appeared first on Mozilla Add-ons Blog.

Planet MozillaQuantum Flow Engineering Newsletter #24

I hope you’re not tired of reading these newsletters so far.  If not, I applaud your patience with me in the past few months.  But next week, as Firefox 57 will merge to the Beta channel, I’m planning to write the last one of this series.

Nightly has been pretty solid on performance.  It is prudent at this point to focus our attention more on other aspects of quality for the 57 release, to make sure that things like the crash rate and regressions are under control.  The triage process that we set up in March to enable everyone to take part in finding and nominating performance problems which they think should be fixed in Firefox 57 was started with the goal of creating a large pool of prioritized bugs that we believed would vastly impact the real world performance of Firefox for the majority of our users.  I think this process worked quite well overall, but it has mostly served its purpose, and participating in the triage takes a lot of time (we sometimes had two meetings per week to be able to deal with the incoming volume of bugs!).  With one week left, it seemed like a good decision to stop the triage meetings now.  We also had a weekly 30-minute standup meeting where people talked about what they had done on Quantum Flow during the past week (and you read about many of those in the newsletters!), and for similar reasons that meeting will also be wound down.  This gives several person-hours back on their calendars to people who really need them, hurray!

The work on the Speedometer benchmark for 57 is more or less wrapped up at this point.  One noteworthy change that happened last week which I should mention here is this jump in the numbers which happened on September 7.  The reason behind it was a change on the benchmark side to switch from reporting the score using the arithmetic mean to using the geometric mean.  This is a good change in my opinion because it means that the impact of a few of the JS frameworks being tested wouldn’t dominate the overall score.  The unfortunate news is that as a result of this change, Firefox took a bigger hit in numbers than Chrome did, but I’m still very proud of all the great work that happened when optimizing for this benchmark, and I think the right response to this change is for us to optimize further and win back the few percentage points of head-to-head comparison that we lost.  🙂
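
Why does the geometric mean keep a few frameworks from dominating the score? A quick sketch in JavaScript makes the difference concrete; the per-framework subscores below are made up for illustration, not real Speedometer numbers:

```javascript
// Arithmetic mean: one outlier framework drags the whole score with it.
// Geometric mean: the outlier's pull is logarithmic, so it matters far less.
const scores = [30, 32, 28, 120]; // hypothetical subscores; one dominates

const arithmeticMean = xs => xs.reduce((a, b) => a + b, 0) / xs.length;
const geometricMean = xs =>
  Math.exp(xs.reduce((a, b) => a + Math.log(b), 0) / xs.length);

console.log(arithmeticMean(scores).toFixed(1)); // "52.5"
console.log(geometricMean(scores).toFixed(1));  // "42.4"
```

Doubling the outlier’s subscore would raise the arithmetic mean by 30 points, but would only multiply the geometric mean by 2^(1/4), about 19%.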

Speedometer changes as a result of computing the benchmark score using geometric mean

Even though most of the planned performance work for Firefox 57 is done, it doesn’t mean that people are done pouring in their great fixes as things are making it to the finish line last-minute!  So now please allow me to take a moment to thank everyone who helped make Firefox faster in the last week, as usual, I hope I’m not forgetting any names here:

Planet MozillaPut your multiple online personalities in Firefox Multi-Account Containers

Our new Multi-Account Containers extension for Firefox means you can finally wrangle multiple email/social accounts. Maybe you’ve got two Gmail or Instagram or Twitter or Facebook accounts (or a few … Read more

The post Put your multiple online personalities in Firefox Multi-Account Containers appeared first on The Firefox Frontier.

Planet MozillaMeasuring the Subjective: The Performance Dashboard with Estelle Weyl

Measuring the Subjective: The Performance Dashboard with Estelle Weyl Performance varies quite a bit depending on the site, the environment and yes, the user. And users don't check your performance metrics. Instead, they perceive...

Planet MozillaFirefox 56 new contributors

With the upcoming release of Firefox 56, we are pleased to welcome the 37 developers who contributed their first code change to Firefox in this release, 29 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Planet MozillaReps Weekly Meeting Sep. 14, 2017

Reps Weekly Meeting Sep. 14, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet MozillaBuilding the DOM faster: speculative parsing, async, defer and preload

In 2017, the toolbox for making sure your web page loads fast includes everything from minification and asset optimization to caching, CDNs, code splitting and tree shaking. However, you can get big performance boosts with just a few keywords and mindful code structuring, even if you’re not yet familiar with the concepts above and you’re not sure how to get started.

The fresh web standard <link rel="preload">, that allows you to load critical resources faster, is coming to Firefox later this month. You can already try it out in Firefox Nightly or Developer Edition, and in the meantime, this is a great chance to review some fundamentals and dive deeper into performance associated with parsing the DOM.

Understanding what goes on inside a browser is the most powerful tool for every web developer. We’ll look at how browsers interpret your code and how they help you load pages faster with speculative parsing. We’ll break down how defer and async work and how you can leverage the new keyword preload.

Building blocks

HTML describes the structure of a web page. To make any sense of the HTML, browsers first have to convert it into a format they understand – the Document Object Model, or DOM. Browser engines have a special piece of code called a parser that’s used to convert data from one format to another. An HTML parser converts data from HTML into the DOM.

In HTML, nesting defines the parent-child relationships between different tags. In the DOM, objects are linked in a tree data structure capturing those relationships. Each HTML tag is represented by a node of the tree (a DOM node).

The browser builds up the DOM bit by bit. As soon as the first chunks of code come in, it starts parsing the HTML, adding nodes to the tree structure.

The DOM has two roles: it is the object representation of the HTML document, and it acts as an interface connecting the page to the outside world, like JavaScript. When you call document.getElementById(), the element that is returned is a DOM node. Each DOM node has many functions you can use to access and change it, and what the user sees changes accordingly.

CSS styles found on a web page are mapped onto the CSSOM – the CSS Object Model. It is much like the DOM, but for the CSS rather than the HTML. Unlike the DOM, it cannot be built incrementally. Because CSS rules can override each other, the browser engine has to do complex calculations to figure out how the CSS code applies to the DOM.

 

The history of the <script> tag

As the browser is constructing the DOM, if it comes across a <script>...</script> tag in the HTML, it must execute it right away. If the script is external, it has to download the script first.

Back in the old days, in order to execute a script, parsing had to be paused. It would only start up again after the JavaScript engine had executed code from a script.

Why did the parsing have to stop? Well, scripts can change both the HTML and its product―the DOM. Scripts can change the DOM structure by adding nodes with document.createElement(). To change the HTML, scripts can add content with the notorious document.write() function. It’s notorious because it can change the HTML in ways that can affect further parsing. For example, the function could insert an opening comment tag making the rest of the HTML invalid.
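
For instance, here is a minimal (and deliberately pathological) sketch of how a single document.write() call can invalidate everything that follows it:

```html
<p>This paragraph parses and renders normally.</p>
<script>
  // Writes an opening comment tag into the HTML stream...
  document.write("<!--");
</script>
<p>...so the parser treats this paragraph, and all remaining markup, as one unclosed comment.</p>
```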

Scripts can also query something about the DOM, and if that happens while the DOM is still being constructed, it could return unexpected results.

document.write() is a legacy function that can break your page in unexpected ways and you shouldn’t use it, even though browsers still support it. For these reasons, browsers have developed sophisticated techniques to get around the performance issues caused by script blocking that I will explain shortly.

What about CSS?

JavaScript blocks parsing because it can modify the document. CSS can’t modify the document, so it seems like there is no reason for it to block parsing, right?

However, what if a script asks for style information that hasn’t been parsed yet? The browser doesn’t know what the script is about to execute—it may ask for something like the DOM node’s background-color which depends on the style sheet, or it may expect to access the CSSOM directly.

Because of this, CSS may block parsing depending on the order of external style sheets and scripts in the document. If there are external style sheets placed before scripts in the document, the construction of DOM and CSSOM objects can interfere with each other. When the parser gets to a script tag, DOM construction cannot proceed until the JavaScript finishes executing, and the JavaScript cannot be executed until the CSS is downloaded, parsed, and the CSSOM is available.

Another thing to keep in mind is that even if the CSS doesn’t block DOM construction, it blocks rendering. The browser won’t display anything until it has both the DOM and the CSSOM. This is because pages without CSS are often unusable. If a browser showed you a messy page without CSS, then a few moments later snapped into a styled page, the shifting content and sudden visual changes would make a turbulent user experience.

See the Pen Flash of Unstyled Content by Milica (@micikato) on CodePen.

That poor user experience has a name – Flash of Unstyled Content or FOUC

To get around these issues, you should aim to deliver the CSS as soon as possible. Recall the popular “styles at the top, scripts at the bottom” best practice? Now you know why it was there!

Back to the future – speculative parsing

Pausing the parser whenever a script is encountered means that every script you load delays the discovery of the rest of the resources that were linked in the HTML.

If you have a few scripts and images to load, for example–

<script src="slider.js"></script>
<script src="animate.js"></script>
<script src="cookie.js"></script>
<img src="slide1.png">
<img src="slide2.png">

–the process used to go like this:

 

That changed around 2008 when IE introduced something they called “the lookahead downloader”. It was a way to keep downloading the files that were needed while the synchronous script was being executed. Firefox, Chrome and Safari soon followed, and today most browsers use this technique under different names. Chrome and Safari have “the preload scanner” and Firefox – the speculative parser.

The idea is: even though it’s not safe to build the DOM while executing a script, you can still parse the HTML to see what other resources need to be retrieved. Discovered files are added to a list and start downloading in the background on parallel connections. By the time the script finishes executing, the files may have already been downloaded.

The waterfall chart for the example above now looks more like this:

The download requests triggered this way are called “speculative” because it is still possible that the script could change the HTML structure (remember document.write?), resulting in wasted guesswork. While this is possible, it is not common, and that’s why speculative parsing still gives big performance improvements.

While other browsers only preload linked resources this way, in Firefox the HTML parser also runs the DOM tree construction algorithm speculatively. The upside is that when a speculation succeeds, there’s no need to re-parse a part of the file to actually compose the DOM. The downside is that there’s more work lost if and when the speculation fails.

(Pre)loading stuff

This manner of resource loading delivers a significant performance boost, and you don’t need to do anything special to take advantage of it. However, as a web developer, knowing how speculative parsing works can help you get the most out of it.

The set of things that can be preloaded varies between browsers. All major browsers preload:

  • scripts
  • external CSS
  • and images from the <img> tag

Firefox also preloads the poster attribute of video elements, while Chrome and Safari preload @import rules from inlined styles.

There are limits to how many files a browser can download in parallel. The limits vary between browsers and depend on many factors, like whether you’re downloading all files from one or from several different servers and whether you are using HTTP/1.1 or HTTP/2 protocol. To render the page as quickly as possible, browsers optimize downloads by assigning priority to each file. To figure out these priorities, they follow complex schemes based on resource type, position in the markup, and progress of the page rendering.

While doing speculative parsing, the browser does not execute inline JavaScript blocks. This means that it won’t discover any script-injected resources, and those will likely be last in line in the fetching queue.

var script = document.createElement('script');
script.src = "//somehost.com/widget.js";
document.getElementsByTagName('head')[0].appendChild(script);

You should make it easy for the browser to access important resources as soon as possible. You can either put them in HTML tags or include the loading script inline and early in the document. However, sometimes you want some resources to load later because they are less important. In that case, you can hide them from the speculative parser by loading them with JavaScript late in the document.

You can also check out this MDN guide on how to optimize your pages for speculative parsing.

defer and async

Still, synchronous scripts blocking the parser remains an issue. And not all scripts are equally important for the user experience, such as those for tracking and analytics. Solution? Make it possible to load these less important scripts asynchronously.

The defer and async attributes were introduced to give developers a way to tell the browser which scripts to handle asynchronously.

Both of these attributes tell the browser that it may go on parsing the HTML while loading the script in the background, and then execute the script after it loads. This way, script downloads don’t block DOM construction and page rendering. Result: the user can see the page before all scripts have finished loading.

The difference between defer and async is the moment at which each starts executing the script.

defer was introduced before async. Its execution starts after parsing is completely finished, but before the DOMContentLoaded event. It guarantees scripts will be executed in the order they appear in the HTML and will not block the parser.

async scripts execute at the first opportunity after they finish downloading and before the window’s load event. This means it’s possible (and likely) that async scripts are not executed in the order in which they appear in the HTML. It also means they can interrupt DOM building.

Wherever they are specified, async scripts load at a low priority. They often load after all other scripts, without blocking DOM building. However, if an async script finishes downloading sooner, its execution can block DOM building and all synchronous scripts that finish downloading afterwards.

Note: Attributes async and defer work only for external scripts. They are ignored if there’s no src.
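As a quick sketch (the file names here are hypothetical), the three loading behaviors look like this:

```html
<!-- no attribute: blocks the parser while it downloads and executes -->
<script src="legacy.js"></script>
<!-- defer: downloads in parallel, runs in document order after parsing ends -->
<script src="app.js" defer></script>
<!-- async: downloads in parallel, runs as soon as it finishes downloading -->
<script src="analytics.js" async></script>
```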

preload

async and defer are great if you want to put off handling some scripts, but what about stuff on your web page that’s critical for user experience? Speculative parsers are handy, but they preload only a handful of resource types and follow their own logic. The general goal is to deliver CSS first because it blocks rendering. Synchronous scripts will always have higher priority than asynchronous. Images visible within the viewport should be downloaded before those below the fold. And there are also fonts, videos, SVGs… In short – it’s complicated.

As an author, you know which resources are the most important for rendering your page. Some of them are often buried in CSS or scripts and it can take the browser quite a while before it even discovers them. For those important resources you can now use <link rel="preload"> to communicate to the browser that you want to load them as soon as possible.

All you need to write is:

<link rel="preload" href="very_important.js" as="script">

You can link pretty much anything and the as attribute tells the browser what it will be downloading. Some of the possible values are:

  • script
  • style
  • image
  • font
  • audio
  • video

You can check out the rest of the content types on MDN.
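For example (again with hypothetical file names), preloading a stylesheet and a large image that the browser would otherwise discover late might look like:

```html
<link rel="preload" href="critical.css" as="style">
<link rel="preload" href="hero.jpg" as="image">
```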

Fonts are probably the most important resources that get hidden in the CSS. They are critical for rendering the text on the page, but they don’t get loaded until the browser is sure they are going to be used. That check happens only after the CSS has been parsed and applied, and the browser has matched CSS rules to the DOM nodes. This happens fairly late in the page loading process and it often results in an unnecessary delay in text rendering. You can avoid that delay by preloading fonts with <link rel="preload">.

One thing to pay attention to when preloading fonts is that you also have to set the crossorigin attribute even if the font is on the same domain:

<link rel="preload" href="font.woff" as="font" crossorigin>

The preload feature has limited support at the moment as the browsers are still rolling it out, but you can check the progress here.

Conclusion

Browsers are complex beasts that have been evolving since the 90s. We’ve covered some of the quirks from that legacy and some of the newest standards in web development. Writing your code with these guidelines will help you pick the best strategies for delivering a smooth browsing experience.

If you’re excited to learn more about how browsers work here are some other Hacks posts you should check out:

Quantum Up Close: What is a browser engine?
Inside a super fast CSS engine: Quantum CSS (aka Stylo)

Planet MozillaPublic Event: The Fate of Net Neutrality in the U.S.

Mozilla is hosting a free panel at the Internet Archive in San Francisco on Monday, September 18. Hear top experts discuss why net neutrality matters and what we can do to protect it.

 

Net neutrality is under siege.

Despite protests from millions of Americans, FCC Chairman Ajit Pai is moving forward with plans to dismantle hard-won open internet protections.

“Abandoning these core protections will hurt consumers and small businesses alike,” Mozilla’s Heather West penned in an open letter to Pai earlier this week, during Pai’s visit to San Francisco.

The FCC may vote to gut net neutrality as early as October. What does this mean for the future of the internet?

Join Mozilla and the nation’s leading net neutrality experts at a free, public event on September 18 to discuss just this. We will gather at the Internet Archive to discuss why net neutrality matters to a healthy internet — and what can be done to protect it.

RSVP: The Battle to Save Net Neutrality

Net neutrality is under siege. Mozilla is hosting a public panel in San Francisco to explore what’s ahead

<WHAT>

The Battle to Save Net Neutrality, a reception and discussion in downtown San Francisco. Register for free tickets

<WHO>

Mozilla Tech Policy Fellow and former FCC Counselor Gigi Sohn will moderate a conversation with the nation’s leading experts on net neutrality, including Mozilla’s Chief Legal and Business Officer, Denelle Dixon, and:

Tom Wheeler, Former FCC Chairman who served under President Obama and was architect of the 2015 net neutrality rules

Representative Ro Khanna, (D-California), who represents California’s 17th congressional district in the heart of Silicon Valley

Amy Aniobi, Supervising Producer of HBO’s “Insecure”

Luisa Leschin, Co-Executive Producer/Head Writer of Amazon’s “Just Add Magic”

Malkia Cyril, Executive Director of the Center for Media Justice

and Dane Jasper, CEO and Co-Founder of Sonic.

<WHEN>

Monday, September 18, 2017 from 6 p.m. to 9 p.m. PT

<WHERE>

The Internet Archive, 300 Funston Avenue San Francisco, CA 94118

RSVP: The Battle to Save Net Neutrality

The post Public Event: The Fate of Net Neutrality in the U.S. appeared first on The Mozilla Blog.

Planet Mozillahyperlinks in buttons are probably not a great idea

Over in web-bug #9726, there's an interesting issue reported against glitch.com (which is already fixed because those peeps are classy):

Basically, they had an HTML <button> that when clicked would display:block a descendent <dialog> element that contained some hyperlinks to help you create a new project.

screenshot of glitch.com button

The simplest test case:

<button>
  <a href="https://example.com">do cool thing</a>
</button>

Problem is, clicking on an anchor with an href inside of a button does nothing in Firefox (and Opera Presto, which only 90s kids remember).

What the frig, web browsers.

But it turns out HTML is explicit on the subject, as it often is, stating that a button's content model must not have an interactive content descendant.

(and <a href> is totally, like, interactive content, itsho*)

Soooo, probably not a good idea to follow this pattern. And who knows what it means for accessibility.

The fix for glitch is simple: just make the <dialog> a sibling, and hide and show it the same way.
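A sketch of that fix (the ids and the show/hide mechanism here are illustrative, not glitch.com's actual code):

```html
<button id="new-project">do cool thing</button>
<dialog id="new-project-menu">
  <a href="https://example.com">create a new project</a>
</dialog>
<script>
  // The links now live outside the button, so they stay interactive.
  document.getElementById('new-project').addEventListener('click', function () {
    document.getElementById('new-project-menu').show();
  });
</script>
```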

* in the spec's humble opinion

Planet MozillaSome Opinions On The History Of Web Audio

People complain that Web Audio provides implementations of numerous canned processing features, but they very often don't do exactly what you want, and working around those limitations by writing your own audio processing code in JS is difficult or impossible.

This was an obvious pitfall from the moment the Web Audio API was proposed by Chris Rogers (at Google, at that time). I personally fought pretty hard in the Audio WG for an API that would be based on JS audio processing (with allowance for popular effects to be replaced with browser-implemented modules). I invested enough to write a draft spec for my alternative and implement a lot of that spec in Firefox, including Worker-based JS sample processing.

My efforts went nowhere for several reasons. My views on making JS sample manipulation a priority were not shared by the Audio WG. Here's my very first response to Chris Rogers' reveal of the Web Audio draft; you can read the resulting discussion there. The main arguments against prioritizing JS sample processing were that JS sample manipulation would be too slow, and JS GC (or other non-realtime behaviour) would make audio too glitchy. Furthermore, audio professionals like Chris Rogers assured me they had identified a set of primitives that would suffice for most use cases. Since most of the Audio WG were audio professionals and I wasn't, I didn't have much defense against "audio professionals say..." arguments.

The Web Audio API proceeded mostly unchanged because there wasn't anyone other than me trying to make significant changes. After an initial burst of interest Apple's WG participation declined dramatically, perhaps because they were getting Chris Rogers' Webkit implementation "for free" and had nothing to gain from further discussion. I begged Microsoft people to get involved but they never did; in this and other areas they were (are?) apparently content for Mozilla and Google to spend energy to thrash out a decent spec that they later implement.

However, the main reason that Web Audio was eventually standardized without major changes is because Google and Apple shipped it long before the spec was done. They shipped it with a "webkit" prefix, but they evangelized it to developers who of course started using it, and so pretty soon Mozilla had to cave.

Ironically, soon after Web Audio won, the "extensible Web" became a hot buzzword. Web Audio had a TAG review at which it was clear Web Audio was pretty much the antithesis of "extensible Web", but by then it was too late to do anything about it.

What could I have done better? I probably should have reduced the scope of my spec proposal to exclude MediaStream/HTMLMediaElement integration. But I don't think that, or anything else I can think of, would have changed the outcome.

Planet MozillaVerified cryptography for Firefox 57

Traditionally, software is produced in this way: write some code, maybe do some code review, run unit-tests, and then hope it is correct. Hard experience shows that it is very hard for programmers to write bug-free software. These bugs are sometimes caught in manual testing, but many bugs still are exposed to users, and then must be fixed in patches or subsequent versions. This works for most software, but it’s not a great way to write cryptographic software; users expect and deserve assurances that the code providing security and privacy is well written and bug free.

Even innocuous looking bugs in cryptographic primitives can break the security properties of the overall system and threaten user security. Unfortunately, such bugs aren’t uncommon. In just the last year, popular cryptographic libraries have issued dozens of CVEs for bugs in their core cryptographic primitives or for incorrect use of those primitives. These bugs include many memory safety errors, some side-channel leaks, and a few correctness errors, for example, in bignum arithmetic computations… So what can we do?

Fortunately, recent advances in formal verification allow us to significantly improve the situation by building high assurance implementations of cryptographic algorithms. These implementations are still written by hand, but they can be automatically analyzed at compile time to ensure that they are free of broad classes of bugs. The result is that we can have much higher confidence that our implementation is correct and that it respects secure programming rules that would usually be very difficult to enforce by hand.

This is a very exciting development and Mozilla has partnered with INRIA and Project Everest  (Microsoft Research, CMU, INRIA) to bring components from their formally verified HACL* cryptographic library into NSS, the security engine which powers Firefox. We believe that we are the first major Web browser to have formally verified cryptographic primitives.

The first result of this collaboration, an implementation of the Curve25519 key establishment algorithm (RFC7748), has just landed in Firefox Nightly. Curve25519 is widely used for key-exchange in TLS, and was recently standardized by the IETF. As an additional bonus, besides being formally verified, the HACL* Curve25519 implementation is also almost 30% faster on 64 bit platforms than the existing NSS implementation (19500 scalar multiplications per second instead of 15100), which represents an improvement in both security and performance for our users. We expect to ship this new code as part of our November Firefox 57 release.

Over the next few months, we will be working to incorporate other HACL* algorithms into NSS, and will also have more to say about the details of how the HACL* verification works and how it gets integrated into NSS.

Benjamin Beurdouche, Franziskus Kiefer & Tim Taubert

The post Verified cryptography for Firefox 57 appeared first on Mozilla Security Blog.

Planet MozillaHow do you become a Firefox peer? The answer may surprise you!

So you want to know how someone becomes a peer? Surprisingly the answer is pretty unclear. There is no formal process for peer status, at least for Firefox and Toolkit. I haven’t spotted one for other modules either. What has generally happened in the past is that from time to time someone will come along and say, “Oh hey, shouldn’t X be a peer by now?” to which I will say “Uhhh maybe! Let me go talk to some of the other peers that they have worked with”. After that magic happens and I go and update the stupid wiki pages, write a blog post and mail the new peers to congratulate them.

I’d like to formalise this a little bit and have an actual process that new peers can see and follow along to understand where they are. I’d like feedback on this idea, it’s just a straw-man at this point. With that I give you … THE ROAD TO PEERSHIP (cue dramatic music).

  1. Intro patch author. You write basic patches, request review and get them landed. You might have level 1 commit access, probably not level 3 yet though.
  2. Senior patch author. You are writing really good patches now. Not just simple stuff. Patches that touch multiple files maybe even multiple areas of the product. Chances are you have level 3 commit access. Reviewers rarely find significant issues with your work (though it can still happen). Attention to details like maintainability and efficiency are important. If your patches are routinely getting backed out or failing tests then you’re not here yet.
  3. Intro reviewer. Before being made a full peer you should start reviewing simple patches. Either by being the sole reviewer for a patch written by a peer or doing an initial review before a peer does a final sign-off. Again paying attention to maintainability and efficiency are important. As is being clear and polite in your instructions to the patch author as well as being open to discussion where disagreements happen.
  4. Full peer. You, your manager or a peer reach out to me showing me cases where you’ve completed the previous levels. I double-check with a couple of peers you’ve worked with. Congratulations, you made it! Follow up on review requests promptly. Be courteous. Re-direct reviews that are outside your area of expertise.

Does this sound like a reasonable path? What criteria am I missing? I’m not yet sure what length of time we would expect each step to take, but I imagine that more senior contributors could skip straight to step 2.

Feedback welcome here or in private by email.

Planet MozillaThe Joy of Coding - Episode 112

The Joy of Coding - Episode 112 mconley livehacks on real Firefox bugs while thinking aloud.

Planet MozillaSocorro and Firefox 57

Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the Breakpad crash reporter asks the user if the user would like to send a crash report. If the user answers "yes!", then the Breakpad crash reporter collects data related to the crash, generates a crash report, and submits that crash report as an HTTP POST to Socorro--specifically the Socorro collector.
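As a rough sketch of that submission protocol, a Breakpad-style reporter POSTs a multipart/form-data body containing crash annotations plus the minidump file. The function below is illustrative only; the annotation names and field layout are assumptions for the example, not Socorro's exact schema:

```javascript
// Sketch: build the kind of multipart/form-data payload a crash
// reporter POSTs to a collector. Not the actual Breakpad client code.
function buildCrashReport(annotations, minidump) {
  const boundary = '------crashreportboundary';
  const parts = [];
  // One text part per crash annotation (e.g. ProductName, Version).
  for (const [name, value] of Object.entries(annotations)) {
    parts.push(Buffer.from(
      `--${boundary}\r\n` +
      `Content-Disposition: form-data; name="${name}"\r\n\r\n` +
      `${value}\r\n`));
  }
  // The minidump itself goes in as a binary file part.
  parts.push(Buffer.concat([
    Buffer.from(
      `--${boundary}\r\n` +
      'Content-Disposition: form-data; name="upload_file_minidump"; filename="crash.dmp"\r\n' +
      'Content-Type: application/octet-stream\r\n\r\n'),
    minidump,
    Buffer.from('\r\n'),
  ]));
  parts.push(Buffer.from(`--${boundary}--\r\n`));
  return {
    contentType: `multipart/form-data; boundary=${boundary}`,
    body: Buffer.concat(parts),
  };
}
```

The resulting `contentType` and `body` would then be sent with an ordinary HTTP POST to the collector's submission URL.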

Teams at Mozilla are feverishly working on Firefox 57. That's super important work and we're getting down to the wire. Socorro is a critical part of that development work as it collects incoming crashes, processes them, and has tools for analysis.

This blog post covers some of the things Socorro engineering has been doing to facilitate that work and what we're planning from now until Firefox 57 release.

This quarter

This quarter, we replaced Snappy with Tecken for more reliable symbol lookup in Visual Studio and other clients.

We built a Docker-based local dev environment for Socorro making it easier to run Socorro on your local machine configured like crash-stats.mozilla.com. It now takes five steps to get Socorro running on your computer.

We also overhauled the signature generation system in Socorro and slapped on a command-line interface. Now you can test the effects of signature generation changes on specific crashes as well as groups of crashes on your local machine.

We've also been fixing stability issues and bugs and myriad other things.

Now until Firefox 57

Starting today and continuing until after Firefox 57 release, we are:

  1. prioritizing your signature generation changes, getting them landed, and pushing them to -prod
  2. triaging Socorro bugs into "need it right now" and "everything else" buckets
  3. deferring big changes to Socorro until after Firefox 57 including API endpoint deprecation, major UI changes to the crash-stats interface, and other things that would affect your workflow

We want to make sure crash analysis is working as well as it can, so that you can do your best work and we can have a successful Firefox 57.

Please contact us if you need something!

We hang out on #breakpad on irc.mozilla.org. You can also write up bugs.

Hopefully this helps. If not, let us know!

Planet MozillaAnnouncing the 2017 Ford-Mozilla Open Web Fellows!

At the foundation of our net policy and advocacy platforms at Mozilla is our support for the growing network of leaders all over the world. For the past two years, Mozilla and the Ford Foundation have partnered over fourteen organizations with progressive technologists operating at the intersection of open web security and policy, and in 2017-2018 we plan to continue our Open Web Fellows Program with our largest cohort yet! Following months of deliberation, and a recruitment process that included close to 300 competitive applicants from our global community, we’re delighted to introduce you to our 2017-2018 Open Web Fellows:

                      

This year, we’ll host an unprecedented set of eleven fellows embedded in four incumbent and seven new host organizations! These fellows will partner with their host organizations over the next 10 months to work on independent research and project development that amplifies issues of Internet Health, privacy and security, as well as net neutrality and open web policy on/offline.

If you’d like to learn more about our fellows, we encourage you to browse their bios, read up on their host organizations, and follow them on Twitter! We look forward to updating you on our Fellows’ progress, and can’t wait to learn more from them over the coming months. Stay tuned!

The post Announcing the 2017 Ford-Mozilla Open Web Fellows! appeared first on Open Policy & Advocacy.

Planet MozillaOpen Source Needs Students To Thrive

This past year, thousands of computer science students in the United States were inspired by open source, yet in many cases their flames of interest were doused by the structure of technical education at most colleges. Concerns about students plagiarizing each other’s work, lack of structural support, resources, and community connections are making it hard for students to jump from curious to capable in the world of open source.

As part of our ongoing efforts to engage college students and develop a program to support open source clubs, Mozilla’s Open Innovation Team recently conducted a study to better understand the current state of open source on US Campuses. We also asked ourselves “what can Mozilla do to support and fuel students who are actively engaged in advancing open source?” Read the full research report here.

We ran a broad screening process to identify students with an interest in technology, an interest in open source, and who also represented a diversity of gender identities, academic focuses, locations and schools. We ultimately selected 25 students with whom to conduct in-depth interviews.

<figure>Photo distributed with CC BY-NC-ND</figure>

We found that open source is usually learned outside the classroom; there is strong interest, but the overall level of open source literacy is low.

Students are excited about open source, but there’s a knowledge gap

Students are generally excited about the idea of open source, citing the control it gives them over the software they use, the opportunities it provides for them to build skills, and the emphasis on community.

However, for many students a lot is still unknown, and there are core aspects of open source that lots of students weren’t aware of. For example, a challenge that many students faced when trying to contribute to an existing open source project was not knowing how to analytically read code. One student described his challenges trying to read a codebase for the first time…

“I looked at a codebase and I had no idea where to begin. It felt like it would take weeks just to come up to speed.” — Eric, Georgia Tech

Students also were worried about how viable open source is as a career path, leading one student to ask, “how can I pay my student loans with open source?”

Another example came from a hackathon attended by our researcher, in the submissions to the “Best Open Source Hack” category: only 5 of the 16 entries correctly licensed their software. 10 of the disqualified teams expressed surprise that a license was required. They had believed that all that was required to make software open source was to release it on Github.

I had been told that being on Github was enough. I had never heard about licensing before!

Open source isn’t taught, it’s learned informally

A major reason for this lack of literacy is that open source is rarely taught as part of university curricula (Portland State University being a notable exception). In fact, the structure and culture of most computer science programs often unintentionally reinforce behaviors that are counter to developing the skills necessary to make contributions to existing open source projects. A large part of this seems to come from a desire to prevent academic dishonesty.

“An [Open Source Club] member recently told me that one of the reasons he joined was that he wanted to be able to code alongside other people and help them solve problems with their code. He didn’t feel like he could normally do that in his classes without being accused of helping people cheat.” — Wes, Rensselaer Polytechnic

As a result most students learn about open source informally through hobbies, like robotics programming, extracurriculars or their peers.

“The reality is that on most college campuses, open source is learned in students’ off time and during club time. It’s students teaching students, not professors teaching us.” — Semirah, UMass Dartmouth

Implications: Starting their careers with a knowledge and skills gap

A generation of developers is at risk of starting their technical careers without understanding or even knowing about open source or the value of open. Mozilla purposefully designs open products and technologies which can grow and change the Web because of passionate open source contributors, but we need to enable the next generation to drive the mission forward.

“Open source offers an alternative to corporate control of programs and the web. That’s something that needs to be encouraged.” — Casey, Portland State

Opportunities: Filling the need for bottom-up support

As people who care about open source, we can tackle this by supporting organizations like POSSE who are working to get better open source education into the classroom, ensuring that more students are exposed to open source concepts and the basic skills they’d need to participate as part of their education.

Given the challenges and wait times associated with introducing new curriculum in most universities, there is also an immediate and present need for well-supported, networked, informal structures that help teach, instill and provide access to open source projects and technologies for students. From what we learned so far and from the feedback we got from the students, there is a real opportunity for Mozilla to fill this need and make a difference on campuses interested in open source.

Next Steps for Mozilla’s Open Source Student Network

Based on this research, we are currently working with a team of student leaders to design a program that makes it easy for students to learn about and contribute to open source on their campuses.

We are also working closely with organizations already in this space such as POSSE, Red Hat, and the Open Source Initiative to create educational content and connect with professors and students who share our mission.

Furthermore we’re partnering with other teams and projects inside Mozilla such as Add Ons, Rust, Dev Tools, and Mozilla VR/AR, to create activities and challenges that motivate and engage a vast network of students and professors in our products and technology development processes.

Does this reflect your experience? Tell us what it’s like on your campus in the comments here or reach out to us on discourse or via email at campusclubs@mozilla.com!


Open Source Needs Students To Thrive was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Planet MozillaMozilla Announces 15 New Fellows for Science, Advocacy, and Media

These technologists, researchers, activists, and artists will spend the next 10 months making the Internet a better place

 

Today, Mozilla is announcing 15 new Fellows in the realms of science, advocacy, and media.

Fellows hail from Mexico, Bosnia & Herzegovina, Uganda, the United States, and beyond. They are multimedia artists and policy analysts, security researchers and ethical hackers.

Over the next several months, Fellows will put their diverse abilities to work making the Internet a healthier place. Among their many projects are initiatives to make biomedical research more open; uncover technical solutions to online harassment; teach privacy and security fundamentals to patrons at public libraries; and curtail mass surveillance within Latin American countries.

 

<Meet our Ford-Mozilla Open Web Fellows>

 

The 2017 Ford-Mozilla Open Web Fellows

Ford-Mozilla Open Web Fellows are talented technologists who are passionate about privacy, security, and net neutrality. Fellows embed with international NGOs for 10 months to work on independent research and project development.

Past Open Web Fellows have helped build open-source whistle-blowing software, and analyzed discriminatory police practice data.

Our third cohort of Open Web Fellows was selected from more than 300 applications. Our eleven 2017 Fellows and their host organizations are:

Sarah Aoun | Hollaback!

Carlos Guerra | Derechos Digitales

Sarah Kiden | Research ICT Africa

Bram Abramson | Citizen Lab

Freddy Martinez | Freedom of the Press Foundation

Rishab Nithyanand | Data & Society

Rebecca Ricks | Human Rights Watch

Aleksandar Todorović | Bits of Freedom

Maya Wagoner | Brooklyn Public Library

Orlando Del Aguila | Majal

Nasma Ahmed | MPower Change

Learn more about our Open Web Fellows.

 

<Meet our Mozilla Fellows in Science>

Mozilla’s Open Science Fellows work at the intersection of research and openness. They foster the use of open data and open source software in the scientific community, and receive training and support from Mozilla to hone their skills around open source, participatory learning, and data sharing.

Past Open Science fellows have developed online curriculum to teach the command line and scripting languages to bioinformaticians. They’ve defined statistical programming best-practices for instructors and open science peers. And they’ve coordinated conferences on the principles of working open.

Our third cohort of Open Science Fellows — supported by the Siegel Family Endowment — was selected from a record pool of 1,090 applications. Our two 2017 fellows are:

Amel Ghouila

A computer scientist by background, Amel earned her PhD in Bioinformatics and is currently a bioinformatician at Institut Pasteur de Tunis. She works within the framework of the pan-African bioinformatics network H3ABionet, supporting researchers and their projects while developing bioinformatics capacity throughout Africa. Amel is passionate about knowledge transfer and working open to foster collaborations and innovation in the biomedical research field. She is also passionate about empowering and educating young girls — she launched the Technovation Challenge Tunisian chapter to help Tunisian girls learn how to address community challenges by designing mobile applications.

Follow Amel on Twitter and Github.

 

Chris Hartgerink

Chris is an applied statistics PhD-candidate at Tilburg University, as part of the Metaresearch group. He has contributed to open science projects such as the Reproducibility Project: Psychology. He develops open-source software for scientists. And he conducts research on detecting data fabrication in science. Chris is particularly interested in how the scholarly system can be adapted to become a sustainable, healthy environment with permissive use of content, instead of a perverse system that promotes unreliable science. He initiated Liberate Science to work towards such a system.

Follow Chris on Twitter and Github.

Learn more about our Open Science Fellows.

 

<Meet our Mozilla Fellows in Media>

This year’s Mozilla Fellows cohort will also be joined by media producers. These makers and activists have created public education and engagement work that explores topics related to privacy and security. Their work incites curiosity and inspires action, and over their fellowship year they will work closely with the Mozilla Fellows cohort to understand and explain the most urgent issues facing the open Internet. Through a partnership with the Open Society Foundation, these fellows join other makers who have benefited from Mozilla’s first grants to media makers. Our two 2017 fellows are:

Hang Do Thi Duc

Hang Do Thi Duc is a media maker whose artistic work is about the social web and the effect of data-driven technologies on identity, privacy, and society. As a German Fulbright and DAAD scholar, Hang received an MFA in Design and Technology at Parsons in New York City. She most recently created Data Selfie, a browser extension that aims to provide users with a personal perspective on data mining and predictive analytics through their Facebook consumption.

Joana Varon

Joana is Executive Directress and Creative Chaos Catalyst at Coding Rights, a women-run organization working to expose and redress the power imbalances built into technology and its application. Coding Rights focuses on imbalances that reinforce gender and North/South inequalities.

 

Meet more Mozilla fellows. The Mozilla Tech Policy Fellowship, launched in June 2017, brings together tech policy experts from around the world. Tech Policy Fellows participate in policy efforts to improve the health of the Internet. Find more details about the fellowship and individuals involved. Learn more about the Tech Policy Fellows.

The post Mozilla Announces 15 New Fellows for Science, Advocacy, and Media appeared first on The Mozilla Blog.

Planet MozillaNew Firefox and Toolkit module peers

Please join me in welcoming another set of brave souls willing to help shepherd new code into Firefox and Toolkit:

  • Luke Chang
  • Ricky Chien
  • Luca Greco
  • Kate Hudson
  • Tomislav Jovanovic
  • Ray Lin
  • Fischer Liu

While going through this round of peer updates I’ve realised that it isn’t terribly clear how people become peers. I intend to rectify that in a coming blog post.

Planet MozillaBlueBorne and the Power Mac TL;DR: low practical risk, but assume the worst

Person of Interest, which is one of my favourite shows (Can. You. Hear. Me?) was so very ahead of its time in many respects, and awfully prescient about a lot else. One of those things was taking control of a device for spying purposes via Bluetooth, which the show variously called "forced pairing" or "bluejacking."

Because, thanks to a newly discovered constellation of flaws nicknamed BlueBorne, you can do this for real. Depending on the context and the flaw in question, which varies from operating system to operating system, you can achieve anything from information leaks and man-in-the-middle attacks to full remote code execution without the victim system having to do anything other than merely having their Bluetooth radio on. (And people wonder why I never have Bluetooth enabled on any of my devices and use a wired headset with my phone.)

What versions of OS X are likely vulnerable? The site doesn't say, but it gives us a couple clues with iOS, which shares the XNU kernel. Versions 9.3.5 and prior are all vulnerable to remote code execution, including AppleTV version 7.2.2 which is based on iOS 8.4.2; this correlates with a XNU kernel version of 15.6.0, i.e., El Capitan. Even if we consider there may be some hardening in contemporary desktop versions of macOS, 10.4 and 10.5 are indisputably too old for that, and 10.6 very likely as well. It is therefore reasonable to conclude Power Macs are vulnerable.

As a practical matter, though, an exploit that relies on remote code execution would have to put PowerPC code somewhere it could execute, i.e., the exploit would have to be specific to Power Macs. Unless your neighbour is, well, me, this is probably not a high probability in practice. A bigger risk might be system instability if an OS X exploit is developed and weaponized and tries spraying x86 code at victim systems instead. On a 10.6 system you'd be at real risk of being pwned (more on that below). On a PowerBook G4, they wouldn't be able to take your system over, but it has a good chance of getting bounced up and down and maybe something damaged in the process. This is clearly a greater risk for laptops than desktop systems, since laptops might be in more uncontrolled environments where they could be silently probed by an unobserved attacker.

The solution is obvious: don't leave Bluetooth on, and if you must use it, enable it only in controlled environments. (This would be a good time to look into a wired keyboard or a non-Bluetooth wireless mouse.) My desktop daily drivers, an iMac G4 and my trusty Quad G5, don't have built-in Bluetooth. When I need to push photos from my Pixel, I plug in a USB Bluetooth dongle and physically disconnect it when I'm done. As far as my portable Power Macs in the field, I previously used Bluetooth PAN with my iBook G4 for tethering but I think I'll be switching to WiFi for that even though it uses more power, and leave Bluetooth disabled except if I have no other options. I already use a non-Bluetooth wireless mouse that does not require drivers, so that's covered as well.

Older Intel Mac users, it goes without saying that if you're on anything prior to Sierra you should assume the worst as well. Apple may or may not offer patches for 10.10 and 10.11, but they definitely won't patch 10.9 and earlier, and you are at much greater risk of being successfully exploited than Power Mac users. Don't turn on Bluetooth unless you have to.

Very Soon Now(tm) I will be doing an update to our old post on keeping Power Macs safe online, and this advice will be part of it. Watch for that a little later.

Meanwhile, however, the actual risk to our Power Macs isn't the biggest question this discovery poses. The biggest question is, if the show got this right, what if there's really some sort of Samaritan out there too?

Planet MozillaSHA Hacker Camp: Learning a byte about Virtual Reality on the Web

SHA Hacker Camp: Learning a byte about Virtual Reality on the Web

SHA (Still Hacking Anyways) is a nonprofit, outdoor hacker-camp series organized every four years. SHA2017 was held this August 4-8 in Zeewolde, Netherlands.

Attended by more than 3500 hackers, SHA was a fun, knowledge-packed four-day festival. The festival featured a wide range of talks and workshops, including sessions related to Internet of Things (IoT), hardware and software hacking, security, privacy, and much more!

Ram Dayal Vaishnav, a Tech Speaker from Mozilla’s Indian community, presented a session on WebVR, Building a Virtual-Reality Website using A-Frame. Check out a video recording of Ram’s talk:

Head on over to Ram’s personal blog to catch a few more highlights from SHA2017.

Planet MozillaCleaning House

Current status:

(image: “Current Status”)

When I was desk-camping in CDOT a few years ago, one thing I took no small joy in was the combination of collegial sysadminning and servers all named after cities or countries that made a typical afternoon’s cubicle chatter sound like a rapidly-developing multinational diplomatic crisis.

Change management when you’re module owner of Planet Mozilla and de-facto administrator of a dozen or so lesser planets is kind of like that. But way, way better.

Over the next two weeks or so I’m going to be cleaning up Planet Mozilla, removing dead feeds and culling the participants list down to people still actively participating in the Mozilla project in some broadly-defined capacity. As well, I’ll be consuming (er, decommissioning) a number of uninhabited, under-used, or unused lesser planets and rolling any stray debris back into Planet Mozilla proper.

With that in mind, if anything goes missing that you expected to survive a transition like that, feel free to email me or file a bug. Otherwise, if any of your feeds break I am likely to be the cause of that, and if you find a planet you were following has vanished you can take some solace in the fact that it was probably delicious.

Planet MozillaThese Weeks in Firefox: Issue 23

The team is busy sanding down the last few rough edges, and getting Firefox 57 ready to merge to beta! So busy in fact, that there are no screenshots or GIFs for this blog post. Sorry!

If you’re hankering for a more visual update, check out dolske’s Photon Engineering Newsletter #15!

Highlights

Friends of the Firefox team

(Give a shoutout/thanks to people for helping fix and test bugs. Introductions)

Project Updates

Add-ons

Activity Stream

Browser Architecture

Firefox Core Engineering

  • Bug 1390703 – Flash Click-to-Play being increased to 25% on Release 55, (hopefully) shortly followed to 100%
  • Bug 1397562 – Update staging is now disabled on OSX and Linux (update staging was disabled on Windows in bug 1397562).
    • This is in response to what we think may be an issue with e10s sandboxing.
    • This is why you may suddenly be seeing a flash of the “Nightly is applying updates” (like in bug 1398641).
  • Bug 1380252, bug 1380254 – Optimized data in crash reports and crash ping processing.
  • Open call for ideas/investigation on bug 1276488 — suspected omnijar corruption, but not much to go on.

Form Autofill

Mobile

  • Firefox iOS 8.3 shipped last week and contains primarily bug fixes
  • Firefox iOS 9.0 has been sent to QA for final verification and is expected to ship next week. It is a fantastic release with the following highlights:
    • Support for syncing your mobile bookmarks between all your devices
    • Tracking Protection will be enabled by default for Private mode and can be enabled for Regular Mode
    • Large improvements in our data storage layer that should improve performance and stability
    • Many small bug fixes
    • Compatibility with iOS 11 (which likely ships next week)

Photon

Performance
  • For 57 we had to disable tab warming when hovering tabs because it caused more regressions than we are comfortable fixing for 57. We are now planning to ship this significant perf improvement in 58.
  • All the significant performance improvements we are still working on at this point are at risk for 57 because we are trying to avoid risk.
Structure
Animation
  • Investigation ongoing for bug 1397092 – high cpu usage possibly caused by new 60fps tab loading indicator
  • Fatter download progress bar (bug 1387557) is in for review; it is the last animation feature planned for 57
  • Polishing, please report any glitches you see

Search and Navigation

Test Pilot

  • We reduced our JS bundle size from 2.6MB to 736k
  • Send is working on A/B tests and adding password protection

Planet MozillaRust Berlin Meetup September 2017

Rust Berlin Meetup September 2017 Talks: An overview of the Servo architecture by Emilio and rust ❤️ sensors by Claus

Planet MozillaExperimenting with WebAssembly and Computer Vision

This past summer, four time-crunched engineers with no prior WebAssembly experience began experimenting. The result after six weeks of exploration was WebSight: a real-time face detection demo based on OpenCV.

By compiling OpenCV to WebAssembly, the team was able to reuse a well-tested C/C++ library directly in the browser and achieve performance an order of magnitude faster than a similar JavaScript library.

I asked the team members—Brian Feldman, Debra Do, Yervant Bastikian, and Mark Romano—to write about their experience.

Note: The report that follows was written by the team members mentioned above.

WebAssembly (“wasm”) made a splash this year with its MVP release, and eager to get in on the action, we set out to build an application that made use of this new technology.

We’d seen projects like WebDSP compile their own C++ video filters to WebAssembly, an area where JavaScript has historically floundered due to the computational demands of some algorithms. This got us interested in pushing the limits of wasm, too. We wanted to use an existing, specialized, and time-tested C++ library, and after much deliberation, we landed on OpenCV, a popular open-source computer vision library.

Computer vision is highly demanding on the CPU, and thus lends itself well to wasm. Building off of some incredible work put forward by the UC Irvine SysArch group and Github user njor, we were able to update outdated asm.js builds of OpenCV to compile with modern versions of Emscripten, exposing much of OpenCV’s core functionality in JavaScript callable formats.

Working with these Emscripten builds went much differently than we expected. As Web developers, we’re used to writing code and being able to iterate and test very quickly. Introducing a large C++ library with 10-15 minute build times was a foreign experience, especially when our normal working environments are Webpack, Nodemon, and hot reloading everywhere. Once compiled, we approached the wasm build as a bit of a black box: the module started as an immutable beast of an object, and though we understood it more and more throughout the process, it never became ‘transparent’.

The efforts spent on compiling the wasm file, and then incorporating it into our JavaScript were worthwhile: it outperformed JavaScript with ease, and was significantly quicker than WebAssembly’s predecessor, asm.js.

We compared these formats using a face detection algorithm. The architecture of the functions driving each algorithm was the same; the only difference was the implementation language. Using web workers, we passed video stream data into the algorithms, which returned the coordinates of a rectangle that would frame any faces in the image, and calculated an FPS measure. While the range of FPS depends on the user’s machine and the browser being used (Firefox takes the cake!), we noted that the FPS of the wasm-powered algorithm was consistently twice as high as that of the asm.js implementation, and twenty times higher than the JS implementation, solidifying the benefits of WebAssembly.
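
The FPS bookkeeping described above can be sketched roughly as follows (written here in Rust purely for illustration; the demo itself used JavaScript web workers, and the function name is hypothetical):

```rust
use std::time::{Duration, Instant};

// Hypothetical sketch of the FPS measurement: run the per-frame detection
// routine for a fixed number of frames and derive frames per second.
fn fps_of<F: FnMut()>(mut detect: F, frames: u32) -> f64 {
    let start = Instant::now();
    for _ in 0..frames {
        detect(); // e.g. one face-detection pass over a video frame
    }
    frames as f64 / start.elapsed().as_secs_f64()
}

fn main() {
    // Stand-in for a real detection pass: just burn a little time.
    let dummy_detect = || std::thread::sleep(Duration::from_millis(2));
    println!("{:.0} FPS", fps_of(dummy_detect, 25));
}
```

Running the same harness over the wasm, asm.js, and plain-JS detectors is what makes the twofold and twentyfold differences comparable.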

Building in cutting-edge technology can be a pain, but the reward was worth the temporary discomfort. Being able to use native, portable C/C++ code in the browser, without third-party plugins, is a breakthrough. Our project, WebSight, successfully demonstrated the use of OpenCV as a WebAssembly module for face and eye detection. We’re really excited about the future of WebAssembly, especially the eventual addition of garbage collection, which will make it easier to efficiently run other high-level languages in the browser.

You can view the demo’s GitHub repository at github.com/Web-Sight/WebSight.

Planet MozillaMartes Mozilleros, 12 Sep 2017

Martes Mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Planet MozillaMozilla running into CHAOSS to Help Measure and Improve Open Source Community Health

This week the Linux Foundation announced project CHAOSS, a collaborative initiative focused on creating the analytics and metrics to help define the health of open source communities, and developing tools for analyzing and improving the contributor experience in modern software development.

(figure credit: CHAOSS project)

Besides Mozilla, initial members contributing to the project include Bitergia, Eclipse Foundation, Jono Bacon Consulting, Laval University (Canada), Linaro, OpenStack, Polytechnique Montreal (Canada), Red Hat, Sauce Labs, Software Sustainability Institute, Symphony Software Foundation, University of Missouri, University of Mons (Belgium), University of Nebraska at Omaha, and University of Victoria.

With the combined expertise from academic researchers and practitioners from industry the CHAOSS metrics committee aims to “define a neutral, implementation-agnostic set of reference metrics to be used to describe communities in a common way.” The analytical work will be complemented by the CHAOSS software committee, “formed to provide a framework for establishing an open source GPLv3 reference implementation of the CHAOSS metrics.”

Mozilla’s Open Innovation strategist Don Marti will be part of the CHAOSS project’s governance board, which is responsible for the overall oversight of the Project and coordination of efforts of the technical committees.

As a member of CHAOSS, Mozilla is committed to supporting research that will help maintainers pick the right open source metrics to focus on — metrics that will help open source projects make great software and provide a rewarding experience for contributors.

If you want to learn more about how to participate in the project have a look at the CHAOSS community website: https://chaoss.community.


Mozilla running into CHAOSS to Help Measure and Improve Open Source Community Health was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Planet MozillaTwo Days, or How Long Until The Data Is In

Two days.

It doesn’t seem like long, but that is how long you need to wait before looking at a day’s Firefox data and being sure that 95% of it has been received.

There are some caveats, of course. This only applies to current versions of Firefox (55 and later). This will very occasionally be wrong (like, say, immediately after Labour Day when people finally get around to waking up their computers that have been sleeping for quite some time). And if you have a special case (like trying to count nearly everything instead of just 95% of it) you might want to wait a bit longer.

But for most cases: Two Days.

As part of my 2017 Q3 Deliverables I looked into how long it takes clients to send their anonymous usage statistics to us using Telemetry. This was a culmination of earlier ponderings on client delay, previous work in establishing Telemetry client health, and an eighteen-month (or more!) push to actually look at our data from a data perspective (meta-data).

This led to a meeting in San Francisco where :mreid, :kparlante, :frank, :gfritzsche, and I settled upon a list of metrics that we ought to measure to determine how healthy our Telemetry system is.

Number one on that list: latency.

It turns out there’s a delay between a user doing something (opening a tab, for instance) and them sending that information to us. This is client delay and is broken into two smaller pieces: recording delay (how long from when the user does something until when we’ve put it in a ping for transport), and submission delay (how long it takes that ready-for-transport ping to get to Mozilla).

If you want to know how many tabs were opened on Tuesday, September the 5th, 2017, you couldn’t tell on the day itself. All the tabs people open late at night won’t even be in pings, and anyone who puts their computer to sleep won’t send their pings until they wake their computer in the morning of the 6th.

This is where “Two Days” comes in: On Thursday the 7th you can be reasonably sure that we have received 95% of all pings containing data from the 5th. In fact, by the 7th, you should even have that data in some scheduled datasets like main_summary.
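
As a toy illustration of what “the delay by which 95% of pings have arrived” means (made-up numbers and a hypothetical function, not the actual Telemetry pipeline):

```rust
// Toy illustration (made-up numbers): given per-ping delays in hours between
// an event happening and its ping arriving at Mozilla, find the delay by
// which a given percentage of pings had arrived. Assumes a non-empty vector.
fn delay_at_percentile(mut delays: Vec<f64>, pct: f64) -> f64 {
    delays.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let idx = ((pct / 100.0) * (delays.len() - 1) as f64).ceil() as usize;
    delays[idx]
}

fn main() {
    // Each entry is recording delay + submission delay for one ping.
    let delays = vec![2.0, 5.0, 8.0, 12.0, 20.0, 26.0, 30.0, 36.0, 40.0, 47.0];
    println!(
        "95% of pings arrived within {} hours",
        delay_at_percentile(delays, 95.0)
    );
}
```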

How do we know this? We measured it:

(Chart: Client “main” ping delay for latest version. Remember what I said about Labour Day? That’s the exceptional case on beta 56.)

Most data, most days, comes in within a single day. Add a day to get it into your favourite dataset, and there you have it: Two Days.

Why is this such a big deal? Currently the only information circulating in Mozilla about how long you need to wait for data is received wisdom from a pre-Firefox-55 (pre-pingsender) world. Some teams wait up to ten full days (!!) before trusting that the data they see is complete enough to make decisions about.

This slows Mozilla down. If we are making decisions on data, our data needs to be fast and reliably so.

It just so happens that, since Firefox 55, it has been.

Now comes the hard part: communicating that it has changed and changing those long-held rules of thumb and idées fixes to adhere to our new, speedy reality.

Which brings us to this blog post. Consider this your notice that we have looked into the latency of Telemetry data and it looks pretty darn quick these days. If you want to know about what happened on a particular day, you don’t need to wait for ten days any more.

Just Two Days. Then you can have your answers.

:chutten

(Much thanks to :gsvelto and :Dexter’s work on pingsender and using it for shutdown pings, :Dexter’s analyses on ping delay that first showed these amazing improvements, and everyone in the data teams for keeping the data flowing while I poked at SQL and rearranged words in documents.)

 


Planet MozillaThe backdoor threat

— “Have you ever detected anyone trying to add a backdoor to curl?”

— “Have you ever been pressured by an organization or a person to add suspicious code to curl that you wouldn’t otherwise accept?”

— “If a crime syndicate would kidnap your family to force you to comply, what backdoor would you be able to insert into curl that is the least likely to get detected?” (The less grim version of this question would instead offer huge amounts of money.)

I’ve been asked these questions and variations of them when I’ve stood up in front of audiences around the world and talked about curl and how it is one of the most widely used software components in the world, counting way over three billion instances.

Back door (noun)
— a feature or defect of a computer system that allows surreptitious unauthorized access to data.

So how is it?

No. I’ve never seen a deliberate attempt to add a flaw, a vulnerability or a backdoor into curl. I’ve seen bad patches and I’ve seen patches that brought bugs that years later were reported as security problems, but I did not spot any deliberate attempt to do bad in any of them. But if done with skills, certainly I wouldn’t have noticed them being deliberate?

If I had cooperated in adding a backdoor or been threatened to, then I wouldn’t tell you anyway and I’d thus say no to questions about it.

How to be sure

There is only one way to be sure: review the code you download and intend to use. Or get it from a trusted source that did the review for you.

If you have a version you trust, you really only have to review the changes done since then.

Possibly there’s some degree of safety in numbers, and as thousands of applications and systems use curl and libcurl and at least some of them do reviews and extensive testing, one of those could discover mischievous activities if there are any and report them publicly.

Infected machines or owned users

The servers that host the curl releases could be targeted by attackers and the tarballs for download could be replaced by something that carries evil code. There’s no such thing as a fail-safe machine, especially not if someone really wants to and tries to target us. The safeguard there is the GPG signature with which I sign all official releases. No malicious user can (re-)produce them. They have to be made by me (since I package the curl releases). That comes back to trusting me again. There’s of course no safeguard against me being forced to sign evil code with a knife to my throat…

If one of the curl project members with git push rights would get her account hacked and her SSH key password brute-forced, a very skilled hacker could possibly sneak in something, short-term. Although my hopes are that as we review and comment on each other’s code to a very high degree, that would be really hard. And the hacked person herself would most likely react.

Downloading from somewhere

I think the highest risk scenario is when users download pre-built curl or libcurl binaries from various places on the internet that aren’t the official curl web site. How can you know for sure what you’re getting then, when you couldn’t review the code or the changes made? You just put your trust in a remote person or organization to do what’s right for you.

Trusting other organizations can be totally fine, such as when you download using Linux distro package management systems, since then you can expect a certain level of checks and vouching to have happened, and there will be digital signatures and more involved to minimize the risk of external malicious interference.

Pledging there’s no backdoor

Some people argue that projects could or should pledge for every release that there’s no deliberate backdoor planted so that if the day comes in the future when a three-letter secret organization forces us to insert a backdoor, the lack of such a pledge for the subsequent release would function as an alarm signal to people that something is wrong.

That takes us back to trusting a single person again. A truly evil adversary can of course force such a pledge to be uttered no matter what, even if that then probably is more mafia level evilness and not mere three-letter organization shadiness anymore.

I would be a bit stressed out to have to do that pledge every single release, as if I ever forgot or messed it up, it would lead to a lot of people getting up in arms, and how would such a mistake be fixed? It’s a little too irrevocable for me. And we do quite frequent releases, so the risk of mistakes is not insignificant.

Also, if I would pledge that, is that then a promise regarding all my code only, or is that meant to be a pledge for the entire code base as done by all committers? It doesn’t scale very well…

Additionally, I’m a Swede living in Sweden. The American organizations cannot legally force me to backdoor anything, and the Swedish versions of those secret organizations don’t have the legal rights to do so either (caveat: I’m not a lawyer). So, the real threat is not by legal means.

What backdoor would be likely?

It would be very hard to add code, unnoticed, that sends off data to somewhere else. Too much code that would be too obvious.

A backdoor similarly couldn’t really be made to split off data from the transfer pipe and store it locally for other systems to read, as that too is probably too much code that is too different than the current code and would be detected instantly.

No, I’m convinced the most likely backdoor code in curl is a deliberate but hard-to-detect security vulnerability that lets the attacker exploit the program using libcurl/curl by some sort of specific usage pattern. So when triggered it can trick the program to send off memory contents or perhaps overwrite the local stack or the heap. Quite possibly only one step out of several steps necessary for a successful attack, much like how a single-byte overwrite can lead to root access.

Any past security problems on purpose?

We’ve had almost 70 security vulnerabilities reported through the project’s almost twenty years of existence. Since most of them were triggered by mistakes in code I wrote myself, I can be certain that none of those problems were introduced on purpose. I can’t completely rule out that someone else’s patch that modified curl along the way, and by extension maybe made a vulnerability worse or easier to trigger, could have been made on purpose. None of the security problems that were introduced by others have shown any sign of “deliberateness”. (Or were written cleverly enough to not make me see that!)

Maybe backdoors have been planted that we just haven’t discovered yet?

Discussion

Follow-up discussion/comments on hacker news.

Planet MozillaThis Week in Rust 199

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is pikkr, a JSON parser that can extract values without tokenization and is blazingly fast using AVX2 instructions. Thank you, bstrie, for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

99 pull requests were merged in the last week

New Contributors

  • bgermann
  • Douglas Campos
  • Ethan Dagner
  • Jacob Kiesel
  • John Colanduoni
  • Lance Roy
  • Mark
  • MarkMcCaskey
  • Max Comstock
  • toidiu
  • Zaki Manian

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

The RFC style is now the default style in Rustfmt - try it out and let us know what you think!

We're currently writing up the discussions, we'd love some help. Check out the tracking issue for details.

PRs:

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

When programmers are saying that there are a lot of bicycles in code that means that it contains reimplementations of freely available libraries instead of using them

Presumably the metric for this would be bicyclomatic complexity?

/u/tomwhoiscontrary on reddit.

Thanks to Matt Ickstadt for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Planet MozillaCyclic queries in chalk

In my last post about chalk queries, I discussed the query model in chalk. Since that writing, there have been some updates, and I thought it’d be nice to do a new post covering the current model. This post will also cover the tabling technique that scalexm implemented for handling cyclic relations and show how that enables us to implement implied bounds and other long-desired features in an elegant way. (Nice work, scalexm!)

What is a chalk query?

A query is simply a question that you can ask chalk. For example, we could ask whether Vec<u32> implements Clone like so (this is a transcript of a cargo run session in chalk):

?- load libstd.chalk
?- Vec<u32>: Clone
Unique; substitution [], lifetime constraints []

As we’ll see in a second, the answer “Unique” here is basically chalk’s way of saying “yes, it does”. Sometimes chalk queries can contain existential variables. For example, we might say exists<T> { Vec<T>: Clone } – in this case, chalk actually attempts to not only tell us if there exists a type T such that Vec<T>: Clone, it also wants to tell us what T must be:

?- exists<T> { Vec<T>: Clone }
Ambiguous; no inference guidance

The result “ambiguous” is chalk’s way of saying “probably it does, but I can’t say for sure until you tell me what T is”.

So you can think of a chalk query as a kind of subroutine, Prove(Goal) = R, that evaluates some goal (the query) and returns a result R which has one of the following forms:

  • Unique: indicates that the query is provable and there is a unique value for all the existential variables.
    • In this case, we give back a substitution saying what each existential variable had to be.
    • Example: exists<T> { usize: PartialOrd<T> } would yield unique and return a substitution that T = usize, at least today (since there is only one impl that could apply, and we haven’t implemented the open world modality that aturon talked about yet).
  • Ambiguous: the query may hold but we could not be sure. Typically, this means that there are multiple possible values for the existential variables.
    • Example: exists<T> { Vec<T>: Clone } would yield ambiguous, since there are many T that could fit the bill.
    • In this case, we sometimes give back guidance, which are suggested values for the existential variables. This is not important to this blog post so I’ll not go into the details.
  • Error: the query is provably false.

(The form of these answers has changed somewhat since my previous blog post, because we incorporated some of aturon’s ideas around negative reasoning.)
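To make the shape of these answers concrete, here is a minimal Rust sketch of the result type. The names and fields are illustrative only, not chalk’s actual types:

```rust
// A simplified sketch of chalk's query result; names are illustrative,
// not chalk's actual API.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum QueryResult {
    /// Provable, with a unique value for each existential variable.
    Unique { substitution: Vec<(String, String)> },
    /// May hold, but we could not be sure; optionally carries guidance.
    Ambiguous { guidance: Option<Vec<(String, String)>> },
    /// Provably false.
    Error,
}

fn main() {
    // `exists<T> { usize: PartialOrd<T> }` yields a unique answer: T = usize.
    let r = QueryResult::Unique {
        substitution: vec![("T".to_string(), "usize".to_string())],
    };
    assert!(matches!(r, QueryResult::Unique { .. }));
    println!("{:?}", r);
}
```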

So what is a cycle?

As I outlined long ago in my first post on lowering Rust traits to logic, the way that the Prove(Goal) subroutine works is basically just to iterate over all the possible ways to prove the given goal and try them one at a time. This often requires proving subgoals: for example, when we were evaluating ?- Vec<u32>: Clone, internally, this would also wind up evaluating u32: Clone, because the impl for Vec<T> has a where-clause that T must be clone:

impl<T> Clone for Vec<T>
where
  T: Clone,
  T: Sized,
{ }

Sometimes, this exploration can wind up trying to solve the same goal that you started with! The result is a cyclic query and, naturally, it requires some special care to yield a valid answer. For example, consider this setup:

trait Foo { }
struct S<T> { }
impl<U> Foo for S<U> where U: Foo { }

Now imagine that we were evaluating exists<T> { T: Foo }:

  • Internally, we would process this by first instantiating the existential variable T with an inference variable, so we wind up with something like ?0: Foo, where ?0 is an as-yet-unknown inference variable.
  • Then we would consider each impl: in this case, there is only one.
    • For that impl to apply, ?0 = S<?1> must hold, where ?1 is a new variable. So we can perform that unification.
      • But next we must check that ?1: Foo holds (that is the where-clause on the impl). So we would convert this into “closed” form by replacing all the inference variables with exists binders, giving us something like exists<T> { T: Foo }. We can now perform this query.
        • Only wait: This is the same query we were already trying to solve! This is precisely what we mean by a cycle.

In this case, the right answer for chalk to give is actually Error. This is because there is no finite type that satisfies this query. The only type you could write would be something like

S<S<S<S<...ad infinitum...>>>>: Foo

where there are an infinite number of nesting levels. As Rust requires all of its types to have finite size, this is not a legal type. And indeed if we ask chalk this query, that is precisely what it answers:

?- exists<T> { S<T>: Foo }
No possible solution: no applicable candidates

But cycles aren’t always errors of this kind. Consider a variation on our previous example where we have a few more impls:

trait Foo { }

// chalk doesn't have built-in knowledge of any types,
// so we have to declare `u32` as well:
struct u32 { }
impl Foo for u32 { }

struct S<T> { }
impl<U> Foo for S<U> where U: Foo { }

Now if we ask the same query, we get back an ambiguous result, meaning that there exists many solutions:

?- exists<T> { T: Foo }
Ambiguous; no inference guidance

What has changed here? Well, introducing the new impl means that there is now an infinite family of finite solutions:

  • T = u32 would work
  • T = S<u32> would work
  • T = S<S<u32>> would work
  • and so on.

Sometimes there can even be unique solutions. For example, consider this final twist on the example, where we add a second where-clause concerning Bar to the impl for S<T>:

trait Foo { }
trait Bar { }

struct u32 { }
impl Foo for u32 { }

struct S<T> { }
impl<U> Foo for S<U> where U: Foo, U: Bar { }
//                                 ^^^^^^ this is new

Now if we ask the same query again, we get back yet a different response:

?- exists<T> { T: Foo }
Unique; substitution [?0 := u32], lifetime constraints []

Here, Chalk figured out that T must be u32. How can this be? Well, if you look, it’s the only impl that can apply – for T to equal S<U>, U must implement Bar, and there are no Bar impls at all.

So we see that when we encounter a cycle during query processing, it doesn’t necessarily mean the query needs to result in an error. Indeed, the overall query may result in zero, one, or many solutions. But how should we figure out what is right? And how do we avoid recursing infinitely while doing so? Glad you asked.

Tabling: how chalk is handling cycles right now

Naturally, traditional Prolog interpreters have similar problems. It is actually quite easy to make a Prolog program spiral off into an infinite loop by writing what seem to be quite reasonable clauses (quite like the ones we saw in the previous section). Over time, people have evolved various techniques for handling this. One that is relevant to us is called tabling or memoization – I found this paper to be a particularly readable introduction. As part of his work on implied bounds, scalexm implemented a variant of this idea in chalk.

The basic idea is as follows. When we encounter a cycle, we will actually wind up iterating to find the result. Initially, we assume that a cycle means an error (i.e., no solutions). This will cause us to go on looking for other impls that may apply without encountering a cycle. Let’s assume we find some solution S that way. Then we can start over, but this time, when we encounter the cyclic query, we can use S as the result of the cycle, and we would then check if that gives us a new solution S’.

If you were doing this in Prolog, where the interpreter attempts to provide all possible answers, then you would keep iterating, only this time, when you encountered the cycle, you would give back two answers: S and S’. In chalk, things are somewhat simpler: multiple answers simply means that we give back an ambiguous result.

So the pseudocode for solving then looks something like this:

  • Prove(Goal):
    • If goal is ON the stack already:
      • return stored answer from the stack
    • Else, when goal is not on the stack:
      • Push goal on to the stack with an initial answer of error
      • Loop
        • Try to solve goal yielding result R (which may generate recursive calls to Prove with the same goal)
        • Pop goal from the stack and return the result R if any of the following are true:
          • No cycle was encountered; or,
          • the result was the same as what we started with; or,
          • the result is ambiguous (multiple solutions).
        • Otherwise, set the answer for Goal to be R and repeat.
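As a toy illustration of this loop (not chalk’s real code), here is a runnable Rust sketch that replays the second example, exists<T> { T: Foo } with impls for u32 and S<U>:

```rust
// A toy fixed-point loop mirroring the pseudocode above; illustrative only,
// the real solver in chalk is far more involved.
#[derive(Debug, Clone, PartialEq)]
enum Answer {
    Error,          // no solutions (the initial assumption for a cycle)
    Unique(String), // exactly one solution
    Ambiguous,      // multiple solutions
}

/// One round of "try to solve `exists<T> { T: Foo }`", where `cycle` is the
/// answer we hand back when we hit the cyclic subgoal. The impls modeled are
/// `impl Foo for u32` and `impl<U> Foo for S<U> where U: Foo`.
fn solve_once(cycle: &Answer) -> Answer {
    let mut solutions = vec!["u32".to_string()]; // from `impl Foo for u32`
    if let Answer::Unique(t) = cycle {
        // `impl<U> Foo for S<U> where U: Foo` succeeds with U = cycle's answer.
        solutions.push(format!("S<{}>", t));
    }
    match solutions.len() {
        0 => Answer::Error,
        1 => Answer::Unique(solutions.pop().unwrap()),
        _ => Answer::Ambiguous,
    }
}

fn prove() -> Answer {
    let mut assumed = Answer::Error; // push goal with an initial answer of error
    loop {
        let result = solve_once(&assumed);
        // Stop at a fixed point, or as soon as the answer is ambiguous.
        if result == assumed || result == Answer::Ambiguous {
            return result;
        }
        assumed = result;
    }
}

fn main() {
    // Iteration 1 assumes Error for the cycle and finds T = u32; iteration 2
    // additionally finds T = S<u32>, so the overall answer is ambiguous,
    // matching chalk's transcript above.
    assert_eq!(prove(), Answer::Ambiguous);
    println!("{:?}", prove());
}
```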

If you’re curious, the real chalk code is here. It is pretty similar to what I wrote above, except that it also handles “coinductive matching” for auto traits, which I won’t go into now. In any case, let’s apply this to our three examples of proving exists<T> { T: Foo }:

  • In the first example, where we only had impl<U> Foo for S<U> where U: Foo, the cyclic attempt to solve will yield an error (because the initial answer for cyclic calls is errors). There is no other way for a type to implement Foo, and hence the overall attempt to solve yields an error. This is the same as what we started with, so we just return and we don’t have to cycle again.
  • In the second example, where we added impl Foo for u32, we again encounter a cycle and return error at first, but then we see that T = u32 is a valid solution. So our initial result R is Unique[T = u32]. This is not what we started with, so we try again.
    • In the second iteration, when we encounter the cycle trying to process impl<U> Foo for S<U> where U: Foo, this time we will give back the answer U = u32. We will then process the where-clause and issue the query u32: Foo, which succeeds. Thus we wind up yielding a successful possibility, where T = S<u32>, in addition to the result that T = u32. This means that, overall, our second iteration winds up producing ambiguity.
  • In the final example, where we added a where clause U: Bar, the first iteration will again produce a result of Unique[T = u32]. As this is not what we started with, we again try a second iteration.
    • In the second iteration, we will again produce T = u32 as a result for the cycle. This time however we go on to evaluate u32: Bar, which fails, and hence overall we still only get one successful result (T = u32).
    • Since we have now reached a fixed point, we stop processing.

Why do we care about cycles anyway?

You may wonder why we’re so interested in handling cycles well. After all, how often do they arise in practice? Indeed, today’s rustc takes a rather more simplistic approach to cycles. However, this leads to a number of limitations where rustc fails to prove things that it ought to be able to do. As we were exploring ways to overcome these obstacles, as well as integrating ideas like implied bounds, we found that a proper handling of cycles was crucial.

As a simple example, consider how to handle “supertraits” in Rust. In Rust today, traits sometimes have supertraits, which are a subset of their ordinary where-clauses that apply to Self:

// PartialOrd is a "supertrait" of Ord. This means that
// I can only implement `Ord` for types that also implement
// `PartialOrd`.
trait Ord: PartialOrd { }

As a result, whenever I have a function that requires T: Ord, that implies that T: PartialOrd must also hold:

fn foo<T: Ord>(t: T) {
  bar(t); // OK: `T: Ord` implies `T: PartialOrd`
}  

fn bar<T: PartialOrd>(t: T) {
  ...
}  

The way that we handle this in the Rust compiler is through a technique called elaboration. Basically, we start out with a base set of where-clauses (the ones you wrote explicitly), and then we grow that set, adding in whatever supertraits should be implied. This is an iterative process that repeats until a fixed-point is reached. So the internal set of where-clauses that we use when checking foo() is not {T: Ord} but {T: Ord, T: PartialOrd}.
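A minimal sketch of this iterative elaboration, with a made-up data model (rustc’s real representation is quite different):

```rust
use std::collections::BTreeSet;

// Grow a base set of where-clauses by repeatedly adding supertraits until a
// fixed point is reached. `supertraits` maps a trait to one of its direct
// supertraits; trait names stand in for full where-clauses.
fn elaborate(base: &[&str], supertraits: &[(&str, &str)]) -> BTreeSet<String> {
    let mut set: BTreeSet<String> = base.iter().map(|s| s.to_string()).collect();
    loop {
        let before = set.len();
        let additions: Vec<String> = supertraits
            .iter()
            .filter(|(sub, _)| set.contains(*sub))
            .map(|(_, sup)| sup.to_string())
            .collect();
        set.extend(additions);
        if set.len() == before {
            return set; // fixed point reached: nothing new was added
        }
    }
}

fn main() {
    // `trait Ord: PartialOrd` (and, in the standard library,
    // `trait PartialOrd: PartialEq`) means the base set {T: Ord} elaborates
    // to {T: Ord, T: PartialOrd, T: PartialEq}.
    let elaborated = elaborate(
        &["Ord"],
        &[("Ord", "PartialOrd"), ("PartialOrd", "PartialEq")],
    );
    println!("{:?}", elaborated);
}
```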

This is a simple technique, but it has some limitations. For example, RFC 1927 proposed that we should elaborate not only supertraits but arbitrary where-clauses declared on traits (in general, a common request). Going further, we have ideas like the implied bounds RFC. There are also just known limitations around associated types and elaboration.

The problem is that the elaboration technique doesn’t really scale gracefully to all of these proposals: oftentimes, the fully elaborated set of where-clauses is infinite in size. (We somewhat arbitrarily forbid cycles between supertraits to prevent this scenario in that special case.)

So we tried in chalk to take a different approach. Instead of doing this iterative elaboration step, we push that elaboration into the solver via special rules. The basic idea is that we have a special kind of predicate called a WF (well-formed) goal. The meaning of something like WF(T: Ord) is basically “T is capable of implementing Ord” – that is, T satisfies the conditions that would make it legal to implement Ord. (It doesn’t mean that T actually does implement Ord; that is the predicate T: Ord.) As we lower the Ord and PartialOrd traits to simpler logic rules, then, we can define the WF(T: Ord) predicate like so:

// T is capable of implementing Ord if...
WF(T: Ord) :-
  T: PartialOrd. // ...T implements PartialOrd.

Now, WF(T: Ord) is really an “if and only if” predicate. That is, there is only one way for WF(T: Ord) to be true, and that is by implementing PartialOrd. Therefore, we can also define the opposite direction:

// T must implement PartialOrd if...
T: PartialOrd :-
  WF(T: Ord). // ...T is capable of implementing Ord.

Now if you think this looks cyclic, you’re right! Under ordinary circumstances, this pair of rules doesn’t do you much good. That is, you can’t prove that (say) u32: PartialOrd by using these rules, you would have to use other rules for that (say, rules arising from an impl).

However, sometimes these rules are useful. In particular, if you have a generic function like the function foo we saw before:

fn foo<T: Ord>() { .. }

In this case, we would set up the environment of foo() to contain exactly two predicates {T: Ord, WF(T: Ord)}. This is a form of elaboration, but not the iterative elaboration we had before. We simply introduce WF-clauses. But this gives us enough to prove that T: PartialOrd (because we know, by assumption, that WF(T: Ord)). What’s more, this setup scales to arbitrary where-clauses and other kinds of implied bounds.
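We can replay this reasoning with a tiny (and deliberately naive) backward-chaining checker. Note that with an empty fact set, the two mutually recursive WF rules would make it recurse forever, which is exactly the cycle problem that tabling addresses:

```rust
// A naive backward chainer over string-encoded predicates (illustrative only;
// it has no cycle detection, so it only terminates when recursion bottoms out
// in a fact).
fn provable(goal: &str, facts: &[&str], rules: &[(&str, &str)]) -> bool {
    // A goal holds if it is a known fact, or if some rule `head :- body`
    // matches it and the body is provable.
    facts.contains(&goal)
        || rules
            .iter()
            .any(|&(head, body)| head == goal && provable(body, facts, rules))
}

fn main() {
    // The environment of `foo`: {T: Ord, WF(T: Ord)}.
    let facts = ["T: Ord", "WF(T: Ord)"];
    // The two directions from the text:
    //   T: PartialOrd :- WF(T: Ord).
    //   WF(T: Ord)    :- T: PartialOrd.
    let rules = [
        ("T: PartialOrd", "WF(T: Ord)"),
        ("WF(T: Ord)", "T: PartialOrd"),
    ];
    // With WF(T: Ord) in scope, T: PartialOrd is derivable.
    assert!(provable("T: PartialOrd", &facts, &rules));
    println!("T: PartialOrd holds in foo's environment");
}
```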

Conclusion

This post covers the tabling technique that chalk currently uses to handle cycles, and also the key ideas of how Rust handles elaboration.

The current implementation in chalk is really quite naive. One interesting question is how to make it more efficient. There is a lot of existing work on this topic from the Prolog community, naturally, with the work on the well-founded semantics being among the most promising (see e.g. this paper). I started doing some prototyping in this direction, but I’ve recently become intrigued with a different approach, where we use the techniques from Adapton (or perhaps other incremental computation systems) to enable fine-grained caching and speed up the more naive implementation. Hopefully this will be the subject of the next blog post!

Planet MozillaWelcome to San Francisco, Chairman Pai – We Depend on Net Neutrality

This is an open letter to FCC Chairman Ajit Pai as he arrives in San Francisco for an event. He has said that Silicon Valley is a magically innovative place – and we agree. An open internet makes that possible, and enables other geographical areas to grow and innovate too.

Welcome to San Francisco, Chairman Pai! As you have noted in the past, the Bay Area has been a hub for many innovative companies. Our startups, technology companies, and service providers have added value for billions of users online.

The internet is a powerful tool for the economy and creators. No one owns the internet – we can all create, shape, and benefit from it. And for the future of our society and our economy, we need to keep it that way – open and distributed.

We are very concerned by your proposal to roll back net neutrality protections that the FCC enacted in 2015 and that are currently in place. That enforceable policy framework provides vital protections to ensure that ISPs don’t act as gatekeepers for online content and services. Abandoning these core protections will hurt consumers and small businesses alike.

As network engineers have noted, your proposal mischaracterizes many aspects of the internet, and does not show that the 2015 open internet order would benefit anyone other than major broadband providers. Instead, this seems like a politically loaded decision made about rules that have not been tested, either in the courts or in the field. User rights, the American economy, and free speech should not be used as political footballs. We deserve more from you, an independent regulator.

Broadband providers are in a position to restrict internet access for their own business objectives: favoring their own products, blocking sites or brands, or charging different prices (either to users or to content providers) and offering different speeds depending on content type. Net neutrality prohibits network providers from discriminating based on content, so everyone has equal access to potential users – whether you are a powerful incumbent or an up-and-coming disruptive service. That’s key to a market that works.

The open internet aids free speech, competition, innovation and user choice. We need more than the hollow promises and wishful thinking of your proposal – we must have enforceable rules. And net neutrality enforcement under non-Title II theories has been roundly rejected by the courts.

Politics is a terrible way to decide the future of the internet, and this proceeding increasingly has the makings of a spectator sport, not a serious debate. Protecting the internet should not be a political, or partisan, issue. The internet has long served as a forum where all voices are free to be heard – which is critical to democratic and regulatory processes. These suffer when the internet is used to feed partisan politics. This partisanship also damages the Commission’s strong reputation as an independent agency. We don’t believe that net neutrality, internet access, or the open internet is – or ever should be – a partisan issue. It is a human issue.

Net neutrality is most essential in communities that don’t count giant global businesses as their neighbors, like your hometown in Kansas. Without it, consumers and businesses will not be able to compete by building and utilizing new, innovative tools. Proceed carefully – and protect the entire internet, not just giant ISPs.

The post Welcome to San Francisco, Chairman Pai – We Depend on Net Neutrality appeared first on Open Policy & Advocacy.

Planet MozillaPhoton Engineering Newsletter #15

I’m back from a vacation to see the eclipse, so it’s time for Newsletter #15! (It’s taking me some time to get caught up, so this update covers the last 2 or so weeks.)

As noted in my previous update, Mike and Jared took over Newsletter duties while I was out. If you somehow missed their excellent updates – Newsletter #13 and Newsletter #14 – please check them out. (Go ahead, I’ll wait.)

We’re getting very close to Firefox 57 entering Beta! Code merges to the Beta on September 20th, and the first Beta release should come on the 26th. The Photon project is targeting the 15th to be ready for Beta, just to make sure there’s a bit of time to spare. We’ll be continuing to fix bugs and improve polish during the Beta, but the type of fixes we make will begin to scale back, as we focus on making sure 57 is a rock-solid release. This means becoming increasingly risk-averse – there will always be bugs (and more releases to fix them in), so we very much want to avoid causing new regressions shortly before 57 ships to everybody. Last-minute firedrills are no fun for anyone. But we’re in really great shape right now – we’re done with feature development, are already shifting to more minor fixes, and there isn’t anything really scary waiting to be fixed.

Recent Changes

Menus/structure:

Animation:

Preferences:

  • One last P1 bug to feature complete!
  • The team will move to help out Onboarding once all P1s and important P3s are fixed.

Visual redesign:

Onboarding:

Performance:


Footnotes
