Planet Mozilla: A response to “Net Neutrality. No big deal.”

I recently watched this video titled “Net Neutrality. No big deal.” by Bryan Lunduke.

I watched this video because, while I am in favour of Net Neutrality, and concerned about the impending repeal of Net Neutrality regulations in the United States, I consciously try to avoid being stuck in an echo chamber of similar views, and try to expose myself to opposing viewpoints as well; particularly when such opposing viewpoints are held by a figure in a community that I respect and identify with (in Bryan’s case, the Linux and free software community).

I found the video interesting and well-presented, but I didn’t find Bryan’s arguments convincing. I decided to write this post to respond to two of the arguments Bryan makes in particular.

The first argument I wanted to address was about the fact that some of the largest companies that are lobbying to keep Net Neutrality rules in place – Google, Netflix, and Microsoft – are also supporters of DRM. Bryan argues that since these companies support DRM, which is a threat to a free and open internet, we should not take their support for Net Neutrality (which they claim to also be motivated by a desire for a free and open internet) at face value; rather, they only support Net Neutrality regulations because they have a financial incentive for doing so (namely, they run high-bandwidth streaming and similar services that are likely to be first in line to be throttled in a world without Net Neutrality protections).

I don’t dispute that companies like Google, Netflix, and Microsoft support Net Neutrality for selfish reasons. Yes, the regulations affect their bottom line, at least in the short term. But that doesn’t mean there aren’t also good reasons for supporting Net Neutrality. Many organizations – like the Electronic Frontier Foundation, whom I have no reason to suspect to be beholden to the pocketbooks of large tech companies – have argued that Net Neutrality is, in fact, important for a free and open internet. That Netflix supports it for a different reason doesn’t make that any less the case.

I also think that comparing DRM and the lack of Net Neutrality in this way confuses the issue. Yes, both are threats to a free and open internet, but I think they are qualitatively very different.

To explain why, let’s model an instance of communication over the internet as being between two parties: a sender or producer of the communication, and its receiver or consumer. DRM exists to give the producer control over how the communication is consumed. There are many problems with DRM, but at least it is not intended to interfere with communication in cases where the producer and consumer agree on the terms (e.g. the price, or lack thereof) of the exchange [1].

By contrast, in a world without Net Neutrality rules, an intermediary (such as an ISP) can interfere with (such as by throttling) communication between two parties even when the two parties agree on the terms of the communication. This potentially opens the door to all manner of censorship, such as interfering with the communications of political activists. I see this as being a much greater threat to free communication than DRM.

(I also find it curious that Bryan seems to focus particularly on the standardization of DRM on the Web as being objectionable, rather than DRM itself. Given that DRM exists regardless of whether or not it’s standardized on the Web, the fact that it is standardized on the Web is a good thing, because it enables the proprietary software that implements the DRM to be confined to a low-privilege sandbox in the user’s browser, rather than having “full run of the system” as pre-standardization implementations of DRM like Adobe Flash did. See this article for more on that topic.)

The second argument Bryan makes that I wanted to address was that Net Neutrality rules mean the U.S. government being more involved in internet communications, such as by monitoring communications to enforce the rules.

I don’t buy this argument for two reasons. First, having Net Neutrality rules in place does not mean that internet communications need to be proactively monitored to enforce the rules. The role of the government could very well be limited to investigating and correcting violations identified and reported by users (or organizations acting on behalf of users).

But even if we assume there will be active monitoring of internet communications to enforce the rules, I don’t see that as concerning. Let’s not kid ourselves: the U.S. government already monitors all internet communications it can get its hands on; axing Net Neutrality rules won’t cause them to stop. Moreover, users already have a way to protect the content of their communications (and, if desired, even the metadata, using tools like Tor) from being monitored: encryption. Net Neutrality rules don’t change that in any way.

In sum, I enjoyed watching Bryan’s video and I always appreciate opposing viewpoints, but I didn’t find the arguments that Net Neutrality is not a big deal convincing. For the time being, I continue to believe that the impending rollback of U.S. Net Neutrality rules is a big deal.

Footnotes

1. I am thinking here of cases where the content being communicated is original content, that is, content originated by the producer. I am, of course, aware that DRM can and does interfere with the ability of two parties to communicate content owned by a third party, such as sending a movie to a friend. To be pedantic, DRM can even interfere with communication of original content in cases where such content is mistakenly identified as belonging to a third party. I’m not saying DRM is a good thing – I’m just saying it doesn’t rise to the same level of threat to free communication as not having Net Neutrality protections does.


Planet Mozilla: Free software in the snow

There are an increasing number of events for free software enthusiasts to meet in an alpine environment for hacking and fun.

In Switzerland, Swiss Linux is organizing the fourth edition of the Rencontres Hivernales du Libre in the mountain resort of Saint-Cergue, a short train ride from Geneva and Lausanne, 12-14 January 2018. The call for presentations is still open.

In northern Italy, not far from Milan (Malpensa) airport, Debian is organizing a Debian Snow Camp, a winter getaway for developers and enthusiasts in a mountain environment where the scenery is as diverse as the Italian culinary options. It is hoped the event will take place 22-25 February 2018.

Planet Mozilla: Reps Weekly Meeting Nov. 23, 2017

Reps Weekly Meeting Nov. 23, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone....

Planet Mozilla: Non-coding Volunteers, Mission-driven Mozillians, Community Coordinators & SUMO

Hey there, SUMO Nation!

On the weekend of 18th & 19th of November 2017, SUMO representatives took part in a discussion and brainstorming session organized by the Open Innovation team. This post is a summary of what was discussed (including a few definitions to maintain clarity) and a call to action for all of you who express interest in participating in the next stage of community building across Mozilla.

Please read this post carefully in order to avoid confusion or surprises in the coming months. Thank you!

Summary

The introductory part allowed everyone invited (representatives from both staff and community for the Reps Council, the Indian community, Localization, and SUMO) to explain their project’s goals and current community situation. There was also a presentation regarding the diversity & inclusion status at Mozilla.

Later on, we moved on to reviewing external cases and digging deeper into the main problem statement: the way we try to organize communities at the moment is not the most efficient or the best use of everyone’s time, energy, and other resources. We want to find a better way with your help and make it work for everyone’s benefit.

Some of the big issues brought up during the session that led to this statement were: inconsistency, competition and conflict, accumulation of power, and a lack of equal opportunities and oversight.

Thus, our goal was to come together as a group to create and align around a set of principles for developing healthy and cohesive volunteer leadership structures for mission-driven Mozillians.

We agreed to reach this goal through alignment on challenges and opportunities facing volunteer leadership structures at Mozilla today, developing insight into a shared set of principles to guide Mozilla’s support of its community in the future, and engaging a broader audience (this means you) as the next step.

The second day was focussed on revisiting the discussion from the first day and then working together on generating a set of principles and planning the involvement of a larger audience (this includes you as well).

It is very important to note here that all of the information and planning connected with this project is meant to apply and appeal to Mozillians interested in coordinating community activities, without changing or compromising the more “technical” or “task-based” aspects of “getting stuff done” as a Mozillian.

In other words, do not expect any large changes in the way our Support Forums, Social Support or Localization work because of this project. You should expect more opportunities to engage with the wider Mozillian community in your area (of activity or geographical) – and be ready to get involved if you want to.

A Few Definitions

Non-coding volunteers & Mission-driven Mozillians

When mentioning the above, we talked about people who:

  • Regularly contribute to a number of activities in support of Mozilla’s mission
  • Are highly invested in Mozilla as an organization and a mission
  • Contribute in multiple areas (either sequentially or simultaneously)
  • (Usually) do not contribute code to Mozilla’s products and technologies (or it’s not their primary focus)
  • (Usually) have multiple social connections to other contributors

Their activities include: evangelizing, teaching, advocating, localizing, documenting, community building and doing testing.

Leadership & Community Coordination

During the meeting, for lack of a better catch-all term, we agreed to talk about “leadership” and “leaders” as our focus. We all agreed that in the future it would be much better to use a less “loaded” term. “Community coordination” and “Community Coordinators” are friendly replacement suggestions from the SUMO team. You’ve seen them here first! ;-)

Call To Action

We spent the whole weekend discussing the above and working on synthesizing our shared experiences, questions, and ideas.

Now it’s your time to get involved and take part in creating the future of Mozilla’s global community.

Please refer to this Mozilla Discourse thread and Question document to get started.

The deadline for this project is the 11th of December.

Thank you & see you on Discourse!

Michał

 

Credits

Huge thanks to:

– Konstantina & Martyna for all the logistics

– Lucy, Emma, Sukhmani, Ruben & George for preparing & coordinating the talks and brainstorms

– Madalina, Paul & Simone for preparing & representing SUMO

– the Berlin MozSpace for its usual warm & welcoming atmosphere (despite and against the fiercely autumnal weather outside)

Planet Mozilla: XUL, Mac Touchbar, BlueGriffon

The title of this article says it all. First attempt, works fine, trivial to add to any XUL window. This is code I wrote for Postbox, used here with permission.

BlueGriffon with Mac Touchbar

Planet Mozilla: NoScript 10.1.2: Temporary allow all and more

v 10.1.2
=============================================================
+ Added "Revoke temporary permissions" button
+ Added "Temporarily allow all this page" button
x Simplified popup listing, showing base domains only (full
  origin URLs can still be entered in the Options window to
  further tweak permissions)
x Fixed UI not launching in Incognito mode
x Fixed changing permissions in the CUSTOM preset affecting
  the DEFAULT permissions sometimes
x Fixed UI almost unusable in High Contrast mode
x Fixed live bookmark feeds blocked if "fetch" permissions
  were not given
x Fixed background requests from other WebExtensions being
  blocked

Update

Oh, and in case you missed it (sorry, how could you not, since I haven't managed to write any documentation yet?), Alt+Shift+N is the convenient keyboard shortcut to open #NoScript10's permission management popup :)

Planet Mozilla"What are you struggling with?"

This week I had a discussion with my open source students in order to get a sense of where people were at with their work. One of the questions I had for them was, "What are you struggling with, what are you finding tricky?"

Here's some of what I heard from a class of 40 undergrad students who have been working on open source with Mozilla since September:

1. git

Specifically things like keeping their trees up-to-date, rebasing, squashing, etc. This didn't surprise me. Getting comfortable with common, local git workflows (add, commit, branch, checkout) is one thing; but becoming comfortable with using remotes, tracking fast moving upstream branches, re-aligning work you've done with modifications to the main project--it's a lot when you're just starting out. Unfortunately, the way open source uses GitHub, you have to deal with all of it at once.

I need to spend a bunch of time showing them how to accomplish things I'm seeing in their pull requests. I think it's easier to receive this when you've got a need for it.

2. finding bugs

I hear this one a lot, and it's hard to solve in the general case. On the one hand, there are literally millions of open issues on GitHub. For example, here's 1.6 million open issues with the word fix in the title. GitHub published numbers for 2017 indicating that ~69 million issues were closed this year alone.

GitHub's same numbers from 2017 indicate that there are over 500K students on GitHub (I bet the number is a lot higher, since most of my students don't self-identify as such in their profiles). How is it, then, that it is so hard to match people who want to learn, contribute, and build things collaboratively with potential work in existing open source projects? My teaching for the past decade can basically be summed up as "find bugs for all my students." Some terms I do better than others, and it's always a challenge.

What makes a bug "good" for one student will also make it "no good" for another, whether due to issues of timing, current technical ability, fit with academic goals, etc. To get this right, you really need to take a hybrid approach here: one foot in the open source project, another in the classroom, and help to connect people to projects by knowing something about both. Much like trying to sell your house by putting a "For Sale" sign in the window, it's really hard to advertise project work and then magically have the right people locate it. Similarly, students randomly scrolling through issues rarely achieve a favourable outcome. I've had way more success by talking to people on both sides, and then matching them. Mozilla's Jason Laster has been excellent at this during the past months. "I've got a student who wants to work on something like X," I'd tell him. "OK, I've got an idea, I'll file a bug she can work on." Amazing.

Another issue that comes up a lot is that not all communities can or want to engage with new contributors. This is fair, and to be expected as teams deal with various release cycles. This term alone I've been told by three groups I've approached that they didn't want me to bring students into their project area because they couldn't afford to spend the time mentoring. I think projects are getting better about knowing their limits, and being vocal about what they can and can't do.

I'm also experiencing an interesting attitude with my students this term: "I want to write code," I'm hearing a lot of them say. Many view adding new code as more important (or desirable) than debugging, refactoring, or removing existing code. I tend to prefer the latter myself, and think they would do better if they spent more time reading and less time writing code (we all would, to be honest). Part of this might be discomfort with #3.

3. how to read code and code history

There is such an art to this, and almost no one is taught to do it. We learn to write code, not read it. Part of the trick is knowing where to start, what to ignore, what's critical, and how to progress, since you don't start at main() and work your way to the end. Code has more in common with Choose Your Own Adventure books than it does with literature.

Reading code is almost always goal oriented. Sometimes the goal might be enjoyment, but that's not usually the case in my experience. Rather, you're usually trying to fix something, or understand how systems work. Unlike a novel that one might devour, you need to interrogate code. You can't trust it. It's almost certainly telling you lies, and often in the documentation! Something is failing. Something is doing what it shouldn't. You have to approach it carefully, from a distance, and be willing to change your mind as you uncover more facts.

To teach this, I think you have to have some goals, whether made-up or real, things you want to understand or fix. I'll look for a big example bug to work on and show them how I'd progress through the code, how to search for things, how to deal with code you don't understand, how to move out from an arbitrary point of understanding toward something more general.

Sometimes I've added a feature with my students in order to show how this works, and other times I'll take apart some existing code to try and figure out how it works. Both have advantages and disadvantages. One thing I might do this time is compare how a few different browsers implement some common feature, and go spelunking through Gecko, WebKit, and Chrome, learning as we go.

4. how to ask questions

We all need help. It sounds obvious, but it's not. It feels like we're the only one who doesn't understand what's going on, especially in a big open source project where there's a flurry of activity all around us from people who seem to be so much smarter and more talented than us. It's easy to examine individuals statically vs. in motion along a timeline: you are progressing, you are learning, you are moving forward, some people ahead of you, some behind. The rate doesn't matter. Your current position doesn't matter. That you're moving forward is all that matters.

And if your progress forward is being impeded by some issue you don't understand, you need to be brave and ask for help. If we can normalize this, it makes it easier. One of my students put it this way on Slack:

knowing how to ask questions in open source is something I feel like I need to get better at. In a way it's good to know other people in the class feel the same way

So how do you ask a question? Respectfully. Begin by respecting yourself. Don't devalue yourself or self-deprecate: you aren't dumb because you don't know something, so don't imply (to yourself or others) that you are. Next, respect the people you're asking. You've got something you need to know in order to progress--how much of the research could you do on your own? You're not respecting other peoples' time when you don't do any work and expect others to do it all for you. But if you've spent time wrestling with a problem, and come out the loser, it's wise to ask for help. Finally, when that help comes, you should be thankful and give respect to the person who has taken the time to help you. This can be as simple as an acknowledgement of the impact they've had, how they've helped you move forward. It doesn't need to be big, but it needs to happen.

5. understanding dev environments

One of my students kept hammering on this point, saying: "We're taught a dozen programming languages, but not how to use all these environments." It's true, learning new programming languages is easy in comparison to how we use them within ecosystems of tools and frameworks. Also, not everything about a dev environment is something you can see in code: lots of things are invisible practices we have as a community. It can be hard to learn these things on your own, because having access to a tool doesn't necessarily mean you have access to the knowledge of how it should be used.

This is why I encourage my students to get involved in the virtual community spaces a project uses. Whether that's Slack, irc, a mailing list, a weekly call--whatever it is, we need to have the ability to watch people use their tools, and observe them doing things we haven't seen before. Just as chefs travel the world to work in great kitchens, and young artists apprentice at the studios of established artists, it's a great idea to join these spaces, and observe. No one will think it odd that you're listening vs. talking. You'll see lots of people ask for help, talk about problems they're having, and also reveal how they work, even if indirectly.

In much rarer cases, they'll show you directly. This is why I really love what Mike Conley has done with The Joy of Coding. As I write this, there are 120 episodes available, where Mike works on Firefox code, and talks about what he's doing as he does it. What an incredible role model Mike is, and what a gift to the community. Webpack's Sean Larkin is another guy I think does a great job here, inserting himself into the learning process of the community.

6. overcoming the feeling that you need permission to do things

"I noticed something confusing in the docs, should I file an issue?" It takes time to gain the confidence necessary to move from "I must be wrong" to "this seems wrong." These are two similar sentiments, but where one one assumes the limitations of the self, the other begins with the belief that everything is broken, including me.

I think it was easier to overcome this 20 years ago, even 10 years ago, before we all became digital consumers. Open source functions via the web, and the web was built on a foundation of permissionless publishing. I'm writing this now without first getting permission. The web enables us, all of us, to create content and put it beside content from individuals, institutions, governments, and corporations large and small. As such, the web is malleable, editable.

This means that you can and should work on things in projects on the web. You can comment on issues where you have something to say. You can draw attention to shortcomings in documentation, code, tests, or designs. Even better, you can fix these shortcomings. You can insert your ideas, your passion, your gifts, yourself into the web. You don't always need to, nor is it wise to give too much of yourself. But you can, and you should know that you have this power and right. You're important. Your ideas matter. You're welcome.

The other side of this permissionless approach is that no one will give you permission: no one will invite you in. It takes some courage. You have to decide you're going to belong somewhere, then start being active. You have to start saying we when you talk about the project and the code. You have to start believing that you're part of it. Because by virtue of the fact that you're working on it, you are part of it. You don't need permission, or if you still feel like you do, consider this to be your permission to get started.

Just tell them "Dave said I could."

Planet Mozilla: Let’s talk about side projects in software

Last week, I was on a panel about tech careers at the University of Ottawa.  One of the topics that came up again and again was side projects.  What should students be doing outside of work and school projects to showcase their skills for employers?   How many commits should they have in their GitHub activity graph? A lot of the students talked about the very long hours they were working to enhance their skills portfolio. Several of them looked very stressed out regarding this situation. One student mentioned that this was especially difficult since they were working part-time during the school year.

From WOCintechchat stock photos. License: Creative Commons Attribution 2.0 Generic (CC BY 2.0)

A few weeks ago, a colleague I worked with at a previous company asked me if I was interested in talking about an opportunity at the new company where he was working.  I responded that I was happy at my current position and said thanks for thinking of me.  I recently volunteered to become a career mentor for new Canadians and I hadn’t heard of this company before, so I checked out their careers page, thinking they might have some opportunities for people in the program. One of their job descriptions had something like this as a _requirement_ for the position.*

Passionate about software – coding in your spare time

When I see a requirement like this on a job description, I immediately think several things about the company’s culture:

  1. They don’t provide time for employees to learn new skills on their own during office hours, or a budget for external training courses.
  2. They are not interested in hiring people who have caregiving responsibilities outside of work such as taking care of kids or elderly relatives.
  3. People with chronic illnesses who may not be able to work very long hours will not succeed in this environment.
  4. They probably have high employee turnover, possibly due to burnout.

In summary, diversity and inclusion, and the health of your employees, are not a priority.

I know a lot of developers who do extraordinary work, and work 9-5.  They spend their day getting work done, then go home and spend time with their families and friends.  They have hobbies other than learning the newest shiny framework.  They don’t come in every morning exhausted because they were up until 2am debugging a problem that would be revealed easily to fresh eyes in the morning’s light.

The tech industry has glamourized constant coding, sleep deprivation and a GitHub activity profile as green as a spring meadow.  The young students who are entering our industry have observed the behaviour of their elders and they are emulating us before they start their first full-time job.  The end result is that our industry will continue to fail at becoming more inclusive.

Photo by Sylwia Pietruszka on Unsplash

There are many reasons for your GitHub activity graph not to be the exalted rectangle of wall to wall green.

  1. You don’t have a GitHub profile at all.  Many companies consume open source software, but many do not contribute at all and their software is hidden from public view.
  2. You are a senior engineer who reviews code and mentors new engineers more than you write new code yourself.
  3. You take vacations, are on parental or elder-care leave or are taking a mental break due to burnout.
  4. You have multiple jobs due to financial obligations and don’t have time to code in your spare time.

If you’re on the bus sitting next to an accountant, do you expect them to do your taxes for you? No.  If you’re a doctor, do you fix broken legs on your way home from work for random people? No.   Why do we have the cultural expectation that you should be working for free as a software developer?

I’ve read enough biographies of tech CEOs/executives to understand that is the path they chose and it is filled with long hours, sleep deprivation and little time with their families.  But for the average developer who is in high demand compared to other professions, why sacrifice so much?

If you are a leader in your organization, you can set the tone for this by giving your employees time to scale up and learn new skills and set reasonable schedules for deliverables.  You can leave work at a reasonable hour, and not send emails at all hours of the night.  Let everyone know that you are leaving early to watch your kid’s school play, or take your cat to the vet.  People emulate the behaviour of their leaders.  If you want to have a diverse and inclusive workforce, and build better products because of the unique perspectives they bring to your business, you have to build a culture that supports them.

Writing code is fun.  Building something to fix a complex problem, watching the tests finally go green and landing in production is one of my favourite things to do. It is addictive, and you always want to do more.  But software is ephemeral. You write it, it’s useful for a while, and then it gets replaced with something new. The time you have with the people you love is limited too.

In the words of Gord Downie and the Tragically Hip,

“No dress rehearsal
This is our life”

The Tragically Hip – Ahead By A Century 


* I’m not trying to say that this particular company is especially egregious; I see this listed as a requirement all the time.

Note: I’ve spent a lot of time learning new things outside of work.  But it should not be a requirement for hiring. Yes, I realize that I work in open source, which relies on the work of unpaid contributors, and some of the “always be coding” etc. mantras stem from open source philosophies.  Ashe Dryden has a fantastic post on this very topic.

 


Planet Mozilla: Online shopping: Autofill your credit card info safely with Firefox

You’re doing some online shopping. You’re filling out yet another payment form. Somehow the little box read your mind and already knows what you’re about to type into it. How … Read more


Planet Mozilla: VR Hackathon at FIXME, Lausanne (1-3 December 2017)

The FIXME hackerspace in Lausanne, Switzerland is preparing a VR Hackathon on the weekend of 1-3 December.

Competitors and visitors are welcome, please register here.

Some of the free software technologies in use include Blender and Mozilla VR.

Planet Mozilla: Web XR Meetup London

Web XR Meetup London Talks @WebXR Meet-up London (November 2017 edition)

Planet Mozilla: Feel the Speed of the new Firefox

The new Firefox is here and it’s amazing for mobile devices. It’s fast, it’s beautiful and it’s optimized for those times when you need the world at your fingertips. It’s … Read more


Planet Mozilla: The Joy of Coding - Episode 121

The Joy of Coding - Episode 121 mconley livehacks on real Firefox bugs while thinking aloud.

Planet Mozilla: Good list, and good to help Santa know which treasures you’d like most.

Good list, and good to help Santa know which treasures you’d like most. Nanoleaf Aurora is especially marvelous, so much so that I have two of them at my place. (If I may, I suggest hinting to Santa for the new Rhythm module, which lets Aurora respond to music and sound.)

Oh, and thanks for the great Holiday Buyer’s Guide too!

Happy Holidays,
David

Planet Mozilla: Photon Project on Mobile

Firefox 57 is going to look and feel very different. The new Firefox Quantum feels faster and actually loads faster too.

To reflect all of the amazing changes that were subtly happening behind-the-scenes, we wanted to update the user facing side of Firefox too. So, the Firefox UX team started work on the Photon Project.

For the mobile side of things, Bryan, Carol, and I defined our own scope within the Photon Project. Our goal was to unify the Firefoxes across all of the systems and devices that we supported. We weren’t trying to be identical; we just wanted to be more similar. We wanted to help users take advantage of the different platforms whilst allowing our own design values to shine through.

Cross-platform goodness

It’s important for our users to get a consistent, familiar experience across all devices. If you choose to rely on Firefox on one platform, you should get the same dependability on any other platform you use Firefox on.

One of the issues I’ve struggled with while designing for 57 is defining the new design style for Firefox across platforms. It’s an interesting and challenging design problem — Carol Huang, Visual Design Lead, Taipei

During my time here working on Firefox Mobile products, I’ve never seen us look as consistent with ourselves as we do now. On desktop, tablet, and mobile devices, there’s a very strong nod towards the same core design values that we all love and share. This carries over into what we think Firefox should look like and feel like.

Before Photon

As Apple and Google remain committed to pushing their design guidelines and aesthetics forward in iOS and Android, we had to keep up too. Firefox had to feel like it belonged alongside the other top apps in the store. But instead, we were starting to feel out of place and sluggish.

For us, smart phone devices make up the bulk of our user base. So that’s where we started. But we knew we couldn’t leave tablets behind either. By only targeting a few key screens, we made it easier on ourselves and we really honed in on the areas of high visibility.

Firefox for iOS had not seen any significant revisions since its introduction, and its original design was not flexible enough to absorb the new features we had already added and were planning to add. In some ways, the mobile UI limitations mimicked those of the desktop product — Bryan Bell, Staff Product Designer, California

After Photon

We concentrated on the most common screens that a Firefox user would (typically) interact with. This decision to scope down the initial iteration of the project really helped us get things moving.

Lucky for us, we didn’t have to do much guessing here either. With a wealth of existing user research and knowledge, we had a pretty good idea of what these key screens might be. Some examples of these common interactions and workflows were things like opening a webpage from an external app, typing a search term/URL, and opening the menu.

We started by aligning our iconography and introducing a new, vibrant colour palette to the UI. You can see this in multiple places of the product, such as the text highlight and the loading bar. The new gradient loading animation is one of my personal favourites because I think it’s an important part of our charm that gives us that extra bit of delight — Carol Huang, Visual Design Lead, Taipei
It’s essential that as the desktop version of Firefox evolves, the mobile versions mature along with it. The Photon-inspired redesign of Firefox for iOS focused on making sure the new organization of the desktop UI translated to the iPhone, so things like send to device and bookmarking worked the same way on both platforms — Bryan Bell, Staff Product Designer, California

The Curve is gone

For a while now, we’ve hung our hat on the curve as our identifying feature. When we first launched Firefox for iOS, it was there too. This signature look of ours separated us from other browsers (and even performed well in “blur-your-eyes” or “over-the-shoulder” tests). But at this point it felt dated and seemed kind of unnecessary. It also created a whole different set of UI challenges for ourselves too (think toolbar customization, PWA theming support, additional actions, etc…). It wasn’t very future proof.

Beyond

As we know, these things are rarely “done”, if ever. But right now, I must say that I’m just incredibly proud of how far we’ve come and the work we’ve accomplished as a team. We had a specific vision in mind and we took big steps towards achieving that vision.

Firefox Quantum is looking great, and we’re keeping both eyes on the goals. If you have some critique or feedback you’d like to give or you just love the new Firefox, we’d definitely like to hear it. There will be lots of opportunities to iterate and improve, and we’ll continue moving forward, together.



Planet Mozilla: These Weeks in Firefox: Issue 28

Highlights

Screenshot captions: "So many new tabs opened." / "Too many New Tabs to contain!" / "Flash Click-to-Play UI" / "Don’t"

  • The Test Pilot website is now Photonized! This was an intense sprint where we touched a huge percentage of the code, changing the styles and rearranging the directory structure of our React components so things are better componentized.
New Test Pilot Design

The site was ready the day before the 57 launch, and it took a ton of effort from everybody.

Friends of the Firefox team

Project Updates

Add-ons

Activity Stream

  • Running 2 experiments on release channel with 1% of new users and 2% of existing users getting old Tiles about:newtab.
  • Added tippy top rich icon service to show icons that are better quality but only advertised by the site to iOS devices (including twitch to avoid thumbnailing…)

Improved tile icons

Browser Architecture

  • Sync and storage team have completed a roadmap review, look for more details in our next newsletter.
  • XBL removal is proceeding. No more XBL bindings in mobile!

Firefox Core Engineering

Form Autofill

  • Credit card autofill is enabled by default on Fx58 beta 5, for users using en-US build and located in the US.
  • We’re ready to increase the availability of Address Autofill on Fx57 from 1% to 20% (Quantum release continues to get better! \o/)
  • Implemented the credit card updating mechanism, including the door hanger and deduplication rules.
  • Fixed some site compatibility issues for credit card expiration dates.
  • Fixed bugs in the suggestion dropdown footer and preferences UIs.
  • Localization push: access keys in autofill doorhangers are now localized, implemented a parser of libaddressinput for knowing which address fields in preferences should be visible in different countries.
  • Refactored FormAutofillHandler to support multiple section mechanism.

Photon

Structure

  • Paolo updated the identity popup cert/security subview to the new photon styling.

New Photon-Style Identity Popup

Animation

  • Sam is working on extracting/polishing some of the SVG utilities we bodged together for the SVG animation work. Includes some SVGO plugins.

Visuals

Privacy/Security

Search and Navigation

Address Bar & Search

Places

Sync / Firefox Accounts

Test Pilot

  • This sprint is focused on updating our dependencies which are very out of date, and cleaning up some old cruft.

Web Payments

  • Made the dialog contents hackable from file: URIs for quick iteration
  • Implemented the first Custom Element (currency-amount)
  • Finishing up the store for dialog state that Custom Elements will listen to
  • Starting to implement the UX spec

Planet Mozilla: Happy bmo tiny push day!

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1418204] Update Firefox logo
  • [1419541] Fix improper quoting on socorro lens URL

discuss these changes on mozilla.tools.bmo.

Planet Mozilla: Announcing Rust 1.22 (and 1.22.1)

The Rust team is happy to announce two new versions of Rust, 1.22.0 and 1.22.1. Rust is a systems programming language focused on safety, speed, and concurrency.

Wait, two versions? At the last moment, we discovered a late-breaking issue with the new macOS High Sierra in 1.22.0, and for various reasons, decided to release 1.22.0 as usual, but also put out a 1.22.1 with the patch. The bug is actually in Cargo, not rustc, and only affects users on macOS High Sierra.

If you have a previous version of Rust installed via rustup, getting Rust 1.22.1 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.22.0 and 1.22.1 on GitHub.

What’s in 1.22.0 and 1.22.1 stable

The headline feature for this release is one many have been anticipating for a long time: you can now use ? with Option<T>! About a year ago, in Rust 1.13, we introduced the ? operator for working with Result<T, E>. Ever since then, there’s been discussion about how far ? should go: should it stay only for results? Should it be user-extensible? Should it be usable with Option<T>?

In Rust 1.22, basic usage of ? with Option<T> is now stable. This code will now compile:

fn try_option_some() -> Option<u8> {
    let val = Some(1)?;
    Some(val)
}
assert_eq!(try_option_some(), Some(1));

fn try_option_none() -> Option<u8> {
    let val = None?;
    Some(val)
}
assert_eq!(try_option_none(), None);

However, this functionality is still a bit limited; you cannot yet write code that mixes results and options with ? in the same function, for example. This will be possible in the future, and already is in nightly Rust; expect to hear more about this in a future release.
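In the meantime, mixing the two in one function still needs an explicit conversion step. Here is a minimal sketch (the function name, config shape, and error message are made up for illustration) that uses the long-stable Option::ok_or_else to turn an Option into a Result before applying ?:

use std::collections::HashMap;

// `config.get` yields an Option; ok_or_else converts it into a Result,
// after which `?` propagates the error as usual.
fn port_from_config(config: &HashMap<String, String>) -> Result<u16, String> {
    let raw = config.get("port").ok_or_else(|| "missing port".to_string())?;
    raw.parse::<u16>().map_err(|e| e.to_string())
}

Going the other way, Result::ok drops the error and gives you an Option, so result.ok()? works inside a function that returns Option<T>.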

Types that implement Drop are now allowed in const and static items. Like this:

struct Foo {
    a: u32,
}

impl Drop for Foo {
    fn drop(&mut self) {}
}

const F: Foo = Foo { a: 0 };
static S: Foo = Foo { a: 0 };

This change doesn’t bring much on its own, but as we improve our ability to compute things at compile-time, more and more will be possible in const and static.

Additionally, some small quality-of-life improvements:

Two recent compiler changes should speed up compiles in debug mode. We don’t have any specific numbers to commit to with these changes, but as always, compile times are very important to us, and we’re continuing to work on improving them.

T op= &T now works for primitive types, which is a fancy way of saying:

let mut x = 2;
let y = &8;

// this didn't work, but now does
x += y;

Previously, you’d have needed to write x += *y in order to de-reference, so this solves a small papercut.

Backtraces are improved on MacOS.

You can now create compile-fail tests in Rustdoc, like this:

/// ```compile_fail
/// let x = 5;
/// x += 2; // shouldn't compile!
/// ```

Please note that these kinds of tests can be more fragile than others, as additions to Rust may cause code to compile when it previously would not. Consider the ? with Option<T> example announced above: that code would fail to compile on Rust 1.21, but compile successfully on Rust 1.22, causing your test suite to start failing.
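To make that concrete, here is a minimal, hypothetical doctest (the item and function names are invented) whose compile_fail block would pass on Rust 1.21, where ? on Option is rejected, and would start failing on 1.22, where the snippet compiles:

/// ```compile_fail
/// fn first_byte(s: &str) -> Option<u8> {
///     // `?` on an Option: a compile error on 1.21, valid code on 1.22
///     let b = s.bytes().next()?;
///     Some(b)
/// }
/// ```
fn fragile_doctest_example() {}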

Finally, we removed support for the le32-unknown-nacl target. Google itself has deprecated PNaCl, instead throwing its support behind WebAssembly. You can already compile Rust code to WebAssembly today, and you can expect to hear more developments regarding this in future releases.

See the detailed release notes for more.

Library stabilizations

A few new APIs were stabilized this release:

See the detailed release notes for more.

Cargo features

If you have a big example to show your users, Cargo has grown the ability to build multi-file examples by creating a subdirectory inside examples that contains a main.rs.
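As a minimal sketch of that layout (the example name demo and its contents are invented for illustration), the subdirectory could look like this, with Cargo treating the directory name as the example name:

// examples/demo/main.rs -- the entry point Cargo discovers
mod helpers;

fn main() {
    helpers::greet();
}

// examples/demo/helpers.rs -- a second file, which a single-file example could not have
pub fn greet() {
    println!("hello from a multi-file example");
}

Assuming that layout, cargo run --example demo should build and run it like any other example.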

Cargo now has the ability to vendor git repositories.

See the detailed release notes for more.

Contributors to 1.22.0 and 1.22.1

Many people came together to create Rust 1.22. We couldn’t have done it without all of you. Thanks! (and Thanks again!)

Planet Mozilla: PI-Requests Weekly, Issue #1

Update for the week ending Friday, November 17, 2017.

New Requests

  • Private Browsing Bug Raid
  • PDFium printing on Windows
  • Update test failures in 58 Beta
  • Accessibility inspector Developer Tools Tab
  • about:kill accounts
  • Firefox 57 issues related to CJK characters in profile paths

Metrics

Requests               This Week   This Month   This Year
New                    6           34           277
Responded              4 (66%)     32 (94%)     267 (96%)
Responded within 48h   4 (66%)     26 (76%)     236 (85%)
Avg. Response Time     18 hours    35 hours     23 hours

Planet Mozilla: Storybook + Test Pilot = ❤

This is Storybook.

As a web site, Test Pilot can look deceptively simple. We offer experimental features for Firefox, along with an archive of past experiments. Experiment detail pages give details on how each feature works, what we measure, who’s working on it, and how to get involved. A nice, simple web site that definitely isn’t an application.

Oh, but there’s also the button to enable an experimental feature. It shows up if you’ve already installed the Test Pilot add-on. If you haven’t, then we offer to install both Test Pilot and the experiment in one click. Unless, of course, you aren’t using Firefox. Or, if you aren’t using a supported version of Firefox. Then, we invite you to install Firefox or upgrade.

Enabling an experiment turns out to be a little complicated.

Suffice to say there are many variables affecting what we display. In working on the site, we need to see every variation. But, it can be laborious to arrange things such as the current date & time, browser vendor & version, locale, and add-on install state.

Luckily, we don’t have to do that. Our site is built with React components — what they display is based on their parameters. On the live site, these parameters come from real sources like your browser & site content. But, we can also supply artificial data.

This is where Storybook comes in: Storybook enables us to rig up components with predefined scenarios — “what if” stories akin to unit tests. But, instead of a pass/fail report, we see how things look & act. And, since these stories are expressed in code, everything is repeatable.


Better yet, Storybook is based on Webpack and Hot Module Replacement. Changes appear quickly: No rebuilds, no reloads, no tedious interaction. Just edit files and watch.

Before and after stories of a component that got a recent facelift.

Storybook can also produce static snapshots. This is handy for comparing different points in project history and changes proposed in Pull Requests.

In fact, we’ve added Storybook to our build process: Alongside linting & tests, a Storybook snapshot is built and published to an Amazon S3 based website for almost every commit to our Github repository. We even post comments every time a related pull request is updated.

Turns out writing a GitHub “bot” is just a simple cURL command!

There are wrinkles, of course. But, they’re not really the fault of Storybook so much as our build system. We have to massage the generated snapshot a bit. We should be able to do less of that as we move more of our build process to Webpack.

Additionally, we deploy Storybook snapshots only for commits pushed to our main repository by core collaborators. Otherwise, anyone could submit a pull request to our project and cause whatever they want to be published to our development site.

If you’d like to help out, we have a few open issues to enhance the process. We’d also like to rework & package this stuff to be used in other projects. If you’re working on a React-based web site or application, we recommend Storybook. We’ve found it to be just as valuable as unit tests, syntax linting, and type validation.



Planet Mozilla: University of Ottawa Tech Career panel

Thursday November 16, I had the opportunity to participate in a panel at the University of Ottawa on Leading Your Own Career that was organized by the Software Engineering Student Association.  Thank you to all of the students who organized it, especially Melody Habbouche. I enjoyed it tremendously and really enjoyed speaking with all of you.  It was really fantastic to discuss the state of tech careers with other folks from a variety of companies in the Ottawa area too. I really appreciated their perspective and candour. I also enjoyed meeting you all in person!

This panel was organized as a project by students as part of a class on Engineering Leadership taught by Dr. Catherine Mavriplis.  What a great project!


The questions for the panelists were given to us in advance so that we would have a chance to prepare our thoughts.  I wrote my notes down to prepare, so I thought I’d share them.  These aren’t the exact words that I used at the panel, but they are pretty close.  I also added some notes to reflect some comments I made during the panel.

These notes just reflect my experience.  They are not intended to be advice of the path you should take in your career. I must also acknowledge my privilege.  I am a straight, white, cisgender woman who was born in Canada. I grew up in a household where learning was encouraged and I had access to a computer from the age of nine since my Dad also spent his entire career in the tech industry. I also live in a city where there are many job opportunities for my skills. My husband has always been a champion for my work and shares in childcare and household responsibilities. With that caveat, here are my notes:

1.  How did you figure out what kind of development work you wanted to do? E.g. what your niche is?

My approach has been to do what I like.  There are lots of opportunities in software; you might as well do something that is fun every day. I don’t like front end work. I love large distributed systems, debugging them, making them more scalable and more efficient. I like learning new things, fixing them, and moving on to learn something new.   I like building large build and release pipelines and seeing us ship complex open source projects from them to millions of users. It makes me happy.  And I’ve been lucky to have the opportunity to work on many of them, to make them more efficient, upgrade the underlying machinations of them to new technology while they continue to run in place. That is kind of magic to me.

2.  As a university student looking for internships who finds it hard to balance academic and side projects, do you look more for involvement outside of campus, side projects and taking initiatives outside the curriculum or do you value higher academic standing?

Grades are of diminishing importance the further away you are from school.  If you’re planning to attend grad school, obviously that’s a different situation.  I’m interviewing interns right now for next summer.  I don’t really look at the GPA. That being said, most of the people who pass our tech screening test have high GPAs because it indicates a mastery of the course material.*

What I do care about is that you can show me your approach to problem solving.  Are you detail oriented?  Do you have good debugging skills?  How do you interact with others?  Are you able to help others learn new skills or are you dismissive of people who are not at the same skill level as you on a particular toolset? Are you a self-directed learner? Can you communicate why we should use a particular technical solution?

I also think it’s really important to have a life outside of the tech industry.  One of the problems I see on a recurring basis with the tech industry is that we are very disconnected from the impact of our large platforms on our local communities.   So I would really encourage you to volunteer with different organizations, join athletic clubs, write for the school paper, work in theatre: do something you find rewarding outside the tech bubble.   Some of these volunteer opportunities will allow you to use your tech skills for good.  Personally, I really regret not having more time to do different things during university. I had a lot of part-time jobs because it was a struggle for me to pay for tuition and housing. **

3.  What is the coolest project you’ve ever worked on and what non-technical skills played an important role in completing this project.

This is a difficult question.  I’ve worked on a lot of interesting projects.  I’ve worked as a committer on two major open source projects.  Eclipse was shipped to millions of users; Firefox is shipped to hundreds of millions of users.  After a while that feels routine because that is just what you do every day.  I’ve written a chapter for a book on open source architecture, organized events and spoken at conferences, taught a graduate student workshop on release engineering.  One week when I worked at IBM I got to teach Lego Mindstorms (robotics) to girls and I always think that was a great week, because I was paid to play with Lego.

But I like to think that the thing that I have done that has had the most impact is the opportunity to mentor others, especially interns and new hires.  To see people progress from understanding the code base, to help them implement a new feature or system, and deploy the code to production.  And then to see them understand the limitations and potential of that code, and make suggestions about how it can be improved.  I find that experience very valuable. I read this statement a few years ago, and I still think about it all the time, that the output of a senior engineer is a new group of senior engineers.  It is our role to mentor others and to help them level up.  So for this role, empathy, patience, understanding and sharp communication skills are needed. And by the way, you learn a lot from teaching others, which gives you a fresh perspective on the work you do, and how it can be improved.

4.  As a CS student, is it better to be specialized in one subject matter or be diverse and mediocre in many subjects? And once you specialize in one field, is it hard to transition immediately to another field while keeping your seniority? Basically generalist vs. specialist.

Everything you learned in school will change and you will have to learn something else.  I think school gives you the opportunity to experience a lot of different subject areas: databases, front end or back end work, mobile, cryptography, data science, different languages.  What do you find interesting and want to dig into to learn more? I think if you are willing to spend time learning and work in an environment that supports that growth, you will have lots of opportunities.

5.  How have you managed to address either career-uncertainty when your company is facing hard times? E.g. fear of losing a large client.   Or self-uncertainty when working on a project and feeling like an imposter? E.g. imposter syndrome

Career uncertainty: Well, do the work to the best of your ability.  In the end some things are beyond your control. An investor may pull funding for a startup.  A competitor may bring out a product that kills demand for yours.  Employee morale may decline because the company focuses on the short-term stock price versus delighting customers with their products.

I read a summary of a panel on  “Attracting and Retaining a Diverse Workforce” from the LISA conference a few weeks ago and one of the comments that really resonated with me was “skill up before you ship out”.  In other words, if you are thinking of changing jobs, start to learn some new things so your skills are current. Keep in contact with your coworkers at the different companies you’ve worked at, they may be able to help you find your next opportunity.  Ensure you have a good reputation as someone people would like to work with.   Ottawa has a great meetup community for various topics – JS, Python, AWS and so on where you can learn from others in the field and make new contacts.   You can present at conferences or write a blog about the work you do.  This improves your communication skills and makes your work more visible to the larger tech community.

When you’re at your current job, think about what you would like to learn next, will your current company provide that?  If not, make a list of companies that offer that opportunity and check every so often if they have openings or check in with people who you know at those organizations.  We are really lucky that we work in an industry where there are more opportunities than people to fill them.  Most people aren’t dealing with recruiter emails on an ongoing basis in other industries.

Imposter Syndrome:  Imposter syndrome is your brain lying to you about what you can do.  It is particularly difficult when you belong to a group that is underrepresented in the tech industry (e.g. not a straight white male) because people often challenge you to prove to them that you are indeed technical.   I’ve been told multiple times at conferences, in volunteer groups or sports activities that I don’t look like a developer and asked if I really write code.  And that is really frustrating to deal with, because you just want to do the work and people keep telling you that you don’t belong.

To give myself confidence, I think back over the projects I’ve completed in the past and draw confidence from those accomplishments.  Go through the code or specs for a project before diving in.  Ask a lot of questions when you get stuck; don’t just spin your wheels.

Find your tribe.  This may be your co-workers or people in an outside group.  For instance, I belong to a 2000+ women in tech Slack group and they are super supportive and helpful. There are also groups for other vectors of diversity, such as LGBTQ+ folks, black engineers, or people who are new Canadians.  These groups can be very supportive for people who are underrepresented in tech and provide a community of people with similar experiences.

The panel closed with some conclusions and various other comments.  My closing comment was to have a life outside of tech, because sitting at a desk all day is bad for your body and repetitive strain injury (RSI) is really painful.  I also recommended reading The Manager’s Path by Camille Fournier, even if you don’t intend to move into management, because it has extremely valuable advice about how to make an impact as you progress in your career.  (I also wrote a review of this book.)

* I thought a lot about this answer over the weekend.  Unfortunately, people who are underrepresented in tech may have to reach a higher bar with grades, because hiring bias skews toward candidates with the same background and interests as the people doing the hiring. So they have to be seen as better to be seen as equal.  The Harvard Business Review has a lot of research articles on this topic.

** I’m going to write a separate blog post about side projects because this was such a huge topic at the panel.

 


At the end of the panel, I set up a table with Firefox stickers and copies of Mozilla internship job descriptions at the mini-career fair. I had a lot of great conversations with students about internships.  So much enthusiasm and interest!  Several students remarked that they had tried Firefox Quantum, which we had released on Tuesday, and liked that it was so fast!  That was really interesting because, as a remote employee, I don’t often get that sort of feedback in person, so it felt really good.  I referred them to Lin Clark’s blog post about how we made Firefox faster, which is really interesting from a software architecture perspective.

I got feedback from a few women engineers that I had inspired them.  This made me so happy. They said that they were unsure about applying for internships because they weren’t sure if they had the right skills yet because they were in first or second year.  I told them to act like Mindy Kaling and go through life with the confidence of a mediocre white man and apply for those internships.  (Mindy Kaling’s words are actually different, but have the same sentiment).

I also had a lot of questions about how many side projects they should work on and what their GitHub profiles should look like.  What do I do in my job day to day? Do I travel for work?  Are there many other women on my team?  What sort of projects do interns work on?  How does the mentorship process work?  What’s it like to work remotely?  What do you like best about your job? What do you like the least?  Can you describe Mozilla’s VR story? (Sorry, I had to refer you to a web page; that’s not my area of expertise.) Anyway, it was a lot of fun, and I enjoyed speaking to all of you.  You are all an inspiration to me and I look forward to hearing about your future careers.


Planet MozillaStatement on FCC proposal to roll back net neutrality in the U.S.

Today, the U.S. Federal Communications Commission (FCC) announced the next step in their plan to roll back net neutrality. The FCC still has time to remove the vote from the docket, which we hope they do before the December 14 meeting.

If the FCC votes to roll back these net neutrality protections, it would end the internet as we know it, harming everyday users and small businesses, and eroding free speech, competition, innovation and user choice in the process.

Our position is clear: the end of net neutrality would only benefit Internet Service Providers (ISPs). That’s why we’ve led the charge on net neutrality for years to ensure everyone has access to the entire internet.

It is imperative that all internet traffic be treated equally, without discrimination against content or type of traffic — that’s how the internet was built and what has made it one of the greatest inventions of all time.

As the organization that fights to keep the internet free and open for all, we urge the FCC and Chairman Pai to protect net neutrality and keep this vote off its docket.

The post Statement on FCC proposal to roll back net neutrality in the U.S. appeared first on The Mozilla Blog.

Planet MozillaSkunk Works

Forgive me for indulging in a bit of armchair management. I want to talk about organising an R&D team, something I have no experience with, but plenty of opinions, and it is the internet, so listen up!

If you don't know, the Skunk Works was (is) a small 'research group' inside Lockheed (now Lockheed Martin) formed around 1943. They invented and built some of the most incredible aircraft ever to fly, including the SR-71 Blackbird, the U-2 spyplane, and the F-117 (the first 'stealth' aircraft). If you're interested, the book by Ben Rich is a great read (although a bit over-the-top macho, which is not surprising given the time period and the products).

The Skunk Works did some awesome stuff and they are unsurprisingly idolised by a lot of tech folk. In fact, it's become almost a cliché to compare any group that is doing even vaguely novel engineering to the Skunk Works. It kind of makes sense to want to emulate them and their work, so sometimes organisations try to copy the structure. However, I think there is a misunderstanding about how the Skunk Works actually worked, what they did, and the lessons that can carry over to software engineering under modern management practices.

An exaggerated version of how people think the Skunk Works worked.

"We need to do some R&D!" realises the CTO. They've read the Ben Rich book or were into planes as a kid, so they know how cool the Skunk Works was (or they just think this is how R&D has to work). They create a new team, hire brand new people (the smartest they can find). The new team doesn't have to work on existing product or talk to customers or all that boring stuff, they work on high risk, high reward technology that could make the org's products 10x better. They get their own offices and license to do their own thing without too much management oversight. And they get t-shirts and stickers so everyone knows how smart and special they are.

A year later, they've done some cool things and the CTO gets to go up on stage and tell the rest of the company how smart and special this new team of ninjas are, and how their work is 10x better than the existing product and now we just need to integrate it and everything will be golden.

Why this goes bad

It's optimising all the wrong bits. Doing the research is actually pretty easy; it is integrating it with real life which is difficult. By taking real-life constraints away, of course you can come up with awesome new technology, but it is the 'tech transfer' from research project to product which needs work.

Furthermore (and most importantly), this model is terrible for the 'gel' of the organisation as a whole. By treating a small group like elites, the morale of that group has been improved at the expense of the morale of the rest of the organisation. Working on new, exciting tech is fun, and I bet 90% of engineers would rather do that than slog away on thankless bugs in existing products. Instead, that privilege has been given to a bunch of new engineers without product-specific experience, and the long-time engineers get more thankless work integrating the new tech (which is probably missing important conventions, constraints, etc.), work which takes a surprising (to management) amount of time, is not that exciting, and usually doesn't get much credit because there are fewer exciting graphs.

How to do it better

Give 'regular' engineers space to do greenfield research. They're smart people with experience, and I promise they've been thinking about how to do things 'properly' if they could start from scratch. Either let them take a 'sabbatical' to work on new, exciting tech, or give them 20% time, or just make sure they are not so snowed under with bug and feature work that they have no time to experiment.

Don't isolate the research group from the product group. Sometimes greenfield research requires skills that are not present in the main org. That's a fine reason to hire new people. Not having upper management breathing down their necks is certainly a good thing (as it is for everyone else in the org, to be honest). But they should work closely with product - to make sure they are aware of technical constraints, and to benefit from experience with previous experiments. This also gets the teams working together, rather than have the R&D folk be outsiders.

Emphasise and invest in tech transfer. Make sure engineers get credit for the hard work of turning prototypes into products and for integrating research into existing products. Make the excited announcements when a product is improved, not when a prototype shows improvement. Make sure the R&D group is involved in integration and doesn't just throw stuff over a wall to engineering.

How the Skunk Works actually worked

The Skunk Works was not really an R&D group in the sense of most software companies. The Skunk Works built planes, not just prototyped them. They were essentially their own product unit, a company within a company. The planes which the Skunk Works developed were never built by the rest of Lockheed Martin (at least not during the 'golden age'). Technology only indirectly filtered out of the group - there was no tech transfer goal.

That's a fine model for a kind of R&D, but I'm not sure it crosses over to software tech. In the best case, an R&D group should be a force multiplier for product engineering, not a dead end (albeit a useful and profitable one) for smart people and good ideas.

(Note that this whole post is talking about R&D as opposed to foundational research. Some organisations have groups doing foundational research, which is more often found in academia; in that case, the structure is completely different.)

Planet MozillaNew in Firefox 58: Developer Edition

Firefox Quantum made Firefox fast again, but speed is only part of the story. A ton of work has gone into making Firefox an exceptional tool for creating on the Web. Let’s dive into the changes coming in Firefox 58, currently available to preview in Firefox Developer Edition.

More Control for CSS Authors

Following the success of Firefox’s powerful CSS Grid Inspector, we’re excited to introduce a CSS Shapes Highlighter for elements with a clip-path property.

Try it yourself on this CodePen by Chris Coyier.

We’ve also implemented the CSS font-display property, allowing authors to specify how long the browser should wait for a web font, and when it should consider swapping in a font once it’s loaded.

Firefox Quantum also introduced a brand new CSS engine (“Quantum CSS”) which fixed numerous bugs and inconsistencies with CSS in Firefox. For example, calc() now works everywhere that the spec says it should.

An Even Better Debugger

Piece by piece, we’ve been rewriting our developer tools in standard Web technologies. In Developer Edition, the Console, Debugger, Network Monitor, and Responsive Design Mode are all implemented in plain HTML, JavaScript, and CSS atop common libraries like React and Redux. This means that you can use your existing web development skills to hack on the DevTools. The source for debugger.html is on GitHub, and we do our best to tag good first bugs and mentor new contributors.

We’ve implemented tons of new features during the rewrite, but the debugger deserves special mention. First, source maps finally work everywhere, and even include proper syntax highlighting for markup like JSX:

[Screenshot: the Debugger showing JSX syntax highlighting for a React component]

You might also notice that the debugger recognized Webpack, and appropriately labeled it in the Sources tree.

Similarly, the debugger can recognize two dozen common JavaScript libraries and group their stack frames in the call stack. This makes it easy to separate the code you wrote from code provided by a framework when you’re tracking down a bug:

[Screenshot: the call stack in the Debugger. Instead of one undifferentiated list, the new Debugger groups the stack frames by library, showing React calling Redux calling Lodash.]

We even implemented “sticky” breakpoints that intelligently move with your code when you refactor or rearrange declarations in a file.

The other tools have also improved: console groups can now be collapsed, the network monitor can be paused, etc.

The best way to discover the new DevTools is to download Developer Edition and try them yourself.

WebVR, FLAC, and Other Tidbits

Firefox is driving new, fundamental capabilities of the Web. Firefox 55 introduced support for WebVR on Windows, and included experimental support for macOS. With Firefox 58, WebVR is now supported by default on both Windows and macOS.

If you’re interested in creating virtual reality experiences on the Web, check out the A-Frame library, or read our article on how Firefox Quantum delivers smooth WebVR performance at 90 fps.

In other firsts, Firefox 51 was the first browser to support FLAC, a lossless audio format, on the Web. Until now, this support was limited to Firefox on desktop platforms (Windows, macOS, and Linux), but Firefox 58 brings FLAC support to Android. That means that Firefox, Chrome, and Edge all support FLAC on every platform but iOS.

We also landed a few changes to help measure and improve Firefox’s performance:

  • The PerformanceNavigationTiming API provides access to performance metrics related to page loading (a short sketch follows this list).
  • Off Main Thread Painting (“OMTP”) has been enabled by default on Windows, which improves Firefox’s responsiveness by reducing the workload on the main thread.
  • We’ve enabled budget-based background timeout throttling which slows down scripts running in background tabs to save further CPU resources.
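
As a quick illustration (not from the release notes), here is how a page script might read the new navigation entry; a minimal sketch, assuming a browser that exposes Navigation Timing Level 2:

```ts
// Read the Navigation Timing Level 2 entry for the current page.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
if (nav) {
  // Timestamps are relative to the start of the navigation (nav.startTime is 0).
  console.log("time to first byte:", nav.responseStart, "ms");
  console.log("DOMContentLoaded finished:", nav.domContentLoadedEventEnd, "ms");
  console.log("load event finished:", nav.loadEventEnd, "ms");
}
```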

Lastly, Content Security Policies (CSPs) now support the worker-src directive.

WebExtension API Additions

Firefox Quantum removed support for legacy add-ons and added dozens of new WebExtension APIs. Firefox 58 adds even more APIs, including ones to:

For example, Tree Style Tab can now adopt theme colors from WebExtensions like VivaldiFox:

[Animated screenshot: Tree Style Tab adopting dynamic theme colors from VivaldiFox]

We’re currently planning additional WebExtension capabilities for 2018, including looking into possibilities for hiding individual tabs, or the entire tab bar.

Wrapping Up

These are just the highlights. To learn more about what to expect in Firefox 58—currently available in Beta and Developer Edition—check out the Release Notes and MDN’s Firefox 58 for Developers.

Planet MozillaTop immediate priorities for NoScript Quantum

Based on the immediate user feedback, here's my TODO list for what I'm doing today:

[Screenshot: Temporarily allow on NoScript 10 Quantum]

  • Fixing the Private Browsing (Incognito) bug making the UI unusable on private windows (even though everything else, including the XSS filter, still works)
  • Getting rid of all the "legacy" localization strings that are creating confusion on internationalized browsers, and restart fresh with just English, refining the messages for maximum clarity and adherence with the new UI paradigm
  • Tweaking a bit the permissions preset system by making them customizable only on the options page, rather than in the popup, except for the CUSTOM preset.
  • Figuring out ways to make more apparent that
    • temporary permissions are still there: you just need to toggle the clock button on the preset (TRUSTED or CUSTOM) you choose: the permission will go away as soon as you close the browser;
    • selecting DEFAULT as a preset really means "forget about this site", even though you keep seeing its entry until you close the UI (for convenience, in case you made a mistake or change your mind);
    • the "lock" icon is actually another toggle button, and dictates how sites are matched: if its locked/green, as suggested by the title ("Match HTTPS only"), only sites served on secured connections will be matched, even if the rule is for a (base) domain and cascades to all its subdomains. This is a convenience to, say, make just "noscript.net" TRUSTED and match also "https://www.noscript.net" and "https://static.noscript.net" but not http:www.noscript.net" neither http:noscript.net".

    OK, an updated guide/tutorial/manual with screenshots is sorely needed, too. One thing at a time. Back to work now!

Planet MozillaMartes Mozilleros, 21 Nov 2017

Martes Mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Planet MozillaThis Week in Rust 209

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is Ammonia, a crate for sanitizing HTML to prevent cross-site scripting (XSS), layout breaking and clickjacking. Thanks to Jules Kerssemakers for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

110 pull requests were merged in the last week

New Contributors

  • Alexey Orlenko
  • Benjamin Hoffmeyer
  • Chris Vittal
  • Collin Anderson
  • Dan Gohman
  • Jeff Crocker
  • Laurentiu Nicola
  • loomaclin
  • Martin Lindhe
  • Michael Lamparski
  • Ramana Venkata
  • Ritiek Malhotra
  • Robert T Baldwin

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust's abstraction layers feel both transparent and productive. It's like being on a glass-bottomed boat, you see the sharks, but they can't get you. It's like a teaching language that you can also use in production. Rust helped me understand C. Also Rust people are amazing.

@gibfahn on Twitter.

Thanks to @sebasmagri for the suggestion!

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Planet MozillaExtensions in Firefox 58

With the release of Firefox Quantum on November 14, 2017, we officially entered a WebExtensions-only world for add-on development. While that event was certainly the news of the day, Firefox 58 quietly entered Beta and a host of new APIs and improvements landed. As always, documentation for the APIs discussed here can be found on MDN Web Docs.

Additional Theme API

The API around themes continues to grow, allowing you to customize even more of the browser’s appearance. In Firefox 58, you can now:

Reader Mode API Added to Tabs

The API available for interacting with tabs continues to grow. Firefox reader view (or reader mode) strips away clutter like buttons, ads and background images, and changes the page’s text size, contrast, and layout for better readability. It can even read the page out loud to you, if you want.

The image below shows a page that can be viewed in reader mode, indicated by the page icon in the URL bar (circled in red).

[Screenshot: MDN article in normal mode]

Clicking on the icon puts the page in reader mode, removing most of the page elements except the text and adding buttons to the left-hand side that modify the reading experience.

[Screenshot: MDN article in Reader Mode]

This powerful browser feature is now available via the WebExtensions API.
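
For instance, a background script can check whether a tab qualifies and toggle Reader Mode for it. A minimal sketch, assuming Firefox’s browser namespace and the tabs permission; the helper name is made up:

```ts
declare const browser: any; // WebExtensions namespace provided by Firefox at runtime

// Hypothetical helper: put the active tab into (or out of) Reader Mode when possible.
async function toggleReaderForActiveTab(): Promise<void> {
  const [tab] = await browser.tabs.query({ active: true, currentWindow: true });
  if (tab && tab.isArticle) {            // isArticle: Reader Mode is available for this page
    await browser.tabs.toggleReaderMode(tab.id);
  }
}
```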

Improved webRequest API

Extensions can now easily get the entire URL ancestor chain, even in an HTTP environment. webRequest.onBeforeRequest() now includes another parameter in its callback object called frameAncestors. This is an array that contains information for each document in the frame hierarchy up to the top-level document.

Additionally, to enable proxy authorization to work smoothly,  webRequest.onAuthRequired() now fires for system events. If an extension has the correct permissions, it will be able to use onAuthRequired to supply credentials for proxy authorization.
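
A rough sketch of both hooks, assuming the webRequest, webRequestBlocking and matching host permissions are declared in the manifest (the credentials below are placeholders):

```ts
declare const browser: any; // WebExtensions namespace provided by Firefox at runtime

// Log the ancestor chain (immediate parent up to the top-level document) for each request.
browser.webRequest.onBeforeRequest.addListener(
  (details: any) => {
    for (const ancestor of details.frameAncestors ?? []) {
      console.log(`frame ${ancestor.frameId} ancestor: ${ancestor.url}`);
    }
  },
  { urls: ["<all_urls>"] }
);

// Supply credentials when a proxy asks for authorization.
browser.webRequest.onAuthRequired.addListener(
  (details: any) => {
    if (details.isProxy) {
      return { authCredentials: { username: "proxy-user", password: "proxy-pass" } };
    }
    return {};
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);
```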

Flexible XHR and Fetch Headers

When a content script makes requests using the standard window.XMLHttpRequest or window.fetch() API, the Origin and Referer headers are not set like they would be when requests come from the web page itself. This is often desirable in a cross-domain situation so that the content script does not appear to come from a different domain.

However, some sites only allow XHR and fetch to retrieve content if the correct Referer and Origin headers are set. Starting in Firefox 58, the WebExtensions API permits the use of content.XMLHttpRequest() and content.fetch() to perform requests that look as if they were sent by the web page content itself.
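
A content-script sketch of the Firefox-specific content.fetch(); the URL in the usage comment is a placeholder:

```ts
declare const content: any; // exposed to Firefox content scripts only

// Fetch a resource as if the page itself had requested it, so the page's
// usual Origin and Referer headers are sent along.
async function fetchAsPage(url: string): Promise<string> {
  const response = await content.fetch(url, { credentials: "include" });
  return response.text();
}

// Example (placeholder path): fetchAsPage("/api/profile").then(console.log);
```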

Improved Content Security Policy (CSP) Handling

Work also continues in the WebExtensions CSP area. Starting with Firefox 58, the CSP of a web page does not apply to content inserted by an extension. This allows, for example, the extension to load its own resources into a page.

This is a fairly large effort requiring some substantial architectural work. In Firefox 58, the first part of this work has landed, permitting basic injection of content generated by DOM APIs. There will be follow-ups for parser-generated content and inline stylesheets and scripts.
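
As an illustration of what now works, a content script can insert one of the extension’s own resources through DOM APIs without being blocked by the page’s CSP. A sketch; the icon path is made up and would need to be listed under web_accessible_resources:

```ts
declare const browser: any; // WebExtensions namespace provided by Firefox at runtime

// Inject an extension-packaged image into the page via DOM APIs;
// with Firefox 58, the page's CSP no longer blocks it.
const badge = document.createElement("img");
badge.src = browser.runtime.getURL("icons/badge.png"); // placeholder path
badge.alt = "Injected by the extension";
document.body.appendChild(badge);
```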

Setting the Default Search Engine

Using chrome_settings_overrides, an extension can now install a new default search engine by setting the is_default key to true.  To protect the user, this cannot be done silently; the user will see an additional dialog that prompts them to confirm the change.

[Screenshot: search engine change confirmation dialog]

The user will also see if their default search engine has been overridden in the Options (about:preferences) page, which is explained in more detail below.

User Notification of Extensions Overrides

As the scope and power of the WebExtensions API increases, it is important to maintain the user’s security and privacy. In addition to the permission dialog that a user sees upon installation, Firefox tries to make sure that users are aware of which parts of the browser are under the control of an extension, and provide a way for them to revert back to default behavior, if desired.

Firefox 58 landed a couple of features in this area. First, when an extension has taken control of the New Tab Page, a notice is shown in Options (about:preferences) along with a button to disable the extension.  This is shown in the screenshot below.

[Screenshot: extension override notice in about:preferences]

Along similar lines, if an extension has set a user’s default search engine, this will be shown on the Options (about:preferences) page.

[Screenshot: search engine override notice in about:preferences]

Over the next few releases, expect to see Firefox show even more areas where an extension is in control of a browser behavior, along with options to revert back to a default state.

Additional Privacy Controls

In keeping with Mozilla’s mission to protect an individual’s online security and privacy, two new browser settings related to user privacy are now exposed via the WebExtensions API. Within privacy.websites, we’ve added the following (a short usage sketch follows the list):

  • firstPartyIsolate – This preference makes the browser associate all data (including cookies, HSTS data, cached images, and more) for any third party domains with the domain in the address bar.
  • resistFingerprinting – Browser fingerprinting is the practice by which websites collect data associated with the browser or the device it’s running on to personally identify you. This preference makes the browser report spoofed information for data that’s commonly used for fingerprinting.
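
Both settings are BrowserSetting objects, so an extension with the privacy permission can read or flip them; a minimal sketch:

```ts
declare const browser: any; // WebExtensions namespace provided by Firefox at runtime

async function hardenPrivacy(): Promise<void> {
  await browser.privacy.websites.firstPartyIsolate.set({ value: true });
  await browser.privacy.websites.resistFingerprinting.set({ value: true });

  // get() reports the current value and whether this extension controls the setting.
  const setting = await browser.privacy.websites.resistFingerprinting.get({});
  console.log(setting.value, setting.levelOfControl);
}
```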

Browser Action Fixes

A number of changes landed in Firefox 58 that fix issues with Browser Action buttons:

Support for PKCS #11 Security Devices

Firefox supports manual installation of external security devices via a dialog under the Options (about:preferences) screen. Now, WebExtensions includes API support for PKCS #11 security devices. Similar to native messaging and managed storage, a native manifest must be installed outside of an extension before the API becomes useful.

Android

On Android, users get install-time prompts for WebExtension permissions, but under Firefox 58 they now also get prompts when an extension adds additional permissions at runtime.

Miscellaneous Changes

More to Come

The items above represent some of the bigger changes, but Firefox 58 landed a total of 79 items in the WebExtensions area. Thank you to everyone who had a part in getting Firefox 58 to Beta, especially volunteer contributors apoorvasingh2811, DW-dev, Tom Schuster, Kevin Jones, Ian Moody, Tim Nguyen, Tomislav Jovanovic, Masatoshi Kimura, Wouter Verhelst.

We continue to receive a lot of feedback from developers and, based on that feedback, work is progressing on new features for Firefox 59 and beyond. Expect to see the WebExtensions API improve and grow, particularly in regards to the organization and management of tabs, as well as the theming API. As always, thank you for using Firefox and helping ensure that individuals have the ability to shape the Internet and their own experiences on it.

The post Extensions in Firefox 58 appeared first on Mozilla Add-ons Blog.

Planet MozillaNoScript 10.1.1 Quantum Powerball Finish... and Rebooting

[Image: noscript-quantum.jpg]

v 10.1.1
=============================================================
+ First pure WebExtension release
+ CSP-based first-party script blocking
+ Active content blocking with DEFAULT, TRUSTED, UNTRUSTED
  and CUSTOM (per site) presets
+ Extremely responsive XSS filter leveraging the asynchronous
  webRequest  API
+ On-the-fly cross-site requests whitelisting

Thanks to the Mozilla WebExtensions team, and especially to Andy, Kris and Luca, for providing the best Browser Extensions API available on any current browser, and most importantly for the awesome tools around it (like the Add-on debugger).

Thanks to the OTF and to all the users who supported and are supporting this effort financially, morally and otherwise.

Coming soon, in the next few weeks: ClearClick, ABE and a public code repository on GitHub.

Did I say that we've got a chance to reshape the user experience for the best after more than a dozen years of "Classic" NoScript?
Make your craziest ideas rain, please.

Long Live Firefox Quantum, long live NoScript Quantum.

Update

Just gave a cursory look at the comments before getting some hours of sleep:

  • Temporary allow is still there, one click away: just toggle the clock inside the chosen preset button.
  • For HTTPS sites the base domain is selected by default with cascading, while for non-secure sites the default match is the full address.
  • For domain matching you can decide if only secure sites are matched by clicking on the lock icon.
  • You can tweak your "on the fly" choices in the Options tab by searching and entering base domains, full domains or full addresses in the text box, then customizing the permissions of each.

Next to come (already implemented in the backend, with the UI in progress): contextual permissions (e.g. "Trust facebook.net on facebook.com only").
And yes, as soon as I get a proper sleep refill, I need to refresh those 12-year-old instructions and screenshots. I know I've said it a lot already, but please keep being patient. Thank you so much!

Update 2

Thanks for reporting the Private Browsing window bug; I'm gonna fix it ASAP.

Update 3

Continues here...

Planet MozillaTackle Black Friday Shopping with the Help of Firefox Add-Ons

The biggest shopping season of the year is upon us. And if you are starting to feel the stress of making your shopping lists, we offer this quick list of … Read more

The post Tackle Black Friday Shopping with the Help of Firefox Add-Ons appeared first on The Firefox Frontier.

Planet MozillaFirefox 58 Beta 3 Testday Results

As you may already know, last Friday – November 17th – we held a new Testday event, for Firefox 58 Beta 3.

Thank you all for helping us make Mozilla a better place!

From India team: Surentharan.R.A, Nagarajan Rajamanickam, Baranitharan, Fahima Zulfath, Aishwarya Narasimhan.

From Bangladesh team: Nazir Ahmed Sabbir, Md.Rahimul Islam, Md Maruf Hasan Hridoy, Tanvir Mazharul, Maruf Rahman, Sajedul Islam, Iftekher Alam, Mizanur Rahman Rony, Anika Alam, Forhad Hossain, Ratul Islam.

Results:
– several test cases executed for Web Compatibility and Tabbed Browser;

– 3 bugs verified: 1378111, 1415728 and 1413758;

– 1 new bug filed: 1418588.

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

 

Planet MozillaFirefox Feels Like the Speed of Light

The New Firefox came out of a project we at Mozilla call “Quantum,” a massive technology implementation that modernized our web browser engine using a new programming language called Rust. … Read more

The post Firefox Feels Like the Speed of Light appeared first on The Firefox Frontier.

Planet MozillaComparing Browser Page Load Time: An Introduction to Methodology

On blog.mozilla.org, we shared results of a speed comparison study to show how fast Firefox Quantum with Tracking Protection enabled is compared to other browsers. While the blog post there focuses on the results and the speed benefits that Tracking Protection can deliver to users even outside of Private Browsing, we also wanted to share some insights into the methodology behind these page load time comparison studies and benchmarks for different browsers.

A general approach to comparing performance across browsers

The most important part to consider when comparing performance across browsers is to select metrics that are comparable between them. Most commonly, these metrics come from standardized Web APIs. Regarding performance comparisons, the Navigation Timing API offers a great source of data that is available across browsers. In particular, its PerformanceTiming interface offers access to properties that provide performance data for various events that occur during page load.
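
Every PerformanceTiming property is an absolute timestamp, so durations are computed against navigationStart; for example, in any browser console:

```ts
// Durations (in ms) derived from the PerformanceTiming interface.
const t = window.performance.timing;
console.log({
  timeToFirstByte: t.responseStart - t.navigationStart,
  domComplete: t.domComplete - t.navigationStart, // parser finished on the main document
  pageLoad: t.loadEventEnd - t.navigationStart,   // load event completed
});
```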

For completeness, we want to mention that there are also visual inspection metrics that focus on screen captures. Metrics of interest here include timings from the start of the page request, e.g. mouse click or key press, to specific points in time: First Paint or visual metrics like SpeedIndex, Perceptual Speed Index, or Visual Complete, the moment when no more changes happen above the fold.

For this blog post, we will focus on comparisons based on the Navigation Timing API.

Selecting a test set and designing your test

In addition to a well-defined set of test metrics, it’s also necessary to select meaningful test content. Benchmarks and comparisons that focus on technical aspects may select a set of challenging websites. In our tests, we focus on the user and hence look for a selection of websites that matter to our users. Often, a set of popular websites is a good approach, although sometimes it makes sense to focus on specific categories as well. Alexa, for example, offers sets of top sites in different categories and countries.

In our study, we focused on the global top 200 news websites, selected because these tend to have the most trackers. The websites were loaded in Chrome v61.0.3163.100 in normal and Incognito mode and in Firefox Quantum Beta v57.0b10 in normal and Private Browsing mode with Tracking Protection enabled. Per browser, each website was loaded 10 times.

With metrics and a set of test websites chosen, you can start running your test.

Controlling your experiment with Selenium WebDriver

Everybody who has run some benchmarks and performance comparisons may agree with me that running the tests and collecting the data can be tiresome. Therefore, it makes sense to automate your testing when possible. Again, it’s important to choose an automation method that works across browsers. This can either be done through an external scripting application like Mozilla’s Hasal project or through a browser automation framework like Selenium WebDriver which is a remote control interface to control user agents.

Our page load test was based on a Python script that used the Selenium Python bindings to control the browsers through geckodriver and chromedriver, respectively. A functional, but not yet perfect, script similar to ours – it loads a set of websites in both Chrome and Firefox and stores window.performance.timing after each load – can be found here. I am looking forward to patches and improvements.

We performed our tests on a recent MacBook Pro (13” MacBook Pro 2017, 3.1GHz i5, 16GB memory, OSX 10.13) connected to a Webpass 100Mbps connection over WiFi (802.11ac, 867Mbit/s). The script loaded a website in one of the browsers and saved the performance load times, obtained with return window.performance.timing, to a csv file. Each website was loaded 10 times per browser. In the script, a page load timeout was set to 60 seconds; especially with ads on websites, pages can load extremely slowly. In that case, the page load was interrupted after 60 seconds by the script, which used the PerformanceTiming API to check whether loadEventEnd was already present.

loadEventEnd represents the moment when the load event for the requested page is completed, i.e. all static content of the page is fully loaded. If a loadEventEnd time was present, it was stored in a csv file. If not, the script tried to load the respective page anew. In the few cases where the repeated page request also timed out, the page was loaded manually without any automated timeout, and window.performance.timing was requested manually after the page was fully loaded.
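
The script itself isn’t reproduced in this post; purely as an illustration of the flow described above (and not the authors’ Python code), here is a minimal TypeScript sketch using the selenium-webdriver package, with the 60-second page load timeout and the loadEventEnd check:

```ts
import { Builder, WebDriver } from "selenium-webdriver";

// Load each URL several times in the given browser and collect window.performance.timing.
async function collectTimings(urls: string[], browserName: "firefox" | "chrome", runs = 10) {
  const driver: WebDriver = await new Builder().forBrowser(browserName).build();
  await driver.manage().setTimeouts({ pageLoad: 60_000 }); // give up on very slow pages after 60 s
  const rows: Array<Record<string, number>> = [];
  try {
    for (const url of urls) {
      for (let run = 0; run < runs; run++) {
        try {
          await driver.get(url);
        } catch {
          continue; // page load timed out; the study retried or finished these loads manually
        }
        // toJSON() turns PerformanceTiming into a plain, serializable object.
        const timing = (await driver.executeScript(
          "return window.performance.timing.toJSON()"
        )) as Record<string, number>;
        if (timing.loadEventEnd > 0) {
          rows.push(timing); // keep only fully completed loads
        }
      }
    }
  } finally {
    await driver.quit();
  }
  return rows;
}
```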

A look at results

The raw data for our study can be found here. For reference, the analysis in R is available as an RMarkdown notebook, too.

First, let’s compare the browsers’ mean page load times. For this, we look at the mean page load time per browser, averaging across all 2,000 measurements (200 news websites × 10 runs per page) per browser. The means per browser are plotted as orange points below. In addition, and to better understand the spread in the data, boxplots per browser are also shown.

The load time difference between Chrome’s Incognito mode and Firefox Quantum’s Private Browsing is 2.4x. We see no difference between Chrome’s normal and Incognito modes. This shows that the difference between Firefox Quantum and its Private Browsing option (which is similar to Chrome’s Incognito mode plus Tracking Protection) must come from Tracking Protection.

It is a valid concern that loadEventEnd may not be the best indicator for what users experience on screen when loading a page. However, both loadEventEnd and average session load time were recently found to be good predictors of user bounce rate. From the results of the third-party study by SOASTA Inc., we find that an average session load time of 6 seconds leads to a 70% bounce rate. Let’s look at the share of pages in our data that have a load time longer than 6 seconds and compare across browsers. While only about 5.5% of page loads take longer than 6 seconds for Firefox Quantum with Tracking Protection, it’s about 31% of all pages for Google Chrome.
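
The published analysis was done in R (linked above); as a rough sketch of the arithmetic only, the mean load time and the over-6-seconds share can be computed from rows of PerformanceTiming values like this:

```ts
// Summarize load times (ms) from PerformanceTiming rows, e.g. as collected by the sketch above.
function summarize(rows: Array<Record<string, number>>) {
  const loads = rows.map((t) => t.loadEventEnd - t.navigationStart);
  const meanMs = loads.reduce((sum, ms) => sum + ms, 0) / loads.length;
  const shareOverSixSeconds = loads.filter((ms) => ms > 6000).length / loads.length;
  return { meanMs, shareOverSixSeconds };
}
```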

One last question of interest is if the data can also be used to understand where the differences between browsers occur during page load. Let’s only look at Chrome Incognito and Firefox Quantum Private Browsing again. performance.timing gives events over the course of the page load process. We can print these events in order of appearance during page load and look at differences between browsers, using newsweek.com as an example.
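
One way to produce that ordering in the browser console (or via WebDriver’s executeScript): dump every PerformanceTiming event that fired, sorted by its timestamp:

```ts
const timing = window.performance.timing.toJSON() as Record<string, number>;
Object.entries(timing)
  .filter(([, ts]) => ts > 0)              // skip events that never fired
  .sort(([, a], [, b]) => a - b)           // order of appearance during page load
  .forEach(([name, ts]) => console.log(name, ts - timing.navigationStart, "ms"));
```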

It becomes evident that the main differences occur towards the end of the loading process. The work required to create the DOM is similarly fast in both browsers, but Chrome waits for content significantly longer than Firefox. The main differences start to occur with domComplete, i.e., the moment when the parser finishes its work on the main document. This underlines again that Firefox’s Tracking Protection, used in Private Browsing, keeps slow third-party content from being loaded by blocking trackers.

Summary and call to action

This study shows that you can derive interesting insights from page load speed comparisons with a relatively simple approach. If you want to perform your own benchmarks and compare your favorite browser to competitors, you’re free to take this methodology and adapt it to your own tests. We have chosen news websites as our test set because we were looking for websites that have many trackers. Some ideas for extending this research are:

  • Compare page load times for other sets of websites
  • Extend these findings with measurements on other browsers
  • Extend these desktop findings with measurements on mobile

Or, if you are up for repeating our study on your machine for the top news websites in your country, then we’d be happy if you could share the results in the comments below.

Interested in experiencing what these results mean in terms of speed? Try Private Browsing for yourself!

If you’d like to take it up a notch and enable Tracking Protection every time you use Firefox, then download Firefox Quantum. Keep in mind that Tracking Protection may block social “Like” buttons, commenting tools and some cross-site video content. In Firefox Quantum, open Preferences, choose Privacy & Security and scroll down until you find the Tracking Protection section. Alternatively, simply search for “Tracking Protection” in the Find in Preferences field. Set Tracking Protection to ‘Always’, and you are set to enjoy both improved speed and privacy whenever you use Firefox Quantum.

Planet MozillaWork Week Logistics, Revisited

I’ve written before about how to be productive when distributed teams get together and was anxious to try it out on my “new” (read: six-month-old) team, Developer Workflow. As mentioned in that previous post, we just had a work week in Mountain View, so here’s a quick recap.

Process Improvements

We often optimize work week location around where the fewest people would need to travel to attend. While this does make things logistically easier, it also introduces imbalance. Some people will have traveled very far, while some people will be able to sleep in their own beds. Conversely, the local people may feel they need to go home every night in order to be with their partners/families/cats and may miss out on the informal bonding that can happen at group dinners and such.

We had originally intended to meet in San Francisco, but other conferences had jacked up hotel rates, so we decided to decamp to the Valley. I offered to have the SF residents book rooms to avoid the daily commute up and down the peninsula. They didn’t all take me up on it, but it was an opportunity to put everyone on more equal footing.

Schedule-wise, I set things up so that we had our discussion and planning pieces in the morning each day while we were still fresh and caffeinated. After lunch, we would get down to hacking on code. Ted threw together a tracking tool to help visualize the Makefile burndown. Ted is also great at facilitating meetings, keeping us on track especially later in the week as we all started to fade.

Accomplishments

So what did we actually get done? Like the old adage about a station wagon full of tapes, never underestimate the review bandwidth of 4 build peers hacking in a room together for an afternoon. We accomplished quite a bit during our time together.

Aside from the 2018 planning detailed in the previous post, we also met with mobile build peer Nick Alexander and planned how to handle the mobile Makefiles. The mobile version of Firefox now builds with Gradle, so it was important not to step on each other’s toes. Another huge proportion of the remaining Makefiles involve l10n. We figured out how to work around l10n for now (i.e., don’t break repacks) to get a tup build working, and we’ve set up a meeting with the l10n team for Austin to discuss their plans for langpacks and a future that might not involve Makefiles at all. The l10n stuff is hairy, and might be partially my fault (see previous comment re: cargo-culting), so thanks to my team for not shying away from it.

On a concrete level, Ted reports that we’ve removed 13 Makefiles and ~100 lines of other Makefile content in the past month, much of which happened over the past few weeks. Greg has also managed to remove big pieces of complexity from client.mk, assisted by reviews from Chris, Mike, Nick and other build peers. We’re getting into the trickier bits now, but we’re persevering.

All in all, a very successful work week with my “new” team. I continue to find subtle ways to make these get-togethers more effective.

Planet MozillaTrip Report: C++ Standards Meeting in Albuquerque, November 2017

Summary / TL;DR

Project | What’s in it? | Status
C++17 | See below | Publication imminent
Library Fundamentals TS v2 | Source code information capture and various utilities | Published!
Concepts TS | Constrained templates | Merged into C++20 with some modifications
Parallelism TS v2 | Task blocks, library vector types and algorithms, and more | Nearing feature-completion; expect PDTS ballot at next meeting
Transactional Memory TS | Transaction support | Published! Not headed towards C++20
Concurrency TS v1 | future.then(), latches and barriers, atomic smart pointers | Published! Parts of it headed for C++20
Concurrency TS v2 | See below | Under active development
Networking TS | Sockets library based on Boost.ASIO | Publication imminent
Ranges TS | Range-based algorithms and views | Publication imminent
Coroutines TS | Resumable functions, based on Microsoft’s await design | Publication imminent
Modules TS | A component system to supersede the textual header file inclusion model | Resolution of comments on Proposed Draft in progress
Numerics TS | Various numerical facilities | Under active development; no new progress
Graphics TS | 2D drawing API | Under active design review; no new progress
Reflection | Code introspection and (later) reification mechanisms | Introspection proposal awaiting wording review; targeting a Reflection TS
Contracts | Preconditions, postconditions, and assertions | Proposal under wording review

Some of the links in this blog post may not resolve until the committee’s post-meeting mailing is published (expected within a few days of November 27, 2017). If you encounter such a link, please check back in a few days.

Introduction

A couple of weeks ago I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Albuquerque, New Mexico. This was the third committee meeting in 2017; you can find my reports on previous meetings here (February 2017, Kona) and here (July 2017, Toronto). These reports, particularly the Toronto one, provide useful context for this post.

With the final C++17 International Standard (IS) having been voted for publication, this meeting was focused on C++20, and the various Technical Specifications (TS) we have in flight, most notably Modules.

What’s the status of C++17?

The final C++17 International Standard (IS) was sent off for publication in September. The final document is based on the Draft International Standard (DIS), with only minor editorial changes (nothing normative) to address comments on the DIS ballot; it is now in ISO’s hands, and official publication is imminent.

In terms of implementation status, the latest versions of GCC and Clang both have complete support for C++17, modulo bugs. MSVC is said to be on track to be C++17 feature-complete by March 2018; if that ends up being the case, C++17 will be the quickest standard version to date to be supported by these three major compilers.

C++20

This is the second meeting that the C++20 Working Draft has been open for changes. (To use a development analogy, think of the current Working Draft as “trunk”; it was opened for changes as soon as C++17 “branched” earlier this year). Here, I list the changes that have been voted into the Working Draft at this meeting. For a list of changes voted in at the previous meeting, see my Toronto report.

Technical Specifications

In addition to the C++ International Standard, the committee publishes Technical Specifications (TS), which can be thought of as “feature branches” (to continue the development analogy from above), where provisional specifications for new language or library features are published and the C++ community is invited to try them out and provide feedback before final standardization.

At the last meeting, we published three TSes: Coroutines, Ranges, and Networking. The next steps for these features is to wait for a while (usually at least a year) to give users and implementers a chance to try them out and provide feedback. Once we’re confident the features are ripe for final standardization, they will be merged into a future version of the International Standard (possibly C++20).

Modules TS

The Modules TS made significant progress at the last meeting: its Proposed Draft (PDTS) was published and circulated for balloting, a process where national standards bodies evaluate, vote on, and submit comments on a proposed document. The ballot passed, but numerous technical comments were submitted that the committee intends to address before final publication.

A lot of time at this meeting was spent working through those comments. Significant progress was made, but not enough to vote out the final published TS at the end of the meeting. The Core Working Group (CWG) intends to hold a teleconference in the coming months to continue reviewing comment resolutions. If they get through them all, a publication vote may happen shortly thereafter (also by teleconference); otherwise, the work will be finished, and the publication vote held, at the next meeting in Jacksonville.

I summarize some of the technical discussion about Modules that took place at this meeting below.

The state of Modules implementation is also progressing: in addition to Clang and MSVC, Facebook has been contributing to a GCC implementation.

Parallelism TS v2

The Parallelism TS v2 is feature-complete, with one final feature, a template library for parallel for loops, voted in at this meeting. A vote to send it out for its PDTS ballot is expected at the next meeting.

Concurrency TS v2

The Concurrency TS v2 (no working draft yet) continues to be under active development. Three new features targeting it have received design approval at this meeting: std::cell, a facility for deferred reclamation; apply() for synchronized_value; and atomic_ref. An initial working draft that consolidates the various features slated for the TS into a single document is expected at the next meeting.

Executors, slated for a separate TS, are making progress: the Concurrency Study Group approved the design of the unified executors proposal, thereby breaking the lockdown that has been holding the feature up for a number of years.

Stackful coroutines continue to be a unique beast of their own. I’ve previously reported them to be slated for the Concurrency TS v2; I’m not sure whether that’s still the case. They change the semantics of code in ways that impact the core language, and thus need to be reviewed by the Evolution Working Group; one potential concern is that the proposal may not be implementable on all platforms (iOS came up as a concrete example during informal discussion). For the time being, the proposal is still being looked at by the Concurrency Working Group, where there continues to be strong interest in standardizing them in some form, but the details remain to be nailed down; I believe the latest development is that an older API proposal may end up being preferred over the latest call/cc one.

Future Technical Specifications

There are some planned future Technical Specifications that don’t have an official project or working draft yet:

Reflection

The static introspection / “reflexpr” proposal (see its summary, design, and specification for details), headed for a Reflection TS, has been approved by the Evolution and Library Evolution Working Groups, and is awaiting wording review. The Reflection Study Group (recently renamed to “Compile-Time Programming Study Group”) approved an extension to it, concerning reflection over functions, at this meeting.

There are more reflection features to come beyond what will be in the static introspection TS. One proposal that has been drawing a lot of attention is metaclasses, an updated version of which was reviewed at this meeting (details below).

Graphics

I’m not aware of much new progress on the planned Graphics TS (containing 2D graphics primitives inspired by cairo) since the last meeting. The latest draft spec can be found here, and is still on the Library Evolution Working Group’s plate.

Numerics

Nothing particularly new to report here either; the Numerics Study Group did not meet this week. The high-level plan for the TS remains as outlined previously. There are concrete proposals for several of the listed topics, but no working draft for the TS yet.

Other major features

Concepts

As I related in my previous report, Concepts was merged into C++20, minus abbreviated function templates (AFTs) and related features which remain controversial.

I also mentioned that there will likely be future proposals to get back AFTs in some modified form that addresses the main objection to them (that knowing whether a function is a template or not requires knowing whether the identifiers in its signature name types or concepts). Two such proposals were submitted in advance of this meeting; interestingly, both of them proposed a very similar design: an adjective syntax where, in an AFT, a concept name would act as an adjective tacked onto the thing it’s constraining – most commonly, for a type concept, typename or auto. So instead of void sort(Sortable& s);, you’d have void sort(Sortable& auto s);, and that makes it clear that a template is being defined.

These proposals were not discussed at this meeting, because some of the authors of the original Concepts design could not make it to the meeting. I expect a lively discussion in Jacksonville.

Now that Concepts are in the language, the question of whether new library proposals should make use of them naturally arose. The Library Evolution Working Group’s initial guidance is “not yet”. The reason is that most libraries require some foundational concepts to build their more specific concepts on top of, and we don’t want different library proposals to duplicate each other / reinvent the wheel in that respect. Rather, we should start by adding a well-designed set of foundational concepts, and libraries can then start building on top of those. The Ranges TS is considered a leading candidate for providing that initial set of foundational concepts.

Operator Dot

I last talked about overloading operator dot a year ago, when I mentioned that there are two proposals for this: the original one, and an alternative approach that achieves a similar effect via inheritance-like semantics.

There hasn’t been much activity on those proposals since then. I think that’s for two reasons. First, the relevant people have been occupied with Concepts. Second, as the reflection proposals develop, people are increasingly starting to see them as a more general mechanism to satisfy operator dot’s use cases. The downside, of course, is that reflection will take longer to arrive in C++, while one of the above two proposals could plausibly have been in C++20.

Evolution Working Group

I’ll now write in a bit more detail about the technical discussions that took place in the Evolution Working Group, the subgroup that I sat in for the duration of the week.

All proposals discussed in EWG at this meeting were targeting C++20 (except for Modules, where we discussed some changes targeting the Modules TS). I’ve categorized them into the usual “accepted”, “further work encouraged”, and “rejected” categories:

Accepted proposals:

  • Standardizing feature test macros (and another paper effectively asking for the same thing). Feature test macros are macros like __cpp_lambdas that tell you whether your compiler or standard library supports a particular feature without having to resort to the more indirect approach of having a version check for each of your supported compilers. The committee maintains a list of them, but they’re not an official part of the standard, and this has led some implementations to refuse to support them, thus significantly undermining their usefulness. To rectify this, it was proposed that they are made part of the official standard. This was first proposed at the last meeting, but failed to gain consensus at that time. It appears that people have since been convinced (possibly by the arguments laid out in the linked papers), as this time around EWG approved the proposal.
  • Bit-casting object representations. This is a library proposal, but EWG was asked for guidance regarding making this function constexpr, which requires compiler support. EWG decided that it could be made constexpr for all types except a few categories – unions, pointers, pointers-to-members, and references – for which that would have been tricky to implement.
    • As a humorous side-note about this proposal, since it could only apply to “plain old data” types (more precisely, trivially copyable types; as mentioned above, “plain old data” was deprecated as a term of art), one of the potential names the authors proposed for the library function was pod_cast. Sadly, this was voted down in favour of bit_cast.
  • Language support for empty objects. This addresses some of the limitations of the empty base optimization (such as not being able to employ it with types that are final or otherwise cannot be derived from) by allowing data members to opt out of the rule that requires them to occupy at least 1 byte using an attribute, [[no_unique_address]]. The resulting technique is called the “empty member optimization”.
  • Efficient sized delete for variable-sized classes. I gave some background on this in my previous post. The authors returned with sign-off from all relevant implementers, and a clearer syntax (the “destroying delete” operator is now identified by a tag type, as in operator delete(Type*, std::destroying_delete_t), and the proposal was approved.
  • Attributes for likely and unlikely statements. This proposal has been updated as per previous EWG feedback to allow placing the attribute on all statements. It was approved with one modification: placing the attribute on a declaration statement was forbidden, because other attributes on declaration statements consistently apply to the entity being declared, not the statement itself.
  • Deprecate implicit capture of *this. Only the implicit capture of *this via [=] was deprecated; EWG felt that disallowing implicit capture via [&] would break too much idiomatic code.
  • Allow pack expansions in lambda init-capture. There was no compelling reason to disallow this, and the workaround of constructing a tuple to store the arguments and then unpacking it is inefficient.
  • String literals as template parameters. This fixes a longstanding limitation in C++ where there was previously no way to do compile-time processing of strings in such a way that the value of the string could affect the type of the result (as an example, think of a compile-time regex parsing library where the resulting type defines an efficient matcher (DFA) for the regex). The syntax is very simple: template <auto& String>; the auto then gets deduced as const char[N] (or const char16_t[N] etc. depending on the type of the string literal passed as argument) where N is the length of the string. (You can also write template <const char (&String)[N]> if you know N, but you can’t write template <size_t N, const char (&String)[N]> and have both N and String deduced from a single string literal template argument, because EWG did not want to create a precedent for a single template argument matching two template parameters. That’s not a big deal, though: using the auto form, you can easily recover N via traits, and even constrain the length or the character type using a requires-clause.)
  • A tweak to the Contracts proposal. An issue came up during CWG review of the proposal regarding inline functions with assertion checks inside them: what should happen if the function is called from two translation units, one of which is compiled with assertion checks enabled and the other not? EWG’s answer was that, as with NDEBUG today, this is technically an ODR (one definition rule) violation. The behaviour in practice is fairly well understood: the linker will pick one version or the other, and that version will be used by both translation units. (There are some potential issues with this: what if, while compiling a caller in one of the translation units, the optimizer assumed that the assertion was checked, but the linker picks the version where the assertion isn’t checked? That can result in miscompilation. The topic remains under discussion.)
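To make the empty member optimization concrete, here is a minimal sketch of the [[no_unique_address]] usage described in the list above. The EmptyAllocator and Buffer names are made up for illustration, and the optimization is permitted rather than required, so the exact sizes are implementation-dependent:

    #include <cstddef>
    #include <cstdio>

    // A hypothetical stateless allocator, marked 'final' so the classic
    // empty-base-optimization workaround (deriving from it) is unavailable.
    struct EmptyAllocator final {
        void* allocate(std::size_t) { return nullptr; }
    };

    template <typename Alloc>
    struct Buffer {
        void* data = nullptr;
        // Lets 'alloc' share its address with other subobjects when Alloc is
        // empty, instead of being forced to occupy at least one byte.
        [[no_unique_address]] Alloc alloc;
    };

    int main() {
        // On implementations that perform the empty member optimization, this
        // prints the size of a single pointer rather than pointer plus padding.
        std::printf("%zu\n", sizeof(Buffer<EmptyAllocator>));
    }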

There were also a few that, after being accepted by EWG, were reviewed by CWG and merged into the C++20 working draft the same week, and thus I already mentioned them in the C++20 section above:

  • Fixing small-ish functionality gaps in concepts. This consisted of three parts, two of which were accepted:
    • requires-clauses in lambdas. This was accepted; a brief sketch follows this list.
    • requires-clauses in template template parameters. Also accepted.
    • auto as a parameter type in regular (non-lambda) functions. This was mildly controversial due to the similarity to abbreviated function templates (AFTs), whose design is still under discussion, so it was deferred to be dealt with together with AFTs.
  • Access specifiers and specializations.
  • Deprecating “plain old data” (POD).
  • Default constructible and assignable stateless lambdas.
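As promised above, here is a minimal sketch of a requires-clause on a generic lambda, one of the gaps that was closed; the Addable concept is a made-up example rather than anything from the paper:

    // A made-up concept used only to demonstrate the constraint syntax.
    template <typename T>
    concept Addable = requires(T a, T b) { a + b; };

    // The requires-clause constrains the generic lambda's call operator.
    auto sum = []<typename T>(T a, T b) requires Addable<T> {
        return a + b;
    };

    int main() { return sum(1, 2) == 3 ? 0 : 1; }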

Proposals for which further work is encouraged:

    • Standard containers and constexpr. This is the latest version of an ongoing effort by compiler implementers and others to get dynamic memory allocation working in a constexpr context. The current proposal allows most forms of dynamic allocation and related constructs during constant evaluation: non-trivial destructors, new and delete expressions, placement new, and use of std::allocator; this allows reusing a lot of regular code, including code that uses std::vector, in a constexpr context. Direct use of operator new is not allowed, because that returns void*, and constant evaluation needs to track the type of dynamically allocated objects. There is also a provision to allow memory that is dynamically allocated during constant evaluation to survive to runtime, at which point it’s treated as static storage. EWG liked the direction (and particularly the fact that compiler writers were on the same page regarding its implementability) and encouraged development of a more concrete proposal along these lines. A usage sketch follows this list.
    • Supporting offsetof for stable-layout classes. “Stable-layout” is a new proposed category of types, broader than “standard-layout”, for which offsetof could be implemented. EWG observed that the definition of “standard-layout” itself could be broadened a bit to include most of the desired use cases, and expressed a preference for doing that instead of introducing a new category. There was also talk of potentially supporting offsetof for all types, which may be proposed separately as a follow-up.
    • short float. This proposal for a 16-bit floating-point type was approved by EWG earlier this year, but came back for some reason. There was some re-hashing of previous discussions about whether the standard should mandate the size (16 bits) and IEEE behaviour.
    • Adding alias declarations to concepts. This paper proposed three potential enhancements to concept declarations to make writing concepts easier. EWG was not particularly convinced about the need for this, but believed at least the first proposal could be entertained given stronger motivation.
    • [[uninitialized]] attribute. This attribute is intended to suppress compiler warnings about variables that are declared but not initialized in cases where this is done intentionally, thus facilitating the use of such warnings in a codebase to catch unintentional cases. EWG pointed out that most compilers these days warn not about uninitialized declarations, but about uninitialized uses. There was also a desire to address the broader use case of allocating dynamic memory that is purposely uninitialized (e.g. std::vector<char> buffer(N) currently zero-initializes the allocated memory).
    • Relaxed incomplete multidimensional array type declaration. This is a companion proposal to the std::mdspan library proposal, which is a multi-dimensional array view. It would allow writing things like std::mdspan<double[][][]> to denote a three-dimensional array where the size in each dimension is determined at runtime. Note that you still would not be able to create an object of type double[][][]; you could only use it in contexts that do not require creating an object, like a template argument. Basically, mdspan is trying to (ab)use array types as a mini-DSL to describe its dimensions, similar to how std::function uses function types as a mini-DSL to describe its signature. This proposal was presented before, when mdspan was earlier in its design stage, and EWG did not find it sufficiently motivating. Now that mdspan is going forward, the authors tried again. EWG was open to entertaining the idea, but only if technical issues such as the interaction with template argument deduction are ironed out.
    • Class types in non-type template parameters. This has been proposed before, but EWG was stuck on the question of how to determine equivalence (something you need to be able to do for template arguments) for values of class types. Now, operator<=> has given us a way to move forward on this question, basically by requiring that class types used in non-type template parameters have a defaulted operator<=>. It was observed that there is some overlap with the proposal to allow string literals as template parameters (since one way to pass a character array as a template parameter would be to wrap it in a struct), but it seemed like they also each have their own use cases and there may be room for both in the language.
    • Dynamic library loading. The C++ standard does not talk about dynamic libraries, but some people would find it useful to have a standardized library interface for dealing with them anyways. EWG was asked for input on whether it would be acceptable to standardize a library interface without saying too much about its semantics (since specifying the semantics would require that the C++ standard start talking about dynamic libraries, and specifying their behaviour in relation to exceptions, thread-local storage, the One Definition Rule, and so on). EWG was open to this direction, but suggested that the library interface be made much more general, as in its current incarnation it seemed to be geared towards certain platforms and unimplementable on others.
    • Various proposed extensions to the Modules TS, which I talk about below.
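To illustrate the constexpr-container direction mentioned in the list above, here is a small sketch written in the shape the feature eventually took in C++20; the proposal discussed at this meeting was an earlier design, and the code requires a standard library with constexpr std::vector support:

    #include <vector>

    // Dynamic allocation during constant evaluation: the vector lives and
    // dies entirely at compile time.
    constexpr int sum_first_n(int n) {
        std::vector<int> v;
        for (int i = 1; i <= n; ++i)
            v.push_back(i);
        int total = 0;
        for (int x : v)
            total += x;
        return total;
    }

    static_assert(sum_first_n(4) == 10);

    int main() {}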

    There was also a proposal for recursive lambdas that wasn’t discussed because its author realized it needed some more work first.

    Rejected proposals:

    • A proposed trait has_padding_bits, the need for which came up during review of an atomics-related proposal by the Concurrency Study Group. EWG expressed a preference for an alternative approach that removed the need for the trait by putting the burden on compiler implementers to make things work correctly.
    • Attributes for structured bindings. This was proposed previously and rejected on the basis of insufficient motivation. The author came back with additional motivation: thread-safety attributes such as [[guarded_by]] or [[locks_held]]. However, it was pointed out that the individual bindings are just aliases to fields of an (unnamed) object, so it doesn’t make sense to apply attributes to them; attributes can be applied to the deconstructed object as a whole, or to one of its fields at the point of the field’s declaration.
    • Keeping the alias syntax extendable. This proposed reverting the part of the down with typename! proposal, approved at the last meeting, that allowed omitting the typename in using alias = typename T::type; where T was a dependent type. The rationale was that even though today only a type is allowed in that position (thus making the typename disambiguator redundant), this prevents us from reusing the same syntax for expression aliases in the future. EWG already considered this, and didn’t find it compelling: the preference was to make the “land grab” for a syntax that is widely used today, instead of keeping it in reserve for a hypothetical future feature.
    • Forward without forward. The idea here is to abbreviate the std::forward<decltype(x)>(x) boilerplate that often occurs in generic code (a sketch of that boilerplate follows this list) to >>x (i.e. a unary >> operator applied to x). EWG sympathized with the desire to eliminate this boilerplate, but felt that >>, or indeed any other unary operator, would be too confusing a syntax, especially when occurring after an = in a lambda init-capture (e.g. [foo=>>foo](...){ ... }). EWG was willing to entertain a keyword instead, but the best people could come up with was fwdexpr, and that didn’t have consensus; as a result, the future of this proposal is uncertain.
    • Relaxing the rules about invoking an explicit constructor with a braced-init-list. This would have allowed, among a few other changes, writing return {...}; instead of return T{...}; in a function whose declared return type is T, even if the invoked constructor was explicit. This has been proposed before, but rejected on the basis that it makes it easy to introduce bugs (see e.g. this response). The author proposed addressing those concerns by introducing some new rules to limit the cases in which this was allowed, but EWG did not find the motivation sufficiently compelling to further complicate C++’s already complex initialization rules.
    • Another attempt at standardizing arrays of runtime bound (ARBs, a pared-down version of C’s variable-length arrays), and a C++ wrapper class for them, stack_array. ARBs and a wrapper class called dynarray were previously headed for standardization in the form of an Array Extensions TS, before the project was scrapped because dynarray was found to be unimplementable. This proposal would solve the implementability concerns by restricting the usage of stack_array (e.g. it couldn’t be used as a class member). EWG was concerned that the restrictions would result in a type that’s not very usable. (It was pointed out that a design to make such a type more composable was proposed previously, but the author didn’t have time to pursue it further.) Ultimately, EWG didn’t feel that this proposal had a better chance of succeeding than the last time standardization of ARBs was attempted. However, a future direction that might be more promising was outlined: introducing a core language “allocation expression” that allocates an unnamed (and runtime-sized) stack array and returns a non-owning wrapper, such as a std::span, to access it.
    • A modern C++ signature for main(). This would have introduced a new signature for main() (alongside the existing allowed signatures) that exposed the command-line arguments using an iterable modern C++ type rather than raw pointers (the specific proposal was int main(std::initializer_list<std::string_view>)). EWG was not convinced that such a thing would be easier to use and learn than int main(int argc, char* argv[]);. It was suggested that instead, a trivial library facility that took argc and argv as inputs and exposed an iterable interface could be provided; alternatively (or in addition), a way to access command-line arguments from anywhere in the program (similar to Rust’s std::env::args()) could be explored.
    • Abbreviated lambdas for fun and profit. This proposal would introduce a new abbreviated syntax for single-expression lambdas; a previous version of it was presented and largely rejected in Kona. Not much has changed to sway EWG’s opinion since then; if anything, additional technical issues were discovered.

      For example, one of the features of the abbreviated syntax is “automatic SFINAE”. That is, [x] => expr would mean [x] -> decltype(expr) { return expr; }; the appearance of expr in the return type rather than just the body would mean that a substitution failure in expr wouldn’t be a hard error, it would just remove the function overload being considered from the overload set (see the paper for an example). However, it was pointed out that in e.g. [x] -> decltype(x) { return x; }, the x in the decltype and the x in the body refer to two different entities: the first refers to the variable in the enclosing scope that is captured, and the second to the captured copy. If we try to make [x] => x “expand to” that, then we get into a situation where the x in the abbreviated form refers to two different entities for two different purposes, which would be rather confusing. Alternatively, we could say that, in the abbreviated form, x refers to the captured copy for both purposes, but then we are applying SFINAE in new scenarios, and some implementers are strongly opposed to that.

      It was also pointed out that the abbreviated form’s proposed return semantics were “return by reference”, while regular lambdas are “return by value” by default. EWG felt it would be confusing to have two different defaults like this.
    • Making the lambda capture syntax more liberal in what it accepts. C++ currently requires that in a lambda capture list, the capture-default, if present, come before any explicit captures. This proposal would have allowed them to be written in any order; in addition, it would have allowed repeating variables that are covered by the capture-default as explicit captures for emphasis. EWG didn’t find the motivation for either of these changes compelling.
    • Lifting overload sets into objects. This is a resurrection of an earlier proposal to allow passing around overload sets as objects. It addressed previous concerns with that proposal by making the syntax more explicit: you’d pass []f rather than just f, where f was the name of the overloaded function. There were also provisions for passing around operators, and functions that performed member access. EWG’s feedback was that this proposal conflates two possible sets of desired semantics:
      1. a way to build super-terse lambdas, which essentially amounts to packaging up a name; the overload set itself isn’t formed at the time you create the lambda, only later when you instantiate it
      2. a way to package and pass around overload sets themselves, which would be formed at the time you package them

      EWG didn’t have much of an appetite for #1 (possibly because it had just rejected another terse-lambda proposal), and argued that #2 could be achieved using reflection.
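For reference, here is the kind of boilerplate the rejected “forward without forward” proposal wanted to abbreviate. This is ordinary present-day C++ (call_with is a made-up name), not the proposed >> syntax:

    #include <utility>

    // Perfect forwarding in a generic lambda today: each forwarded argument
    // needs the std::forward<decltype(x)>(x) incantation.
    auto call_with = [](auto&& f, auto&&... args) {
        return std::forward<decltype(f)>(f)(std::forward<decltype(args)>(args)...);
    };

    int add(int a, int b) { return a + b; }

    int main() { return call_with(add, 2, 3) == 5 ? 0 : 1; }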

    Discussion papers

    There were also a few papers submitted to EWG that weren’t proposals per se, just discussion papers.

    These included a paper arguing that Concepts does not significantly improve upon C++17, and a response paper arguing that it in fact does. The main issue was whether Concepts delivers on its promise of making template error messages better; EWG’s consensus was that it does when compared to unconstrained templates, but perhaps not as much as one would hope when compared to C++17 techniques for constraining templates, like enable_if. There may be room for implementations (to date there is just the one in GCC) to do a better job here. (Of course, Concepts are also preferable over enable_if in other ways, such as being much easier to read.)
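To make the comparison concrete, here is a small sketch of the two styles being contrasted; the Integral concept and the function names are made up for illustration:

    #include <type_traits>

    // C++17 style: the constraint hides in an enable_if default argument, and
    // a failed constraint tends to produce a verbose, indirect error.
    template <typename T, typename = std::enable_if_t<std::is_integral_v<T>>>
    T twice_17(T x) { return x + x; }

    // Concepts style: the constraint is stated up front and reads as documentation.
    template <typename T>
    concept Integral = std::is_integral_v<T>;

    template <Integral T>
    T twice_20(T x) { return x + x; }

    int main() { return twice_17(2) + twice_20(3) == 10 ? 0 : 1; }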

    There was also a paper describing the experiences of the author teaching Concepts online. One of the takeaways here is that students don’t tend to find the variety of concept declaration syntaxes confusing; they tend to mix them freely, and they tend to like the abbreviated function template (AFT) syntax.

    Modules

    I mentioned above that a significant focus of the meeting was to address the national body comments on the Modules PDTS, and hopefully get to a publication vote on the final Modules TS.

    EWG looked at Modules on two occasions: first to deal with PDTS comments that had language design implications, and second to look at new proposals concerning Modules. The latter were all categorized as “post-TS”: they would not target the Modules TS, but rather “Modules v2”, the next iteration of Modules (for which the ship vehicle has not yet been decided).

    Modules TS

    The first task, dealing with PDTS comments in EWG, was a short affair. Any comment that proposed a non-trivial design change, or even remotely had the potential to delay the publication of the Modules TS, was summarily rejected (with the intention that the concern could be addressed in Modules v2 instead). It was clear that the committee leadership was intent on shipping the Modules TS by the end of the meeting, and would not let it get derailed for any reason.

    “That’s a good thing, right?” you ask. After all, the sooner we ship the Modules TS, the sooner people can start trying it out and providing feedback, and thus the sooner we can get a refined proposal into the official standard, right? I think the reality is a bit more nuanced than that. As always, it’s a tradeoff: if we ship too soon, we risk shipping a TS that’s not sufficiently polished for people to reasonably implement and use it; then we don’t get much feedback and we effectively waste a TS cycle. In this case, I personally feel like EWG could have erred a bit more on the side of shipping a slightly more polished TS, even if that meant delaying the publication by a meeting (it ended up being delayed by at least a couple of months anyways). That said, I can also sympathize with the viewpoint that Modules has been in the making for a very long time and we need to ship something already.

    Anyways, for this reason, most PDTS comments that were routed to EWG were rejected. (Again, I should emphasize that this means “rejected for the TS”, not “rejected forever”.) The only non-rejection response that EWG gave was to comment US 041, where EWG confirmed that the intent was that argument-dependent lookup could find some non-exported entities in some situations.

    Of course, there were other PDTS comments that weren’t routed to EWG because they weren’t design issues; these were routed to CWG, and CWG spent much of the week looking at them. At one point towards the end of the week, CWG did consult EWG about a design issue that came up. The question concerned whether a translation unit that imports a module sees a class type declared in that module as complete or incomplete in various situations. Some of the possibilities that have to be considered here are whether the module exports the class’s forward declaration, its definition, or both; whether the module interface unit contains a definition of the class (exported or not) at all; and whether the class appears in the signature of an exported entity (such as a function) without itself being exported.

    There are various use cases that need to be considered when deciding the behaviour here. For example, a module may want to export functions that return or take as parameters pointers or references to a type that’s “opaque” to the module’s consumer, i.e. the module’s consumer can’t create an instance of such a class or access its fields; that’s a use case for exporting a type as incomplete. At the same time, the module author may want to avoid splitting her module into separate interface and implementation units at all, and thus wants to define the type in the interface unit while still exporting it as incomplete.
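As a rough sketch of the “opaque type” use case just described, using the module syntax as presented in this report (Widget and the function names are made up, and the details of how such a unit is built are implementation-specific):

    // widgets interface unit (file extension and build flags vary by compiler)
    export module widgets;

    // Defined in the interface unit, but deliberately not exported. Whether a
    // consumer that writes 'import widgets;' sees Widget as complete or
    // incomplete is exactly the question EWG was consulted about.
    struct Widget { int id; };

    export Widget* make_widget() { return new Widget{42}; }
    export int widget_id(Widget* w) { return w->id; }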

    The issue that CWG got held up on was that the rules as currently specified seemed to imply that in a consumer translation unit, an imported type could be complete and incomplete at the same time, depending on how it was named (e.g. directly vs. via decltype(f()) where it was the return type of a function f). Some implementers indicated that this would be a significant challenge to implement, as it would require a more sophisticated implementation model for types (where completeness was a property of “views of types” rather than of types themselves) that no existing language feature currently requires.

    Several alternatives were proposed which avoided these implementation challenges. While EWG was favourable to some of them, there was also opposition to making what some saw as a design change to the Modules TS at this late stage, so it was decided that the TS would go ahead with the current design, possibly annotated as “we know there’s a potential problem here”, and it would be fixed up in v2.

    I find the implications of this choice a bit unfortunate. It sounded like the implementers who described this model as a significant challenge to implement are not planning to implement it (after all, it’s going to be fixed in v2; why redesign your compiler’s type system if ultimately you won’t need to?). Other implementers may or may not implement this model. Either way, we’ll either have implementation divergence, or all implementations will agree on a de facto model that’s different from what the spec says. This is one of those cases where I feel like waiting to polish the spec a bit more, so that it’s not shipped in a known-to-be-broken state, may have been advisable.

    I mentioned in my previous report that I thought the various Modules implementers didn’t talk to each other enough about their respective implementation strategies. I still feel like that’s very much the case. I feel like discussing each other’s implementation approaches in more depth would have unearthed this issue, and allowed it to be dealt with, sooner.

    Modules v2

    Now moving on to the proposals targeting Modules v2 that EWG reviewed:

    • It turned out that two of them (module interface imports, and namespace pervasiveness and modules) were already addressed in the Modules TS by changes made in response to PDTS comments.
    • Placement of module declarations. Currently, if a module unit contains declarations in the global module, the module declaration (which effectively “starts” the module) needs to go after those global declarations. However, this makes it more difficult for both humans and tools to find the module declaration. This paper proposes a syntax that allows having the module declaration be the first declaration in the file, while still having a way to place declarations in the global module. It was observed that this proposal would make it easier to make module a context-sensitive keyword, which has also been requested. EWG encouraged continued exploration in this direction; a sketch of what this could look like follows this list.
    • Module partitions. This iterates on the previous module partitions proposal (found in this paper), with a new syntax: module basename : partition; (unlike in the previous version, partition here is not a keyword, it’s the partition’s name). EWG liked this approach as well. Module partitions also make proclaimed-ownership-declarations unnecessary, so those can be axed.
    • Making module names strings. Currently, module names are identifier sequences separated by dots (e.g. foo.bar.baz), with the dots not necessarily implying a hierarchical relationship; they are mapped onto files in an implementation-defined manner. Making them strings instead would allow mapping onto the filesystem more explicitly. There was no consensus for this change in EWG.
    • Making module a context-sensitive keyword. As always, making a common word like module a hard keyword breaks someone’s existing code. In this case, it shows up as an identifier in many mature APIs like Vulkan, CUDA, DirectX 9, and others, and in some of these cases (like Vulkan) the name is enshrined in a published specification. In some cases, the problem can be solved by making the keyword context-sensitive, and that’s the case for module (especially if the proposal about the placement of module declarations is accepted). EWG agreed to make the keyword context-sensitive. The authors of this paper asked if this could be done for the TS rather than for Modules v2; that request was rejected, but implementers indicated that they would implement it as context-sensitive ASAP, thus avoiding problems in practice.
    • Modules TS does not support intended use case. The bulk of the concerns here were addressed in the Modules TS while addressing PDTS comments, except for a proposed extension to allow using-declarations with an unqualified name. EWG indicated it was open to such an extension for v2.
    • Two papers about support for exporting macros, which remains one of the most controversial questions about Modules. The first was a “rumination” paper, which mostly argued that we need a published TS and deployment experience before we can settle the question; the second argued that, having deployed modules (clang’s pre-TS implementation) in a large codebase (Apple’s), it’s clear that macro support is necessary. A number of options for preserving hygiene, such as only exporting and importing individual macros, were discussed. EWG expressed a lukewarm preference for continuing to explore macro support, particularly with such fine-grained control for hygiene.
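Finally, here is a sketch of what the module-declaration-placement idea referenced above could look like. I am spelling it the way the idea eventually landed in C++20 (a “global module fragment”), so the exact syntax in the paper discussed here may well differ:

    module;                   // introduces the global module fragment
    #include <cstdio>         // declarations pulled in here stay in the global module
    export module logging;    // the module declaration still sits near the top of the file

    export void log(const char* msg) { std::printf("log: %s\n", msg); }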

    Other Working Groups

    The Library Evolution Working Group, as usual, reviewed a decent amount of proposed new library features. While I can’t give a complete listing of the proposals discussed and their outcomes (having been in EWG all week), I’ll mention a few highlights of accepted proposals:

    Targeting C++20:

    std::span (formerly called array_view) is also targeting C++20, but has not quite gotten final approval from LEWG yet.

    Targeting the Library Fundamentals TS v3:

    • mdspan, a multi-dimensional array view. (How can a multi-dimensional array view be approved sooner than a single-dimensional one, you ask? It’s because mdspan is targeting a TS, but span is targeting the standard directly, so span needs to meet a higher bar for approval.)
    • std::expected<T>, a “value or error” variant type very similar to Rust’s Result; a brief sketch follows.
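Here is a brief sketch of the “value or error” idea, spelled with the interface std::expected eventually shipped with in C++23; the proposal reviewed at this meeting was an earlier revision, so details may differ:

    #include <expected>
    #include <string>

    // Returns either the parsed digit or an error message.
    std::expected<int, std::string> parse_digit(char c) {
        if (c >= '0' && c <= '9')
            return c - '0';
        return std::unexpected(std::string("not a digit"));
    }

    int main() {
        auto ok  = parse_digit('7');
        auto err = parse_digit('x');
        return (ok.has_value() && *ok == 7 && !err.has_value()) ? 0 : 1;
    }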

    Targeting the Ranges TS:

    • Range adaptors (“views”) and utilities. Range views are ranges that lazily consume elements from an underlying range, while performing an additional operation like transforming the elements or filtering them. This finally gives C++ a standard facility that’s comparable to C#’s LINQ (sans the SQL syntax), Java 8’s streams, or Rust’s iterators. C++11 versions of the facilities proposed here are available today in the range-v3 library (which was in turn inspired by Boost.Range). A short sketch follows.
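And a short sketch of the range-adaptor style described above, spelled with the C++20 std::views names this work eventually fed into; the Ranges TS and range-v3 spellings differ slightly:

    #include <cstdio>
    #include <ranges>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3, 4, 5, 6};
        // Lazily keep the even elements and square them; nothing is computed
        // until the loop below pulls values out of the view.
        auto evens_squared = v | std::views::filter([](int x) { return x % 2 == 0; })
                               | std::views::transform([](int x) { return x * x; });
        for (int x : evens_squared)
            std::printf("%d ", x);   // prints: 4 16 36
    }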

    There was an evening session to discuss the future of text handling in C++. There was general agreement that it’s desirable to have a text handling library that has notions of code units, code points, and grapheme clusters; for many everyday text processing algorithms (like toupper), operating at the level of grapheme clusters makes the most sense. Regarding error handling, different people have different needs (safety vs. performance), and a policy-based approach to control error handling may be advisable. Some of the challenges include standard library implementations having to ship a database of Unicode character classifications, or hook into the OS’s database. The notion of whether we should have a separate character type to represent UTF-8 encoded text, or just use char for that purpose, remains contentious.

    SG 7 (Compile-Time Programming)

    SG 7, the Compile-Time Programming (previously Reflection) Study Group met for an evening session.

    An updated version of a proposed extension to the static reflection proposal to allow reflecting over functions was briefly reviewed and sent onwards for review in EWG and LEWG at future meetings.

    The rest of the evening was spent discussing an updated version of the metaclasses proposal. To recap, a metaclass defines a compile-time transformation on a class, and can be applied to a class to produce a transformed class (possibly among other things, like helper classes / functions). The discussion focused on one particular dimension of the design space here: how the transformation should be defined. Three options were given:

    1. The metaclass operates on a mutable input class, and makes changes to it to produce the transformed class. This is how it worked in the original proposal.
    2. Like #1, but the metaclass operates on an immutable input class, and builds the transformed class from the ground up as its output.
    3. Like #2, but the metaclass code operates on the “meta level”, where the representation of the input and output types is an ordinary object of type meta::type. This dispenses with most of the special syntax of the first two approaches, making the metaclass look a lot like a normal constexpr function.

    SG 7 liked the third approach the best, noting that it not only dispenses with the need for the $ syntax (which couldn’t have been the final syntax anyways, it would have needed to be something uglier), but makes the proposal more general (opening up more avenues for how and where you can invoke/apply the metaclass), and more in line with the preference the group previously expressed to have reflection facilities operate on a homogeneous value representation of the program’s entities.

    Discussion of other dimensions of the design space, such as what the invocation syntax for metaclasses should look like (i.e. how you apply them to a class) was deferred to future meetings.

    SG 12 (Undefined Behaviour and Vulnerabilities)

    SG 12, the Undefined Behaviour Study Group, recently had its scope expanded to also cover documenting vulnerabilities in the C++ language, and ways to avoid them.

    This latter task is a joint effort between SG 12 and WG 23, a sister committee of the C++ Standards Committee that deals with vulnerabilities in programming languages in general. WG 23 produces a language-agnostic document that catalogues vulnerabilities without being specific to a language, and then language-specific annexes for a number of programming languages. For the last couple of meetings, WG 23 has been collaborating with our SG 12 to produce a C++ annex; the two groups met for that purpose for two days during this meeting. The C++ annex is at a pretty early stage, but over time it has the potential to grow to be a comprehensive document outlining many interesting types of vulnerabilities that C++ programmers can run into, and how to avoid them.

    SG 12 also had a meeting of its own, where it looked at a proposal to make certain low-level code patterns that are widely used but technically have undefined behaviour, have defined behaviour instead. This proposal was reviewed favourably.

    C++ Stability and Velocity

    On Friday evening, there was a session to discuss the stability and velocity of C++.

    One of the focuses of the session was reviewing the committee’s policy on deprecating and removing old features that are known to be broken or that have been superseded by better alternatives. Several language features (e.g. dynamic exception specifications) and library facilities (e.g. std::auto_ptr) have been deprecated and removed in this way.

    One of the library facilities removed in C++17 was the deprecated “binders” (std::bind1st and std::bind2nd). These have been superseded by the C++11 std::bind, but, unlike, say, auto_ptr, they aren’t problematic or dangerous in any way. It was argued that the committee should not deprecate features like that, because doing so causes unnecessary code churn and maintenance cost for codebases whose lifetime and update cycle is very long (on the order of decades); embedded software such as an elevator control system was brought up as a specific example.

    While some sympathized with this viewpoint, the general consensus was that, to be able to evolve quickly enough to satisfy the needs of the majority of its users, C++ does need to be able to shed old “cruft” like this over time. Implementations often do a good job of maintaining conformance modes with older standard versions (and even “escape hatches” to allow old features that have been removed to be used together with new features that have since been added), thus allowing users to continue using removed features for quite a while in practice. (Putting bind1st and bind2nd specifically back into C++20 was polled, but didn’t have consensus.)

    The other focus of the session was the more general tension between the two pressures of stability and velocity that C++ faces as it evolves. It was argued that there is a sense in which the two are at odds with each other, and that the committee needs to take a clearer stance on which is the more important goal. Two examples brought up of backwards-compatibility constraints being a drag on the language were the keywords used for coroutines (co_await, co_yield, etc. – wouldn’t it have been nice to just be able to claim await and yield instead?), and the const-correctness issue with std::function, which still remains to be fixed. A poll on whether stability or velocity is more important for the future direction of C++ revealed a wide array of positions, with somewhat of a preference for velocity.

    Conclusion

    This was a productive meeting, whose highlights included the Modules TS making good progress towards its publication; C++20 continuing to take shape as the draft standard gained the consistent comparisons feature among many other smaller ones; and range views/adaptors being standardized for the Ranges TS.

    The next meeting of the Committee will be in Jacksonville, Florida, the week of March 12th, 2018. It, too, should be an exciting meeting, as design discussion of Concepts resumes (with the future of AFTs possibly being settled), and the Modules TS is hopefully finalized (if that doesn’t already happen between meetings). Stay tuned for my report!

    Other Trip Reports

    Others have written reports about this meeting as well. Some that I’ve come across include Herb Sutter’s and Bryce Lelbach’s. I encourage you to check them out!


    Planet Mozilla: Firefox Private Browsing vs. Chrome Incognito: Which is Faster?

    With the launch of Firefox Quantum, Mozilla decided to team up with Disconnect Inc. to compare page load times between desktop versions of Chrome’s Incognito mode and Firefox Quantum’s Private Browsing.

    Firefox Quantum is the fastest version of Firefox we’ve ever made. It is twice as fast as Firefox 52 and often faster than the latest version of Chrome in head to head page load comparisons. By using key performance benchmarks, we were able to optimize Firefox to eliminate unnecessary delays and give our users a great browsing experience.

    Most browser performance benchmarks focus on the use of a regular browsing mode. But, what about Private Browsing? Given that Private Browsing use is so common, we wanted to see how Firefox’s Private Browsing compared with Chrome’s Incognito when it came to page load time (that time between a click and the page being fully loaded on the screen).

    Spoiler Alert…. Firefox Quantum’s Private Browsing is fast…. really fast.

    Why would Private Browsing performance be any different?

    Websites have the ability to load content and run scripts from a variety of sources. Some of these scripts include trackers. Trackers are used for a variety of reasons including everything from website analytics to tracking website engagement for the purposes of targeted advertising. The use of trackers on websites is very common. Unfortunately trackers can delay the completion of page loads while the browser waits for tracking scripts to respond.

    In 2015, Firefox became the only browser to include Tracking Protection enabled by default in Private Browsing mode. Tracking Protection, as the name implies, blocks resources from loading if the URL being loaded is on a list of known trackers as defined by Disconnect’s Tracking Protection list. This list takes a balanced approach to blocking and does not include sites that obey Do Not Track (as defined in the EFF guidelines). While the feature is meant to help keep users from being tracked when they have explicitly opted to use Private Browsing, the side effect is a much faster browsing experience on websites which attempt to load content from URLs on the tracking list. A previous Firefox study in 2015 showed a 44% reduction in median page load time on top news websites.

    Since Firefox Quantum is the fastest version of Firefox yet, we thought it would be interesting to compare page load times between Firefox Quantum’s Private Browsing (which includes Tracking Protection), and Chrome’s Incognito mode which does not include a tracking protection feature.

    Study Methodology

    The study was conducted by Disconnect, the organization behind the domain lists used to power Tracking Protection. Page load times for the top 200 news websites as ranked by Alexa.com were measured using Firefox Quantum (v57.0b10, the v57 beta) in both default and Private Browsing modes, and the most recent Chrome version (v61.0.3163.100) that was available at the time of testing – also in default and Incognito modes. News sites were tested because these sites tend to have the most trackers.

    Each of the news websites was loaded 10 times. In order for the test to measure comparable timings and to be reproducible by others, load times were measured using the PerformanceTiming API for both Firefox and Chrome for each page load. In particular, the total load time is taken as the difference between PerformanceTiming.loadEventEnd and PerformanceTiming.navigationStart. The tests were controlled through an automated script.

    All rounds of testing were conducted on a new Macbook Pro (13’’ Macbook Pro 2017, 3.1GHz i5, 16GB memory, OSX 10.13). We tested on a fast network connection with the Macbook Pro connected to a Webpass 100Mbps connection over WiFi (802.11ac, 867Mbit/s). For a deep dive into the methodology, check out our Mozilla Hacks post.

    Results

    Across the top 200 news websites tested, the average page load time for Firefox Quantum’s Private Browsing is 3.2 seconds, compared to Chrome’s Incognito mode, which took an average of 7.7 seconds to load a page on the fast Gigabit connection. This means that, on average, Firefox Quantum’s Private Browsing loads pages 2.4x faster than Chrome in Incognito mode.

    Comparing the average load times for Chrome also shows that Incognito mode alone does not bring any speed improvements. It is the Tracking Protection that makes the difference as can be seen from the results for Firefox Quantum.

    Another way to look at this data is by looking at the time that is acceptable to users for pages to be loaded. A third party study by SOASTA Inc. recently found that an average session load time of 6 seconds already leads to a 70% user bounce rate. Therefore, it makes sense to put our measurements in the context of looking at the share of pages per browser that took longer than 6 seconds to load.

    95% of page loads met the 6 second or faster threshold using Firefox Quantum Private Browsing with Tracking Protection enabled, while only 70% of page loads made the cut on Chrome, leaving nearly a third of the news sites unable to load within that time frame.

    What’s next?

    While the speed improvements in Firefox Quantum will vary depending on the website, overall users can expect that Private Browsing in Firefox will be faster than Chrome’s Incognito mode right out of the box.

    In fact, due to these findings, we wanted users to be able to benefit from the increased speed and privacy outside of Private Browsing mode. With Firefox Quantum, users now have the ability to enable Tracking Protection in Firefox at any time.

    Interested? Then try Private Browsing for yourself!

    If you’d like to take it up a notch and enable Tracking Protection every time you use Firefox, then download Firefox Quantum, open Preferences. Choose Privacy & Security and scroll down until you find the Tracking Protection section. Alternatively, simply search for “Tracking Protection” in the Find in Preferences field. Enable Tracking Protection “Always” and you are set to enjoy both improved speed and privacy whenever you use Firefox Quantum.

    When enabling it, please keep in mind that Tracking Protection may block social “like” buttons, commenting tools and some cross-site video content.

    Tracking Protection in the new Firefox browser

    If Tracking Protection is a feature that you’ve commonly used or that you will want to use more regularly, give Firefox Quantum a try to experience how fast it is!


    Disconnect Inc. and Mozilla partnered up in 2015 to power Firefox’s Tracking Protection giving you control over the data that third parties receive from you online. The blocklist is based on a list of known trackers as defined by Disconnect’s Tracking Protection list. As a follow-up, we asked ourselves if Firefox’s Private Browsing mode with Tracking Protection might also offer speed benefits.

    Contributors: Peter Dolanjski & Dominik Strohmeier – Mozilla, Casey Oppenheim & Eason Goodale – Disconnect Inc.

    The post Firefox Private Browsing vs. Chrome Incognito: Which is Faster? appeared first on The Mozilla Blog.

    Planet Mozilla: Enabling the Social 3D Web

    As hinted in our recent announcement of our Mixed Reality program we’d like to share more on our efforts to help accelerate the arrival of the Social 3D Web.

    Mixed Reality is going to provide a wide variety of applications that will change the way we live and work. One which stands to be the most transformative is the ability to naturally communicate with another person through the Internet. Unlike traditional online communication tools like text, voice, and video, the promise of Mixed Reality is that you can be present with other people, much like real life, and engage with them in a more interactive way. You can make eye contact, nod your head, high five, dance, and even play catch with anyone around the world, regardless of where you are!

    Mozilla’s mission is to ensure the Internet remains a global public resource. As part of this mission, fostering this new and transformative form of Internet-based communication is something we care deeply about. We believe that the web is the platform that will provide the best user experience for Mixed Reality-based communication. It will ensure that people can connect and collaborate in this new medium openly, across devices and ecosystems, free from walled gardens, all while staying in control of their identity. Meeting with others around the world in Mixed Reality should be as easy as sharing a link, and creating a virtual space to spend time in should be as easy as building your first website.

    To help realize this vision, we have formed a dedicated team focused on Social Mixed Reality. Today, we’ll be sharing our vision and roadmap for accelerating the arrival of Social MR experiences to the web.

    WebVR has come a long way since its very first steps in 2014. The WebVR and A-Frame communities have produced amazing experiences, but multi-user social experiences are still few and far between. Without a shared set of libraries and APIs, Social MR experiences on the web are often inconsistent, with limited support for identity and avatars, and lack basic tools to allow users to find, share, and experience content together.

    To address this, we are investing in areas that we believe will help fill in some of the missing pieces of Social MR experiences on the web. To start, we will be delivering open source components and services focused on making it possible for A-Frame developers to deliver rich and compelling Social MR experiences with a few lines of code. In addition, we will be building experimental Social MR products of our own in order to showcase the technology and to provide motivating examples.

    Although we plan to share some initial demos in the near future, our work has advanced enough that we wanted to share our github repositories and invite the community to join the conversation, provide feedback, or even begin actively contributing.

    We’ll be focused on the following areas:

    Avatars + Identity

    We want to enable A-Frame (and eventually, other frameworks and game engines) to easily add real-time, natural, human communication to their experiences. This includes efficient networking of voice and avatars, consistent locomotion controls, and mechanisms for self-expression and customization with an identity that you control. We will also be exploring ways for creators to publish and distribute custom avatars and accessories to users of Social MR applications.

    Communication

    Within a Social MR experience, you’ll often want to find your friends or meet new people, while having the controls you need to ensure your comfort and safety. We are aiming to bring traditional text and voice-based communication, social networking, and cross-app interactions like messaging to Mixed Reality A-Frame apps. This includes traditional features like friending, messaging, and blocking, as well as Mixed Reality-specific features like personal space management.

    Entities

    Once you are able to be with other people in Mixed Reality, you’ll want to interact with shared 3D objects. You should be able to throw a frisbee to one another, play cards, or even create a sculpture together. Also, objects in the world should be where you left them when you come back later. We’ll make it easy for Social MR apps to support the live manipulation, networking, and persistence of physics-enabled 3D objects.

    Tools

    There are some things you always want to be able to do in a social setting. In real life, no matter where you are, you can always pull out your phone to snap a photo or send a message. Similarly, we want to provide components and tools that are useful in — and across — all Social MR experiences. Tied together via your identity, these components could allow drawing in 3D, navigating to a web page, sharing a video, or taking a selfie in any app.

    Search + Discovery

    The MR Web is full of disconnected experiences. How do you find new apps to try, or share one with friends, or even join an app your friends are currently using? How do you meet new people? We’ll improve the discovery and sharing of MR content, and give users ways to meet new and interesting people, all through the web within Mixed Reality. For example, matchmaking for apps that need groups of multiple players, a friends list showing who is online and in which app, or a content feed highlighting the best Social MR experiences on the web.

    How to get involved

    If you are interested in helping bring Social MR to the web, first and foremost please join the conversation with us in the #social channel on the WebVR Slack.

    Additionally, feel free to browse our Github repositories and open an issue if you have questions or feedback. We are just getting started, so there may be quite a bit of churn over the coming months, but we’re excited to share what we’ve done so far.

    • mr-social-client - A prototype of a social experience, where we’ll be testing out features and services as we build them.
    • janus-plugin-sfu - A Selective Forwarding Unit plugin for the Janus WebRTC gateway that allows centralized WebRTC message processing, to reduce upstream bandwidth and (eventually) provide authoritative networking services.
    • reticulum - A Phoenix networking and application server that will be the entry point to most of the services we provide.
    • mr-ops - Our infrastructure as code, operations, and deployment scripts.
    • socialmr - A container for tracking Github issues we’re working on.

    We’ll also be contributing back to other existing projects. For example we recently contributed positional audio support and are working on a Janus networking adapter for the awesome networked-aframe project by Hayden Lee.

    F.A.Q

    How long has Mozilla been working on this?

    We’ve been exploring this area for over a year, but the formation of a dedicated team (and the repos published above) kicked off over the last several weeks. Although this is a relatively new body of work, now that we have put together a roadmap and have some initial efforts in progress, we wanted to share it with the community.

    Will you support 3rd party game engines like Unity and Unreal Engine?

    This is something we will definitely be exploring down the road. Our near-term goals are focused on developing services for the web and libraries for A-Frame and three.js Social MR applications, but we hope to make these services and libraries more generally available to other platforms. We would be interested in your feedback on this!

    Will you be proposing new browser APIs?

    As we progress, we may find it makes sense to incorporate some of the functionality we’ve built into the browsers directly. At that point we would propose a solution (similar to how we recently proposed the WebXR API to address extending WebVR to encompass AR concepts) to solicit feedback from developers and other browser vendors.

    Planet Mozilla: The Week is Not Over Yet

    I apologize for not providing a constant information feed about NoScript 10's impending release, but I've got no press office or social media staff working for me: when I say "we" about NoScript, I mean the great community of volunteers helping with user support (and especially the wonderful moderators of the NoScript forum).

    NoScript 10 object placeholder

    By the way, as most but not all users know, there's no "NoScript development team" either: I'm the only developer, and yesterday I also had to temporarily suspend my NoScript 10 final rush, being forced to release two emergency 5.x versions (5.1.6 and 5.1.7) to cope with Firefox 58 compatibility breakages (yes, in case you didn't notice, "Classic" NoScript 5 still works on Firefox 58 Developer Edition with some tricks, even though Firefox 52 ESR is still the best "no surprises" option).

    Anyway, here's my update: the week, at least in Italy, finishes on Sunday night, there's no "disaster recovery" going on, and NoScript 10's delay on Firefox 57's release is still going to be measured in days, not weeks.

    Back to work now, and thank you again for your patience and support :)

    Planet Mozilla: Phabricator and Lando November Update

    With work on Phabricator–BMO integration wrapping up, the development team’s focus has switched to the new automatic-landing service that will work with Phabricator. The new system is called “Lando” and functions somewhat similarly to MozReview–Autoland, with the biggest difference being that it is a standalone web application, not tightly integrated with Phabricator. This gives us much more flexibility and allows us to develop more quickly, since working within extension systems is often painful for anything nontrivial.

    Lando is split between two services: the landing engine, “lando-api”, which transforms Phabricator revisions into a format suitable for the existing autoland service (called the “transplant server”), and the web interface, “lando-ui”, which displays information about the revisions to land and kicks off jobs. We split these services partly for security reasons and partly so that we could later have other interfaces to lando, such as command-line tools.

    When I last posted an update I included an early screenshot of lando-ui. Since then, we have done some user testing of our prototypes to get early feedback. Using a great article, “Test Your Prototypes: How to Gather Feedback and Maximise Learning”, as a guide, we took our prototype to some interested future users. Refraining from explaining anything about the interface and providing only some context on how a user would get to the application, we encouraged them to think out loud, explaining what the data means to them and what actions they imagine the buttons and widgets would perform. After each session, we used the feedback to update our prototypes.

    These sessions proved immensely useful. The feedback on our third prototype was much more positive than on our first prototype. We started out with an interface that made sense to us but was confusing to someone from outside the project, and we ended with one that was clear and intuitive to our users.

    For comparison, this is what we started with:

    And here is where we ended:

    A partial implementation of the third prototype, with a few more small tweaks raised during the last feedback session, is currently on http://lando.devsvcdev.mozaws.net/revisions/D6. There are currently some duplicated elements there just to show the various states; this redundant data will of course be removed as we start filling in the template with real data from Phabricator.

    Phabricator remains in a pre-release phase, though we have some people now using it for mozilla-central reviews. Our team continues to use it daily, as does the NSS team. Our implementation has been very stable, but we are making a few changes to our original design to ensure it stays rock solid. Lando was scheduled for delivery in October, but due to a few different delays, including being one person down for a while and not wanting to launch a new tool during the flurry of the Firefox 57 launch, we’re now looking at a January launch date. We should have a working minimal version ready for Austin, where we have scheduled a training session for Phabricator and a Lando demo.

    Planet Mozilla: November 2017 CA Communication

    Mozilla has sent a CA Communication to inform Certificate Authorities (CAs) who have root certificates included in Mozilla’s program about Mozilla’s expectations regarding version 2.5 of Mozilla’s Root Store Policy, annual CA updates, and actions the CAs need to take. This CA Communication has been emailed to the Primary Point of Contact (POC) and an email alias for each CA in Mozilla’s program, and they have been asked to respond to the following 8 action items:

    1. Review version 2.5 of Mozilla’s Root Store Policy, and update the CA’s CP/CPS documents as needed to become fully compliant.
    2. Confirm understanding that non-technically-constrained intermediate certificates must be disclosed in the Common CA Database (CCADB) within one week of creation, and of new requirements for technical constraints on intermediate certificates issuing S/MIME certificates.
    3. Confirm understanding that annual updates (audits, CP, CPS, test websites) are to be provided via Audit Cases in the CCADB.
    4. Confirm understanding that audit statements that are not in English and do not contain all of the required information will be rejected by Mozilla, and may result in the CA’s root certificate(s) being removed from our program.
    5. Perform a BR Self Assessment and send it to Mozilla. This self assessment must cover the CA Hierarchies (and all of the corresponding CP/CPS documents) that chain up to their CA’s root certificates that are included in Mozilla’s root store and enabled for server authentication (Websites trust bit).
    6. Provide a tested email address for the CA’s Problem Reporting Mechanism.
    7. Follow new developments and effective dates for Certification Authority Authorization (CAA).
    8. Check issuance of certs to .tg domains between October 25 and November 11, 2017.

    The full action items can be read here. Responses to the survey will be automatically and immediately published by the CCADB.

    With this CA Communication, we re-iterate that participation in Mozilla’s CA Certificate Program is at our sole discretion, and we will take whatever steps are necessary to keep our users safe. Nevertheless, we believe that the best approach to safeguard that security is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to improve.

    Mozilla Security Team

    The post November 2017 CA Communication appeared first on Mozilla Security Blog.

    Planet MozillaLegacy Firefox Extensions and "Userspace"

    This week’s release of Firefox Quantum has prompted all kinds of feedback, both positive and negative. That is not surprising to anybody – any software that has a large number of users is going to be a topic for discussion, especially when the release in question is undoubtedly a watershed.

    While I have previously blogged about the transition to WebExtensions, now that we have actually passed through the cutoff for legacy extensions, I have decided to add some new commentary on the subject.

    One analogy that has been used in the discussion of the extension ecosystem is that of kernelspace and userspace. The crux of the analogy is that Gecko is equivalent to an operating system kernel, and thus extensions are the user-mode programs that run atop that kernel. The argument then follows that Mozilla’s deprecation and removal of legacy extension capabilities is akin to “breaking” userspace. [Some people who say this are using the same tone as Linus does whenever he eviscerates Linux developers who break userspace, which is neither productive nor welcomed by anyone, but I digress.] Unfortunately, that analogy simply does not map to the legacy extension model.

    Legacy Extensions as Kernel Modules

    The most significant problem with the userspace analogy is that legacy extensions effectively meld with Gecko and become part of Gecko itself. If we accept the premise that Gecko is like a monolithic OS kernel, then we must also accept that the analogical equivalent of loading arbitrary code into that kernel is the kernel module. Such components are loaded into the kernel and effectively become part of it. Their code runs with full privileges. They break whenever significant changes are made to the kernel itself.

    Sound familiar?

    Legacy extensions were akin to kernel modules. When there is no abstraction, there can be no such thing as userspace. This is precisely the problem that WebExtensions solves!

    Building Out a Legacy API

    Maybe somebody out there is thinking, “well what if you took all the APIs that legacy extensions used, turned that into a ‘userspace,’ and then just left that part alone?”

    Which APIs? Where do we draw the line? Do we check the code coverage for every legacy addon in AMO and use that to determine what to include?

    Remember, there was no abstraction; installed legacy addons are fused to Gecko. If we pledge not to touch anything that legacy addons might touch, then we cannot touch anything at all.

    Where do we go from here? Freeze an old version of Gecko and host an entire copy of it inside web content? Compile it to WebAssembly? [Oh God, what have I done?]

    If that’s not a maintenance burden, I don’t know what is!

    A Kernel Analogy for WebExtensions

    Another problem with the legacy-extensions-as-userspace analogy is that it leaves awkward room for web content, whose API is abstract and well-defined. I do not think that it is appropriate to consider web content to be equivalent to a sandboxed application, as sandboxed applications use the same (albeit restricted) API as normal applications. I would suggest that the presence of WebExtensions gives us a better kernel analogy:

    • Gecko is the kernel;
    • WebExtensions are privileged user applications;
    • Web content runs as unprivileged user applications.

    In Conclusion

    Declaring that legacy extensions are userspace does not make them so. The way that the technology actually worked defies the abstract model that the analogy attempts to impose upon it. On the other hand, we can use the failure of that analogy to explain why WebExtensions are important, and to construct an extension ecosystem that does fit with that analogy.

    Planet MozillaMeasuring the noise in Performance tests

    Often I hear questions about our Talos results: why are they so noisy? What is noise in this context? By noise we are referring to a large standard deviation (stddev) in the results we track; here is an example:

    [Figure: noise_example]

    With the large spread of values posted regularly for this series, it is hard to track improvements or regressions unless they are large or very obvious.

    Knowing the definition of noise, there are a few questions that we often need to answer:

    • Developers working on new tests: what is the level of noise, how can it be reduced, and what is acceptable?
    • Noise changes over time: this causes false alerts that are often not related to code changes and not easily traced to infrastructure changes.
    • New hardware we are considering: is this hardware going to post reliable data for us?

    What I care about is the last point: we are working on replacing the 7-year-old machines we run performance tests on with new hardware!  Typically, when running tests on a new configuration, we want to make sure it reliably produces results.  For our system, we look for all green:

    [Figure: all_green]

    This is really promising: if all of our tests were this “green”, developers would be happy.  The catch is that these are performance tests; are the results we collect and post to graphs useful?  Another way to ask this is: are the results noisy?

    To answer this is hard: first we have to know how noisy things are prior to the test.  As mentioned 2 weeks ago, Talos collects 624 metrics that we track for every push.  That would be a lot of graphs and calculating.  One method is to push to try with a single build and collect many data points for each test.  You can see that in the image showing the all-green results.

    One method to see the noise is to look at compare view.  This is the view that we use when comparing one push to another push when we have multiple data points.  It typically highlights the changes that are easy to detect with the t-test we use for alert generation.  If we look at the above referenced push and compare it to itself, we have:

    [Figure: self_compare]

    Here you can see that for a11y, linux64 has a stddev of +- 5.27.  Some metrics are higher and others are lower.  What if we added up all of the stddev numbers that exist: what would we have?  In fact, if we treat this as a sum of squares to calculate the variance, we can generate a single number, in this case 64.48!  That is the noise for that specific run.
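
    To make that concrete, here is a minimal sketch (in JavaScript, and not the actual Perfherder code) of collapsing per-test standard deviations into a single noise number by treating them as a sum of squares. The stddev values below are hypothetical, and whether the real Noise Metric reports the sum of squares itself or its square root is a detail I am glossing over:

    function noiseMetric(stddevs) {
      // Sum the squared stddevs to get a variance-like total...
      var sumOfSquares = stddevs.reduce(function (total, s) {
        return total + s * s;
      }, 0);
      // ...and take the square root to collapse it into one noise number.
      return Math.sqrt(sumOfSquares);
    }

    // Hypothetical per-test stddev values, standing in for a compare view's numbers.
    var perTestStddev = [5.27, 3.1, 12.4, 8.9, 2.2];
    console.log('noise: ' + noiseMetric(perTestStddev).toFixed(2));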

    Now if we are bringing up a new hardware platform, we can collect a series of data points on the old hardware and repeat this on the new hardware; then we can compare the data between the two:

    [Figure: hardware_compare]

    What is interesting here is that we can see, side by side, the differences in noise as well as the improvements and regressions.  What about the variance?  I wanted to track that and did, but realized I needed to track the variance by platform, as each platform could be different.  In bug 1416347, I set out to add a Noise Metric to the compare view.  This is on Treeherder staging and will probably be in production next week.  Here is what you will see:

    [Figure: noise_view]

    Here we see that the old hardware has a noise of 30.83 and the new hardware a noise of 64.48.  While there are still a lot of small details to iron out as we work on getting new hardware for linux64, windows7, and windows10, we now have a simpler method for measuring the stability of our data.

    Planet MozillaService Workers and a Promise to Catch

    I love Service Workers. I've written previously about my work to use them in Thimble. They've allowed us to support all of the web's dynamic loading mechanisms for images, scripts, etc., which wasn't possible when we only used Blob URLs.

    But as much as I love them, they add a whole new layer of pain when you're trying to debug problems. Every web developer has dealt with the frustration of a long debug session, baffled when code changes won't get picked up in the browser, only to realize they are running on cached resources. Now add another ultra powerful cache layer, and an even more nuanced debug environment, and you have the same great taste with twice the calories.

    We've been getting reports from some users lately that Thimble has been doing odd things in some cases. One of the things we do with a Service Worker is to simulate a web server, and load web resources out of the browser filesystem and/or editor. Instead of seeing their pages in the preview window, they instead get an odd 404 that looks like it comes from S3.

    Naturally, none of us working on the code can recreate this problem. However, today, a user was also kind enough to include a screenshot that included their browser console:

    And here, finally, is the answer! Our Service Worker has failed to register, which means requests for resources are hitting the network directly vs. the Service Worker and Cache Storage. I've already got a patch up that should fix this, but while I wait, I wanted to say something to you about how you can avoid this mess.

    First, let's start with the canonical Service Worker registration code one finds on the web:

    if ('serviceWorker' in navigator) {  
      // Register a service worker hosted at the root of the
      // site using the default scope.
      navigator.serviceWorker.register('/sw.js').then(function(registration) {
        console.log('Service worker registration succeeded:', registration);
      }).catch(function(error) {
        console.log('Service worker registration failed:', error);
      });
    } else {
      console.log('Service workers are not supported.');
    }
    

    Here, after checking if serviceWorker is defined in the current browser, we attempt (I use the word intentionally) to register the script at /sw.js as a Service Worker. This returns a Promise, which we then() do something with after it completes. Also, there's an obligatory catch().

    I want to say something about that catch(). Of course we know, I know, that you need to deal with errors. However, errors come in all different shapes and sizes, and when you're only anticipating one kind, you can get surprised by rarer, and more deadly varieties.

    You might, for example, find that you have a syntax error in /sw.js, which causes registration to fail. And if you do, it's the kind of error you're going to discover quickly, because it will break instantly on your dev machine. There's also the issue that certain browsers don't (yet) support Service Workers. However, our initial if ('serviceWorker' in navigator) {...} check should deal with that.

    So having dealt with incompatible browsers, and incompatible code, it's tempting to conclude that you're basically done here, and leave a console.log() in your catch(), like so many abandoned lighthouses, still there for tourists to take pictures, but never used by mariners.

    Until you crash. Or more correctly, until a user crashes, and your app won't work. In which case you begin your investigation: "Browser? OS? Versions?" You replicate their environment, and can't make it happen locally. What else could be wrong?

    I took my grief to Ben Kelly, who is one of the people behind Firefox's Service Worker implementation. He in turn pointed me at Bug 1336364, which finally shed light on my problem.

    We run our app on two origins: one which manages the user credentials, and talks to our servers; the other for the editor, which allows for arbitrary user-written JS to be executed. We don't want the latter accessing the cookies, session, etc. of the former. Our Service Worker is thus being loaded in an iframe on our second domain.

    Or, it's being loaded sometimes. The user might have set their browser's privacy settings to block 3rd party cookies, which is really a proxy for "block all cookie-like storage from 3rd parties," and that includes Service Workers. When this happens, our app continues to load, minus the Service Worker (failing to register with a DOM security exception), which various parts of the app expect to be there.

    In our case, the solution is to add an automatic failover for the case that Service Workers are supported but not available. Doing so means having more than a console.log() in our catch(e) block, which is what I'd suggest you do when you try to register() your Service Workers.
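
    As a sketch of what that failover might look like (enableFallbackMode() is a hypothetical stand-in for whatever your app does when it must run without a Service Worker), the canonical registration code from above becomes:

    // Hypothetical fallback path: e.g. talk to the real server directly instead
    // of relying on the Service Worker's simulated web server and Cache Storage.
    function enableFallbackMode() {
      console.log('Running without a Service Worker.');
    }

    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js').then(function(registration) {
        console.log('Service worker registration succeeded:', registration);
      }).catch(function(error) {
        // This fires for syntax errors in /sw.js, but also when registration is
        // blocked, e.g. with a security error in a 3rd party iframe when the
        // user has chosen to block 3rd party cookies/storage.
        console.log('Service worker registration failed:', error);
        enableFallbackMode();
      });
    } else {
      enableFallbackMode();
    }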

    This is one of those things that makes lots of sense when you know about it, but until you've been burned by it, you might not take it seriously. It's an easy one to get surprised by, since different browsers behave differently here, and testing for it means not just testing with different browsers, but also different settings per browser.

    Having been burned by it, I wanted to at least write something that might help you in your hour of need. If you're going to use Service Workers, you have to Promise to do more with what you Catch than just Log it.

    Planet Mozilla11/16 Mozilla Curriculum Wksp. Fall 2017

    11/16 Mozilla Curriculum Wksp. Fall 2017 Join us for a special, series finale “ask me anything” (AMA) episode of the Mozilla Curriculum Workshop at 1:00 PM ET, on Tuesday, November 16th,...

    Planet MozillaData Science is Hard: What’s in a Dashboard

    [Figure: dashboard mockup. Caption: “The data is fake, don’t get excited.”]

    Firefox Quantum is here! Please do give it a go. We have been working really hard on it for quite some time, now. We’re very proud of what we’ve achieved.

    To show Mozillians how the release is progressing, and show off a little about what cool things we can learn from the data Telemetry collects, we’ve built a few internal dashboards. The Data Team dashboard shows new user count, uptake, usage, install success, pages visited, and session hours (as seen above, with faked data). If you visit one of our Mozilla Offices, you may see it on the big monitors in the common areas.

    The dashboard doesn’t look like much: six plots and a little writing. What’s the big deal?

    Well, doing things right involved quite a lot more than just one person whipping something together overnight:

    1. Meetings for this dashboard started on Hallowe’en, two weeks before launch. Each meeting had between eight and fourteen attendees and ran for its full half-hour allotment each time.

    2. In addition there were several one-off meetings: with Comms (internal and external) to make sure we weren’t putting our foot in our mouth, with Data ops to make sure we weren’t depending on datasets that would go down at the wrong moment, with other teams with other dashboards to make sure we weren’t stealing anyone’s thunder, and with SVPs and C-levels to make sure we had a final sign-off.

    3. Outside of meetings we spent hours and hours on dashboard design and development, query construction and review, discussion after discussion after discussion…

    4. To say nothing of all the bikeshedding.

    It’s hard to do things right. It’s hard to do even the simplest things, sometimes. But that’s the job. And Mozilla seems to be pretty good at it.

    One last plug: if you want to nudge these graphs a little higher, download and install and use and enjoy the new Firefox Quantum. And maybe encourage others to do the same?

    :chutten


    Planet MozillaReps Weekly Meeting Nov. 16, 2017

    Reps Weekly Meeting Nov. 16, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

    Planet MozillaA super-stable WebVR user experience thanks to Firefox Quantum

    On Tuesday, Mozilla released Firefox Quantum, the 57th release of the Firefox browser since we started counting. This landmark release replaces some core browser components with newer, faster and more modern implementations. In addition, the Quantum release incorporates major optimizations from Quantum Flow, a holistic effort to modernize and improve the foundations of the Firefox web engine by identifying and removing the main sources of jank without rewriting everything from scratch, an effort my colleague Lin Clark describes as “a browser performance strike force.”

    Quantum Flow has had an important and noticeable effect on WebVR stability as you can see in the following video:

    This video shows the execution profiles of the Snowglobe demo while running Speedometer in the background to simulate the effect of multiple tabs open in a regular scenario.

    [Figure: The charts at the top of the video show several gaps where the lines go flat. The width of the gap represents a period of time during which the browser could not meet the deadline imposed by the VR system. If this situation continues for long enough, the headset will try to bring the user to a safe space to prevent dizziness and other annoying conditions.]

    The intermittent flashes in Firefox 55 correspond to the wide gaps in which the VR system tries to bring the user to the safe space. Notice that in Quantum the effect does not happen, and the experience is smoother and more comfortable.

    The difference is due to the fact that Quantum Flow has removed the bottlenecks interfering with the browser’s ability to send fresh images to the VR system on time.

    To understand how comprehensive optimizations affect virtual reality presentation, it is necessary to know the strict requirements of VR systems and to understand the communication infrastructure of Firefox.

    The VR frame

    In the browser, regular 3D content is displayed at 60 Hz. This implies that the web content has around 16.6 ms to simulate the world, render the scene, and send the new image to the browser compositor thread. If the web page meets the 16.6 ms deadline, the frame rate will be a constant 60 fps (frames per second) and the animations will run smoothly and jank-free.

    [Figure: The picture above shows three frames, with the current frame highlighted in green. Each vertical line marks the end of the frame, the moment at which the rendered scene is shown to the user.]

    VR content is displayed at 90 Hz so the rendering time for VR is reduced to 11.1 ms. In WebVR, the web content sends the rendered frames to a dedicated WebVR thread.
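
    In code, that loop is driven by the headset rather than by the page's usual requestAnimationFrame. Here is a rough WebVR 1.1 sketch; the presentation setup (requestPresent) and all WebGL details are omitted, and drawScene() is a placeholder stub rather than anything from a real engine:

    function drawScene(frameData) {
      // Render both eyes here using frameData's view/projection matrices,
      // staying within the ~11.1 ms budget.
    }

    var frameData = new VRFrameData();

    navigator.getVRDisplays().then(function (displays) {
      var vrDisplay = displays[0];
      if (!vrDisplay) {
        return;
      }

      function onVRFrame() {
        // Schedule the next frame at the headset's 90 Hz refresh rate,
        // not the page's usual 60 Hz.
        vrDisplay.requestAnimationFrame(onVRFrame);

        // Pose for this frame, based on the user's current head position.
        vrDisplay.getFrameData(frameData);

        drawScene(frameData);

        // Hand the rendered frame to the dedicated WebVR thread.
        vrDisplay.submitFrame();
      }

      vrDisplay.requestAnimationFrame(onVRFrame);
    });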

    More importantly, in VR we should take into account a fact we could previously ignore: the delay between when a web page starts rendering the VR scene and when the new image is displayed in the headset has a direct impact on the user’s perception.

    This happens because the scene starts to render after placing the camera at the beginning of the frame, based on the user’s head position, but the scene is displayed a little bit later, by which time the user has had time to change their orientation. This delay is known as motion-to-photon latency and can cause dizziness and nausea.

    [Figure: The effect of motion-to-photon latency causes reality to fall behind the user’s view.]

    Fortunately, VR systems can partially fix this effect without increasing latency, by warping the rendered scene before displaying it in the headset, in a process known as reprojection.

    However, the lower the latency, the more accurate the simulation. So, to reduce the latency, the browser does not start rendering immediately after showing the last frame.

    [Figure: Following the same approach as the traditional frame, motion-to-photon latency would last for the complete frame.]

    Instead, it asks the VR system for a waiting period to delay rendering the scene.

    [Figure: Waiting at the beginning of the frame, without changing the rendering time, shortens the motion-to-photon latency.]

    As discussed below, web content and the WebVR thread run in different processes but they need to coordinate to render the scene. Before Quantum Flow, communication between processes came with the potential risk of becoming a bottleneck. In the VR frame, there are two critical communication points: one after waiting, when WebVR yields the execution to the web page for rendering the scene; and another, after rendering, when the web content sends the frame to the WebVR thread.

    An unexpected delay in either would cause the motion-to-photon latency to peak and the headset would reject the frame.

    Inter-process communication messages in Firefox

    Firefox organizes its execution into multiple processes: the parent process, which contains the browser UI and can access the system resources; the GPU process, specifically intended to communicate with the graphics card and containing the Firefox compositor and WebVR threads; and several content processes, which run the web content but cannot communicate with other parts of the system. This separation of processes enables future improvements in browser security and prevents a buggy web page from crashing the entire browser.

    [Figure: Parent, GPU and content processes communicate with each other using inter-process communication (IPC) messages.]

    Quite often, processes need to communicate with each other. To do so, they use Inter-Process Communication (IPC) messages. An IPC message consists of three parts: 1) sending the request, 2) performing a task in the recipient, and 3) returning a result to the initiator. These messages can be synchronous or asynchronous.

    We speak of synchronous IPC when any other attempt at messaging via IPC must wait until the current communication finishes. This includes waiting to send the message, completing the task, and returning the result.

    [Figure: Synchronous IPC implies long waiting times and slow queues.]

    The problem with synchronous IPC is that an active tab attempting to communicate with the parent process may be blocked: it is forced to wait until the result of a different, ongoing communication reaches its initiator, even when that initiator is a background tab (and therefore not urgent) or the ongoing task has nothing to do with the attempted communication.

    In contrast, we speak of asynchronous IPC when sending the request, performing the task, and returning the result are independent operations. New communications don’t have to wait to be sent. Execution and result delivery can happen out-of-order and the tasks can be reprioritized dynamically.

    [Figure: Although task duration and time spent on the trip remain the same, this animation is 34% shorter than the previous one. Asynchronous IPC does not avoid queues but it resolves them faster.]
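
    As a purely conceptual illustration (this is not Firefox's IPC code, and the millisecond values are invented), the arithmetic behind that difference looks roughly like this:

    // Conceptual model of how long a queue of IPC messages takes to drain.
    var TRIP_MS = 2;    // hypothetical one-way transit time for a message
    var TASK_MS = 5;    // hypothetical time the recipient spends on each task
    var MESSAGES = 4;

    // Synchronous IPC: message N+1 waits until message N's reply has returned,
    // so every message pays a full round trip plus its task time.
    var syncTotal = MESSAGES * (TRIP_MS + TASK_MS + TRIP_MS);

    // Asynchronous IPC: sends, task execution and replies overlap, so the queue
    // costs roughly one round trip plus the sum of the task times.
    var asyncTotal = TRIP_MS + MESSAGES * TASK_MS + TRIP_MS;

    console.log('synchronous queue drains in ' + syncTotal + ' ms');   // 36 ms
    console.log('asynchronous queue drains in ' + asyncTotal + ' ms'); // 24 ms, roughly a third shorter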

    One of the goals of Quantum Flow, over the course of Firefox 55, 56 and 57 releases, has been to identify synchronous IPCs and transform them into asynchronous IPCs. Ehsan Akhgari, in his series “Quantum Flow Engineering Newsletter”, perfectly reviews the progress of the Quantum Flow initiative this year.

    Now that we’ve explored the performance risks that come with synchronous IPC, let’s revisit the two critical communications inside the VR frame: the one for yielding execution to the web page to start rendering, and the one that sends the frame to the headset; both requiring an IPC from GPU-to-content and content-to-GPU respectively.

    [Figure: Risk points during the VR frame. Each happens once per frame, for a total of 180 times per second.]

    Due to the VR frame rate, these critical moments of risk happen 180 times per second. During the early stages of Quantum Flow in Firefox 55, the high frame rates, in addition to the background activity of other open tabs, increased the probability of being delayed by an ongoing synchronous IPC request. Wait times were not uncommon. In this situation, the browser was constantly missing the deadlines imposed by the VR gear.

    After advancing Quantum Flow efforts in Firefox 56 and 57, the ongoing removal of synchronous IPC reduces the chance of being interrupted by an unexpected communication, and now the browser does not miss a deadline.


    Although Quantum Flow was not aimed specifically at improving WebVR, by removing communication bottlenecks new components can contribute effectively to the global performance gain. Without Quantum Flow, it does not matter how fast, new or modern the browser is, if new features and capabilities are blocked waiting for unrelated operations to finish.

    And thus, Firefox Quantum is not only the fastest version of Firefox for 2D content rendering, it is also the browser that will bring you the most stable and comfortable viewing experience in WebVR so far. And the best is yet to come.

    Planet MozillaFirefox 58 Beta 3 Testday, November 17th

    Hello Mozillians!

    We are happy to let you know that on Friday, November 17th, we are organizing the Firefox 58 Beta 3 Testday. We’ll be focusing our testing on Web Compatibility and Tabbed Browser.

    Check out the detailed instructions via this etherpad.

    No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

    Join us and help us make Firefox better!

    See you on Friday!

    Planet MozillaIntroducing The Developer Workflow Team

    I’ve neglected to write about the *other* half of my team, not for any lack of desire to do so, but simply because the code sheriffing situation was taking up so much of my time. Now that the SoftVision contractors have gained the commit access required to be fully functional sheriffs, I feel that I can shift focus a bit.

    Meet the team

    The other half of my team consists of 4 Firefox build system peers:

    [Image: Justice League Unlimited]

    When the group was first established, we talked a lot about what we wanted to work on, what we needed to work on, and what we should be working on. Those discussions revealed the following common themes:

    • We have a focus on developers. Everything we work on is to help developers be more productive, and go more quickly.
    • We accomplish this through tooling to support better/faster workflows.
    • Some of these improvements can also assist in automation, but that isn’t our primary focus, except where those improvements are also wins for developers, e.g. faster time to first feedback on commit.
    • We act as consultants/liaisons to many other groups that also touch the build system, e.g. Servo, WebRTC, NSS etc.

    Based on that list of themes, we’ve adopted the moniker of “Developer Workflow.” We are all build peers, yes, but to pigeon-hole ourselves as the build system group seemed short-sighted. Our unique position at the intersection of the build system, VCS, and other services meant that our scope needed to match what people expect of us anyway.

    While new to me, Developer Workflow is a logical continuation of the build system tiger team organized by David Burns in 2016. This is the same effort that yielded sea-change improvements such as artifact builds and sccache.

    In many ways, I feel extremely fortunate to be following on the heels of that work. During the previous year, all the members of my team formed the working relationships they would need to be more successful going forward. All the hard work for me as their manager was already done! ;)

    What are we doing

    We had our first, dedicated work week as a team last week in Mountain View. Aside from getting to know each other a little better, during the week we hashed out exactly what our team will be focused on next year, and made substantial progress towards bootstrapping those efforts.

    Next year, we’ll be tackling the following projects:

    • Finish the migration from Makefiles to moz.build files: A lot of important business logic resides in Makefiles for no good reason. As someone who has cargo-culted large portions of l10n Makefile logic during my tenure at Mozilla, I may be part of the problem.
    • Move build logic out of *.mk files: Greg recently announced his intent to remove client.mk, a foundational piece of code in the Mozilla recursive make build system that has existed since 1998. The other .mk files won’t be far behind. Porting true build logic to moz.build files and removing non-build tasks to task-based scripts will make the build system infinitely more hackable, and will allow us to pursue performance gains in many different areas. For example, decoupled tests like package tests could be run asynchronously, getting results to developers more quickly.
    • Stand-up a tup build in automation: this is our big effort for the near-term. A tup build is not necessarily an end goal in and of itself (we may very well end up on bazel or something else eventually), but since Mike Shal created tup, we control enough of the stack to make quick progress. It’s a means of validating the Makefile migration.
    • Move our Linux builder in automation from CentOS 6 to Debian: This would move us closer to deterministic builds, and has alignment with the TOR project, but requires we host our own package servers, CDN, etc. This would also make it easier for developers to reproduce automation builds locally. glandium has a proof-of-concept. We hope to dig into any binary compatibility issues next year.
    • Weaning off mozharness for builds: mozharness was a good first step at putting automated build configuration information in the tree for developers. Now that functionality could be better encapsulated elsewhere, and largely hidden by mach. The ultimate goal would be to use the same workflow for developer builds and automation.

    What are we *not* doing

    It’s important to be explicit about things we won’t be tackling too, especially when it’s been unclear historically or where there might be different expectations.

    The biggest one to call out here is github integration. Many teams at Mozilla are using github for developing standalone projects or even parts of Firefox. While we’ve had some historical involvement here and will continue to consult as necessary, other teams are better positioned to drive this work.

    We are also not currently exploring moving Windows builds to WSL. This is something we experimented with in Q3 this year, but build performance is still so slow that it doesn’t warrant further action right now. We continue to follow the development of WSL and if Microsoft is able to fix filesystem performance, we may pick this back up.
