Planet Mozilla: Guest post: India uses Firefox Nightly: Kick off on May 1, 2017

This is a guest post by Biraj Karmakar, who has been active in promoting Mozilla and Mozilla software in India for over 7 years. Biraj is taking the initiative of organizing a series of workshops throughout the country to convince technical people (Mozillians or not) who may be interested in getting involved in Mozilla to use Firefox Nightly.

In my last blog post, I announced that Mozilla India is going to organize a special campaign on Firefox Nightly usage in India. RSVP here.

Everything is set. Gearing up for the campaign.


By the way, we recently organized a community call on this campaign. You can watch it to learn more about how to organize events and about the technical details.

  • How to get involved:
    • Online Activities
      • Telling and inviting friends!
      • Creating the event on social media
      • Writing about it on Facebook & Twitter
      • Posting updates on social media while the event is running
      • Running an online event, such as a webinar, for this campaign (please check the event flow)
      • Blogging about Firefox Nightly features, technical details and events
    • Offline Activities
      • Introduction to Mozilla
      • Introduction to Firefox Nightly and release cycle details
      • Why we need Firefox Nightly users
      • Showing various stats regarding Firefox
      • Installing Nightly on participants’ PCs
      • WebCompat on Firefox Nightly
      • How participants can contribute to Nightly (QA and promotion)
      • Swag distribution
  • Duration of campaign: 2 months
  • Total number of offline events: 15
  • Hashtag: #INUsesFxNightly
  • Duration of each event: 3-5 hours

Swag is ready! 

 

[Image: Swag for offline events]

To request swag, please read here.

We also have a budget for these events, which you can request. Learn more here.

If you want to know more about the activity format, event flow, resources and other things, please read the wiki.

If you have a specific query, please send an email to Biraj Karmakar [brnet00 AT gmail DOT com]. Don’t forget to join our Telegram group for real-time chat.

Planet Mozilla: FSFE Fellowship Representative, OSCAL'17 and other upcoming events

The Free Software Foundation of Europe has just completed the process of electing a new fellowship representative to the General Assembly (GA) and I was surprised to find that out of seven very deserving candidates, members of the fellowship have selected me to represent them on the GA.

I'd like to thank all those who voted, the other candidates and Erik Albers for his efforts to administer this annual process.

Please consider becoming an FSFE fellow or donor

The FSFE runs on the support of both volunteers and financial donors, including organizations and individual members of the fellowship program. The fellowship program is not about money alone; it is an opportunity to become more aware of and involved in the debate about technology’s impact on society, for better or worse. Developers, users and any other supporters of the organization’s mission are welcome to join; here is the form. You don’t need to be a fellow or pay any money to be an active part of the free software community, and FSFE events generally don’t exclude non-members. Nonetheless, becoming a fellow gives you a stronger voice in processes such as this annual election.

Attending OSCAL'17, Tirana

During the election period, I promised to keep on doing the things I already do: volunteering, public speaking, mentoring, blogging and developing innovative new code. During May I hope to attend several events, including OSCAL'17 in Tirana, Albania on 13-14 May. I'll be running a workshop there on the Debian Hams blend and Software Defined Radio. Please come along and encourage other people you know in the region to consider attending.

What is your view on the Fellowship and FSFE structure?

Several candidates made comments about the Fellowship program and the way individual members and volunteers are involved in FSFE governance. This is not a new topic. Debate about this topic is very welcome and I would be particularly interested to hear any concerns or ideas for improvement that people may contribute. One of the best places to share these ideas would be through the FSFE's discussion list.

In any case, the fellowship representative can not single-handedly overhaul the organization. I hope to be a constructive part of the team and that whenever my term comes to an end, the organization and the free software community in general will be stronger and happier in some way.

Planet Mozilla: Introducing FilterBubbler

Brainfood and Mozilla’s Open Innovation Team Kick Off Text Classification Open Source Experiment

Mozilla’s Open Innovation team is beginning a new effort to understand more about motivations and rewards for open source collaboration. Our goal is to expand the number of people for whom open source collaboration is a rewarding activity.

An interesting question is: While the server side benefits from opportunities to work collaboratively, can we explore them further on the client side, beyond browser features and their add-on ecosystems? User interest in “filter bubbles” gives us an opportunity to find out. The new FilterBubbler project provides a platform that helps users experiment with and explore what kind of text they’re seeing on the web. FilterBubbler lets you collaboratively “tag” pages with descriptive labels and then analyze any page you visit to see how similar it is to pages you have already classified.


You could classify content by age or reading-level rating, category like “current events” or “fishing”, or even how much you trust the source like “trustworthy” or “urban legend”. The system doesn’t have any bias and it doesn’t limit the number of tags you apply. Once you build up a set of classifications you can visit any page and the system will show you which classification has the closest statistical match. Just as a web site maintainer develops a general view of the technologies and communities of practice required to make a web site, we will use filter bubble building and sharing to help build client-side understanding.

The project aims to reach users who are motivated to understand and maybe change their information environment: users who want to transform their own “bubble” space and participate in collaborative work, but who do not have add-on development skills.

Can the browser help users develop better understanding and control of their media environments? Can we emulate the path to contribution that server-side web development has? Please visit the project and help us find out. FilterBubbler can serve as a jumping off point for all kinds of specific applications that can be built on top of its techniques. Ratings systems, content suggestion, fact checking and many other areas of interest can all use the classifiers and corpora that the FilterBubbler users will be able to generate. We’ll measure our success by looking at user participation in filter bubble data sharing, and by how our work gets adapted and built on by other software projects.

Please find more information on the project, ways to engage and contact points on http://www.filterbubbler.org.



Planet Mozilla: Talking about building the next interfaces with Machine Learning and AI at hackingui

Yesterday I was proud to be an invited speaker at the HackingUI masterclass, where I presented on what Machine Learning and Artificial Intelligence mean for us as developers and designers. I will be giving a similar talk tomorrow at Code Europe in Poland.

Speaking at the masterclass

The Masterclass uses Crowdcast to allow for discussions between the moderators and the presenter, for the presenter to show his slides/demos, and for people to chat and submit questions. You can see the whole session (one hour and 45 minutes) by signing up to Hacking UI.

Master Class #4: The Soul in The Machine – Developing for Humans

It was exciting to give this presentation, and the audience’s questions were interesting, which meant that in addition to the topics covered in the talk I also managed to discuss the ethics of learning machines, how having more diverse teams can battle the issue of job loss because of automation, and how AI can help combat bullying and antisocial behaviour online.

The materials I covered in the talk:

All in all there is a lot for us to be excited about, and I hope I managed to make some people understand that the machine revolution is already happening and that our job is to make it benefit humankind, not work against it.

Planet Mozilla: Headless Firefox

Over in Headless SlimerJS with Firefox, fellow Mozillian Brendan Dahl writes about the work he’s been doing to support running Firefox headlessly. A headless mode for Firefox makes it easier to test websites with the browser, especially in continuous integration, to ensure Firefox remains compatible with the Web. It also enables a variety of other interesting use cases.

Brendan started with Linux, the most popular platform for CI services like Travis, and focused first on SlimerJS, a popular tool for testing websites with Firefox (and scripting the browser more generally) that uses Firefox to run a different XUL application (rather than running Firefox itself). Now he’s working on support for full headless Firefox as well as Windows and Mac.

Check out his blog post for more details and to tell him how you’d use the feature!

Planet Mozilla: Harassment of Open Source Maintainers or Contributors

On Friday I had the unfortunate pleasure of taking the brunt of an unhappy Selenium user. Their issue? My team said that a release of GeckoDriver would happen when we are confident in the code. They said that was not professional. They started by telling me that they contribute to Mozilla and this is not acceptable for them as a customer.

Below is a breakdown of why I took exception to this:

  • My team was being extremely professional. Software, by its very nature, has bugs, but we try to minimize the number of bugs we ship. To do this we don’t set release dates; we set certain objectives. My team is relatively small compared to the user group it needs to service, so we need to triage bugs and fix code. We have groups both inside and outside of Mozilla. Saying we can only release when it is ready is the best we can do.
  • Please don’t ever tell open source maintainers you are their customer unless you are paying for support and you have a contract with SLAs. So that there is no issue with the definition of “customer”, I suggest you look at Merriam-Webster’s definition. It says “one that purchases a commodity or service”. Mozilla, just like Google, Microsoft, and Apple, is working on WebDriver to help web developers. There is no monetary benefit from doing this. The same goes for the Selenium project. The work and products are given freely.
  • And finally, and this goes for any F/OSS project even if it comes from large corporations like Google or Facebook, never make demands. Ask how you can help instead. If you disagree with the direction of the project, fork it. Make your own project. They have given everything away for free. Take it, make it better for whatever better means for you.

Now, even after explaining this, the harassment continued. It has led to that user being blocked on social media by me and my team, as well as being blocked on GitHub. I really dislike blocking people, because I know that when they approach us they are frustrated, but taking that frustration out on my team doesn’t help anyone. If you continue after being warned, you will be blocked. This is not a threat, this is a promise.

Next time you feel frustrated with open source ask the maintainers if you can donate time/money/resources to make their lives easier. Don't be the moron that people will instantly block.

Planet Mozilla: Release Notes for Nightly

Every day, multiple changesets are merged or backed out on mozilla-central, and every day we compile a new version of Firefox Nightly based on these changes so as to provide builds that our core community can use, test and report feedback on.

This is why we historically don’t issue release notes for Nightly: it is hard to maintain release notes for software that gets a new release every day. However, knowing what happens, what’s new, and what should be tested has always been a recurring request from our community over the years.

So as to help with this legitimate request, we set up a Twitter account that regularly informs about significant new features, and we also have the great “These weeks in Firefox” posts by Mike Conley every two weeks. These new communication channels certainly did improve things for our community over the last year.

We are now going a step further and we just started maintaining real release notes for Nightly at this address: Release Notes for Firefox Nightly

But what does it mean to have release notes for a product released every day?

It means that in the context of Project Dawn, we have started monitoring all the commits landing on mozilla-central so as to make sure changes that would merit a mention in Firefox final release notes are properly documented. This is something that we used to do with the Aurora channel; we are just doing it for Nightly instead, and we do it several times a week.

Having release notes for Nightly of course means that those are updated continuously and that we only document features that have not yet been merged to Beta. We also do not intend to document unstable features or features currently hidden behind a preference flag in about:config.

The focus today is Firefox Desktop, but we will also produce release notes for Firefox Nightly for Android at a later stage, once we have polished the process for Desktop.

Planet Mozilla: On the utility of filing bugs

During my five years working at Mozilla, I’ve been known to ask people to file bugs when they encountered an issue. Most of the time, the answer was that they didn’t have time to do so and that it was useless. I think it is actually very valuable. You get to learn from the experience: how to file actionable bugs, deeper knowledge of a specification, maybe a workaround for the problem.

A recent example

Three weeks ago, at work, we launched a new design for the website header. We got some reports that the logo was missing in Firefox on some pages. After investigation, we discovered that Firefox (and also Edge) had a different behaviour with SVG’s <use xlink:href> on pages with a <base> element. We fixed it right away by using an absolute URL for our logo. But we also filed bugs against Gecko and Edge. As part of filing those bugs, I found the change in the SVG specification clarifying how it should be handled. Microsoft fixed the issue in less than two weeks. Mozilla fixed it in less than three weeks.

In October this year [1], all browsers should behave the same way with regard to that issue. And a four-year-old workaround will be obsolete. We will be able to remove the code that we had to introduce. Less code, yeah!

I hope this will convince you that filing bugs has an impact. You can learn more on how to file actionable bugs. If you’d like an easier venue to file bugs when browsers are incompatible, the WebCompat project is a nice place to start.


  1. Firefox 55 should be released on August 8 and the next Edge should be released in September (maybe even earlier, I’m not clear on Edge’s release schedule) 

Planet Mozilla: This Week In Servo 99

In the last week, we landed 127 PRs in the Servo organization’s repositories.

By popular request, we added a ZIP archive link to the Servo nightlies for Windows users.

Planning and Status

Our overall roadmap is available online, including the overall plans for 2017. Q2 plans will appear soon; please check it out and provide feedback!

This week’s status updates are here.

Notable Additions

  • hiikezoe corrected the animation behaviour of pseudo-elements in Stylo.
  • UK992 added some auto cleanup mechanisms for TravisCI.
  • Manishearth implemented system font support in Stylo.
  • glennw added groove and ridged border support to WebRender.
  • bholley converted simple CSS selectors and combinators to use inline storage for improved performance.
  • MortimerGoro implemented the missing GetShaderPrecisionFormat WebGL API.
  • sbwtw corrected the behaviour of CSS’ calc API in certain cases.
  • metajack removed the DOMRectList API.
  • BorisChiou extended CSS transition support to shorthand properties.
  • nox improved the parsing of the background-size CSS property.
  • avadacatavra added support for creating Rust-based extensions of the C++ JSPrincipals API for SpiderMonkey.
  • kvark avoided a panic in WebRender encountered when using it through Firefox.
  • paulrouget clamped mouse scrolling to a single dimension at a time.
  • Gankro added IPC overhead profiling to WebRender.
  • stshine improved the inline size calculation for inline block layout.
  • mrobinson fixed several problems with laying out absolute positioned blocks.
  • canaltinova implemented support for the -moz-transform CSS property for Stylo.
  • MortimerGoro modernized the infrastructure surrounding Android builds.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Planet Mozilla: How bad is a buffer overflow in an Emscripten-compiled application?

Emscripten allows compiling C++ code to JavaScript. It is an interesting approach that allows porting large applications (games) and libraries (crypto) to the web relatively easily. It also promises better performance and memory usage for some scenarios (something we are currently looking into for Adblock Plus core). These beneficial effects largely stem from the fact that the “memory” Emscripten-compiled applications work with is a large uniform typed array. The side-effect is that buffer overflows, use-after-free bugs and similar memory corruption mistakes are introduced to JavaScript, which was previously safe from them. But are these really security-relevant?

The worst-case scenario is obviously a memory corruption bug that can be misused in order to execute arbitrary code. At first glance, this doesn’t seem to be possible here — even with Emscripten the code is still running inside the JavaScript sandbox and cannot escape. In particular, it can only corrupt data but not change any code, because code is kept separately from the array serving as “memory” to the application. Then again, native applications usually cannot modify code either, due to protection mechanisms of modern processors. So memory corruption bugs are typically abused by manipulating function pointers such as those found on the stack.

Now Emscripten isn’t working with return pointers on the stack. I could identify one obvious place where function pointers are found: virtual method tables. Consider the following interface for example:

class Database {
  virtual User* LookupUser(char* userName) = 0;
  virtual bool DropTable(char* tableName) = 0;
  ...
};

Note how both methods are declared with the virtual keyword. In C++ this means that the methods should not be resolved at compile time but rather looked up when the application is running. Typically, that’s because there isn’t a single Database class but rather multiple possible implementations for the Database interface, and it isn’t known in advance which one will be used (polymorphism). In practice this means that each subclass of the Database interface will have a virtual method table with pointers to its implementations of LookupUser and DropTable methods. And that’s the memory area an attacker would try to modify. If the virtual method table can be changed in such a way that the pointer to LookupUser is pointing to DropTable instead, in the next step the attacker might make the application try to look up user "users" and the application will inadvertently remove the entire table.

There are some limitations here coming from the fact that function pointers in Emscripten aren’t actual pointers (remember, code isn’t stored in memory so you cannot point to it). Instead, they are indexes into the function table that contains all functions with the same signature. Emscripten will only resolve the function pointer against a fixed function table, so the attacker can only replace a function pointer by a pointer to another function with the same signature. Note that the signature of the two methods above is identical as far as Emscripten is concerned: both have an int-like return value (as opposed to void, float or double), both have an int-like value as the first parameter (the implicit this pointer) and another int-like value as the second parameter (a string pointer). Given that most types end up as int-like values, you cannot really rely on this limitation to protect your application.
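
To illustrate the mechanism, here is a simplified plain C++ sketch (not Emscripten’s actual machinery, and with made-up function names) of a per-signature function table where an attacker-controlled index redirects one call to another function with the same signature:

#include <cstdio>

struct Database;
typedef int (*TableEntry)(Database*, const char*);  // both methods share this "int-like" shape

static int LookupUserImpl(Database*, const char* name)  { std::printf("looking up user %s\n", name); return 1; }
static int DropTableImpl(Database*, const char* table)  { std::printf("dropping table %s\n", table); return 1; }

// The per-signature "function table": only indexes into it live in linear memory.
static TableEntry functionTable[] = { LookupUserImpl, DropTableImpl };

int main() {
  int lookupIndex = 0;                            // stored in attacker-reachable memory
  lookupIndex = 1;                                // imagine a buffer overflow rewrote the index
  functionTable[lookupIndex](nullptr, "users");   // inadvertently drops the "users" table
}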

But the data corruption alone can already cause significant security issues. Consider for example the following memory layout:

char incomingMessage[256];
bool isAdmin = false;
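
As a minimal illustration, here is a hedged sketch of the kind of missing bounds check that makes this layout dangerous; the struct and handler names are made up, not taken from any real application:

#include <cstring>
#include <cstddef>

struct Connection {
  char incomingMessage[256];
  bool isAdmin = false;
};

// Hypothetical handler: copies attacker-controlled data with no length check.
void handleMessage(Connection& c, const char* data, std::size_t len) {
  std::memcpy(c.incomingMessage, data, len);  // len > 256 silently overwrites isAdmin
}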

If the application fails to check the size of incoming messages properly, the data will overflow into the following isAdmin field and the application might allow operations that aren’t safe. It is even possible that in some scenarios confidential data will leak, e.g. with this memory layout:

char response[256];
char sessionToken[32];

If you are working with zero-terminated strings, you should be really sure that the response field will always contain the terminating zero character. For example, if you are using some moral equivalent of the _snprintf function in Microsoft Visual C++, you should always check the function’s return value in order to verify that the buffer is large enough, because this function will not write the terminating zero when confronted with too much data. If the application fails to check for this scenario, an attacker might trick it into producing an overly large response, meaning that the secret sessionToken field will be sent along with the response due to the missing terminator character.
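
A hedged sketch of that check (the Reply struct and buildResponse function are illustrative only; this relies on MSVC’s _snprintf and its documented behaviour of not terminating on overflow):

#include <cstdio>

struct Reply {
  char response[256];
  char sessionToken[32];
};

void buildResponse(Reply& r, const char* untrustedInput) {
  int written = _snprintf(r.response, sizeof(r.response), "Echo: %s", untrustedInput);
  if (written < 0 || written >= (int)sizeof(r.response)) {
    // Too much data: force a terminator (or reject the input) so that a later
    // strlen()/send of r.response cannot run into sessionToken.
    r.response[sizeof(r.response) - 1] = '\0';
  }
}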

These are the problematic scenarios I could think of; there might be more. Now, all this might be irrelevant for your typical online game: if all you are concerned about is cheaters, then you likely have bigger worries — cheaters have much easier ways to mess with code that runs on their end. A website, on the other hand, which might be handling data from a third-party site (typically received via URL or window.postMessage()), had better be more careful. And browser extensions are clearly endangered if they are processing website data via Emscripten-compiled code.

Planet Mozilla: Unification in Chalk, part 2

In my previous post, I talked over the basics of how unification works and showed how that “mathematical version” winds up being expressed in chalk. I want to go a bit further now and extend that base system to cover associated types. These turn out to be a pretty non-trivial extension.

What is an associated type?

If you’re not a Rust programmer, you may not be familiar with the term “associated type” (although many languages have equivalents). The basic idea is that traits can have type members associated with them. I find the most intuitive example to be the Iterator trait, which has an associated type Item. This type corresponds to the kind of elements that are produced by the iterator:

trait Iterator {
    type Item;
    
    fn next(&mut self) -> Option<Self::Item>;
}

As you can see in the next() method, to reference an associated type, you use a kind of path – that is, when you write Self::Item, it means “the kind of Item that the iterator type Self produces”. I often refer to this as an associated type projection, since one is “projecting out” [1] the type Item.

Let’s look at an impl to make this more concrete. Consider the type std::vec::IntoIter<T>, which is one of the iterators associated with a vector (specifically, the iterator you get when you invoke vec.into_iter()). In that case, the elements yielded up by the iterator are of type T, so we have an impl like:

impl<T> Iterator for IntoIter<T> {
    type Item = T;
    fn next(&mut self) -> Option<T> { ... }
}

This means that if we have the type IntoIter<i32>::Item, that is equivalent to the type i32. We usually call this process of converting an associated trait projection (IntoIter<i32>::Item) into the type found in the impl normalizing the type.

In fact, this IntoIter<i32>::Item is a kind of shorthand; in particular, it doesn’t explicitly state what trait the type Item is defined in (it’s always possible that IntoIter<i32> implements more than one trait that defines an associated type called Item). To make things fully explicit, then, one can use a fully qualified path like this:

<IntoIter<i32> as Iterator>::Item
 ^^^^^^^^^^^^^    ^^^^^^^^   ^^^^
 |                |          |
 |                |          Associated type name
 |                Trait
 Self type

I’ll use these fully qualified paths from here on out to avoid confusion.

Integrating associated types into our type system

In this post, we will extend our notion of types to include associated type projections:

T = ?X               // type variables
  | N<T1, ..., Tn>   // "applicative" types
  | P                // "projection" types   (new in this post)
P = <T as Trait>::X

Projection types are quite different from the existing “applicative” types that we saw before. The reason is that they introduce a kind of “alias” into the equality relationship. With just applicative types, we could always make progress at each step: that is, no matter what two types were being equated, we could always break the problem down into simpler subproblems (or else error out). For example, if we had Vec<?T> = Vec<i32>, we knew that this could only be true if ?T == i32.

With associated type projections, this is not always true. Sometimes we just can’t make progress. Imagine, for example, this scenario:

<?X as Iterator>::Item = i32

Here we know that ?X is some kind of iterator that yields up i32 elements, but we have no way of knowing which iterator it is; there are many possibilities. Similarly, imagine this:

<?X as Iterator>::Item = <T as Iterator>::Item

Here we know that ?X and T are both iterators that yield up the same sort of items. But this doesn’t tell us anything about the relationship between ?X and T.

Normalization constraints

To handle associated types, the basic idea is that we will introduce normalization constraints, in addition to just having equality constraints. A normalization constraint is written like this:

<IntoIter<i32> as Iterator>::Item ==> ?X   

This constraint says that the associated type projection <IntoIter<i32> as Iterator>::Item, when normalized, should be equal to ?X (a type variable). As we will see in more detail in a bit, we’re going to then go and solve those normalizations, which would eventually allow us to conclude that ?X = i32.

(We could use the Rust syntax IntoIter<i32>: Iterator<Item=?X> for this sort of constraint as well, but I’ve found it to be more confusing overall.)

Processing a normalization constraint is very similar to processing a standard trait constraint. In fact, in chalk, they are literally the same code. If you recall from my first Chalk post, we can lower impls into a series of clauses that express the trait that is being implemented along with the values of its associated types. In this case, if we look at the impl of Iterator for the IntoIter type:

impl<T> Iterator for IntoIter<T> {
    type Item = T;
    fn next(&mut self) -> Option<T> { ... }
}

We can translate this impl into a series of clauses sort of like this (here, I’ll use the notation I was using in my first post):

// Define that `IntoIter<T>` implements `Iterator`,
// if `T` is `Sized` (the sized requirement is
// implicit in Rust syntax.)
Iterator(IntoIter<T>) :- Sized(T).

// Define that the `Item` for `IntoIter<T>`
// is `T` itself (but only if `IntoIter<T>`
// implements `Iterator`).
IteratorItem(IntoIter<T>, T) :- Iterator(IntoIter<T>).

So, to solve the normalization constraint <IntoIter<i32> as Iterator>::Item ==> ?X, we translate that into the goal IteratorItem(IntoIter<i32>, ?X), and we try to prove that goal by searching the applicable clauses. I sort of sketched out the procedure in my first blog post, but I’ll present it in a bit more detail here. The first step is to “instantiate” the clause by replacing the variables (T, in this case) with fresh type variables. This gives us a clause like:

IteratorItem(IntoIter<?T>, ?T) :- Iterator(IntoIter<?T>).

Then we can unify the arguments of the clause with our goals, leading to two unification equalities, and combine that with the conditions of the clause itself, leading to three things we must prove:

IntoIter<?T> = IntoIter<i32>
?T = ?X
Iterator(IntoIter<?T>)

Now we can recursively try to prove those things. To prove the equalities, we apply the unification procedure we’ve been looking at. Processing the first equation, we can simplify because we have two uses of IntoIter on both sides, so the type arguments must be equal:

?T = i32 // changed this
?T = ?X
Iterator(IntoIter<?T>)

From there, we can deduce the value of ?T and do some substitutions:

i32 = ?X
Iterator(IntoIter<i32>)

We can now unify ?X with i32, leaving us with:

Iterator(IntoIter<i32>)

We can apply the clause Iterator(IntoIter<T>) :- Sized(T) using the same procedure now, giving us two fresh goals:

IntoIter<i32> = IntoIter<?T>
Sized(?T)

The first unification will yield (eventually):

Sized(i32)

And we can prove this because this is a built-in rule for Rust (that is, that i32 is sized).

Unification as just another goal to prove

As you can see in the walk through in the previous section, in a lot of ways, unification is “just another goal to prove”. That is, the basic way that chalk functions is that it has a goal it is trying to prove and, at each step, it tries to simplify that goal into subgoals. Often this takes place by consulting the clauses that we derived from impls (or that are builtin), but in the case of equality goals, the subgoals are constructed by the builtin unification algorithm.

In the previous post, I gave various pointers into the implementation showing how the unification code looks “for real”. I want to extend that explanation now to cover associated types.

The way I presented things in the previous section, unification flattens its subgoals into the master list of goals. But in fact, for efficiency, the unification procedure will typically eagerly process its own subgoals. So e.g. when we transform IntoIter<i32> = IntoIter<?T>, we actually just invoke the code to equate their arguments immediately.

The one exception to this is normalization goals. In that case, we push the goals into a separate list that is returned to the caller. The reason for this is that, sometimes, we can’t make progress on one of those goals immediately (e.g., if it has unresolved type variables, a situation we’ve not discussed in detail yet). The caller can throw it onto a list of pending goals and come back to it later.

Here are the various cases of interest that we’ve covered so far

Fallback for projection

Thus far we showed how projection proceeds in the “successful” case, where we manage to normalize a projection type into a simpler type (in this case, <IntoIter<i32> as Iterator>::Item into i32). But sometimes, when we are working with generics, we can’t normalize the projection any further. For example, consider this simple function, which extracts the first item from a non-empty iterator (it panics if the iterator is empty):

fn first<I: Iterator>(iter: I) -> I::Item {
    iter.next().expect("iterator should not be empty")
}

What’s interesting here is that we don’t know what I::Item is. So imagine we are given a normalization constraint like this one:

<I as Iterator>::Item ==> ?X

What type should we use for ?X here? What chalk opts to do in cases like this is to construct a sort of special “applicative” type representing the associated item projection. I will write it as <Iterator::Item><I>, for now, but there is no real Rust syntax for this. It basically represents “a projection that we could not normalize further”. You could consider it as a separate item in the grammar for types, except that it’s not really semantically different from a projection; it’s just a way for us to guide the chalk solver.

The way I think of it, there are two rules for proving that a projection type is equal. The first one is that we can prove it via normalization, as we’ve already seen:

IteratorItem(T, X)
-------------------------
<T as Iterator>::Item = X

The second is that we can prove it just by having all the inputs be equal:

T = U
---------------------------------------------
<T as Iterator>::Item = <U as Iterator>::Item

We’d prefer to use the normalization route, because it is more flexible (i.e., it’s sufficient for T and U to be equal, but not necessary). But if we can definitively show that the normalization route is impossible (i.e., we have no clauses that we can use to normalize), then we opt for this more restrictive route. The special “applicative” type is a way for chalk to record (internally) that for this projection, it opted for the more restrictive route, because the first one was impossible.

(In general, we’re starting to touch on Chalk’s proof search strategy, which is rather different from Prolog, but beyond the scope of this particular blog post.)

Some examples of the fallback in action

In the first() function we saw before, we will wind up computing the result type of next() as <I as Iterator>::Item. This will be returned, so at some point we will want to prove that this type is equal to the return type of the function (actually, we want to prove subtyping, but for this particular type those are the same thing, so I’ll gloss over that for now). This corresponds to a goal like the following (here I am using the notation I discussed in my first post for universal quantification etc):

forall<I> {
    if (Iterator(I)) {
        <I as Iterator>::Item = <I as Iterator>::Item
    }
}

Per the rules we gave earlier, we will process this constraint by introducing a fresh type variable and normalizing both sides to the same thing:

forall<I> {
    if (Iterator(I)) {
        exists<?T> {
            <I as Iterator>::Item ==> ?T,
            <I as Iterator>::Item ==> ?T,
        }
    }
}

In this case, both constraints will wind up resulting in ?T being the special applicative type <Iterator::Item><I>, so everything works out successfully.

Let’s briefly look at an illegal function and see what happens here. In this case, we have two iterator types (I and J) and we’ve used the wrong one in the return type:

fn first<I: Iterator, J: Iterator>(iter_i: I, iter_j: J) -> J::Item {
    iter_i.next().expect("iterator should not be empty")
}

This will result in a goal like:

forall<I, J> {
    if (Iterator(I), Iterator(J)) {
        <I as Iterator>::Item = <J as Iterator>::Item
    }
}

Which will again be normalized and transformed as follows:

forall<I, J> {
    if (Iterator(I), Iterator(J)) {
        exists<?T> {
            <I as Iterator>::Item ==> ?T,
            <J as Iterator>::Item ==> ?T,
        }
    }
}

Here, the difference is that normalizing <I as Iterator>::Item results in <Iterator::Item><I>, but normalizing <J as Iterator>::Item results in <Iterator::Item><J>. Since both of those are equated with ?T, we will ultimately wind up with a unification problem like:

forall<I, J> {
    if (Iterator(I), Iterator(J)) {
        <Iterator::Item><I> = <Iterator::Item><J>
    }
}

Following our usual rules, we can handle the equality of two applicative types by equating their arguments, so after that we get forall<I, J> I = J – and this clearly cannot be proven. So we get an error.

Termination, after a fashion

One final note, on termination. We do not, in general, guarantee termination of the unification process once associated types are involved. Rust’s trait matching is Turing complete, after all. However, we do wish to ensure that our own unification algorithms don’t introduce problems of their own!

The non-projection parts of unification have a pretty clear argument for termination: each time we remove a constraint, we replace it with (at most) simpler constraints that were all embedded in the original constraint. So types keep getting smaller, and since they are not infinite, we must stop sometime.

This argument is not sufficient for projections. After all, we replace a constraint like <T as Iterator>::Item = U with an equivalent normalization constraint, where all the types are the same:

<T as Iterator>::Item ==> U

The argument for termination then is that normalization, if it terminates, will unify U with an applicative type. Moreover, we only instantiate type variables with normalized types. Now, these applicative types might be the special applicative types that Chalk uses internally (e.g., <IteratorItem><T>), but it’s an applicative type nonetheless. When that applicative type is processed later, it will therefore be broken down into smaller pieces (per the prior argument). That’s the rough idea, anyway.

Contrast with rustc

I tend to call the normalization scheme that chalk uses lazy normalization. This is because we don’t normalize until we are actually equating a projection with some other type. In contrast, rustc uses an eager strategy, where we normalize types as soon as we “instantiate” them (e.g., when we took a clause and replaced its type parameters with fresh type variables).

The eager strategy has a number of downsides, not the least of which that it is very easy to forget to normalize something when you were supposed to (and sometimes you wind up with a mix of normalized and unnormalized things).

In rustc, we only have one way to represent projections (i.e., we don’t distinguish the “projection” and “applicative” version of <Iterator::Item><T>). The distinction between an unnormalized <T as Iterator>::Item and one that we failed to normalize further is made simply by knowing (in the code) whether we’ve tried to normalize the type in question or not – the unification routines, in particular, always assume that a projection type implies that normalization wouldn’t succeed.

A note on terminology

I’m not especially happy with the “projection” and “applicative” terminology I’ve been using. It’s what Chalk uses, but it’s kind of nonsense – for example, both <T as Iterator>::Item and Vec<T> are “applications” of a type function, from a certain perspective. I’m not sure what’s a better choice though. Perhaps just “unnormalized” and “normalized” (with types like Vec<T> always being immediately considered normalized). Suggestions welcome.

Conclusion

I’ve sketched out how associated type normalization works in chalk and how it compares to rustc. I’d like to change rustc over to this strategy, and plan to open up an issue soon describing a strategy. I’ll post a link to it in the internals comment thread once I do.

There are other interesting directions we could go with associated type equality. For example, I was pursuing for some time a strategy based on congruence closure, and even implemented (in ena) an extended version of the algorithm described here. However, I’ve not been able to figure out how to combine congruence closure with things like implication goals – it seems to get quite complicated. I understand that there are papers tackling this topic (e.g., Selsam and de Moura), but I haven’t yet had time to read them.

Comments?

I’ll be monitoring [the internals thread] for comments and discussion. =)

Footnotes

  1. Projection is a very common bit of jargon in PL circles, though it typically refers to accessing a field, not a type. As far as I can tell, no mainstream programmer uses it. Ah well, I’m not aware of a good replacement.

Planet Mozilla: Fewer mallocs in curl

Today I landed yet another small change to libcurl internals that further reduces the number of small mallocs we do. This time the generic linked list functions got converted to become malloc-less (the way linked list functions should behave, really).

Instrument mallocs

I started out my quest a few weeks ago by instrumenting our memory allocations. This is easy since we have had our own memory debug and logging system in curl for many years. Using a debug build of curl, I run this script in my build dir:

#!/bin/sh
export CURL_MEMDEBUG=$HOME/tmp/curlmem.log
./src/curl http://localhost
./tests/memanalyze.pl -v $HOME/tmp/curlmem.log

For curl 7.53.1, this counted about 115 memory allocations. Is that many or a few?

The memory log is very basic. To give you an idea what it looks like, here’s an example snippet:

MEM getinfo.c:70 free((nil))
MEM getinfo.c:73 free((nil))
MEM url.c:294 free((nil))
MEM url.c:297 strdup(0x559e7150d616) (24) = 0x559e73760f98
MEM url.c:294 free((nil))
MEM url.c:297 strdup(0x559e7150d62e) (22) = 0x559e73760fc8
MEM multi.c:302 calloc(1,480) = 0x559e73760ff8
MEM hash.c:75 malloc(224) = 0x559e737611f8
MEM hash.c:75 malloc(29152) = 0x559e737a2bc8
MEM hash.c:75 malloc(3104) = 0x559e737a9dc8

Check the log

I then studied the log more closely and realized that there were many small memory allocations done from the same code lines. We clearly had some rather silly code patterns where we would allocate a struct and then add that struct to a linked list or a hash, and that code would then subsequently add yet another small struct and similar – and then often do that in a loop. (I say we here to avoid blaming anyone, but of course I myself am to blame for most of this…)

Those two allocations would always happen in pairs and they would be freed at the same time. I decided to address those. Doing very small (less than, say, 32 bytes) allocations is also wasteful simply because of the proportionally large amount of data that will be used just to keep track of that tiny little memory area (within the malloc system). Not to mention fragmentation of the heap.

So, fixing the hash code and the linked list code to not use mallocs were immediate and easy ways to remove over 20% of the mallocs for a plain and simple ‘curl http://localhost’ transfer.
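
The general idea looks roughly like this (a simplified sketch with made-up struct names, not curl’s actual code): the list node is embedded in the entry itself, so linking an already-allocated entry into a list needs no extra malloc at all.

struct node {
  struct node *prev;
  struct node *next;
};

struct transfer {
  struct node list;   /* intrusive node: lives inside the entry itself */
  int id;
  /* ... other per-transfer data ... */
};

/* Inserting 'n' after 'pos' only rewires pointers; nothing is allocated. */
static void insert_after(struct node *pos, struct node *n)
{
  n->prev = pos;
  n->next = pos->next;
  if(pos->next)
    pos->next->prev = n;
  pos->next = n;
}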

At this point I sorted all allocations based on size and checked all the smallest ones. One that stood out was one we made in curl_multi_wait(), a function that is called over and over in a typical curl transfer main loop. I converted it over to use the stack for most typical use cases. Avoiding mallocs in very repeatedly called functions is a good thing.
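
The stack conversion follows a common pattern, sketched here with made-up names (the real curl_multi_wait() works with socket/poll structures, not plain ints): use a small on-stack array for the typical case and only fall back to malloc() for unusually large inputs.

#include <stdlib.h>
#include <string.h>

#define TYPICAL_FDS 16

static int wait_on_fds(const int *fds, size_t count)
{
  int on_stack[TYPICAL_FDS];
  int *work = on_stack;

  if(count > TYPICAL_FDS) {
    work = (int *)malloc(count * sizeof(int));  /* rare path only */
    if(!work)
      return -1;
  }
  memcpy(work, fds, count * sizeof(int));

  /* ... poll/select on 'work' ... */

  if(work != on_stack)
    free(work);
  return 0;
}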

Recount

Today, the script from above shows that the same “curl localhost” command is down to 80 allocations from the 115 curl 7.53.1 used. Without sacrificing anything really. An easy 26% improvement. Not bad at all!

But okay, since I modified curl_multi_wait() I wanted to also see how it actually improves things for a slightly more advanced transfer. I took the multi-double.c example code, added the call to initiate the memory logging, made it use curl_multi_wait() and had it download these two URLs in parallel:

http://www.example.com/
http://localhost/512M

The second one is just 512 megabytes of zeroes and the first is a roughly 600-byte public HTML page. Here’s the count-malloc.c code.

First, I brought out 7.53.1 and built the example against that and had the memanalyze script check it:

Mallocs: 33901
Reallocs: 5
Callocs: 24
Strdups: 31
Wcsdups: 0
Frees: 33956
Allocations: 33961
Maximum allocated: 160385

Okay, so it used 160KB of memory in total and it did over 33,900 allocations. But ok, it downloaded over 512 megabytes of data, so it makes one malloc per 15KB of data. Good or bad?

Back to git master, the version we call 7.54.1-DEV right now – since we’re not quite sure which version number it’ll become when we release the next release. It can become 7.54.1 or 7.55.0; it has not been determined yet. But I digress. I ran the same modified multi-double.c example again, ran memanalyze on the memory log again, and it now reported…

Mallocs: 69
Reallocs: 5
Callocs: 24
Strdups: 31
Wcsdups: 0
Frees: 124
Allocations: 129
Maximum allocated: 153247

I had to look twice. Did I do something wrong? I better run it again just to double-check. The results are the same no matter how many times I run it…

33,961 vs 129

curl_multi_wait() is called a lot of times in a typical transfer, and it had at least one of the memory allocations we normally did during a transfer so removing that single tiny allocation had a pretty dramatic impact on the counter. A normal transfer also moves things in and out of linked lists and hashes a bit, but they too are mostly malloc-less now. Simply put: the remaining allocations are not done in the transfer loop so they’re way less important.

The old curl did 263 times the number of allocations the current does for this example. Or the other way around: the new one does 0.37% the number of allocations the old one did…

As an added bonus, the new one also allocates less memory in total as it decreased that amount by 7KB (4.3%).

Are mallocs important?

In the day and age with many gigabytes of RAM and all, does a few mallocs in a transfer really make a notable difference for mere mortals? What is the impact of 33,832 extra mallocs done for 512MB of data?

To measure what impact these changes have, I decided to compare HTTP transfers from localhost and see if we can see any speed difference. localhost is fine for this test since there’s no network speed limit, but the faster curl is, the faster the download will be. The server side will be equally fast/slow since I’ll use the same setup for both tests.

I built curl 7.53.1 and curl 7.54.1-DEV identically and ran this command line:

curl http://localhost/80GB -o /dev/null

80 gigabytes downloaded as fast as possible written into the void.

The exact numbers I got for this may not be totally interesting, as it will depend on CPU in the machine, which HTTP server that serves the file and optimization level when I build curl etc. But the relative numbers should still be highly relevant. The old code vs the new.

7.54.1-DEV repeatedly performed 30% faster! The 2200MB/sec in my build of the earlier release increased to over 2900 MB/sec with the current version.

The point here is of course not that it easily can transfer HTTP at over 20 Gigabit/sec using a single core on my machine – since there are very few users who actually do such speedy transfers with curl. The point is rather that curl now uses less CPU per byte transferred, which leaves more CPU over to the rest of the system to perform whatever it needs to do. Or to save battery if the device is a portable one.

On the cost of malloc: The 512MB test I did resulted in 33832 more allocations using the old code. The old code transferred HTTP at a rate of about 2200MB/sec. That equals 145,827 mallocs/second – that are now removed! A 600 MB/sec improvement means that curl managed to transfer 4300 bytes extra for each malloc it didn’t do, each second.

Was removing these mallocs hard?

Not at all, it was all straightforward. It is however interesting that there’s still room for changes like this in a project this old. I’ve had this idea for some years and I’m glad I finally took the time to make it happen. Thanks to our test suite I could do this level of “drastic” internal change with a fairly high degree of confidence that I don’t introduce too terrible regressions. Thanks to our APIs being good at hiding internals, this change could be done completely without changing anything for old or new applications.

(Yeah I haven’t shipped the entire change in a release yet so there’s of course a risk that I’ll have to regret my “this was easy” statement…)

Caveats on the numbers

There have been 213 commits in the curl git repo from 7.53.1 till today. There’s a chance one or more other commits than just the pure alloc changes have made a performance impact, even if I can’t think of any.

More?

Are there more “low hanging fruits” to pick here in the similar vein?

Perhaps. We don’t do a lot of performance measurements or comparisons so who knows, we might do more silly things that we could stop doing and do even better. One thing I’ve always wanted to do, but never got around to, was to add daily “monitoring” of memory/mallocs used and how fast curl performs in order to better track when we unknowingly regress in these areas.

Addendum, April 23rd

(Follow-up on some comments on this article that I’ve read on hacker news, Reddit and elsewhere.)

Someone asked and I ran the 80GB download again with ‘time’. Three times each with the old and the new code, and the “middle” run of them showed these timings:

Old code:

real    0m36.705s
user    0m20.176s
sys     0m16.072s

New code:

real    0m29.032s
user    0m12.196s
sys     0m12.820s

The server that hosts this 80GB file is a standard Apache 2.4.25, and the 80GB file is stored on an SSD. The CPU in my machine is a core-i7 3770K 3.50GHz.

Someone also mentioned alloca() as a solution for one of the patches, but alloca() is not portable enough to work as the sole solution, meaning we would have to do ugly #ifdef if we would want to use alloca() there.

Planet Mozilla: Revitalize participation by understanding our communities

As part of the bigger Open Innovation strategy project on how openness can better drive Mozilla products and technologies, during the next few months we will be conducting research about our communities and contributors.

We want to take a detailed, data-driven look into our communities and contributors: who we are, what we’re doing, what our motivations are and how we’re connected.

Who: Understanding the people in our communities

  • How many contributors are there in the Mozilla community?
  • Who are we? (how diverse is our community?)
  • Where are we? (geography, groups, projects)

What: Understanding what people are doing

  • What are we doing? (What are we contributing?)
  • What are our skillsets?
  • How much time are we able to devote to the project?
  • What tools do we use?
  • Why do people contribute? (motivations)
  • What blocks people from contributing?
  • What other projects do we contribute to?
  • What other organisations are we connected to?
  • How much do people want to get involved?

Why: Understanding why people contribute

  • What are people’s motivations?
  • What are the important factors in contributing to Mozilla (ethical, moral, technological, etc.)?
  • Is there anything Mozilla can do that will lead volunteers to contribute more?
  • For people who have left the project: why do they no longer contribute?

How & Where: Understanding the shape of our communities and our people’s networks

  • What are the different groups and communities?
  • Who’s inside each group (regional and functional)?
  • What is the overlap between people in groups?
  • Which groups have the most overlap, and which have the least? (Not just a static view, but also over time.)
  • How are contributors connected to each other? (related to the “where”)
  • How are our contributors connected to other projects, to Mozilla, etc.?

In order to answer all these questions, we have divided the work in three major areas.

Contributors and Contributions Data Analysis

Analyzing past quantitative data about contributions and contributors (from sources like Bugzilla, Github, Mailing Lists, and other sources) to identify patterns and draw conclusions about contributors, contributions and communities.

Communities and Contributors survey

Designing and administering a qualitative survey to as many active contributors as possible (also trying to survey people who have stopped contributing to Mozilla) to get a full view of our volunteers (demographics), motivations, which communities people identify with, and their experience with Mozilla. We’ll use this to identify patterns in motivations.

Insights

We’ll bring together the conclusions and data from both of the above components to articulate a set of insights and recommendations that can be a useful input to the Open Innovation Strategy project.

In particular, one aim that we have is to cross reference individuals from the Mozillians Survey and Data Analysis to better understand — on aggregate — how things like motivations and identity relate to contribution.

Our commitments

In all of this work we are handling data with the care you would expect from Mozilla, in line with our privacy policy and in close consultation with Mozilla’s legal and trust teams.

Additionally, we realize that we at Mozilla often ask for people’s time to provide feedback, and you may have recently seen other surveys. Also, we have run research projects of this sort in the past without following up with a clear plan of action. This project is different. It’s more extensive than anything we’ve done, it is connected to a much larger project to shape Mozilla’s strategy with respect to open practices, and we will be publishing the results and data.

We would like to know your feedback/input about this project, its scope and implementation:

  • Are we missing any areas or topics we should gather information about for our communities?
  • Which part do you feel is most relevant?
  • Where do you think communities can engage to provide more value to the work we are going to do?
  • Any other ideas we are not thinking about?

Please let us know in this discourse topic.

Thanks everyone!

Planet Mozilla: Ratings and reviews on add-ons.mozilla.org

Hello!

My name is Philip Walmsley, and I am a Senior Visual Designer on the Firefox UX team. I am also one of the people tasked with making addons.mozilla.org (or, “AMO”) a great place to list and find Firefox extensions and themes.

There are a lot of changes happening in the Firefox and Add-ons ecosystem this year (Quantum, Photon, Web Extensions, etc.), and one of them is a visual and functional redesign of AMO. This has been a long time coming! The internet has progressed in leaps and bounds since our little site was launched many years ago, and it’s time to give it some love. We’ve currently got a top-to-bottom redesign in the works, with the goal of making add-ons more accessible to more users.

I’m here to talk with you about one part of the add-ons experience: ratings and reviews. We have found a few issues with our existing approach:

  • The 5-star rating system is flawed. Star ratings are arbitrary on a user-by-user basis, and this leads to a muddling of what users really think about an add-on.
  • Some users just want to leave a rating and not write a review. Sometimes this is referred to as “blank page syndrome,” sometimes a user is just in a time-crunch, sometimes a user might have accessibility issues. Forcing users to do both leads to glib, unhelpful, and vague reviews.
  • On that note, what if there was a better way to get reviews from users that may not speak your native tongue? What if instead of writing a review, a user had the option to select tags or qualities describing their experience with an add-on? This would greatly benefit devs (‘80% of the global community think my extension is “Easy to use”!’) and other users (‘80% of the global community believe this extension is “Easy to use”!’).
  • We don’t do a very good job of triaging users’ actual issues: a user might love an extension but have an (unbeknownst to them) easily-solved technical problem. Instead of leaving a negative 1-star review for this extension that keeps acting weird, can we guide that user to the developer or Mozilla support?
  • We also don’t do a great job of facilitating developer/user communication within AMO. Wouldn’t it be great if you could rectify a user’s issue from within the reviews section on your extension page, changing a negative rating to a positive one?

So, as you can see, we’ve got quite a few issues here. So let’s simplify and tackle these one-by-one: Experience, Tags, Triage.

[Image: So many feels]

Experience

[Image: Someone is not familiar with Lisa Hanawalt]

The star rating has its place. It is very useful in systems where the rating you leave is relevant to you and you alone. Your music library, for example: you know why you rate one song two stars and another at four. It is a very personal but very arbitrary way of rating something. Unfortunately, this rating system doesn’t scale well when more than one person is reviewing the same thing: If I love something but rate it two stars because it lacks a particular feature, what does that mean to other users or the overall aggregated rating? It drags down the review of a great add-on, and as other users scan reviews and see 2-stars, they might leave and try to find something else. Not great.

What if instead of stars, we used emotions?


Some of you might have seen these in airports or restrooms. It is a straightforward and fast way for a group of people to indicate “Yep, this restroom is sparkling and well-stocked, great experience.” Or “Someone needs to get in here with a mop, PRONTO.” It changes throughout the day, and an attendant can address issues as they arise. Or, through regular maintenance, they can achieve a happy face rating all day.

What if we applied this method to add-ons? What if the first thing we asked a user once they had used an extension for a day or so was: “How are you enjoying this extension?” and presented them with three faces: Grinning, Meh, and Sad. At a very high level, this gives users and developers a clear, overall impression of how people feel about using this add-on (“90% grinning face for this extension? People must like it, let’s give it a try.”).

So! A user has contributed some useful rating data, which is awesome. At this point, they can leave the flow and continue on their merry way, or we can prompt them to quickly leave a few more bits of even MORE useful review data…


Tags

[Figure: Not super helpful]

Writing a review is hard. Let me rephrase that: Writing a good review is hard. It’s easy to fire off something saying “This add-on is just ok.” It’s hard to write a review explaining in detail why the add-on is “just ok.” Some (read: most) users don’t want to write a detailed review, for many reasons: time, interest, accessibility, etc. What if we provided a way for these users to give feedback in a quick and straightforward way? What if, instead of staring down a blank text field, we displayed a series of tags or descriptors based on the emotion rating the user just gave?

For example, I just clicked a smiling face to review an extension I’m enjoying. Right after that, a grid of tags with associated icons pops up. Words like “fast”, “stable”, “easy to use”, “well-designed”, “fun”, etc. I liked the speed of this extension, so I click “fast” and “stable” and submit my options. And success: I have submitted two more pieces of data that are useful to devs and users. Developers can find out what users like about their add-on, and users can see what other users are thinking before committing to downloading. We can pop up different tags based on the emotion selected: if a user taps Meh or Sad, we can pop up tags to find out why the user selected that initially. The result is actionable review data that can be translated across all languages spoken by our users! Pretty cool.


Triage

Finally, we reach triage. Once a user submits tag review data, we can present them with a few more options. If a user is happy with this extension and wants to contribute even more, we can present them with an opportunity to write a review, or share it with friends, or contact the developer personally to give them kudos. If a user selected Meh, we could suggest reading some developer-provided documentation, contacting support, or writing a review. If the user selected Sad, we’d point them to developer or Mozilla support or extension documentation, or suggest filing a bug/issue or writing a review. That way we can make sure a user gets the help they need, and we can avoid unnecessary poor reviews. All of these options will also be available on the add-on page, so a user always has access to these different actions. If a user leaves a review expressing frustration with an add-on, devs will be able to reply to the review in-line, so other users can see issues being addressed. Once a dev has responded, we will ask the user if this has solved their problem and if they’d like to update their review.

We’ve covered a lot here! Keep in mind that this is still in the early proposal stage and things will change. And that’s good; we want to change this for the better. Is there anything we’ve missed? Other ideas? What’s good about our current rating and review flow? What’s bad? We’d love constructive feedback from AMO users, extension developers, and theme artists.

Please visit this Discourse post to continue the discussion, and thanks for reading!

Philip (@pwalm)
Senior Visual Designer, Firefox UX


Ratings and reviews on add-ons.mozilla.org was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Planet MozillaWebdev Beer and Tell: April 2017

Webdev Beer and Tell: April 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...


Planet MozillaDutch court ruling puts net neutrality in question

On Thursday, April 20th a Rotterdam Court ruled that T-Mobile’s zero rated service “Data Free Music” is legal. The court declared that the Dutch net neutrality law, which prohibits zero rating, is not in accordance with the EU net neutrality law that Brussels lawmakers passed last year.

Zero rating is bad for the long term health of the internet. By disrupting the level playing field and allowing discrimination, zero rating poses a threat to users, competition, and opportunity online.

The Netherlands has been a model to the world in protecting net neutrality. It’s alarming to see these vital protections for users, competition, and opportunity online struck down.

The power and potential of the Internet is greatest when users can access the full diversity of the open Internet, not just some parts of it. We urge the Authority for Consumers & Markets (ACM) to appeal this decision swiftly, and we hope that higher courts will restore the Internet’s level playing field.

The post Dutch court ruling puts net neutrality in question appeared first on Open Policy & Advocacy.

Planet MozillaCan digital literacy be deconstructed into learnable units?

Earlier this week, Sally Pewhairangi got in touch to ask if I’d be willing to answer four questions about digital literacy, grouped around the above question. She’ll be collating answers from a number of people in due course but, in the spirit of working openly, I’m answering her questions here.

Deconstructed

 1. What are the biggest mistakes novices make when becoming digitally literate?

The three things I stress time and time again in my keynotes, writing, and workshops on this subject are:

  1. Digital literacies are plural
  2. Digital literacies are context-dependent
  3. Digital literacies are socially-negotiated

As such, there is no stance from which you could call someone ‘digitally literate’, because (as Allan Martin has pointed out), it is a condition, not a threshold. There is no test you could devise to say whether someone was ‘digitally literate’, except maybe at a very particular snapshot in time, for a very defined purpose, in a certain context.

That being said, and to answer the question, I think the main mistake that we make is to equate surface-level, procedural skills with depth of thought and understanding. I’m certain this is where the myth of the ‘digital native’ came from. Use does not automatically lead to expertise and understanding.

 2. What mistakes are common at a pro level?

By ‘pro level’, I’m assuming that this means someone who is seen as having the requisite digital knowledge, skills, and behaviours to thrive in their given field. As such, and because this is so context-dependent, it’s difficult to generalise.

Nevertheless, what I observe in myself and others is an assumption that I/we/they have somehow ‘made it’ in terms of digital literacies. It’s an ongoing process of development, not something whereby you can sit back and rest on your laurels. I’m constantly surprised by digital practices and the effects that technologies have on society.

As with the stock market, past performance isn’t a reliable guide to future success, so just because something looks ‘stupid’, ‘unimportant’, or otherwise outside my/your/our frame of reference doesn’t mean that it’s not worth investigating.

I’d also comment on how important play is to the development of digital literacies. Learning something because you have to, or because someone has set you a target, is different from doing so of your own accord. Self-directed learning is messy and, from the point of view of an instructor, ‘inefficient’. However, to my mind, it’s the most effective type of learning there is. In general, there should be more remixing and experimentation in life, and less deference and conformity.

 3. Can you learn the building blocks of digital literacy without access to the web? Where would you start? What would be the biggest misuse of time?

To be ‘literate’ means to be part of a community of literate peers. To quote my own thesis:

Given the ubiquitous and mandated use of technology in almost every occupation, students are left with a problem. They ‘seek to enter new communities… but do not yet have the knowledge necessary to act as “knowledgeable peers” in the community conversation’ (Taylor & Ward, 1998, p.18). Educators seeking to perpetuate Traditional (Print) Literacy often exploit the difference between students ‘tool literacy’ on the one-hand (their technical ability) and their understanding of, and proficiency in ‘literacies of representation’ (making use of these abilities for a purpose). Students are stereotyped at having great technical ability but lacking the skills to put these into practice. Given the ‘duty of care’ educational institutions have, reference is therefore made to ‘e-safety’, ‘e-learning’ and ‘e-portfolios’ - slippery terms that sound important and which serve to reinforce a traditional teacher-led model of education. As Bruffee points out, “pooling the resources that a group of peers brings with them to the task may make accessible the normal discourse of the new community they together hope to enter.“ (Taylor & Ward, 1998, p.18). The barrier, in this case, is the traditional school classroom and the view that Traditional Literacy is a necessary and sufficient conditional requirement for entry into such communities.

It’s almost unthinkable to have a digital device that isn’t networked and connected to other devices. As such, I would say that this is a necessary part of digital literacies. Connecting to other people using devices is just the way the world works these days, and to claim to be digitally up-to-date without these digital knowledge/skills/behaviours, would seem out of touch.

As with almost any arena of development, improving takes deliberate practice - something I’ve written about elsewhere. You have to immerse yourself in the thing you want to get better at, whether that’s improving your piano playing, sinking 3-pointers in basketball, or learning how to tweet effectively.

The biggest misuse of time? Learning things that used to be important but which are now anachronisms. Some teachers/mentors/instructors seem to think that those learning digital literacies require a long, boring history lesson on how things used to be. While this may be of some value, there’s enough to learn about the ways things are now - the power structures, the different forms of discourse, important nuances. And I say this as a former History teacher.

 4. What are your favourite instructional books or resources on digital literacies? If people were to teach themselves what would you suggest they use?

I’d recommend the following for a general audience:

There’s plenty of books for those looking to develop digital literacies in an academic context. I’d look out for anything by Colin Lankshear and/or Michele Knobel. I’ve written a book called The Essential Elements of Digital Literacies which people seem to have found useful.

Reading about digital literacies is a bit like dancing about architecture, however. There’s no substitute for keeping up-to-date by following people who are making sense of the latest developments. For that, the following is a short, incomplete, and partial list:

I’ve linked to the Twitter accounts of the above individuals, as I find that particular medium extremely good for encouraging the kind of global, immersive, networked digital literacies that I think are important. However, I may be wrong and out of touch, as Snapchat confuses me.

Finally, because of the context-dependency of digital literacies, it’s important to note that discourse in this arena differs depending on which geographical area you’re talking about. In my experience, and I touched on this in my thesis, what ‘counts’ as digital literacies depends on whether you’re situated in Manchester, Mumbai, or Melbourne.


Questions? Comments? I’m @dajbelshaw on Twitter, or you can email me: hello@dynamicskillset.com

Image CC0 Florian Klauer

Planet MozillaQuantum Flow Engineering Newsletter #6

I would like to share some updates about some of the ongoing performance related work.
We have started looking at the native stack traces that are submitted through telemetry from the Background Hang Reports that take more than 8 seconds.  (We have been hoping to reduce this threshold to 256ms for a while now, but the road has been bumpy — this should land really soon now!)  Michael Layzell put together a telemetry analysis job that creates a symbolicated version of this data here: https://people-mozilla.org/~mlayzell/bhr/.  For example, this is the latest generated report.  The grouping of this data is unfortunate, since the data is collected based on the profiler pseudo-stack labels, which are captured after 128ms, and then the native stack (if the hang continues for 8 seconds) gets captured after that, so the pseudo-stack and the native stack may or may not correspond, and this grouping also doesn’t help with going through the list of native stacks and triaging them more effectively.  Work is under way to create a nice dashboard out of this data, but in the meantime this is an area where we could really use all of the help that we can get.  If you have some time, it would be really nice if you could take a look at this data and see if you can make sense of some of these call stacks and find some useful bug reports out of them.  If you do end up filing bugs, these are super important bugs to work on, so please make sure you add “[qf]” to the status whiteboard so that we can track the bug.
Another item worthy of highlight is Mike Conley’s Oh No! Reflow! add-on.  Don’t let the simple web page behind this link deceive you, this add-on is really awesome!  It generates a beep every time that a long running reflow happens in the browser UI (which, of course, you get to turn off when you don’t need to hunt for bugs!), and it logs the sync reflows that happened alongside the JS call stack to the code that triggered them, and it also gives you a single link that allows you to quickly file a bug with all of the right info in it, pre-filled!  In fact you can see the list of already filed bugs through this add-on!
Another issue that I want to bring up is the [qf:p1] bugs.  As you have noticed, there are a lot of them.  🙂  It is possible that some of these bugs aren’t important to work on, for example because they only affect edge case conditions that affect a super small subset of users, something that wasn’t obvious when the bug was triaged.  In some other cases it may turn out that fixing the bug requires massive amounts of work that is unreasonable to do in the amount of time we have, or that the right people for it are doing more important work and can’t be interrupted, and so on.  Whatever the issue is, whether the bug was mis-triaged or can’t be fixed, please make sure to raise it on the bug!  In general the earlier these issues are uncovered the better, because everyone can focus their time on more important work.  I wanted to make sure that this wasn’t lost in all of the rush around our communication for Quantum Flow; my apologies if this hasn’t been clear before.
On to the acknowledgement section, I hope I’m not forgetting to mention anyone’s name here!

Planet MozillaThe аррӏе bites back

I've received a number of inquiries about whether TenFourFox will follow the same (essentially wontfix) approach of Firefox for dealing with those international domain names that happen to be whole-script homographs. The matter was forced recently by one enterprising sort who created just this sort of double using Cyrillic characters for https://www.аррӏе.com/, which depending on your font and your system setup, may look identical to https://www.apple.com/ (the site is a proof of concept only).

The circulating advice is to force all IDNs to be displayed in punycode by setting network.IDN_show_punycode to true. This is probably acceptable for most of our users (the vast majority of TenFourFox users operate with a Latin character set), but I agree with Gerv's concern in that Bugzilla entry that doing so disadvantages all other writing systems that are not Latin, so I don't feel this should be the default. That said, I also find the current situation unacceptable and doing nothing, or worse relying on DNS registrars who so far don't really care about anything but getting your money, similarly so. While the number of domains that could be spoofed in this fashion is probably small, it is certainly greater than one, and don't forget that they let the proof-of-concept author register his spoof!

Meanwhile, I'm not sure what the solution right now should be other than "not nothing." Virtually any approach, including the one Google Chrome has decided to take, will disadvantage non-Latin scripts (and the Chrome approach has its own deficiencies and is not IMHO a complete solution to the problem, nor was it designed to be). It would be optimal to adopt whatever solution Firefox eventually decides upon for consistency if they do so, but this is not an issue I'd like to sit on indefinitely. If you use a Latin character set as your default language, and/or you don't care if all domains will appear in either ASCII or punycode, then go ahead and set that pref above; if you don't, or consider this inappropriate, stay tuned. I'm thinking about this in issue 384.

By the way, TenFourFox "FPR0" has been successfully uploaded to Github. Build instructions to follow and the first FPR1 beta should be out in about two to three weeks. I'm also cogitating over a blog post discussing not only us but other Gecko forks (SeaMonkey, Pale Moon, etc.) which for a variety of reasons don't want to follow Mozilla into the unclear misty haze of a post-XUL world. To a first approximation our reasons are generally technical and theirs are primarily philosophical, but we both end up doing some of the same work and we should talk about that as an ecosystem. More later.

Planet MozillaWorldBots Meetup 4/20/17

WorldBots Meetup 4/20/17 WorldBots Meetup 2017-04-20 19:00 - 21:00 We're throwing the first World Bot Meetup! International experts from all over the world will talk about the culture,...


Planet WebKitA Few Words on Fetching Bytes

Like all good puzzles, a web browser is composed of many different pieces. Some are all shiny, like your favorite web API. Some are less visible, like HTML parsing and web resource loading.

Even dull pieces require lots of work to standardize their behavior across browsers. For example, HTML parsing originally provided only: Give me HTML and I’ll give you a document. Now, it is much more reliable across browsers because it has been standardized in detail. Similarly, the loading of web resources was somewhat consistent up to: give me an HTTP request and I’ll get you an HTTP response. But loading a web resource encompasses much more than that. The Fetch specification thoroughly standardizes those details. As well as specifying how the browser loads resources, the Fetch specification also defines a JavaScript API for loading resources. This API, the Fetch API, is a replacement for XMLHttpRequest, providing the lowest-level set of options possible in the context of a web page. Let’s see how shiny the Fetch API might be.

The Fetch API

The Fetch API consists of a single Promise-returning method called fetch. The returned promise resolves to a Response object, which contains the response headers and body information. Let’s use the Fetch API to retrieve the list of WebKit features:

async function isFetchAPIFeelingGood() {
    let webkitFeaturesURL = "https://svn.webkit.org/repository/webkit/trunk/Source/WebCore/features.json";
    let response = await fetch(webkitFeaturesURL);
    let features = await response.json();
    return features.specification.find((feature) =>
        feature.name == "Fetch API");
}
isFetchAPIFeelingGood().then((value) => alert(!!value ? "Oh yes!" : "not really!"))

You might notice two await uses in the example above. fetch is returning a promise that gets resolved when the response headers are received. The data being requested is JSON. The second promise resolves when the entire response body is available.
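
For readers less familiar with async functions, the same flow can be written with explicit promise chaining. This is just a restatement of the first example, functionally equivalent to it:

let webkitFeaturesURL = "https://svn.webkit.org/repository/webkit/trunk/Source/WebCore/features.json";
fetch(webkitFeaturesURL)
    .then((response) => response.json())
    .then((features) => features.specification.find((feature) => feature.name == "Fetch API"))
    .then((value) => alert(!!value ? "Oh yes!" : "not really!"));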

fetch can take either a URL or a Request object. The Request object allows access to a whole new set of options compared to XMLHttpRequest. Let’s try again to check whether fetch API is supported in WebKit, but this time, let’s make sure our cache does not serve us some out-of-date information.

async function isFetchAPIFeelingGoodForReal() {
    let webkitFeaturesURL = "https://svn.webkit.org/repository/webkit/trunk/Source/WebCore/features.json";
    let response = await fetch(new Request(webkitFeaturesURL,
        { cache: "no-cache" }
    ));
    let latestFeatures = await response.json();
    return latestFeatures.specification.find((feature) =>
        feature.name == "Fetch API");
}

fetch also provides more flexible access to the response body. In addition to getting it in various flavors (JSON, arrayBuffer, blob, text…), the response provides a ReadableStream body attribute. This makes it possible to process chunks of bytes progressively as they arrive without buffering the whole data, and even aborting the resource load:

async function featureListAsAReader() {
    let webkitFeaturesURL = "https://svn.webkit.org/repository/webkit/trunk/Source/WebCore/features.json";
    let response = await fetch(new Request(webkitFeaturesURL));
    return response.body.getReader();
}

function checkChunk(searched, buffer, count)
{
    var i = 0;
    while (i < buffer.length) {
        if (buffer[i++] == searched.charCodeAt(count)) {
            if (++count == searched.length)
               return count;
        } else if (count) {
            --i;
            count = 0;
        }
    }
    return count;
}

async function isFetchAPIFeelingGoodWhileChunky(reader, count)
{
    reader = reader ? reader : await featureListAsAReader();
    count = count ? count : 0;

    let chunk = await reader.read();
    if (chunk.done)
        return false;

    let searched = "Fetch API";
    count = checkChunk(searched, chunk.value, count);
    if (count == searched.length)
        return true;
    return isFetchAPIFeelingGoodWhileChunky(reader, count);
}
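
The chunked example above keeps reading even after the string is found. Since we hold the stream’s reader, early cancellation is one call away. Here is a minimal sketch reusing featureListAsAReader and checkChunk from above; the 1 MB safety cap is an illustrative assumption, not part of the original example:

async function isFetchAPIFeelingGoodButImpatient() {
    let reader = await featureListAsAReader();
    let searched = "Fetch API";
    let count = 0;
    let bytesRead = 0;
    while (true) {
        let chunk = await reader.read();
        if (chunk.done)
            return false;
        count = checkChunk(searched, chunk.value, count);
        if (count == searched.length) {
            // Found it: cancel the stream so the browser can stop downloading
            // the rest of the response body.
            reader.cancel();
            return true;
        }
        bytesRead += chunk.value.length;
        if (bytesRead > 1024 * 1024) {
            // Illustrative safety valve: give up after roughly 1 MB.
            reader.cancel();
            return false;
        }
    }
}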

Fetching The Future

The Fetch API journey is not finished. New proposals might cover important features of XMLHttpRequest that Fetch currently lacks, like early cancellation and timeout. New proposals might also cover HTTP/2 push and priority, as well as wider use of the Response object in web APIs: media elements, Web Assembly… The Fetch algorithm is also being constantly refined to reach full interoperability of web resource loading. A first iteration of WebKit Fetch API implementation shipped in Safari. The WebKit community is eager to hear about your feedback on this feature. Comments, suggestions, priorities, use cases, tests, bug reports and candies are all very welcome through the usual WebKit channels. That would be so fetch indeed!
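
Until cancellation and timeout proposals land, a timeout can be approximated from the page side by racing the fetch promise against a timer. A hedged sketch follows; the 5-second budget is an arbitrary assumption, and losing the race only rejects the promise, it does not abort the underlying network load:

function fetchWithTimeout(input, timeoutInMilliseconds) {
    let timer = new Promise((resolve, reject) => {
        setTimeout(() => reject(new Error("fetch timed out")), timeoutInMilliseconds);
    });
    // Whichever promise settles first wins; the request itself keeps going.
    return Promise.race([fetch(input), timer]);
}

fetchWithTimeout("https://svn.webkit.org/repository/webkit/trunk/Source/WebCore/features.json", 5000)
    .then((response) => response.json())
    .then((features) => alert(features.specification.length + " specifications listed"))
    .catch((error) => alert(error.message));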


Planet MozillaReps Weekly Meeting Apr. 20, 2017

Reps Weekly Meeting Apr. 20, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.


Planet MozillaFirefox 54 Beta 3 Testday, April 28th

Hello Mozillians,

We are happy to let you know that Friday, April 28th, we are organizing Firefox 54 Beta 3 Testday. We’ll be focusing our testing on the following new features: Net Monitor MVP and Download Panel UX Redesign.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Planet MozillaLocalizing Firefox in Barcelona

We were thrilled to start the year’s localization (l10n) community workshops in Barcelona at the end of March 2017! Thanks to the help of the ever dedicated Alba and Benny the workshop was fun, productive, and filled with amazing Catalonian food.

This workshop aimed to gather together core and active localizers from twenty-one l10n communities scattered throughout the southern parts of Western and Eastern Europe. Unlike the 2016 l10n hackathons, this was the first time we brought these twenty-one communities together to share experiences, ideas, and hack on Mozilla l10n projects together.

The workshop was held at Betahaus, a local co-working space in Barcelona located in Villa de Grácia, Barcelona. The space was great for both large group presentations and small group breakouts. We had room to move around, brainstorm on whiteboards, and play our favorite icebreaker game, spectrograms.

All of the l10n-drivers were present for this workshop (another first) and many gave presentations on their main projects. Localizers got a look into new developments with L20n, Pontoon, and Pootle. We also had a glimpse into cross-channel localization for Firefox and how localizers can prepare for it to come in June.

Following tradition, l10n communities came to the workshop with specific goals to accomplish while there. While together, these communities were able to complete around 75% of their goals. These goals largely centered on localization quality and testing, but also included translating strings for Mozilla products and web sites, and planning for recruiting new localizers.

We couldn’t imagine being in Barcelona without taking part in a cultural activity as a group. Alba was kind enough to guide the whole group through the city on Saturday night and show us some of the most prominent sites, like the Sagrada Familia (which happened to be the most popular site among the l10n communities).

On Sunday, the l10n communities and drivers gathered around four different tables to discuss four different topics in 30-minute chunks of time. Every 30 minutes, Mozillians moved to a different table to discuss the topic assigned to that table. These topics included localization quality, style guides, recruiting new localizers, and mentoring new localizers. It was a great opportunity for both veteran and new localizers to come together and share their experience with each topic and ideas on how to take new approaches to each. Sure, it was a bit chaotic, but everyone was flexible and willing to participate, which made it a good experience nevertheless.

For more info about the workshop (including the official Spotify playlist of the workshop), visit the event’s wiki page here. ¡Hasta luego!

More pictures from the event:

Planet MozillaReps Program Objectives – Q2 2017

As we did in the past few quarters, we have decided on the Reps Program Objectives for this quarter. Again we have worked with the Community Development Team to align our goals to the broader scope of their goals. These are highly relevant for the Reps program and the Reps’ goals are tightly coupled with these. In the following graphic you can see how all these goals play together.

Objective 1 – RepsNext is successfully completed paving the way for our next improvement program

  • KR 1 – The Coaching plan is implemented and we are able to scale
  • KR 2 – Budget requests submitted after June 1st go through the trained Resource Reps
  • KR 3 – Reps can get initial resources to improve their Leadership skills
  • KR 4 – Core community sentiment NPS >11.5 (Konstantina as in Q1)
  • KR 5 – Mobilizer sentiment NPS >15 (Konstantina as in Q1)
  • KR 6 – We have a GitHub issue to plan for the future of Reps with an exclusive focus on functional contributions
  • KR 7 – The Facebook experiment is analyzed and being continued if successful
  • KR 8 – 2 communication improvements are identified
  • KR 9 – It takes a maximum of 2 weeks for new applicants to have their first task assigned

Objective 2 – MozActivate focuses mobilizers on impactful areas

  • KR 1 – General feedback form is used by 100% of MozActivate activities
  • KR 2 – We have implemented metrics and measurements for the existing MozActivate and to-be-launched activities as well as for the website itself
  • KR 3 – 70 Reps have organized one or more MozActivate activity
  • KR 4 – Activate is actively engaging 70 new technical contributors
  • KR 5 – 2 new activities are launched

Objective 3 – The Reps program demonstrates operational excellence in the Mozilla Project

  • KR 1 – Goals for Q3 have been set
  • KR 2 – We were involved and gave feedback about the Community Development Team OKRs for Q3 as well as the broader Open Innovation ones
  • KR 3 – The budget allocation for Q3 is finalized and communicated to all Reps
  • KR 4 – On average, we have at most one open action item from the previous week that is not tracked on GitHub before every Council Meeting, and next steps/blockers are identified
  • KR 5 – We have planned 2 brainstorm sessions for the next improvement program
  • KR 6 – We have given feedback for Open Innovation’s “Strategy” project and are a valuable source for future consultation for strategy related questions

We will work closely with the Community Development Team to achieve our goals. You can follow the progress of these tasks in the Reps Issue Tracker. We also have a new Dashboard to track the status of each objective.

Which of the above objectives are you most interested in? What key result would you like to hear more about? What do you find intriguing? Which thoughts cross your mind upon reading this? Where would you like to help out? Let’s keep the conversation going! Join the discussion on Discourse.

 

Planet MozillaThis April, Mozilla is Standing Up for Science

Mozilla supports the March for Science. And we’re leading projects to make scientific research more open and accessible, from extraterrestrial hackathons to in-depth fellowships

 

We believe openness is a core component not just of a healthy Internet, but also a healthy society. Much like open practices can unlock innovation in the realm of technology, open practices can also invigorate fields like civics, journalism — and science.

In laboratories and at academic institutions, open source code, data and methodology foster collaboration between researchers; spark scientific progress; increase transparency and encourage reproducibility; and better serve the public interest.

Open data has been shown to speed up the study process and vaccine development for viruses, like Zika, at global scale. And open practices have allowed scientific societies from around the globe to pool their expertise and explore environments beyond Earth.

This April, Mozilla is elevating its commitment to open science. Mozilla Science Lab, alongside a broader network of scientists, developers and activists, is leading a series of programs and events to support open practices in science.

Our work aligns with the April 22 March for Science, a series of nonpartisan gatherings around the world that celebrate science in the public interest. We’re proud to say Teon Brooks, PhD — neuroscientist, open science advocate and Mozilla Science Fellow — is serving as a March for Science Partnership Outreach Co-Lead.

From science fellowships to NASA-fueled hackathons, here’s what’s happening at Mozilla this April:

Signage for Science Marchers

We want to equip March for Science participants — from the neuroscientist to the megalosaurus-obsessed third grader — with signs that spotlight their passion and reverence for science. So Mozilla is asking you for your most clever, impassioned science-march slogans. With them, our designers will craft handy posters you can download, print and heft high.

Learn more here.

Seeking Open Science Fellows

This month, Mozilla began accepting applications for Mozilla Fellowships for Science. For the third consecutive year, we are providing paid fellowships to scientists around the world who are passionate about collaborative, iterative and open research practices.

Mozilla Science Fellows spend 10 months as community catalysts at their institutions, and receive training and support from Mozilla to hone their skills around open source, data sharing, open science policy and licensing. Fellows also craft code, curriculum and other learning resources.

Fellowship alums hail from institutions like Stanford University and University of Cambridge, and have developed open source tools to teach and study issues like bioinformatics, climate science and neuroscience.

Apply for a fellowship here. And read what open science means to Mozillian Abigail Cabunoc Mayes: My Grandmother, My Work, and My Open Science Story

Calling for Open Data

In the United States, federal taxes help fund billions of dollars in scientific research each year. But the results of that research are frequently housed behind pricey paywalls, or within complex, confounding systems.

Citizens should have access to the research they help fund. Further, open access can spark even more innovation — it allows entrepreneurs, researchers and consumers to leverage and expand upon research. Just one example: Thanks to publicly-funded research made openly available, farmers in Colorado have access to weather data to predict irrigation costs and market cycles for crops.

Add your name to the petition: https://iheartopendata.org.

Calling for Open Citations

Earlier this month, Mozilla announced support for the Initiative for Open Citations (I4OC), a project to make citations in scientific research open and freely accessible. I4OC is a collaboration between Wikimedia, Bill & Melinda Gates Foundation, a slate of scholarly publishers and several other organizations.

Presently, citations in many scholarly publications are inaccessible, subject to restrictive and confusing licenses. Further, citation data is often not machine readable — meaning we can’t use computer programs to parse the data.

I4OC envisions a global, public web of citation data — one that empowers teaching, learning, innovation and progress.

Learn more about I4OC.

Extraterrestrial Hackathon (in Brooklyn)

Each year, the Space Apps hackathon allows scientists, coders and makers around the world to leverage NASA’s open data sets. In 2016, 5,000 people across six continents contributed. Participants built apps to measure air quality, to remotely explore gelid glaciers and to monitor astronauts’ vitals.

For the 2017 Space Apps Hackathon — slated for April 28-30 — participants will use NASA data to study Earth’s hydrosphere and ecological systems. Mozilla Science is hosting a Brooklyn-based Space Apps event, which will include a data bootcamp.

Learn more at http://spaceappsbrooklyn.com/

The post This April, Mozilla is Standing Up for Science appeared first on The Mozilla Blog.

Planet MozillaWebVR Google Daydream support lands in Servo


Want to try this now? Download this three.js Rollercoaster Demo (Android APK)!

We are happy to announce that Google Daydream VR headset and Gamepad support are landing in Servo. The current implementation is WebVR 1.1 spec-compliant and supports asynchronous reprojection to achieve low-latency rendering.

If you are eager to explore, you can download an experimental three.js Rollercoaster Demo (Android APK) compatible with Daydream-ready Android phones. Put on the headset, switch on your controller, and run the app from Daydream Home or from a direct launch.

We have contributed to many parts of the Servo browser codebase in order to allow polished WebVR experiences on Android. It’s nice that our WebVR support goals have allowed us to push forward some improvements that are also useful for other areas of the Android version of Servo.

VR Application life cycle

Daydream VR applications have to gracefully handle several VR Entry flows such as transitions between the foreground and background, showing and hiding the Daydream pairing screen, and adding the GvrLayout Android View on top of the view hierarchy. To manage the different scenarios we worked on proper implementations of native EGL context loss and restore, animation loop pause/resume, immersive full-screen mode, and support for surface-size and orientation changes.

Servo uses a NativeActivity, in combination with android-rs-glue and glutin, as an entry point for the application. We realized that NativeActivity ignores the Android view hierarchy because it’s designed to take over the surface from the window to directly draw to it. The Daydream SDK requires a GvrLayout view in the Activity’s view hierarchy in order to show the VR Scene, so things didn’t work out.

Research into this issue shows that most people decide to get rid of NativeActivity or bypass this limitation using hacky PopupWindow modal views. The PopupWindow hack may work for simple views like a Google AdMob banner but causes complications with a complex VR view. We found a more elegant solution by releasing the seized window and injecting a custom SurfaceView with its render callbacks redirected to the abstract implementation in NativeActivity.

This approach works great, and we can reuse the existing code for native rendering. We do, however, intend to remove NativeActivity in the future. We’d like to create a WebView API-based Servo component that will allow developers to embed their content from Android standalone apps or using WebView-based engine ecosystems such as Cordova. This will involve modifications to various Servo layers coupled with NativeActivity callbacks.

Build System

Thanks to the amazing job of both the Rustlang and Servo teams, the browser can be compiled with very few steps, even on Windows now. This is true for Android too, but the packaging step was still using ant combined with Python scripts. We replaced it with a new Gradle build system for the packaging step, which offers some nice benefits:

  • A scalable dependency system that allows including Gradle/aar-based dependencies such as the GoogleVR SDK.
  • Relative paths for all project libraries and assets instead of multiple copies of the same files.
  • Product flavors for different versions of Servo (e.g. Default, VR Browser, WebView)
  • Android Studio and GPU debugger support.

The new Gradle integration paves the way for packaging Servo APKs with the Android AArch64 architecture. This is important to get optimal performance on VR-ready phone CPUs. Most of the Rust package crates that Servo uses can be compiled for AArch64 using the aarch64-linux-android Rust compilation target. We still, however, need to fix some compilation issues with some C/C++ dependencies that use cmake, autotools or pure Makefiles.

Other necessary improvements to support WebVR

There’s a plethora of rough edges we have to polish as we make progress with the WebVR implementation. This is a very useful exercise that improves Servo Android support as a compelling platform for delivering not only WebVR content, but graphics-intensive experiences. To reach this milestone, these are some of the areas we had to improve:

Daydream support on Rust WebVR


These notable Android improvements, combined with the existing cross-platform WebVR architecture, provide a solid base for Daydream integration into Servo. We started by integrating Daydream support in the browser dependency-free rust-webvr library.

The Google VR NDK for Android provides a C/C++ API for both Daydream and Cardboard headsets. As our codebase is written in Rust, we used rust-bindgen to generate the required bindings. We also published the gvr-sys crate, so from now on anyone can easily use the GVR SDK in Rust for other use cases.

The GoogleVRService class offers the entry point to access GVR SDK and handles life-cycle operations such as initialization, shutdown, and VR Device discovery. The integration with the headset is implemented in GoogleVRDisplay. Daydream lacks positional tracking, but by using the neck model provided in the SDK, we expose a basic position vector simulating how the human head naturally rotates relative to the base of the neck.

A Java GvrLayout view is required in order get a handle to the gvr_context, apply lens distortion, and enable asynchronous-reprojection-based rendering. This adds some complexity to the implementation because it involves adding both the Java Native Interface (JNI) and Java code to the modular rust-webvr library. We created a Gradle module to handle the GvrLayout-related tasks and a helper JNIUtils class to communicate between Rust and Java.

One of the complexities of this interoperation is that the JNI FindClass function fails to find our custom Java classes. This happens because, when attaching native Rust threads to a JavaVM, the JNI AttachCurrentThread call is unaware of the current Java application context and uses the system Classloader instead of the one associated with the application. We fixed the issue by retrieving the Classloader from the NativeActivity’s jobject instance and performing loadClass calls directly on it. I’m waiting for variadic templates to land in Rustlang so I can extend and move these JNI utils into their own crate, providing an API similar to the one I implemented for the C++11 SafeJNI library.

In order to present the WebGL canvas in the headset we tried to use a shared texture_id as we did in the OpenVR implementation. Unfortunately, the GVR SDK allows attaching only external textures that originate from Android MediaCodec or Camera streams. We opted for a BlitFramebuffer-based solution, instead of rendering a quad, to avoid implementing the required OpenGL state-change safeguards or context switching.

Once the Daydream integration was tested using the pure Rust room-scale demo, we integrated it pretty quickly into Servo. It fit perfectly into the existing WebVR architecture. WebVR tests ran well except that VRDisplay.requestPresent() failed in some random launches. This was caused by a possible deadlock during the very specific frame when requestAnimationFrame is moved from window to VRDisplay. Fortunately, this was fixed with this PR.

In order to reduce battery usage, when a JavaScript thread starts presenting to the Daydream headset, the swap_buffers call of the NativeActivity’s EGLContext is avoided. The optimized VR render path draws only into the texture framebuffer attached to the WebGL canvas. This texture is sent to the GVRLayout presentation view when VRDisplay.submitFrame() is called, and lens distortion is then applied.
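
For context, this is roughly what the page-side WebVR 1.1 flow exercising this render path looks like. It is a minimal sketch following the spec’s naming; the render() function and the canvas variable are placeholders, not Servo code:

async function enterVR(canvas) {
    let displays = await navigator.getVRDisplays();
    if (!displays.length)
        return;
    let display = displays[0];
    await display.requestPresent([{ source: canvas }]);

    let frameData = new VRFrameData();
    function onFrame() {
        display.requestAnimationFrame(onFrame);
        display.getFrameData(frameData);
        // render() stands in for the application's WebGL drawing, which uses
        // frameData.leftViewMatrix, frameData.rightViewMatrix, and friends.
        render(frameData);
        // Hand the rendered canvas over for lens distortion and async reprojection.
        display.submitFrame();
    }
    display.requestAnimationFrame(onFrame);
}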

Gamepad Integration

Gamepad support is a necessity for complete WebVR experiences. Similarly to the VRDisplay implementation, integration with the vendor-specific SDKs for gamepads is implemented in rust-webvr, based on the following traits and structs.

These traits are used in both the WebVR Thread and DOM Objects in the Gamepad API implementation in Servo.

Vendor-specific SDKs don’t allow using the VR gamepads independently, so navigator.vr.getDisplays() must be called in order to spin up VR runtimes and make VR gamepads discoverable later in subsequent navigator.getGamepads() calls.

The recommended way to get valid gamepad state on all browsers is calling navigator.getGamepads() within every frame in your requestAnimationFrame callback. We created a custom GamepadList container class with two main purposes:

  • Provide a fast and Garbage Collection-friendly container to share the gamepad list between Rust and JavaScript, without creating or updating JS arrays every frame.

  • Implement an indexed getter method which will be used to hide gamepads according to privacy rules. The Gamepad spec permits the browser to return inactive gamepads (e.g., [null, <object Gamepad>]) when gamepads are available but in a different, hidden tab.


The latest gamepad state is polled immediately in response to the navigator.getGamepads() API call. This is a different approach than the one implemented in Firefox, where the gamepads are vsync-aligned and have their data already polled when requestAnimationFrame is fired. Both options are equally valid, though being able to immediately query for gamepads enables a bit more flexibility:

  • Gamepad state can be sampled multiple times per frame, which can be very useful for motion-capture or drawing WebVR applications.
  • Vsync-aligned polling can be simulated by just calling navigator.getGamepads at the start of the frame, as in the sketch below. Remember from the Servo WebVR architecture that requestAnimationFrame is fired in parallel and allows some JavaScript code to run ahead during the VR headset’s vsync time, until VRDisplay#getFrameData is called.
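
As a rough illustration of the pattern described above, here is a sketch of sampling gamepads inside a VRDisplay animation frame; the button handling is an assumption made for illustration, since real controllers expose their own mappings:

function onVRFrame(display, frameData) {
    display.requestAnimationFrame(() => onVRFrame(display, frameData));

    // Sampling at the start of the callback approximates vsync-aligned polling.
    let gamepads = navigator.getGamepads();
    for (let i = 0; i < gamepads.length; i++) {
        let gamepad = gamepads[i];
        if (!gamepad)
            continue; // Hidden or inactive slots may be reported as null.
        if (gamepad.buttons.length && gamepad.buttons[0].pressed)
            console.log(gamepad.id + ": primary button pressed");
    }

    display.getFrameData(frameData);
    // ... WebGL drawing goes here ...
    display.submitFrame();
}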

Conclusion

We are very excited to see how far we’ve evolved the WebVR implementation on Servo. Now that Servo has a solid architecture on both desktop and mobile, our next steps will be to grow and tune up the WebGL implementation in order to create a first-class WebVR browser runtime. The Gear VR backend is coming too ;) Stay tuned!

Planet MozillaIs undetectable ad blocking possible?

This announcement by Princeton University is making its rounds in the media right now. What the media seems to be most interested in is their promise of ad blocking that websites cannot possibly detect, because the website can only access a fake copy of the page structures where all ads appear to be visible. The browser, on the other hand, would work with the real page structures where ads are hidden. This isn’t something the Princeton researchers have implemented yet, but they could have, right?

First of all, please note how I am saying “hidden” rather than “blocked” here — in order to fake the presence of ads on the page you have to allow the ads to download. This means that this approach won’t protect you against any privacy or security threats. But it might potentially protect your eyes and your brain without letting the websites detect ad blocker usage.
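
For context on what detection means in practice, this is roughly how sites probe for element-hiding ad blockers today. A minimal sketch; the class names are merely illustrative of strings that element-hiding filter lists tend to match:

function checkForElementHiding(callback) {
    let bait = document.createElement("div");
    // Typical filter-list bait (illustrative class names).
    bait.className = "ad ads adsbox ad-banner";
    bait.style.height = "10px";
    document.body.appendChild(bait);
    // Give the content blocker a moment to apply its hiding rules.
    setTimeout(() => {
        let hidden = bait.offsetHeight === 0 ||
                     getComputedStyle(bait).display === "none";
        bait.remove();
        callback(hidden);
    }, 100);
}

checkForElementHiding((hidden) => console.log(hidden ? "ads are being hidden" : "no hiding detected"));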

Can we know whether this approach is doable in practice? Is a blue pill for the website really possible? The Princeton researchers don’t seem to be aware of it, but it has been tried before, probably on a number of occasions even. One such occasion was the history leak via the :visited CSS pseudo-class — this pseudo-class is normally used to make links the user visited before look different from the ones they didn’t. The problem was, websites could detect such different-looking links and know which websites the user visited — there were proof-of-concept websites automatically querying a large number of links in order to extract the user’s browsing history.
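
To illustrate how simple that attack was, here is a sketch of the historical getComputedStyle()-based probe. The color value assumes the browser’s default visited-link purple, and modern browsers deliberately lie about it, so this no longer works:

function probeHistory(urls) {
    let visited = [];
    for (let url of urls) {
        let link = document.createElement("a");
        link.href = url;
        document.body.appendChild(link);
        // Before the mitigation, the computed color reflected :visited styling.
        if (getComputedStyle(link).color === "rgb(85, 26, 139)")
            visited.push(url);
        link.remove();
    }
    return visited;
}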

One of the proposals back then was having the getComputedStyle() JavaScript API return wrong values to the website, so that visited and unvisited links wouldn’t be distinguishable. And if you look into the discussion in the Firefox bug, even implementing this part turned out to be very complicated. But it doesn’t stop here; the same kind of information would leak via a large number of other APIs. In fact, it has been demonstrated that this kind of attack could be performed without any JavaScript at all, by making visited links produce a server request and evaluating these requests on the server side.

Hiding all these side-effects was deemed impossible from the very start, and the discussion instead focused on the minimal set of functionality to remove in order to prevent this kind of attack. There was a proposal allowing only same-origin links to be marked as visited. However, the final solution was to limit the CSS properties allowed in a :visited pseudo-class to those merely changing colors and nothing else. Also, the conclusion was that APIs like canvas.drawWindow() which allowed websites to inspect the display of the page directly would always have to stay off limits for web content. The whole process from recognizing an issue to the fix being rolled out took 8 (eight!) years. And mind you, this was an issue being addressed at the source — directly in the browser core, not from an extension.

Given this historical experience, it is naive to assume that an extension could present a fake page structure to a website without being detectable due to obvious inconsistencies. If at all, such a solution would have to be implemented deep in the browser core. I don’t think that anybody would be willing to limit functionality of the web platform for this scenario, but the solution search above was also constrained by performance considerations. If performance implications are ignored a blue pill for websites becomes doable. In fact, a fake page structure isn’t necessary and only makes things more complicated. What would be really needed is a separate layout calculation.

Here is how it would work:

  • Some built-in ad hiding mechanism would be able to mark page elements as “not for display.”
  • When displaying the page, the browser would treat such page elements as if they had a “visibility:hidden” style applied — all requests and behaviors triggered by such page elements should still happen but they shouldn’t display.
  • Whenever the page uses APIs that require access to positions (offsetTop, getBoundingClientRect etc), the browser uses a second page layout where the “not for display” flag is ignored. JavaScript APIs then produce their results based on that layout rather than the real one.
  • That second layout is necessarily calculated at the same time as the “real” one, because calculating it on demand would lead to delays that the website could detect. E.g. if the page is already visible, yet the first offsetTop access takes unusually long the website can guess that the browser just calculated a fake layout for it.

Altogether this means that the cost of the layout calculation will be doubled for every page, both in terms of CPU cycles and memory — only because at some point the web page might try to detect ad blocking. Add to this the significant complexity of the solution and the considerable maintenance cost (the approach might have to be adjusted as new APIs are added to the web platform). So I would be very surprised if any browser vendor were interested in implementing it. And let’s not forget that all this is only about ad hiding.

And that’s where we are with undetectable ad blocking: possible in theory but completely impractical.

Planet WebKitRelease Notes for Safari Technology Preview 28

Safari Technology Preview Release 28 is now available for download for macOS Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 214535-215271.

Power and Performance

  • Changed to pause silent WebAudio rendering in background tabs (r214721)
  • Changed to pause animated SVG images on pages loaded in the background (r214561)
  • Changed to make inaudible background tabs become eligible for memory kill after 8 minutes (r215077)
  • Changed to kill any WebContent process using over 16 GB of memory (r215055)
  • DOM Timers are now throttled to 30fps and aligned in cross-origin iframes (r215116)
  • requestAnimationFrame callbacks are now throttled to 30fps and aligned in cross-origin iframes (r215070, r215153)

CSS

  • Adapted content-alignment properties to the new baseline syntax (r214624)
  • Adapted place-content alignment shorthand to the new baseline syntax (r214852)
  • Adapted self-alignment properties to the new baseline syntax (r214564)
  • Fixed scroll offset jumps after a programmatic scroll in an overflow container with scroll snapping (r215075)
  • Implemented the place-items shorthand (r214966)
  • Implemented stroke-color CSS property (r215261)
  • Implemented stroke-miterlimit CSS property (r214787)
  • Unprefixed CSS cursor values grab and grabbing (r215146)

JavaScript

  • Fixed objects with gaps between numerical keys getting filled by NaN values (r214714)
  • Fixed Object.seal() and Object.freeze() on global this (r215072)
  • Fixed String.prototype.replace to correctly apply special replacement parameters when passed a function (r214662)

Web API

  • Changed _blank, _self, _parent, and _top browsing context names to be case-insensitive (r214944)
  • Cleaned up touch event handler registration when moving nodes between documents (r214819)
  • Fixed <input type="range"> to prevent breaking all mouse events when changing to disabled while active (r214955)
  • Prevented double downloads of preloaded content when the content is in MemoryCache (r215229)
  • Fixed WebSocket.send (r215102)

Web Inspector

  • Added a preference for Auto Showing Scope Chain sidebar on pause (r214847)
  • Changed the order of Debugger tab sidebar panels: Scope Chain, Resource, Probes (r215047)
  • Changed XHR breakpoints to be global (r214956)
  • Changed hierarchical path component labels to guess directionality based on content for RTL layout (r214862)
  • Fixed RTL alignment of close button shown while docked (r214902)
  • Fixed RTL layout issues in call frame tree elements and async call stacks (r214846)
  • Fixed RTL layout issues in the debugger dashboard putting arrows on the wrong side (r214899)
  • Fixed RTL layout issues in Type Profiler popovers (r214906)
  • Fixed misplaced highlights in Search results of the Search navigation sidebar for RTL layout (r214864)
  • Fixed disappearing section when clicking on the body of a CSS rule after editing (r214863)
  • Fixed showing indicators for hidden DOM element breakpoints in the Elements tab (r214844)
  • Fixed blank Network tab content view after reload (r214551)
  • Made “Enter Class Name” text field wider so the placeholder text doesn’t clip (r215192)
  • Fixed probe values not showing in the Debugger tab sidebar (r214967)
  • Fixed focusing the Find banner immediately after showing it (r214856)
  • Fixed showing Source Map Resources in the Debugger Sources list (r215082)
  • Fixed Styles sidebar warning icon appearing inside property value text (r214617)
  • Fixed broken tabbing in Styles sidebar when additional “:” and “;” are in the property value (r215170)
  • Fixed clipped data in WebSockets data grid (r215206)
  • Fixed staying scrolled to the bottom as new WebSocket log messages get added (r214587)
  • Included additional pause reason details for DOM “subtree modified” breakpoint (r214861)
  • Included more Network information in Resource Details Sidebar (r214903)
  • Included all headers in the Request Headers section of the Resource details sidebar (r215062)

WebDriver

  • Fixed an issue that prevented non-popup windows from being maximized or resized
  • Fixed an issue that caused previously opened tabs to reopen when Safari was launched in order to run a WebDriver test

Accessibility

  • Exposed a new AXSubrole for the explicit ARIA “group” role (r214623)
  • Fixed VoiceOver web article navigation with an article rotor for sites like Facebook and Twitter (r215236)

Media

  • Fixed seeks to currentTime=0 if currentTime is already 0 (r214959)

Rendering

  • Fixed clipping across page breaks when including <caption>, <thead> or <tbody> in a <table> (r214712)
  • Fixed Japanese fonts in vertical text to support synthesized italics (r214848)
  • Fixed long Arabic text in ContentEditable with CSS white-space=pre to prevent hangs (r214726)
  • Fixed overly heavy fonts on facebook.com by attempting to normalize variation ranges (r214585, r214572)

WebCrypto

  • Added support for AES-CTR (r215051)

Security

  • Changed private browsing sessions to not look in keychain for client certificates (r215125)

AppleScript

  • Fixed an issue where Safari would throw an exception when evaluating JavaScript ending with an implied return value, where the final statement doesn’t include the return keyword

Planet MozillaFirefox faster and more stable with the first big bytes of Project Quantum, simpler with compact themes and permissions redesign

Today’s release of Firefox includes the first significant piece of Project Quantum, as well as various visible and under-the-hood improvements.

The Quantum Compositor speeds up Firefox and prevents graphics crashes on Windows

In case you missed our Project Quantum announcement, we’re building a next-generation browser engine that takes full advantage of modern hardware. Today we’re shipping one of the first important pieces of this effort – what we’ve referred to as the “Quantum Compositor”.

Some technical details – we’ve now extracted a core part of our browser engine (the graphics compositor) to run in a process separate from the main Firefox process. The compositor determines what you see on your screen by flattening into one image all the layers of graphics that the browser computes, kind of like how Photoshop combines layers. Because the Quantum Compositor runs on the GPU instead of the CPU, it’s super fast. And, because of occasional bugs in underlying device drivers, the graphics compositor can sometimes crash. By running the Quantum Compositor in a separate process, if it crashes, it won’t bring down all of Firefox, or even your current tab.

In testing, the Quantum Compositor reduced browser crashes by about 10%. You can learn more about our findings here. The Quantum Compositor will be enabled for about 70% of Firefox users – those on Windows 10, 8, and 7 with the Platform Update, on computers with graphics cards from Intel, NVidia, or AMD.

And if you’re wondering about the Mac – graphics compositing is already so stable on MacOS that a separate process for the compositor is not necessary.

Save screen real estate – and your eyes – with compact themes and tabs

It’s a browser’s job to get you where you want to go, and then get out of the way.

That’s why today’s release of Firefox for desktop ships with two new themes: Compact Light and Compact Dark. Compact Light shrinks the size of the browser’s user interface (the ‘chrome’) while maintaining Firefox’s default light color scheme. The Compact Dark theme inverts colors so it won’t strain your eyes, especially if you’re browsing in the dark. To turn on one of these themes, click the menu button and choose Add-ons. Then select the Appearance panel, and the theme you’d like to activate.

Firefox for Android also ships with a new setting for compact tabs. When you switch tabs, this new setting displays your tabs in two columns, instead of one, so it’s easier to switch tabs when you have several open. To activate compact tabs, go to Settings > General.

Easily control a website’s permission to access device sensors or send you notifications

In order to fully function, many websites must first get your permission to access your hardware or alert you of information. For example, video conferencing apps need to use your camera and microphone, and maps request your location so you don’t have to type it in. Similarly, news sites and social networks often ask to send you notifications of breaking stories or messages.

Today’s Firefox desktop release introduces a redesigned interface for granting and subsequently managing a website’s permissions. Now, when you visit a website that wants to access sensitive hardware or send you a notification, you’ll be prompted with a dialog box that explicitly highlights the permissions that site is requesting. If later on you would like to change a site’s permissions, just click the ‘i’ icon in the Awesome Bar.

You can learn more about the improvements to Firefox’s permissions in this post.

Lots more new

Check out the Firefox 53 release notes for a full list of what’s new, but here are a few more noteworthy items:

  • Firefox for Android is now localized in Arabic, Hebrew, Persian, and Urdu
  • Reader Mode now displays estimated reading times on both Android and desktop
  • Send tabs between desktop and mobile Firefox by right-clicking the tab
  • Firefox now uses TLS 1.3 to secure HTTPS connections

Web developers should check out the Hacks blog for more information about what’s in today’s release.

We hope you enjoy today’s release, and that you’re excited for the even bigger Quantum leaps still ahead.

The post Firefox faster and more stable with the first big bytes of Project Quantum, simpler with compact themes and permissions redesign appeared first on The Mozilla Blog.

Planet MozillaWeekly SUMO Community Meeting Apr. 19, 2017

Weekly SUMO Community Meeting Apr. 19, 2017 This is the SUMO weekly call for 4/19/17. Please note: there is a known audio issue in the second half of the video.

Planet Mozillaon customer service; or, how to treat bug reports

From United: Broken Culture, by Jean-Louis Gassée, writing on his time as the head of Apple France:

Over time, a customer service theorem emerged. When a customer brings a complaint, there are two tokens on the table: It’s Nothing and It’s Awful. Both tokens are always played, so whoever chooses first forces the other to grab the token that’s left. For example: Customer claims something’s wrong. I try to play down the damage: It’s Probably Nothing…are you sure you know what you’re doing? Customer, enraged at my lack of judgment and empathy, ups the ante: How are you boors still in business??

But if I take the other token first and commiserate with Customer’s complaint: This Is Awful! How could we have done something like this? Dear Customer is left with no choice, compelled to say Oh, it isn’t so bad…certainly not the end of the world.

It’s simple, it works…even in marriages, I’m told.

There’s no downside to taking the It’s Awful position. If, on further and calm investigation, the customer is revealed to be seriously wrong, you can always move to the playbook’s Upon Further Review page.

Planet Mozillacurl bug bounty

The curl project is driven by volunteers, with no financing at all except for a few sponsors who pay for the server hosting and for contributors to work on features and bug fixes during work hours. curl and libcurl are used widely by companies and commercial software, so a fair amount of work is done by people during paid work hours.

This said, we don’t have any money in the project. Nada. Zilch. We can’t pay bug bounties or hire people to do specific things for us. We can only ask people or companies to volunteer things or services for us.

This is not a complaint – far from it. It works really well and we have a good stream of contributions, bug reports and more. We are fortunate enough to make widely used software, which gives our project a certain impact in the world.

Bug bounty!

Hacker One coordinates a bug bounty program for flaws that affects “the Internet”, and based on previously paid out bounties, serious flaws in libcurl match that description and can be deemed worthy of bounties. For example, 3000 USD was paid for libcurl: URL request injection (the curl advisory for that flaw) and 1000 USD was paid for libcurl duphandle read out of bounds (the corresponding curl advisory).

I think more flaws in libcurl could’ve met the criteria, but I suspect that, like me, many people haven’t been aware of this possibility for bounties.

I was glad to find out that this bounty program pays out money for libcurl issues and I hope it will motivate people to take an extra look into the inner workings of libcurl and help us improve.

What qualifies?

The bounty program is run and administered entirely outside the control or insight of the curl project itself, and I must underscore that while libcurl issues can qualify, their emphasis is on fixing vulnerabilities in Internet software that have a potentially big impact.

To qualify for this bounty, vulnerabilities must meet the following criteria:

  • Be implementation agnostic: the vulnerability is present in implementations from multiple vendors or a vendor with dominant market share. Do not send vulnerabilities that only impact a single website, product, or project.
  • Be open source: finding manifests itself in at least one popular open source project.

In addition, vulnerabilities should meet most of the following criteria:

  • Be widespread: vulnerability manifests itself across a wide range of products, or impacts a large number of end users.
  • Have critical impact: vulnerability has extreme negative consequences for the general public.
  • Be novel: vulnerability is new or unusual in an interesting way.

If your libcurl security flaw matches this, go ahead and submit your request for a bounty. If you’re at a company using libcurl at scale, consider joining that program as a bounty sponsor!

Planet MozillaLocalizing Nightly by Default

One of our goals for 2017 is to implement a continuous localization system at Mozilla for Firefox and other projects. The idea is to expose new strings to localizers earlier and more frequently, and to ship updates to users as soon as they’re ready. I’m excited to say that we’ve arrived at one of the key milestones toward a continuous localization system: transitioning localization from Aurora to Nightly.

How can you help?

Starting April 19th, the focus for localization is going to be on Nightly.

If you are a localizer, you should install Nightly in your own language and test your localization.

If you are a member of a local community, you should start spreading the message about the importance of using Nightly to help improve localized versions of Firefox and share feedback with localizers.

If you are new to localization, and you want to help with translation tasks, check out our tools (Pontoon and Pootle), and get in touch with the contributors already working on your language.

The amount of information might be overwhelming at times; if you ever get lost, you can find help on IRC in the #l10n channel, on our mailing list, and even via Twitter @mozilla_l10n.

Firefox release channels

Mozilla has three (previously four) release channels for Firefox, each with their own dedicated purpose. There’s Nightly (built from the mozilla-central repository), Beta (mozilla-beta), and Release (mozilla-release).

  • Nightly: development of Firefox (and now localization)
  • Aurora: testing & localization (no longer available)
  • Beta: stable testing of Firefox
  • Release: global distribution of Firefox to general audience

A version of Firefox will “ride the trains” from Nightly to Beta and finally to Release, moving down the channel stream every 6-8 weeks.

With Aurora, localizers were given one cycle to localize new, unchanging content for Firefox. In fact, once moved to Aurora, code would be considered “string frozen”, and only exceptional changes to strings would be allowed to land. Any good update from localizers during that time was signed off and rode the trains for 6-12 weeks before end-users received it.

We spent the last two years asking localizers about their contribution frequency preferences. We learned that, while some preferred this 6 week cycle to translate their strings, the majority preferred to have new content to translate more frequently. We came away from this with the understanding that the thing localizers want most when it comes to their contribution frequency is freedom: freedom to localize new Firefox content whenever they choose. They also wanted the freedom to send those updated translations to end-users as early as possible, without waiting 6-12 weeks. To accommodate this desire for freedom, Axel set out to develop a plan for a continuous localization system that exposes new content to localizers early and often, as well as delivers new l10n updates to users more quickly.

Nightly localization

The first continuous localization milestone consisted of removing the sign-off obligation from localizer’s TODO list. The second milestone consists of transitioning localization from the old Aurora channel to the Nightly channel. This transition aims to set the stage for cross-channel localization (one repository per locale with Nightly, Beta, and Release strings together) as well as satisfy the first desired freedom: to localize new Firefox content whenever localizers choose to localize.

This is how it works:

  1. A developer lands new strings in mozilla-central for Nightly.
  2. Localization drivers (l10n-drivers) review those new strings and offer feedback to the dev where needed.
  3. Every 2-3 days, localization drivers update a special clone of mozilla-central used by localization tools.
  4. Pootle & Pontoon detect when new strings have been added to this special repository and pull them into their translation environments automatically.
  5. When a new l10n update is made, Pootle & Pontoon push the change into the locale’s Nightly repository.
  6. Localization drivers review all new updates into l10n Nightly repositories and sign off on all good updates.
  7. Good updates are flagged for shipping to Release users when the version of Firefox “rides the trains” to Release.

Localizing on Nightly offers localizers a few benefits:

  1. Localizers are exposed to new strings earlier for l10n, making it easier for developers to make corrections to en-US strings when localizers report errors.
  2. Localizers have the freedom to localize whenever new strings land (every 2-3 days) or to define their own cadence (every 2 weeks, 4 weeks, 8 weeks, etc.).
  3. Without Aurora, new localization updates get to end-users in Release faster.

The next continuous localization milestone is to implement cross-channel localization. Cross-channel will satisfy the second desired freedom: delivering translation updates to end-users faster. It will also drastically simplify the localization process, allowing localizers to land fixes once and ship them in all versions of Firefox. If you’d like to follow the work related to cross-channel, you can find it here on GitHub. We expect cross-channel to be ready before June 2017.

Planet MozillaMy fourth year working at Mozilla

<figure> Mozilla staff photo from All-Hands event in Hawaii, December 2016 </figure>

This week marks my 4th year Mozillaversary! As usual, I try to put together a short post to recap on some of the things that happened during the past year. It feels like I have some things to talk about this time around which are slightly more process-heavy than previous year’s efforts, but gladly there’s some good work in there too. Here goes!

Our team grew

Our functional team grew over the past year which is really great to see. We now manage the development and infrastructure for both www.mozilla.org and MDN. The idea is that having both teams more closely aligned will lead to increased sharing of knowledge and skills, as well as standardization on common tools, libraries, infra, deployment and testing. It’s great to have some more talented people on the team, hooray!

Are we agile yet?

While most of my day-to-day work is still spent tending to the needs of www.mozilla.org, a lot has changed in the last year with regard to how our development team manages work processes. The larger marketing organization at Mozilla has switched to a new agile sprint model, with dedicated durable teams for each focus area. While I think this is a good move for the marketing org as a whole, it has also been a struggle for many teams to adjust (the mozorg team included). While two week sprints can work well for product focused teams, a website such as mozorg can be quite a different beast; with multiple stakeholders, moving parts, technical debt, and often rapidly shifting priorities. It is also an open source project, with real contributors. We’re still experimenting with trying to make this new process fit the needs of our project, but I do wonder if we’ll slowly creep back to Kanban (our previous methodology) during the course of the next year. Let’s wait and see ;)

Contributions and other stats

Here are the usual stats from the past year:

  • I made over 166 commits to bedrock this past year (down from 269 commits last year).
  • I have now filed over 424 bugs on Bugzilla, been assigned over 474 bugs and made over 3967 comments.
  • I cycled over 1657 miles on my lunch breaks (one of my personal goals this past year was to become more healthy!).

Now, the number of commits to bedrock isn’t always a good representation of the level of work that occurred during the year. I did work on some large, far-reaching bugs which took a lot of time and effort. But it does make me wonder whether our new sprint process is actually less productive overall. Are all those smaller bugs being left unattended for longer? Would we still have been hitting our high-level goals doing Kanban? It’s hard to quantify, but there’s some food for thought here.

Firefox Download Pages

The main Firefox download page is one of the highest-traffic pages on mozorg, so it’s naturally something we pay close attention to when making changes. This year we experimented on the page a lot. It got redesigned no less than three times, and was continually tweaked over the course of multiple A/B tests. Lots of scrutiny goes into every change, especially in relation to page weight, loading time, and the impact those can have on download conversions. Ultimately, what used to be a relatively plain-looking page turned into something quite beautiful.

<figure> Redesigned Firefox download page </figure>

We also experimented with things like making the sun rise over the horizon, but sadly this proved to be a bit too much of a distraction for some visitors. Nevertheless, kudos to our design team for the beautiful visuals. It was quite fun to work on :)

Firefox Stub Attribution

Another notable feature I spent time on was adding support to bedrock for tracking campaign referral data, and passing that along to the Firefox Stub Installer for profiling in Telemetry. The idea is that the Firefox Retention Team can look at data in Telemetry and try to attribute specific changes in retention (how long users actively use the product) to downloads triggered by specific referral sources or media campaigns. This work required coordination with multiple engineering teams within Mozilla, and took considerable time to test and gradually roll out. We’re still crunching the data and hope it can provide some useful insights going forward.

SHA-1 Bouncer Support

Firefox 52 marked the end of SHA-1 certificate support on the Web. In order to continue serving downloads to users, we had to switch Bouncer to SHA-2 only, and then set up a SHA-1 mirror to continue supporting users on Windows XP/Vista. This required modifying our download button logic in bedrock (something I was once a bit scared of doing) to provide SHA-1 specific links that get shown only to the users who need it. Once XP/Vista are officially no longer supported by Firefox ESR we can remove this logic.

Mozilla Global Navigation

As part of Mozilla’s new branding rollout, I also got to build the first prototype of the new global navigation for mozorg. We’re still iterating and refining how it works and performs, but the aim is that one day it can be used across many Mozilla web properties. I’m hopeful it may help to solve some of the information architecture issues we’ve faced on mozorg in recent years.

All-hands and travel

Photo of me in the crater of a volcano!

Mozilla’s All-Hands events are always pretty amazing. This time they happened in London and Hawaii. While London wasn’t really high on the excitement levels, it was nice to get to welcome all my colleagues to the UK. Hawaii was naturally the real highlight for me, especially because I got to go visit a real, live volcano! In between all that I also got to pay my second visit to the Mozilla Toronto office, almost exactly 4 years since my last visit (which was my very first week working for Mozilla!).

Planet MozillaDocker image to generate allthethings.json

I've created a lot of hackery in the past (mozci) based on Release Engineering's allthethings.json file, as well as improved the code to generate it reliably. This file contains metadata about the Buildbot setup and the relationships between builders (e.g. a build triggers these tests).

Now, I have never spent time ensuring that the setup to generate the file is reproducible. As I've moved between laptops over time, I've needed to modify the script that generates the file to fit each new machine's setup.

Today I'm happy to announce that I've published a Docker image at Docker hub to help you generate this file anytime you want.

You can find the documentation and code in here.

Please give it a try and let me know if you find any issues!
docker pull armenzg/releng_buildbot_docker
docker run --name allthethings --rm -i -t armenzg/releng_buildbot_docker bash
# Inside the container's shell: generate allthethings.json (this will take a few minutes)
/braindump/community/generate_allthethings_json.sh
# In another terminal on the host (once the script is done), copy the file out of the container
docker cp allthethings:/root/.mozilla/releng/repos/buildbot-configs/allthethings.json .


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Planet MozillaMOSS End-of-Award Report: Mio

We are starting to ask MOSS project awardees to write an end-of-award report detailing what happened. Here’s one written a few months ago by the Mio project (Carl Lerche).

Planet MozillaThis Week in Rust 178

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

Sadly, for lack of nominations we have no Crate of the Week this week.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

100 pull requests were merged in the last week.

New Contributors

  • Aaron Hill
  • alexey zabelin
  • nate
  • Nathaniel Ringo
  • Scott McMurray
  • Suchith J N

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

We're making good progress and the style is coming together. If you want to see the style in practice, check out our example or use the Integer32 Playground and select 'Proposed RFC' from the 'Format' menu. Be aware that implementation is work in progress.

Issues in final comment period:

Good first issues:

We're happy to mentor these; please reach out to us in #rust-style if you'd like to get involved.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust doesn't end unsafety, it just builds a strong, high-visibility fence around it, with warning signs on the one gate to get inside. As opposed to C's approach, which was to have a sign on the periphery reading "lol good luck".

Quxxy on reddit.

Thanks to msiemens for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Planet MozillaFirefox 53 new contributors

With the release of Firefox 53, we are pleased to welcome the 63 developers who contributed their first code change to Firefox in this release, 58 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Planet MozillaMozilla and Stanford Law Panel on Intellectual Property Law and the First Amendment

Mozilla and Stanford Law Panel on Intellectual Property Law and the First Amendment Join us for a Mozilla and Stanford Program in Law, Science & Technology hosted panel series about the intersection between intellectual property law and the...

Planet MozillaAdd-ons Update – 2017/04

Here’s the state of the add-ons world this month.

The Road to Firefox 57 (recently updated) explains what developers should look forward to in regards to add-on compatibility for the rest of the year. Please give it a read if you haven’t already.

The Review Queues

In the past month, 1,209 listed add-on submissions were reviewed:

  • 984 (81%) were reviewed in fewer than 5 days.
  • 31 (3%) were reviewed between 5 and 10 days.
  • 194 (16%) were reviewed after more than 10 days.

There are 821 listed add-ons awaiting review.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers are critical for our success, and can earn cool gear for their work. Visit our wiki page for more information.

Compatibility

The blog post for 53 is up and the bulk validation was run. Here’s the post for Firefox 54 and the bulk validation is pending.

Multiprocess Firefox is enabled for some users, and will be deployed for most users very soon. Make sure you’ve tested your add-on and either use WebExtensions or set the multiprocess compatible flag in your add-on manifest.
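
For legacy (non-WebExtension) add-ons, that flag lives in the install.rdf manifest. Below is a minimal sketch of where the flag sits; the id and version are placeholder values and the manifest is abbreviated, so treat it as an illustration rather than a complete manifest. WebExtensions don't need the flag, since they are multiprocess-compatible by design.

<?xml version="1.0" encoding="utf-8"?>
<RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:em="http://www.mozilla.org/2004/em-rdf#">
  <Description about="urn:mozilla:install-manifest">
    <em:id>my-addon@example.com</em:id>   <!-- placeholder add-on id -->
    <em:version>1.0</em:version>
    <!-- Declares the add-on safe to run in multiprocess (e10s) Firefox -->
    <em:multiprocessCompatible>true</em:multiprocessCompatible>
  </Description>
</RDF>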

As always, we recommend that you test your add-ons on Beta to make sure that they continue to work correctly. You may also want to review the post about upcoming changes to the Developer Edition channel.

End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • bkzhang
  • Aayush Sanghavi
  • saintsebastian
  • Thomas Wisniewski
  • Michael Kohler
  • Martin Giger
  • Andre Garzia
  • jxpx777
  • wildsky

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/04 appeared first on Mozilla Add-ons Blog.

Planet MozillaHello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

Planet Mozilla45.9.0 available

TenFourFox 45.9.0 is now available for testing (downloads, hashes, release notes), a bit behind due to Mozilla delaying this release until the Wednesday and my temporary inability to get connected at our extended stay apartment. The only changes in this release from the beta are some additional tweaks to JavaScript and additional expansion of the font block list. Please test; this build will go live Tuesday "sometime."

The next step is then to overlay the NSPR from 52 onto 45.9, overlay our final stack of changesets, and upload that as the start of FPR1 and our Github repository. We can then finally retire the changesets and let them ride off into the sunset. Watch for that in a couple weeks along with new build instructions.

Planet MozillaDawn project or the end of Aurora

As described in the post on the Hacks blog, we are changing the release mechanism of Firefox.

What

In order to address the complexity and cycle length issues, the release management team, in coordination with Firefox product management and engineering, is going to remove the Aurora stabilization phase from the cycle.

When

On April 18th, Firefox 55 will remain on Nightly instead of moving to Aurora. This means Firefox 55 will spend two full cycles on Nightly. On June 13th, Firefox 55 will migrate directly from Nightly to Beta.

Why

As originally intended, Aurora was to be the first stabilization channel, with a user base 10x the size of Nightly’s, so as to provide additional user feedback. This original intent never materialized.

The release cycle time has required that we subvert the model regularly over the years by uplifting new features to meet market requirements.

How

The stabilization cycle from Nightly to Release will be shortened by 6-8 weeks.

A staged rollout mechanism, similar to what we do today with Release, will be used for the first weeks of Beta.

Our engineering and release workflow will continue to have additional checks and balances rolled out to ensure we ship a high quality release.

We will focus on finding and fixing regressions during the Nightly cycle and alleviate the time pressure to ship, reducing the 400-600 patches currently uplifted to Aurora.

A new feature will merge from Nightly to Beta only when it's deemed ready, based on pre-established criteria determined by engineering, product, and product integrity.

Tooling such as static analysis, linters, and code coverage will be integrated into the development process.

Dawn planning

FAQ

What will happen to the Aurora population on Desktop?

The Aurora population will be migrated to the Beta update channel in April 2017. We plan to keep them on a separate “pre-beta” update channel as compared to the rest of the Beta population. We will use this pre-beta audience to test and improve the stability and quality of initial Beta builds until we are ready to push to 100% of beta population. Because we presented Aurora as a stable product in the past, the beta channel is the closest in terms of stability and quality.

From the next merge (April 18th), users running 54 Aurora will remain on the Aurora channel but updates will be turned off. In case of critical security issues, we might push new updates to these aurora channel users. Aurora channel users will be migrated to Beta channel in April ‘17. For this to happen, we need to make sure that the Developer Edition features are working the same way on the Beta update channel (theme, profile, etc).

What will happen to the Aurora population on Android?

Because Google Play doesn't allow migrating a population from one application to another, the Fennec population on Aurora will be migrated to the Nightly application. For now, we are planning to reuse the current Google Play Aurora application and replace it with Nightly to preserve the current population.

Why are we taking different approaches with the Desktop and Android Aurora populations?

Aurora channel on Desktop has been around for a long time and has a substantial end-user base that Beta channel will benefit from.

Fennec Aurora on Google Play is a recent addition and we believe merging this audience with Nightly makes more sense. It also simplifies implementation.

I am running Developer Edition, what will happen to me?

Developer Edition, currently based off Aurora, will be updated to get builds from the Beta branch. There is nothing Developer Edition users need to do, they will update automatically to the Beta build keeping the Developer Edition themes, tools, and preferences as well as the existing profile.

Will I still be able to test add-ons with Developer Edition?

You can continue to test unsigned add-ons on Nightly builds or load WebExtensions temporarily in Beta and Release builds.

We are also continuing to provide unbranded builds of the beta and release branches which are able to run unsigned add-ons - including bootstrapped - for development and experimentation. These versions will not be verified by QE, but will receive updates, which is an improvement over the unbranded builds we currently provide for add-on development.

The majority of Developer Edition users won't experience any disruption. However those developers who rely on unsigned add-ons will need to use Nightly builds until we have finalized the unsigned add-on builds specifically for those developers.

How will you mitigate the quality risk from cutting 6-8 weeks of stabilization from the cycle?

Instead of pushing to 100% of the beta population at once, we will use a staged rollout mechanism to push to a subset of the beta population. For the first phase, we will be pushing to the former Aurora population. As a second phase, we will be targeting specific populations (operating system, graphics card, etc.).

In parallel, QE will also do preliminary Nightly sign-offs to detect potential new issues early. Release management will be much more aggressive in terms of feature deactivation.

Last but not least, the aurora cycle was used to finalize some features. Instead, feature stabilization will be performed during the nightly cycle.

What are we doing to improve Nightly quality?

To improve the overall quality of nightly, a few initiatives will help.

Nightly merge criteria

New end-user facing features landing in Nightly builds should meet Beta-readiness criteria before they can be pushed to Beta channel.

Static analyzers

In order to detect issues at the review phase, static analyzers will be integrated as part of the workflow. They will be able to identify potential defects and also limit technical debt.

Code coverage

Code coverage results are going to be used to analyze the quality of the testsuite and the risk introduced by the change.

Risk assessment

By correlating various data sources (VCS, Bugzilla, etc.), we believe we can identify the potential risks carried by changes before they even land. The idea is to identify the functions where a modification has a higher chance of inducing a regression.

How often will Beta builds be updated?

We will continue to push two Beta builds for Desktop and one Fennec build each week of the Beta cycle.

Will Developer Edition continue to have a separate profile?

Yes. The Developer Edition separate profile feature is a requirement for transition. If for whatever reason this feature cannot be completed by the end of the year we will need to return to creating rebuilds of Developer Edition as previously done to ensure those users are not cast away.

What will happen to the Aurora branch after Firefox 54 moves to Beta?

Updates on aurora channel will be disabled on April 18th. The desktop and Android aurora populations will be migrated as described above.

What criteria will be used to assess feature readiness to move to Beta?

We will be monitoring crash rates, QE's sign offs, telemetry data and new regressions to determine overall Nightly quality and feature readiness to merge to Beta.

How and who will determine whether a feature is ready to move to Beta?

End-user facing features will be reviewed for beta-readiness before they are pushed to Beta channel. Following is a list of criteria that will be used to evaluate feature readiness to merge to Beta:

  • No significant stability Issues
  • Missing Test Plans
  • Insufficient Testing
  • Feature is not Code Complete
  • Too Many Open Bugs

More detailed criteria defined in this document.

Are there any changes to Release or ESR channel?

No changes are planned for Release or ESR channel users.

Does this change how frequently we push mainline builds to Release channel?

No, but changes added in Nightly can make it into a Release build about 6-8 weeks sooner than they do now.

What will happen for l10n process when we remove Aurora?

Focus for localization will move from mozilla-aurora to mozilla-central. Localization tools (Pootle and Pontoon) will read en-US strings from a special mozilla-central clone: l10n-drivers will review patches with strings landing in the official mozilla-central repository, provide feedback to devs if necessary, and land updates every 2-3 days in this special repository. Localized content will be pushed to l10n-central repositories.

There are no changes for developers working on Firefox: Nightly and mozilla-central remain open to string changes, including the extra six weeks that Firefox 55 will spend in Nightly, while Beta is still considered string frozen, and requests to uplift changes affecting strings are evaluated case by case.

Users interested in helping with localization should download Nightly in their language.

What will happen for l10n process by the end of year?

For Firefox and Firefox for Android we will shift to a model with a single repository for all channels for each locale. This change will be reflected in localization tools, allowing localizers to make a change to a string and see that update applied across all channels at once.

How does Dawn impact engineering planning for landing features?

The biggest shift is that features will have to be completed before merge day. Developers will not be able to finalize feature development during the next branch cycle (as Aurora is used currently). See also “How and who will determine whether a feature is ready to move to Beta?”.

How will bug fixes and features not tracked by project management be impacted by Dawn?

Landing bug fixes in the Nightly repository continues as before. Development on features that are not directly end-user visible and not tracked by EPMs or release management continues as before.

If Nightly quality and stability is negatively impacted by these untracked features or bug fixes, we will discuss potential mitigation options such as: back outs, stabilizing quality issues before continuing new feature development work, delaying Merge date, imposing code freeze in Nightly until blocking issues are resolved, etc.

What will happen to the diagnostic assert?

MOZ_DIAGNOSTIC_ASSERT will be enabled during the first part of the Beta cycle. It will be automatically disabled when EARLY_BETA_OR_EARLIER is no longer defined.
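
For context, here is a rough, simplified sketch of how this kind of build-time gating typically looks. It is illustrative only and not the actual Gecko macro definitions; the DEBUG condition is an assumption for local debug builds.

#include <cstdlib>

// Illustrative sketch only -- the real definitions in Gecko differ in detail.
#if defined(DEBUG) || defined(EARLY_BETA_OR_EARLIER)
  // Fatal on Nightly and early Beta: crash immediately if the condition fails.
  #define MOZ_DIAGNOSTIC_ASSERT(expr) \
    do { if (!(expr)) { std::abort(); } } while (false)
#else
  // Later in Beta and on Release the check compiles away entirely.
  #define MOZ_DIAGNOSTIC_ASSERT(expr) ((void)0)
#endif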

Planet MozillaThis Week In Servo 98

In the last week, we landed 127 PRs in the Servo organization’s repositories.

We started publishing Windows nightly builds on download.servo.org. Please test them out and file issues about things that don’t work right!

Planning and Status

Our overall roadmap is available online, including the overall plans for 2017. Q2 plans will appear soon; please check it out and provide feedback!

This week’s status updates are here.

Notable Additions

  • jdm fixed an assertion failure when loading multiple stylesheets from the same <link> element.
  • mckaymatt made line numbers correct in errors reported for inline stylesheets.
  • canaltinova implemented support for the shape-outside CSS property in Stylo.
  • waffles removed much of the code duplication for CSS parsing and serialization of basic shapes.
  • nox preserved out of bounds values when parsing calc() expressions.
  • Manishearth implemented MathML presentation hints for Stylo.
  • bholley improved performance of the style system by caching runtime preferences instead of querying them.
  • ferjm added an option to unminify JS and store it to disk for easier web compatibility investigations.
  • tiktakk converted a recursive algorithm to an iterative one for complex selectors.
  • emilio fixed some bugs that occurred when parsing media queries.
  • Manishearth implemented queries for font metrics during restyling.
  • jryans added support for @page rules to Stylo.
  • UK992 allowed Servo to build with MSVC 2017.
  • MortimerGoro implemented the Gamepad API.
  • jdm corrected an assertion failure when using text-overflow: ellipsis.
  • tomhoule refactored the style system types to preserve more specified values.
  • jonathandturner worked around the mysterious missing key events on Windows.
  • charlesvdv improved the handling of non-ascii characters in text inputs.
  • clementmiao added common keyboard shortcuts for text inputs.
  • manuel-woelker implemented support for Level 4 RGB and HSL CSS syntax.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Planet Mozilla[worklog] Edition 063. Spring is here

webcompat life

  • Some issues take a lot longer to analyze and understand than it seems at the start.

webcompat issues

webcompat.com dev

Otsukare!

Planet MozillaQuantum Flow Engineering Newsletter #5

Another week full of performance-related updates quickly went by; I’d like to share a few of them.

We’re almost mid-April, about 3 weeks after I shared my first update on our progress battling our sync IPC issues. I have prepared a second Sync IPC Report for 2017-04-13. For those who looked at the previous report, this is in the same spreadsheet, and the data is next to the previous report, for easy comparison. We have made a lot of great progress fixing some of the really bad synchronous IPC issues in the recent few weeks, and even though telemetry data is laggy, we are starting to see this reflected in the data coming in through telemetry! Here is a human-readable summary of where we are now:

  • PCookieService::Msg_GetCookieString is still at the top of the list, now taking a whopping 45% piece of the pie chart!  I don’t think there is any reason to believe that this has gotten particularly worse, it’s just that we’re starting to get better at not doing synchronous IPC, so this is standing out even more now!  But its days are numbered.  🙂
  • PContent::Msg_RpcMessage and PBrowser::Msg_RpcMessage at 19%.  We still need to get better data about the sync IPC triggered from JS, that shows up in this data under one of these buckets.
  • PJavaScript::Msg_Get at 5% (CPOW overhead) could be caused by add-ons that aren’t e10s compatible.
  • PAPZCTreeManager::Msg_ReceiveMouseInputEvent.  This one (and a few other smaller APZ related ones) tends to have really low mean values, but super high count values which is why they tend to show high on this list, but they aren’t necessarily too terrible compared to the rest of our sync IPC issues.
  • PVRManager::Msg_GetSensorState also relatively low mean values but could be slightly worse.
  • PJavaScript::Msg_CallOrConstruct, more CPOW overhead.
  • PContent::Msg_SyncMessage, more JS triggered sync IPC.

A few items further down on the list are either being worked on or have recently been fixed as well. I expect this to keep improving over the next few weeks. It is really great to see this progress; thanks to everyone who has worked on fixing these issues, helping with the diagnoses, code reviews, etc.

We have also been working hard at triaging performance-related bug reports. In order to keep an eye on the bug-to-bug status of the project, you can use the Bugzilla queries on the wiki. As of this moment, we have triaged 160 bugs as [qf:p1] (which means these performance-related bugs are the ones we believe should be fixed now for the Firefox 57 release). Of these bugs, 92 bugs are unassigned right now. If you see a bug on this list in your area of expertise which you think you can help with, please consider picking it up. We really appreciate your help. Please remember that not every bug on this list is complicated to fix, and there’s everything from major architectural changes to simple one-liner fixes up for grabs. 🙂

Another really nice effort that is starting to unfold, and which I’m super excited about, is the new Photon performance project, a focused effort on front-end performance. This includes everything from engineering the new UI with things like animations running on the compositor in mind from the get-go, to being laser-focused on guaranteeing good performance on key UI interactions such as tab opening and closing, to lots of focused measurements and fixes to the browser front-end.

The performance story of this week is about how measurement tools can distort our vision. And this one isn’t much of a story; it’s more of a lesson that I have been learning seemingly over and over again these days. You may have heard of the measurement problem, which basically amounts to the fact that you always change what you measure. Markus and I were recently talking about the cost of style flushes for browser.xul that I had seen in my profiles and how they could sometimes be expensive, and noticed that this may be due to the profiler overhead that we incur in order to show information about the cause of the restyle in the profile UI. He has since fixed the issue. I think the reason why I didn’t catch this in my own profiling was that I have gotten so used to seeing expensive reflows and restyles that sometimes I accept that as a fact of life and don’t look under the hood closely enough. Lesson learned!

We have a bug tracking these types of issues, so if you know of something similar, please create a dependency. If you also profile Firefox regularly using the Gecko Profiler, adding yourself to the CC list of that bug may not be a bad idea.

Now it’s time to acknowledge those who have helped make Firefox faster in the past week. I will probably forget a few people here; apologies for any unintended omissions!

Until next week, happy hacking!

Planet MozillaShould Patent Law Be a First Amendment Issue?

On Monday April 17th, Mozilla and Stanford Law are presenting a panel about intellectual property law and the First Amendment.

We’ll talk about how IP law and the First Amendment intersect in IP disputes, eligibility tests, and the balance of interests between patent holders and users.

Judge Mayer’s concurring opinion last year in Intellectual Ventures I LLC v. Symantec Corp has put the debate over the First Amendment and the boundaries of patent protection back in the spotlight.

Our all star panel will discuss both sides of the debate.

Panelists

Dan Burk, professor of law at UC Irvine School of Law.

Sandra Park, Senior Staff Attorney for the ACLU Women’s Rights Project.

Robert Sachs, a partner at Fenwick & West LLP, a leading Intellectual Property law firm.

Wendy Seltzer, Strategy Lead and Policy Counsel for the World Wide Web Consortium.

Elvin Lee, Product and Commercial Counsel at Mozilla, will moderate the event.

We’ll also hear opening remarks from professor Mark A. Lemley, who serves as the Director of the Stanford Program in Law, Science and Technology.

Topics and questions we’ll cover

  • Does patent law create conflicts with the First Amendment?
  • Do the subject-matter eligibility tests created by the Supreme Court (e.g., Alice) mitigate or impact any potential First Amendment issues?
  • How does the First Amendment’s intersection with patent law compare to other IP and regulatory contexts?
  • What are the different competing interests for IP owners and creators?
  • Registration of ‘offensive’ marks is currently being reviewed in light of the First Amendment. Are there any parallels to the grant of patent protection by the USPTO, or subsequent enforcement?

Watch

AirMozilla and Mozilla’s Facebook page will carry the livestream for this event. We hope you’ll tune in.

The post Should Patent Law Be a First Amendment Issue? appeared first on Open Policy & Advocacy.

Planet MozillaApply to Join the AMO Feature Board

Help people discover add-ons that make this browser do glorious things.

Do you have an eye for awesome add-ons? Can you distinguish a decent ad blocker from a stellar one? Interested in making a huge impact for millions of Firefox users? If so, please consider applying to join AMO’s Feature Board.

The board is composed of a small group of community contributors who help select each month’s new featured add-ons. Every board serves for six months, and then a new group of community curators takes over. Now the time has come to assemble a new group of talented contributors.

Anyone from the add-ons community is welcome to apply: power users, theme designers, developers, and evangelists. Priority will be given to applicants who have not served on the board before, followed by those from previous boards, and finally those from the outgoing board.

This page provides more information on the duties of a board member. To be considered, please email us at amo-featured [at] mozilla [dot] org and tell us how you’re involved with AMO and why you think you’d make a strong content curator. The deadline for applications is Friday, April 28, 2017 at 23:59 PDT. The new board will be announced shortly thereafter.

We look forward to hearing from you!

The post Apply to Join the AMO Feature Board appeared first on Mozilla Add-ons Blog.

Planet MozillaReps Weekly Meeting Apr. 13, 2017

Reps Weekly Meeting Apr. 13, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
