Planet Mozilla: Why you probably want to disable jQuery.parseHTML even though you don't call it

TL;DR: jQuery.parseHTML is a security hazard and will be called implicitly in a number of obvious and not so obvious situations.

Why should you care?

Hey, jQuery is great! It’s so great that Stack Overflow users will recommend it no matter what your question is. And now they have two problems. Just kidding, they will have the incredible power of jQuery:

$("#list").append('<li title="' + item.info + '">' + item.name + '</li>');

The above locates a list in the document, creates a new list item with dynamic content and adds it to the list — all that in a single line that still stays below the 80-column limit. And we didn’t even lose readability in the process.

Life is great until some fool comes along and mumbles “security” (yeah, that’s me). Can you tell whether the code above is safe to be used in a web application? Right, it depends on the context. Passing HTML code to jQuery.append will use the infamous innerHTML property implicitly. If you aren’t careful with the HTML code you are passing there, this line might easily turn into a Cross-Site Scripting (XSS) vulnerability.

Does item.name or item.info contain data from untrusted sources? Answering that question might be complicated. You need to trace the data back to its source, decide who should be trusted (admin user? localizer?) and make sure you didn’t forget any code paths. And even if you do all that, some other developer (or maybe even yourself a few months from now) might come along and add another code path where item.name is no longer trusted. Do you want to bet on this person realizing that they are making an entirely different piece of code insecure?

It’s generally better to give jQuery structured data and avoid taking any chances. The secure equivalent of the code above would be:

$("#list").append($("<li>", {title: item.info}).text(item.name));

Not quite as elegant any more but now jQuery will take care of producing a correct HTML structure and you don’t need to worry about that.

Wait, there is more!

There is one remarkable thing about jQuery APIs: each function can take all kinds of parameters. For example, the .append() function we used above can take a DOM element, a CSS selector, HTML code or a function returning any of the above. This keeps function names short, and you only need to remember one function name instead of four.

The side effect, however, is that even if you are not giving jQuery any HTML code, you still have to keep in mind that the function could accept HTML code. Consider the following code, for example:

$(tagname + " > .temporary").remove();

This will look for elements of class temporary within a given tag and remove them, right? Except that the content of tagname had better be trusted here. What will happen if an attacker manages to set the value of tagname to "<img src='dummy' onerror='alert(/xss/)'>"? You probably guessed it: the “selector” will be interpreted as HTML code and will execute arbitrary JavaScript code.

There are more than a dozen jQuery functions that will happily accept both selectors and HTML code. Starting with jQuery 1.9.0, security issues here became somewhat less likely: the string has to start with < in order to be interpreted as HTML code. Older versions would accept anything as HTML code as long as it didn’t contain #, and versions before jQuery 1.6.1 didn’t even have that restriction.

To sum up: you had better use jQuery 1.9.0 or above; otherwise your dynamically generated selector might easily end up being interpreted as an HTML string. And even with recent jQuery versions you should be careful with dynamic selectors: the first part of the selector should always be a static string to avoid security issues.
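To make that advice concrete, here is a minimal sketch, reusing the tagname variable from the example above:

// Prefixing the selector with a static string means the argument can
// never start with "<", so jQuery 1.9+ will not interpret it as HTML:
$("body " + tagname + " > .temporary").remove();

Note that this only prevents the HTML interpretation; a malformed tagname can still make the selector throw, so validating the value remains a good idea.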

Defusing jQuery

With almost all of the core jQuery functionality potentially problematic, evaluating security of jQuery-based code is tricky. Ideally, one would simply disable unsafe functionality so that parsing HTML code by accident would no longer be possible. Unfortunately, there doesn’t seem to be a supported way yet. The approach I describe here seems to work in the current jQuery versions (jQuery 1.11.3 and jQuery 2.1.4) but might not prevent all potential issues in older or future jQuery releases. Use at your own risk! Oh, and feel free to nag jQuery developers into providing supported functionality for this.

There is a comment in the source code indicating that a missing jQuery.parseHTML function is an expected situation. However, removing this function doesn’t resolve all the issues, and it disables safe functionality as well. Removing jQuery.buildFragment, on the other hand, doesn’t seem to have any downsides:

delete jQuery.buildFragment;

// Safe element creation still works
$('<img>', {src: "dummy"});

// Explicitly assigning or loading HTML code for an element works
$(document.body).html('<img src="dummy">');
$(document.body).load(url);

// These will throw an exception however
$('<img src="dummy">');
$(document.body).append('<img src="dummy">');
$.parseHTML('<img src="dummy">');

Of course, you have to adjust all your code first before you disable this part of the jQuery functionality. And even then you might have jQuery plugins that will stop working with this change. There are some code paths in the jQuery UI library for example that rely on parsing non-trivial HTML code. So this approach might not work for you.

But how do I create larger DOM structures?

The example creating a single list item is nice of course but what if you have to create some complicated structure? Doing this via dozens of nested function calls is impractical and will result in unreadable code.

One approach would be placing this structure in your HTML document, albeit hidden. Then you merely need to clone it and fill in the data:

<style type="text/css">
  #entryTemplate { display: none; }
</style>

<div id="entryTemplate">
  <div class="title"></div>
  <div class="description"></div>
</div>

<script>
  var template = $("#entryTemplate");
  var entry = template.clone().removeAttr("id");
  entry.find(".title").text(item.title);
  entry.find(".description").text(item.description);
  $(document.body).append(entry);
</script>

Other templating approaches for JavaScript exist as well of course. It doesn’t matter which one you use as long as you don’t generate HTML code on the fly.

Planet Mozilla: On WebExtensions

Enough has been said about WebExtensions over the past week that I wasn’t sure if I wanted to write this post. As usual, I can’t seem to help myself. Note the usual disclaimer that this is my personal opinion. Further note that I have no involvement with WebExtensions at this time, so I write this from the point of view of an observer.

API? What API?

I shall begin with the proposition that the legacy, non-jetpack environment for addons is not an API. As ridiculous as some readers might consider this to be, please humour me for a moment.

Let us go back to the acronym, “API.” Application Programming Interface. While the usage of the term “API” seems to have expanded over the years to encompass just about any type of interface whatsoever, I’d like to explore the first letter of that acronym: Application.

An Application Programming Interface is a specific type of interface that is exposed for the purposes of building applications. It typically provides a formal abstraction layer that isolates applications from the implementation details behind the lower tier(s) in the software stack. In the case of web browsers, I suggest that there are two distinct types of applications: web content, and extensions.

There is obviously a very well defined API for web content. On the other hand, I would argue that Gecko’s legacy addon environment is not an API at all! From the point of view of an extension, there is no abstraction, limited formality, and not necessarily an intention to be used by applications.

An extension is imported into Firefox with full privileges and can access whatever it wants. Does it have access to interfaces? Yes, but are those interfaces intended for applications? Some are, but many are not. The environment that Gecko currently provides for legacy addons is analogous to an operating system running every single application in kernel mode. Is that powerful? Absolutely! Is that the best thing to do for maintainability and robustness? Absolutely not!

Somewhere a line needs to be drawn to demarcate this abstraction layer and improve Gecko developers’ ability to make improvements under the hood. Last week’s announcement was an invitation to addon developers to help shape that future. Please participate and please do so constructively!

WebExtensions are not Chrome Extensions

When I first heard rumors about WebExtensions in Whistler, my source made it very clear to me that the WebExtensions initiative is not about making Chrome extensions run in Firefox. In fact, I am quite disappointed with some of the press coverage that seems to completely miss this point.

Yes, WebExtensions will be implementing some APIs to be source compatible with Chrome. That makes it easier to port a Chrome extension, but porting will still be necessary. I like the Venn Diagram concept that the WebExtensions FAQ uses: Some Chrome APIs will not be available in WebExtensions. On the other hand, WebExtensions will be providing APIs above and beyond the Chrome API set that will maintain Firefox’s legacy of extensibility.

Please try not to think of this project as Mozilla taking functionality away. In general I think it is safe to think of this as an opportunity to move that same functionality to a mechanism that is more formal and abstract.

Planet Mozilla: Letting someone ssh into your laptop using Pagekite

In order to investigate a bug I was running into, I recently had to give my colleague ssh access to my laptop behind a firewall. The easiest way I found to do this was to create an account for him on my laptop and to set up a pagekite frontend on my Linode server and a pagekite backend on my laptop.

Frontend setup

Setting up my Linode server in order to make the ssh service accessible and proxy the traffic to my laptop was fairly straightforward.

First, I had to install the pagekite package (already in Debian and Ubuntu) and open up a port on my firewall by adding the following to both /etc/network/iptables.up.rules and /etc/network/ip6tables.up.rules:

-A INPUT -p tcp --dport 10022 -j ACCEPT

Then I created a new CNAME for my server in DNS:

pagekite.fmarier.org.   3600    IN  CNAME   fmarier.org.

With that in place, I started the pagekite frontend using this command:

pagekite --clean --isfrontend --rawports=virtual --ports=10022 --domain=raw:pagekite.fmarier.org:Password1

Backend setup

After installing the pagekite and openssh-server packages on my laptop and creating a new user account:

adduser roc

I used this command to connect my laptop to the pagekite frontend:

pagekite --clean --frontend=pagekite.fmarier.org:10022 --service_on=raw/22:pagekite.fmarier.org:localhost:22:Password1

Client setup

Finally, my colleague needed to add the following entry to ~/.ssh/config:

Host pagekite.fmarier.org
  CheckHostIP no
  ProxyCommand /bin/nc -X connect -x %h:10022 %h %p

and install the netcat-openbsd package since other versions of netcat don't work.

On Fedora, we used netcat-openbsd-1.89 successfully, but this newer package may also work.

He was then able to ssh into my laptop via ssh roc@pagekite.fmarier.org.

Making settings permanent

I was quite happy setting things up temporarily on the command-line, but it's also possible to persist these settings and to make both the pagekite frontend and backend start up automatically at boot. See the documentation for how to do this on Debian and Fedora.
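For reference, one generic way to do that on a systemd-based machine is a small unit file wrapping the exact frontend command from above. This is just a sketch of mine, not the packaged Debian or Fedora configuration:

# /etc/systemd/system/pagekite-frontend.service
[Unit]
Description=Pagekite frontend
After=network.target

[Service]
ExecStart=/usr/bin/pagekite --clean --isfrontend --rawports=virtual --ports=10022 --domain=raw:pagekite.fmarier.org:Password1
Restart=on-failure

[Install]
WantedBy=multi-user.target

followed by systemctl enable pagekite-frontend and systemctl start pagekite-frontend.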

Planet Mozilla: 38.2.1 is available

TenFourFox 38.2.1 is available (release notes, hashes, downloads). Because this is a chemspill and we're already delayed, it will go live by this evening. Other than the Mozilla fixes, issue 306 is also repaired. Further work on 38's MP3 support is being deferred until the replacement hard disk arrives (should be early next week).

Don't forget to test 38.2.1 with incremental GC disabled. See the previous post. Enjoy our new sexy wiki, too. Sexy. Yes.

Planet Mozilla: Participation Leadership Framework 0.1

In the last heartbeat, as part of our Q3 goals for leadership development, I interviewed a diverse set of people across Mozilla, asking what they think the skills, knowledge and attitudes of effective Participation Leadership at Mozilla are. Two things really stood out during this process. The first was how many people (staff, contributors and alumni) are truly, truly dedicated to each other's success, which was really inspiring and helped inform the quality of this Framework. The second was how many opportunities and resources for leadership development already exist (or are being created); bundled together with more specifically targeted curriculum and focused outcomes, they will provide powerful learning-by-participating experiences.

This Heartbeat iterated on themes that emerged during those interviews. I thank those who provided feedback on Discourse and in GitHub, all of which brought us to this first 0.1 version.


Foundations of Participation Leadership are the core skills, knowledge and attitudes that contribute to success on both personal goals and goals for Participation at Mozilla.

Building Blocks of Participation Leadership are units of learning that together provide a whole vision for leadership, but that individually build the skills, attitudes and knowledge informing specific learning outcomes as needed.

Examples of skills, knowledge and attitudes for each:

Personal Leadership

  • Accountability
  • Decision Making
  • Introspective, Authentic Leadership
  • “My Leadership Identity at Mozilla”

Essential Mozilla

  • Mozilla’s Timeline & History
  • Advocacy
  • Mozilla’s Mission
  • “Why Mozilla, Why Now, Why Me?”

Building for Action and Impact

  • Community Building
  • Agile for Participation Projects
  • Designing with Participation Hooks & Triggers
  • Building Bridges to Mozilla

Empowering Teams and People

  • Uncovering Unconscious Bias
  • Mentoring & Finding Mentorship
  • Teach & Facilitate Like Mozilla
  • Distributed Leadership

Working Open

  • Open Practices
  • Writing in the Open
  • Sharing & Licensing
  • Activism in the Open

Developing Specialization

  • Creating Automated Tests for Firefox OS
  • Inviting Skilled Participation to Fennec
  • Web Literacy Leadership

We would love your comments, suggestions and ideas on where we are so far. In the next heartbeat we’ll begin building and running workshops with these as a guide, further iterating towards 1.0.

 


Image Credit: Lead Type by jm3

Planet Mozilla: AMO T-shirt Update

Just want to give a quick update on the snazzy t-shirts designed by Erick León Bolinaga. They were finally done printing this week, and are headed to a fulfillment center for shipping. We expect them to begin shipping by the end of next week.

Thanks for your patience!

Planet Mozilla: Webmaker Demos August 28 2015


Planet Mozilla: MozReview Montréal Work Week


Under the watchful gaze (and gentle cooing) of the pigeons, the MozReview developers gathered in Montréal for a work week. My main goal for the week was to make substantial progress towards Autoland to Inbound, my primary project for this quarter, maybe even a deployment of an initial iteration to the development server.

While we didn’t get quite that far, we did make a lot of progress on a lot of fronts, including finally getting Bugzilla API key support deployed. This is the first work week I’ve done where we just stayed and worked together in an Airbnb apartment rather than getting hotel rooms and making use of shared space in a Mozilla office. I really enjoyed this; it was a nice casual work environment and we got a lot of focused work done.

Some things I worked on this week, in varying degrees of completion:

  • Bug 1198086: This adds an endpoint for making Autoland requests to “non-Try” trees, which will allow us to build the UI for Autoland to Inbound. A while back I fixed Bug 1183295, which added support for non-Try destinations in the Autoland service itself. This means that, outside of bug fixes, the backend support for Autoland to Inbound is implemented and we can focus on the UI.

  • Bug 1196263 is my other main project this quarter. We want to add a library which enables people to write their own static analysis bots that run against MozReview. This is based on work that GPS did in the winter to create a static analysis bot for Python. We still need to rework some of the messages we’re sending out on Pulse when a review is published, at the moment we’ll end up re-reviewing unchanged commits and spamming lots of comments. This was a problem with the original Python bot and needs to be fixed before bots can be enabled.

  • Bug 1168486 involves creating a “Custom Hosting Service” for review repositories. This will let us maintain metadata about things like whether or not a repo has an associated Try repository so we can disable triggering try runs on reviews where this doesn’t make sense.

  • Bug 1123139 is a small UI fix to remove unnecessary information from the Description field. We’ve decided to reserve the Description field for displaying the Mercurial commit message which will hopefully encourage people to write more descriptive messages for their changes. This will also move the “pull down these commits” hint to the Information section on the right of the page. Like most small UI fixes, this consumed an embarrassing amount of time. I’ve come to realize that no matter how many bad UIs I leave under my pillow at night, the UI fairy will not come and fix them, so I’ll just have to get better at this sort of thing.

Planet Mozilla: Boston Python: Twisted async networking framework

Yesterday, Stephen DiCato and I gave a talk for Boston Python titled: Twisted async networking framework. It was an introductory-to-intermediate level talk about using the Twisted networking framework, based on our experiences at Percipient Networks.

The talk, available on our GitHub (PDF), covered a few basic topics:

  1. What is asynchronous programming?
  2. What is Twisted?
  3. When/why to use Twisted?
  4. What is the event loop (reactor)?
  5. What are Deferreds and how do you use them?
  6. What are protocols (and related objects) and how do you use them?

Additionally there was a ‘bonus’ section: Using Twisted to build systems & services.

We used an example of a very simple chat server (NetCatChat: where the official client is netcat) to demonstrate these principles. All of our (working!) demo code is included in the repository.
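To give a flavour of the protocol and factory concepts from the talk, here is a minimal netcat-friendly chat server in the same spirit as NetCatChat. This is a sketch of mine rather than the actual demo code, and the port number is arbitrary:

from twisted.internet import reactor
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver

class ChatProtocol(LineReceiver):
    delimiter = b"\n"  # netcat sends bare newline-terminated lines

    def connectionMade(self):
        # The factory is shared across all connections, so it can hold state.
        self.factory.clients.add(self)

    def connectionLost(self, reason):
        self.factory.clients.discard(self)

    def lineReceived(self, line):
        # Relay each incoming line to every other connected client.
        for client in self.factory.clients:
            if client is not self:
                client.sendLine(line)

class ChatFactory(Factory):
    protocol = ChatProtocol

    def __init__(self):
        self.clients = set()

reactor.listenTCP(2323, ChatFactory())
reactor.run()

Clients can then join the conversation with nc localhost 2323.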

There was a great turnout (almost 100 people showed up) and I greatly enjoyed the experience. Thanks to everyone who came, the sponsors for the night, Boston Python for setting this up, and Stephen for co-presenting! Please let us know if you have any questions or comments.

Planet Mozilla: Bringing better support to regional communities

During this third quarter, one of the main goals for the Participation team at Mozilla is to better support Reps and Regional communities.

We want to focus our efforts this quarter on 10 countries to be more efficient with the resources we have and be able to:

  • Tailor country profiles and a community health dashboard.
  • Develop a mid-term plan with at least three communities.
  • Systematize a coaching framework with volunteers.

As part of the Reps/Regional group I’m currently involved in these efforts, focusing on three European countries: Germany, France and the UK.

Over the past few weeks, and in the weeks to come, I’ll be meeting volunteers from these communities to learn more about them and to figure out where to get the information needed to develop the country profiles and the community dashboard, an important initiative for getting a clear overview of our community status.

Also, I’m meeting and working with the awesome German community on a plan to align and improve the community over the next 6 months.

On top of all the previous things, we are starting a set of 1:1 meetings with key volunteers inside these communities to bring coaching and support in a more personal way, understanding everyone’s views and learning the best ways to help people’s skills and motivation.

Finally, I’m working to improve the Reps/Regional team’s accountability and workflow productivity, exploring better ways to manage our work as a team, and working with the Reps Council to put together a Reps program profile doc to better understand the current status and what should be changed or improved.

You can learn more about the Participation team’s Q3 goals and key results, as well as individual team members’ goals, in this public document, and you can follow our daily work on our GitHub page.

Planet Mozilla: Apache Licenses


At the bottom of the Apache 2.0 License file, there’s an appendix:

APPENDIX: How to apply the Apache License to your work.

...

Copyright [yyyy] [name of copyright owner]

...

Does that look like an invitation to fill in the blanks to you? It sure does to me, and has for others in the Rust community as well.

Today I was doing some licensing housekeeping and made the same embarrassing mistake.

This is a PSA to double-check whether those inviting blanks are part of the appendix before filling them out in Apache license texts.

Planet Mozilla: ES6 for now: Template strings

ES6 is the future of JavaScript and it is already here. It is a finished specification, and it brings a lot of features a language requires to stay competitive with the needs of the web of now. Not everything in ES6 is for you and in this little series of posts I will show features that are very handy and already usable.

If you look at JavaScript code I’ve written you will find that I always use single quotes to define strings instead of double quotes. JavaScript is OK with either, the following two examples do exactly the same thing:

var animal = "cow";
 
var animal = 'cow';

The reason why I prefer single quotes is that, first of all, it makes it easier to assemble HTML strings with properly quoted attributes that way:

// with single quotes, there's no need to 
// escape the quotes around the class value
var but = '<button class="big">Save</button>';
 
// this is a syntax error:
var but = "<button class="big">Save</button>";
 
// this works:
var but = "<button class=\"big\">Save</button>";

The only time you need to escape now is when you use a single quote in your HTML, which should be a very rare occasion. The only thing I can think of is inline JavaScript or CSS, which means you are very likely doing something shady or desperate to your markup. Even in your texts, you are probably better off not using a single quote but the typographically more pleasing ’.

Aside: Of course, HTML is forgiving enough to omit the quotes or to use single quotes around an attribute, but I prefer to create readable markup for humans rather than relying on the forgiveness of a parser. We made the HTML5 parser forgiving because people wrote terrible markup in the past, not as an excuse to keep doing so.

I’ve suffered enough in the DHTML days of document.write to create a document inside a frameset in a new popup window and other abominations to not want to use the escape character ever again. At times, we needed triple ones, and that was even before we had colour coding in our editors. It was a mess.

Expression substitution in strings?

Another reason why I prefer single quotes is that I wrote a lot of PHP in my time, for very large web sites where performance mattered a lot. In PHP, there is a difference between single and double quotes. Single-quoted strings don’t have any substitution in them; double-quoted ones do. Back in the days of PHP 3 and 4, that meant using single quotes was much faster, as the parser didn’t have to go through the string to substitute values. Here is an example of what that means:

<?php
  $animal = 'cow';
  $sound = 'moo';
 
  echo 'The animal is $animal and its sound is $sound';
  // => The animal is $animal and its sound is $sound
 
  echo "The animal is $animal and its sound is $sound";
  // => The animal is cow and its sound is moo
?>

JavaScript didn’t have this substitution, which is why we had to concatenate strings to achieve the same result. This is pretty unwieldy, as you need to jump in and out of quotes all the time.

var animal = 'cow';
var sound = 'moo';
 
alert('The animal is ' + animal + ' and its sound is ' +
 sound);
// => "The animal is cow and its sound is moo"

Multi line mess

This gets really messy with longer and more complex strings and especially when we assemble a lot of HTML. And, most likely you will sooner or later end up with your linting tool complaining about trailing whitespace after a + at the end of a line. This is based on the issue that JavaScript has no multi-line strings:

 
// this doesn't work
var list = '<ul>
  <li>Buy Milk</li>
  <li>Be kind to Pandas</li>
  <li>Forget about Dre</li>
</ul>';
 
// This does, but urgh… 
var list = '<ul>\
  <li>Buy Milk</li>\
  <li>Be kind to Pandas</li>\
  <li>Forget about Dre</li>\
</ul>';
 
// This is the most common way, and urgh, too…
var list = '<ul>' +
'  <li>Buy Milk</li>' +
'  <li>Be kind to Pandas</li>' +
'  <li>Forget about Dre</li>' +
'</ul>';

Client side templating solutions

In order to work around the mess that is string handling and concatenation in JavaScript, we did what we always do: we wrote a library. There are many HTML templating libraries, with Mustache.js probably having been the seminal one. All of these follow their own, non-standardised syntax and work in that frame of mind. It’s a bit like saying that you write your content in markdown and then realising that there are many different ideas of what “markdown” means.

Enter template strings

With the advent of ES6 and its standardisation, we can now rejoice: JavaScript has a new kid on the block when it comes to handling strings: template strings. Support for template strings in current browsers is encouraging: Chrome 44+, Firefox 38+, Microsoft Edge and WebKit are all on board. Safari, sadly enough, is not, but it’ll get there.

The genius of template strings is that they use a new string delimiter, which is in use neither in HTML nor in normal text: the backtick (`).

Using this one we now have string expression substitution in JavaScript:

var animal = 'cow';
var sound = 'moo';
 
alert(`The animal is ${animal} and its sound is ${sound}`);
// => "The animal is cow and its sound is moo"

The ${} construct can take any JavaScript expression that returns a value; you can, for example, do calculations or access properties of an object:

var out = `ten times two totally is ${ 10 * 2 }`;
// => "ten times two totally is 20"
 
var animal = {
  name: 'cow',
  ilk: 'bovine',
  front: 'moo',
  back: 'milk',
}
alert(`
  The ${animal.name} is of the 
  ${animal.ilk} ilk, 
  one end is for the ${animal.front}, 
  the other for the ${animal.back}
`);
// => 
/*
  The cow is of the 
  bovine ilk, 
  one end is for the moo, 
  the other for the milk
*/

That last example also shows you that multi line strings are not an issue at all any longer.

Tagged templates

Another thing you can do with template strings is prepend them with a tag, which is the name of a function that is called and gets the string as a parameter. For example, you could encode the resulting string for URLs without having to resort to the horridly named encodeURIComponent all the time.

function urlify (str) {
  return encodeURIComponent(str);
}
 
urlify `http://beedogs.com`;
// => "http%3A%2F%2Fbeedogs.com"
urlify `woah$£$%£^$"`;
// => "woah%24%C2%A3%24%25%C2%A3%5E%24%22"
 
// nesting also works:
 
var str = `foo ${urlify `&&`} bar`;
// => "foo %26%26 bar"

This works, but relies on implicit array-to-string coercion. The first parameter sent to the function is not a plain string but an array of the string’s literal parts, with the substituted values passed separately. If used the way I show here, it gets converted to a string for convenience, but the correct way is to access the array members directly, as the following sketch shows.
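Here is one possible variant that does exactly that, encoding only the substituted values and leaving the literal parts alone (my own sketch, not part of the specification):

function urlify (strings, ...values) {
  // Interleave the literal parts with the URL-encoded values:
  return strings.reduce(function (result, str, i) {
    var value = i < values.length ? encodeURIComponent(values[i]) : '';
    return result + str + value;
  }, '');
}
 
var user = 'jane doe';
urlify `http://example.com/?q=${user}`;
// => "http://example.com/?q=jane%20doe"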

Retrieving strings and values from a template string

Inside the tag function you can not only get the full string but also its parts.

function tag (strings, ...values) {
  console.log(strings);
  console.log(values);
  console.log(strings[1]);
}
 
tag `you ${3+4} it`;
/* =>
 
Array [ "you ", " it" ]
Array [ 7 ]
 it
 
*/

There is also an array of the raw strings provided to you, which means that you get all the characters in the string, including control characters. Say, for example, you add a line break with \n: the cooked string will contain an actual newline, but the raw strings keep the literal \n characters:

function tag (strings, ...values) {
  console.log(strings);
  console.log(values);
  console.log(strings[1]);
  console.log(strings.raw[1]);
}
 
tag `you ${3+4} \nit`;
/* =>
 
Array [ "you ", "  it" ]
Array [ 7 ]
 
it
 \nit
*/

Conclusion

Template strings are one of those nifty little wins in ES6 that can be used right now. If you have to support older browsers, you can of course transpile your ES6 to ES5; you can also do a feature test for template string support using a library like featuretests.io or with the following code:

var templatestrings = false;
try {
  new Function( "`{2+2}`" );
  templatestrings = true;
} catch (err) {
  templatestrings = false;
} 
 
if (templatestrings) {
	// …
}


Planet Mozilla: 38.2.1 delayed due to hardware failure

TenFourFox 38.2.1 was supposed to be released to you today but the hard disk used for compiling it blew up sometime yesterday and I've been recovering data from the drive and the last backup instead. The G5 version was built before the disk died, and does check out, but the other three builds haven't been yet. Let this be a reminder that DiskWarrior can fix a lot of things but not hardware failure (and people complaining of random faults in TenFourFox, please check your hardware first -- the symptom here was random freezes because the electronics kept dropping the drive off the SATA bus unexpectedly), so Data Rescue is busy getting the recoverable pieces off it and the rest I can restore from the file server. Both tools belong in your Power Mac bug-out bag and both still support PowerPC, so please support those vendors who still support us. It should be repaired enough to resume builds hopefully late tonight but I don't have an estimated time of release (hopefully no later than Sunday). It includes two Mozilla fixes and will also include a tweak for TenFourFox issue 306.

In the meantime, a fair bit of the wiki has been updated and rewritten for Github. I am also exploring an idea from bug 1160228 by disabling incremental garbage collection entirely. This was a bad idea on 31 where incremental GC was better than nothing, but now that we have generational garbage collection and the nursery is regularly swept, the residual tenured heap seems small enough to make periodic full GCs more efficient. On a tier 1 platform the overhead of lots of incremental cycles may well be below the noise floor, but on the pathological profile in the bug even a relatively modern system had a noticeable difference disabling incremental GC. On this G5 occasionally I get a pause in the browser for 20 seconds or so, but that happens very infrequently, and otherwise now that the browser doesn't have to schedule partial passes it seems much sprightlier and stays so longer. The iBook G4 saw an even greater benefit. Please note that this has not been tested well with multiple compartments or windows, so your mileage may vary, but with that said please see what you think: in about:config set javascript.options.mem.gc_incremental to false and restart the browser to flush everything out. If people generally find this superior, it may become the default in 38.3.

Planet Mozilla: Top 50 DOS Problems Solved: Sorting Directory Listings

Q: Could you tell me if it’s possible to make the DIR command list files in alphabetical order?

A: Earlier versions of DOS didn’t allow this but there’s a way round it. MS-DOS 5 gives you an /ON switch to use with DIR, for instance:

DIR *.TXT /ON /P

would list all the files with names ending in .TXT, pause the listing every screenful (/P) and sort the names into alphabetical order (/ON).

Users of earlier DOS versions can shove the output from DIR through a utility program that sorts the listing before printing it on the screen. That utility is SORT.EXE, supplied with DOS. … [So:]

DIR | SORT

diverts the output from DIR into SORT, which sorts the directory listing and sends it to the screen. Put this in a batch file called SDIR.BAT and you will have a sorted directory command called SDIR.
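A minimal SDIR.BAT might look like this (the %1 parameter, which passes an optional filespec through to DIR, is my own embellishment):

@ECHO OFF
REM Sorted directory listing: pipe DIR's output through SORT
DIR %1 | SORT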

I guess earlier versions of DIR followed the Unix philosophy of “do one thing”…

Planet Mozilla: Content over HTTP/2

Roughly a week ago, on August 19, cdn77.com announced that they are the “first CDN to publicly offer HTTP/2 support for all customers, without ‘beta’ limitations”. They followed up just hours later with a demo site showing off how HTTP/2 might perform side by side with an HTTP/1.1 example. And yes, the big competitor CDNs are not yet offering HTTP/2 support, it seems.

Their demo site was initially criticized for not being realistic and for showing HTTP/2 as far better than a real-life scenario would likely look, and it was subsequently updated fairly quickly. It is useful to compare it with the similarly designed pre-existing demo sites hosted by Akamai and the Go project.

cdn77’s offering is built on nginx’s alpha patch for HTTP/2 that was announced just two weeks ago. I believe nginx’s full release is still planned to happen by the end of this year.

I’ve talked with cdn77’s Jakub Straka and their lead developer Honza about their HTTP/2 efforts, and since I suspect there are a few others in my audience who’re similarly curious, I’m offering this interview-style posting here, intertwined with my own comments and thoughts. It is not just a big ad for this company, but since they’re early players in this field I figure their views and comments on this are worth reading!

I’ve been in touch with more than one person who’ve expressed their surprise and awe over the fact that they’re using this early patch for nginx to run in production. So I had to ask them about that. Below, their comments are all prefixed with CDN77 and shown using italics.

nginx

CDN77: “Yes, we are running the alpha patch, which is basically a slightly modified SPDY. In the past we have been in touch with the Nginx team and exchanged tips and ideas, the same way we plan to work on the alpha patch in the future.

We’re actually pretty careful when deploying new and potentially unstable packages into production. We have separate servers for http2 and we are monitoring them 24/7 for any exceptions. We also have dedicated developers who debug any issues we are facing with these machines. We would never jeopardize the stability of our current network.

I’m no expert on server-side HTTP/2 nor on nginx in particular, but I think I read somewhere that the nginx HTTP/2 patch removes SPDY support in favor of the new protocol.

CDN77: “You are right. The HTTP/2 patch rewrites SPDY into HTTP/2, so SPDY is no longer supported after applying the patch. Since we have HTTP/2 running on separate servers, we still have SPDY support on the rest of the network.”

Did the team at cdn77 consider using something other than nginx for HTTP/2 at all, like the promising newcomer h2o?

CDN77: “Not at all. Nginx is a clear choice for us. Its architecture and modularity are awesome. It is also very reliable and it has a pretty long history.

On scale

Can you share some of the biggest hurdles you had to overcome to deploy HTTP/2 on this scale with nginx?

CDN77: “Since nobody had ever tried the patch at such a scale before, we had to make sure it would hold up even under pressure, and we needed to create a load-heavy testing environment. We used servers from our partner company 10gbps.io and their 10G uplinks to create intensive ghost traffic. Also, it was important to make sure that supporting tools and applications are HTTP/2 ready – not all of them were. We needed to adjust the way we monitor and control servers in a few cases.

There are a few bugs in Nginx that appear mainly in association with the longer-lived connections. They cause issues with the application layer and consume more resources. To be able to accommodate HTTP/2 and still keep necessary network redundancies, we needed to upgrade our network significantly.

I read this as an indication that the nginx patch isn’t perfected just yet rather than signifying that http2 is special. Perhaps also that http2 connections might use a larger footprint in nginx than good old http1 connections do.

Jakub mentioned they see average “performance savings” in the order of 20 to 60 percent depending on sites and contents with the switch to h2, but their traffic amounts haven’t been that large yet:

CDN77: “So far, only a fraction of the traffic is running via HTTP/2, but that is understandable since we launched the support a few days ago. On the first day, only about 0.45% of the traffic was HTTP/2, and a big part of this was our own demo site. Over the weekend, we saw an impressive adoption rate, and the total HTTP/2 traffic accounts for more than 0.8% now, all that with the portion of our own traffic in this dropping dramatically. We expect to be pushing around 1.2% – 1.5% of total traffic over HTTP/2 by the end of this week.

Understandably, it is ramping up. Still, Firefox telemetry is showing at least 10% of the total traffic over HTTP/2 already.

Future of HTTPS and HTTP/2?

While I’m talking to a CDN operator, I figured I should poll their view on HTTPS going forward! Will the fact that all browsers only support h2 over HTTPS push more of your customers and your traffic in general over to HTTPS, you think?

CDN77: “This is not easy to predict. There is encryption overhead, but HTTP/2 comes with header compression and it is binary. So at this single point, the advantages and disadvantages zero out. Also, the use of HTTPS is rising rapidly even on the older protocol, so we don’t consider this an issue at all.

In general, from a CDN perspective and as someone who just deployed this on a fairly large scale, what’s your general perception of what http2 means going forward?

CDN77: “We believe that this is a huge step forward in how we distribute content online and as a CDN company, we are especially excited as it concerns the very core of our business. From the new features, we have great expectations about cache invalidation that is being discussed right now.

Thanks to Jakub, Honza and Tomáš of cdn77 for providing answers and info. We live in exciting times.

Planet Mozilla"JavaScript of the Future: Asynchronous functions in ES7" Presented by Mariusz Kierski

"JavaScript of the Future: Asynchronous functions in ES7" Presented by Mariusz Kierski Mariusz Kierski - JavaScript of the future: Asynchronous functions in ES7

Planet Mozilla"It's All About That Automation!" Presented by Koki Yoshida

"It's All About That Automation!" Presented by Koki Yoshida Koki Yoshida - It's all about that automation!

Planet Mozilla"Building vs. Shipping Software" Presented by Karim Benhmida

"Building vs. Shipping Software" Presented by Karim Benhmida Karim Benhmida - Building vs shipping software

Planet Mozilla"Life is Hard" Presented by Jonathan Almeida

"Life is Hard" Presented by Jonathan Almeida Jonathan Almeida - Life is Hard

Planet Mozilla"Saving the World from Bad Experience" Presented by Jatin Chhikara

"Saving the World from Bad Experience" Presented by Jatin Chhikara Jatin Chhikara - Saving the world from bad experience

Planet Mozilla: Intern Presentations

Presenters: Bernardo Rittmeyer, Jatin Chhikara, Steven Englehardt, Gabriel Luong, Karim Benhmida, Edouard Oger, Jonathan Almeida, Huon Wilson, Mariusz Kierski, Koki Yoshida

Planet Mozilla"I Promise CATs" Presented by Gabriel Luong

"I Promise CATs" Presented by Gabriel Luong Gabriel Luong - I Promise CATs!

Planet Mozilla"Revamping the Sync Experience" Presented by Edouard Oger

"Revamping the Sync Experience" Presented by Edouard Oger Edouard Oger - Revamping the Sync Experience

Planet Mozilla"Firefox Helps You Log In" Presented by Bernardo Rittmeyer

"Firefox Helps You Log In" Presented by Bernardo Rittmeyer Firefox Helps You Log In: Seamless password management for your daily browsing.

Planet Mozilla: Beer and Tell – August 2015

Once a month, web developers from across the Mozilla Project get together to spend an hour of overtime to honor our white-collar brethren at Amazon. As we watch our productivity fall, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

openjck: Discord

openjck was up first and shared Discord, a GitHub webhook that scans pull requests for CSS compatibility issues. When it finds an issue, it leaves a comment on the offending line with a short description and which browsers are affected. The check is powered by doiuse, and projects can add a .doiuse file (using browserslist syntax) that specifies which browser versions they want to be tested against. Discord currently checks CSS and Stylus files.
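For example, a .doiuse file is simply a list of browserslist queries, one per line, along these lines (an illustrative set of queries, not MDN's actual configuration):

last 2 versions
ie >= 9
> 5%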

The MDN team is looking for sites to test Discord out. Work on the site is currently suspended (which is why it’s a side project; openjck and friends won’t stop working on it) so that feedback can be gathered to determine where the site should go next. If you’re interested in trying out Discord, let groovecoder know!

peterbe: Activity and Fanout.io

Next up was peterbe, with an update to Activity. The site now uses Fanout.io and a message queue to improve how activity items are fetched from GitHub and other sources. The site queues up jobs to fetch data from the Github API, and as the jobs complete, they send their results to Fanout. Fanout’s JavaScript library maintains an open WebSocket with their service, and when Fanout receives the data from the completed jobs, it notifies the client of the new data, which gets written to localStorage and updates the React state. This allows Activity to remain structured as an offline-ready application while still receiving seamless updates if the user has an internet connection.
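In rough outline, the client side of that flow looks like the sketch below. This is just the shape of the pattern, not Fanout's actual client API; the endpoint URL and the appComponent handle are made up for illustration:

// Receive completed-job payloads pushed from the server side:
var socket = new WebSocket('wss://activity.example.org/updates');
socket.onmessage = function (event) {
  var items = JSON.parse(event.data);
  // Persist for offline use, then refresh the UI:
  localStorage.setItem('activity-items', JSON.stringify(items));
  appComponent.setState({items: items});
};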


There’s a donation jar near the exit; for just $25 you can pay for an hour of time for an Amazon engineer to spend with their family. Checks may be made payable to No Questions Asked Laundry, LLC.

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Planet Mozilla: Impact


Video recording of the Aug 26 Mozilla Learning community call

For the Mozilla Learning plan right now, we’re focused on impact. What impact will our advocacy and leadership work have in the world over the next three years? How do we state that in a way that’s memorable, manageable, measurable and motivational?

How do other orgs do it? As a way to think big and step back, we asked participants in Tuesday’s community call to give examples of organizations or projects that inspire them right now. Here’s our list.

Who inspires you?

  • Free Code Camp — learn to code by helping non-profit organizations (Amira)
  • 18F — kicking ass when it comes to bringing open source to government (Kaitlin)
  • The Inter-Agency Network for Education in Emergencies — cool network and community of practice for 15,000 people teaching in refugee camps and other emergency settings around the world (Surman)
  • The Engine Room — small and scrappy, but doing amazing work with teaching open tools for social change (Michelle)
  • GDS — because they somehow manage to work like MoFo, even though they are part of Government (Adam)
  • Keyboardio — open source mechanical keyboard with a wonderful backlight, shipped with a screwdriver so that you can tinker around and reprogram.  (Shreyas)

  • Born Accessible — thinking about web content as “born accessible.” (Emma)
  • WikiSpeed — a non-profit that’s building open source, energy-efficient cars in 17 countries,  with no org chart or management structure (@OpenMatt)
  • NESTA — engaged in some interesting thought leadership that relates well to our work (Sam)

  • Ocean Cleanup — addressing “The Great Pacific Garbage Patch” with business / philanthropy / sponsorship / science / data / youth vision all coming together to stem it (Rebecca)
  • Conservation International — I’m digging their current campaign: “Nature doesn’t need people, people need nature” (Paul)
  • Mercy for Animals — they take a big, often controversial topic and make it approachable — and they have a massive, engaged volunteer force (Lindsey)
  • Truth and Reconciliation Commission Canada (Simona)
  • Generation Squeeze — taking on the impossible task of advocating for worklife balance, childcare and affordable housing on a living wage (ErikaD)

  • NYT documentary of bieber + skrillex + diplo –  Love the focus on storytelling and combo of graphics / animation. (Cassie)
  • model view culture — cranky and continuous analytic deconstructions of intersections between technology, inclusion, diversity with anger and no apologies and a paper journal that arrives on a regular basis. (@leahatplay)
  • Colors magazine — open contribution (Jordan)
  • the Unilever rapper campaign — because it was a long-stale pollution problem that was revitalized with creativity (Andrea)

  • Hollaback — uses online tools to work with young people and confront street harassment (Sara)
  • Craigslist — because their success is based on the assumption that most people are good. (David)
  • Dark Mountain — thinking through how WebLit does / does not survive in the anthropocene. (Chad)
  • NPR – They strike a successful balance between mass appeal and education. (Simon)


Takeaways?

The above examples are…

  1. Crisp. Our group was able to communicate the story for each of these projects — in their own words, off the top of their head, in a single sentence. That means the mission is telegraphic, simple and sticky.
  2. Viral. Each of these organizations has succeeded in creating an influential, mini-evangelist to spread their story for them: you!
  3. Edgy.  Many of these examples have a bit of punk rock or social justice grit. They’re not wearing a bow tie.
  4. Diverse. There’s a broad range of stuff here, not just the usual tech / ed tech suspects. This is a party you’d want to be at.
  5. Real. There’s no jargon or planning language in any of the descriptions people provided — the language is authentic and human, because no one’s trying too hard. It’s just natural and unscripted.

Can we get to this same level of natural, edgy crispness for MoFo and our core strategies? Would others put us on a list like this? Food for thought.

Planet Mozilla: German speaking community bi-weekly meeting

https://wiki.mozilla.org/De/Meetings

Planet Mozilla: Looking beyond Try Syntax

Today marks the 5 year anniversary of try syntax. For the uninitiated, try syntax is a string that you put into your commit message which a parser then uses to determine the set of builds and tests to run on your try push. A common try syntax might look like this:

try: -b o -p linux -u mochitest -t none
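For reference, the parts of that string break down roughly like this (annotated from memory, so treat it as illustrative rather than authoritative):

try: -b o -p linux -u mochitest -t none
# -b o         -> build types (o = opt, d = debug)
# -p linux     -> platforms to build
# -u mochitest -> unittest suites to run
# -t none      -> talos (performance) suites to run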

Since inception, it has been a core part of the Mozilla development workflow. For many years it has served us well, and even today it serves us passably. But it is almost time for try syntax to don the wooden overcoat, and this post will explain why.

A brief history of try syntax

In the old days, pushing to try involved a web interface called sendchange.cgi. Pushing is probably the wrong word to use, as at no point did the process involve version control. Instead, patches were uploaded to the web service, which in turn invoked a buildbot sendchange with all the required arguments. Like today, the try server was often overloaded, sometimes taking over 4 hours for results to come back. Unlike today, there was no way to pick and choose which builds and tests you wanted; every try push ran the full set.

The obvious solution was to create a mechanism for people to do that. It was while brainstorming this problem that ted, bhearsum and jorendorff came up with the idea of encoding this information in the commit message. Try syntax was first implemented by lsblakk in bug 473184 and landed on August 27th, 2010. It was a simple time; the list of valid builders could fit into a single 30-line config file; Fennec still hadn't picked up full steam; and B2G wasn't even a figment of anyone's wildest imagination.

It's probably not a surprise to anyone that as time went on, things got more complicated. As more build types, platforms and test jobs were added, the try syntax got harder to memorize. To help deal with this, lsblakk created the trychooser syntax builder just a few months later. In 2011, pbiggar created the trychooser mercurial extension (which was later forked and improved by sfink). These tools were (and still are) the canonical way to build a try syntax string. Little has changed since then, with the exception of the mach try command that chmanchester implemented around June 2015.

One step forward, two steps back

Since around 2013, the number of platforms and test configurations has grown at an unprecedented rate. So much so that the various trychooser tools have been perpetually out of date. Any time someone got around to adding a new job to the tools, two other jobs had already sprung up in its place. Another problem caused by this rapid growth was that try syntax became finicky. There were a lot of edge cases, exceptions to the rule and arbitrary aliases. Often jobs would mysteriously not show up when they should, or mysteriously show up when they shouldn't.

Both of those problems were exacerbated by the fact that the actual try parser code has never had a definite owner. Since it was first created, there have never been more than 11 commits in a year. There have been only two commits to date in 2015.

Two key insights

At this point, there are two things that are worth calling out:

  1. Generating try strings from memory is getting harder and harder, and for many cases is nigh impossible. We rely more and more on tools like trychooser.
  2. Try syntax is sort of like an API on top of which these tools are built.

What this means is that the primary generators of try syntax have shifted from humans to tools. A command line encoded in a commit message is convenient if you're a human generating the syntax manually. But as far as tooling goes, try syntax is one god-awful API. Not only do the tools need to figure out the magic strings, they need to interact with version control, create an empty commit and push it to a remote repository.

There is also tooling on the other side of the seesaw, things that process the try syntax post-push. We've already seen buildbot's try parser, but taskcluster has a separate try parser as well. This means that your try push behaves differently depending on whether the jobs are scheduled in buildbot or taskcluster. There are other one-off tools that do some try syntax parsing as well, including but not limited to the try tools in mozharness, the try re-trigger bot and the AWSY dashboard. These tools are all forced to share and parse the same try syntax string, so they have to be careful not to step on each other's toes.

The takeaway here is that for tools, a string encoded as a commit message is quite limiting and a lot less convenient than say, calling a function in a library.

Despair not, young Padawan

So far we've seen how try syntax is finicky, how the tools that use it are often outdated and how it fails as an API. But what is the alternative? Fortunately, over the course of 2015 a lot of progress has been made on projects that for the first time, give us a viable alternative to try syntax.

First and foremost is mozci. Mozci, created by armenzg and adusca, is a tool that hooks into the build API (with early support for taskcluster as well). It can do things like schedule builds and tests against arbitrary pushes, and is being used on the backend for tools like adusca's try-extender, with integration directly into treeherder planned.

Another project that improves the situation is taskcluster itself. With taskcluster, job configuration and scheduling all lives in tree. Thanks to bhearsum's buildbot bridge, we can even use taskcluster to schedule jobs that still live in buildbot. There's an opportunity here to leverage these new tools in conjunction with mozci to gain complete and total control over how jobs are scheduled on try.

Finally I'd like to call out mach try once more. It is more than a thin wrapper around try syntax that handles your push for you. It actually lets you control how the harness gets run within a job. For now this is limited to test paths and tags, but there is a lot of potential to do some cool things here. One of the current limiting factors is the unexpressiveness of the try syntax API. Hopefully this won't be a problem too much longer. Oh yeah, and mach try also works with git.
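As a rough illustration, an invocation looks something like this (the flags mirror the try syntax above, with test paths appended; the exact options have shifted between versions, so treat this as a sketch):

./mach try -b o -p linux -u mochitest -t none dom/indexedDB

mach then takes care of creating the temporary commit and pushing it to the try repository for you.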

A glimpse into the crystal ball

So we have several different projects all coming together at once. The hard part is figuring out how they all tie in together. What do we want to tackle first? How might the future look? I want to be clear that none of this is imminent. This is a look into what might be, not what will be.

There are two main places where we care about scheduling jobs on try.

First imagine you push your change to try. You open up treeherder, except no jobs are scheduled. Instead you see every possible job in a distinct greyed out colour. Scheduling what you want is as simple as clicking the desired job icons. Hold on a sec, you don't have to imagine it. Adusca already has a prototype of what this might look like. Being able to schedule your try jobs this way has a huge benefit: you don't need to mentally correlate job symbols to job names. It's as easy as point and click.

Second, is pushing a predefined set of jobs to try from the command line, similar to how things work now. It's often handy to have the try command for a specific job set in your shell history and it's a pain to open up treeherder for a simple push that you've memorized and run dozens of times. There are a few improvements we can do here:

  • We can move the curses ui feature of the hg trychooser extension into mach try.
  • We can use mozci to automatically keep the known list of jobs up to date. This is useful for things like generating the curses ui on the fly, validation and tab completion.
  • We can use mozci + taskcluster + buildbot bridge to provide a much more expressive API for scheduling jobs. For example, you could easily push a T-style try run.
  • We can expand some of the functionality in mach try for controlling how the harnesses are run, for example we could use it to enable some of the debugging features of the harness while investigating test failures.

Finally for those who are stuck in their ways, it should still be possible to have a "classic try syntax" front-end to the new mozci backend. As large as this change sounds, it could be mostly transparent to the user. While I'm certainly not a fan of the current try syntax, there's no reason to begrudge the people who are.

Closing words

Try syntax has served us well for 5 long years. But it's almost time to move on to something better. Soon a lot of new avenues will be open and tools will be created that none of us have thought of yet. I'd like to thank all of the people mentioned in this post for their contributions in this area and I'm very excited for what the future holds.

The future is bright, and change is for the better.

Internet Explorer blog: Creating your own browser with HTML and JavaScript

Over the past several months, we have made numerous improvements to the Microsoft Edge rendering engine (Microsoft EdgeHTML), focusing on interoperability with modern browsers and compliance with new and emerging standards. In addition to powering Microsoft Edge, EdgeHTML is also available for all Universal Windows Platform (UWP) apps via the WebView control. Today we would like to demonstrate how the WebView control can be used to create your own browser in Windows 10.

Using standard web technology including JavaScript, HTML, and CSS we created a sample UWP application which hosts the WebView and provides basic functionality such as navigation and favorites. These same techniques can be used in any UWP application to seamlessly integrate web content.

Animation showing favorites menu in custom browser application

The crux of the functionality centers on the powerful WebView control. Offering a comprehensive set of APIs, it overcomes several of the limitations that encumber iframes, such as framebusting sites and document loading events. Additionally, x-ms-webview (the element used to declare a WebView in HTML) provides new functionality that is not possible with an iframe, such as better access to local content and the ability to take screenshots. When you use the WebView control, you get the same web platform that powers Microsoft Edge.
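
For illustration, here is a minimal sketch of creating and driving a WebView from JavaScript in a UWP app; the URL and logging are placeholders, so consult the sample code for the full pattern.

// Create a WebView element, navigate it, and listen for a navigation event.
var webview = document.createElement("x-ms-webview");
webview.navigate("https://example.com/");

// Unlike iframes, the WebView exposes reliable document loading events.
webview.addEventListener("MSWebViewNavigationCompleted", function (e) {
    console.log("finished loading: " + e.uri);
});

document.body.appendChild(webview);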

Get the Sample Code

You can view the full set of sample code in our repo on GitHub. You can also demo the browser live by installing the app from the Windows Store, or by deploying the Visual Studio solution.

Animation showing fullscreen mode in custom browser app

Build Your Windows 10 App Today

With the WebView control, we were able to create a simple web browser using standard web technology in just an afternoon. We look forward to seeing what you build with Windows 10!

– Josh Rennert, Program Manager, Microsoft Edge

8/28 3:17p – Updating for clarity purposes.

Planet MozillaWeb QA Weekly Meeting

Web QA Weekly Meeting This is our weekly gathering of Mozilla's Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

Planet MozillaReps weekly

Reps weekly Weekly Mozilla Reps call

Planet MozillaDynamically updating <meta viewport> in the year 2015.

18 months after writing the net-award-winning article Dynamically updating <meta viewport> in the year 2014, I wrote some patches for Firefox for Android to make it possible to update a page's existing meta[name=viewport] element's content attribute and have the viewport be updated accordingly.

So when version 43 ships (at some point in 2015), code like this will work in more places than it did in 2014:

var viewport = document.querySelector("meta[name=viewport]");
if (screen.width < 760) {
    viewport.setAttribute("content", "width=768");
} else if (screen.width > 760) {
    viewport.setAttribute("content", "width=1024");
}

I'll just go ahead and accept the 2015 netaward now, thanks for the votes everyone, wowowow.

Planet MozillaConway’s Corollary

Conway’s Law states:

organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations

I’ve always read this as an accusation: we are doomed to recreate the structure of our organizations in the structure of software projects. And further: projects cannot become their True Selves, cannot realize the most superior design, unless the organization is itself practically structureless. That only without the constraints of structure can the engineer make the truly correct choices. Michelangelo sculpted from marble, a smooth and uniform stone, not from an aggregate, where any hit with the chisel might reveal only the chaotic structure and fault lines of the rock and not his vision.

But most software is built, not revealed. I’m starting to believe that Conway’s observation is a corollary, not so clearly cause-and-effect. Maybe we should work with it, not struggle against it. (With age I’ve lost the passion for pointless struggle.) It’s not that developers can’t imagine a design that goes contrary to the organizational structure, it’s that they can’t ship those designs. What we’re seeing is natural selection. And when through force of will such a design is shipped, whether it survives and is maintained depends on whether the organization changed in the process, whether a structure was created to support that design.

A second skepticism: must a particular construction and modularity of code be paramount? Code is malleable, and its modularity is for the purpose of humans. Most of what we do disappears anyway when the machine takes over – functions are inlined, types erased, the pieces become linked, and the machine doesn’t care one whit about everything we’ve done to make the software comprehensible. Modularity is to serve our purposes. And sometimes organizational structure serves a purpose; we change it to meet goals, and we shouldn’t assume the people who change it are just busybodies. But those changes are often aspirational, and so they set us up for conflict, as the new structure probably does not mirror the software design.

If the parts of an organization (e.g. teams, departments, or subdivisions) do not closely reflect the essential parts of the product, or if the relationship between organizations do not reflect the relationships between product parts, then the project will be in trouble… Therefore: Make sure the organization is compatible with the product architecture – Coplien and Harrison

So change the architecture! There’s more than one way to resolve these tensions.

A last speculation: as described in the Second System Effect we see teams rearchitect systems with excessive modularity and abstraction. Maybe because they remember all these conflicts, they remember all the times organizational structure and product motivations didn’t match architecture. The team makes an incorrect response by creating an architecture that can simultaneously embody all imagined organizational structures, a granularity that embodies not just current organizational tensions but also organizational boundaries that have come and gone. But the value is only in predicting future changes in structure, and only then if you are lucky.

Maybe we should look at Conway’s Law as a prescription: projects should only have hard boundaries where there are organizational boundaries. Soft boundaries and definitions still exist everywhere: just like we give local variables meaningful names (even though outside the function no one can tell the difference), we might also create abstractions and modularity that serve immediate and concrete purposes. But they should only be built for the moment and the task at hand. Extra effort should be applied to being ready to refactor in the future, not predicting and embodying those predictions in present modularity. Perhaps this is another rephrasing of Agile and YAGNI. Code is a liability, agency over that code is an asset.

Planet MozillaBay Area Rust Meetup August 2015

Bay Area Rust Meetup August 2015 The SF Rust Meetup for August.

Planet MozillaWhat does the OS X Activity Monitor’s “Energy Impact” actually measure?

[Update: this post has been updated with significant new information. Look to the end.]

Activity Monitor is a tool in Mac OS X that shows a variety of real-time process measurements. It is well-known and its “Energy Impact” measure (which was added in Mac OS X 10.9) is often consulted by users to compare the power consumption of different programs. Apple support documentation specifically recommends it for troubleshooting battery life problems, as do countless articles on the web.

However, despite its prominence, the exact meaning of the “Energy Impact” measure is unclear. In this blog post I use a combination of code inspection, measurements, and educated guesses to hypothesize how it is computed in Mac OS X 10.9 and 10.10.

What is known about “Energy Impact”?

The following screenshot shows the Activity Monitor’s “Energy” tab.

There are no units given for “Energy Impact” or “Avg Energy Impact”.

The Activity Monitor documentation says the following.

Energy Impact: A relative measure of the current energy consumption of the app. Lower numbers are better.

Avg Energy Impact: The average energy impact for the past 8 hours or since the Mac started up, whichever is shorter.

That is vague. Other Apple documentation says the following.

The Energy tab of Activity Monitor displays the Energy Impact of each open app based on a number of factors including CPU usage, network traffic, disk activity and more. The higher the number, the more impact an app has on battery power.

More detail, but still vague. Enough so that various other people have wondered what it means. The most precise description I have found says the following.

If my recollection of the developer presentation slide on App Nap is correct, they are an abstract unit Apple created to represent several factors related to energy usage meant to compare programs relatively.

I don’t believe you can directly relate them to one simple unit, because they are from an arbitrary formula of multiple factors.

[…] To get the units they look at CPU usage, interrupts, and wakeups… track those using counters and apply that to the energy column as a relative measure of an app.

This sounds plausible, and we will soon see that it appears to be close to the truth.

A detour: top

First, a necessary detour. top is a program that is similar to Activity Monitor, but it runs from the command-line. Like Activity Monitor, top performs periodic measurements of many different things, including several that are relevant to power consumption: CPU usage, wakeups, and a “power” measure. To see all these together, invoke it as follows.

top -stats pid,command,cpu,idlew,power -o power -d

(A non-default invocation is necessary because the wakeups and power columns aren’t shown by default unless you have an extremely wide screen.)

It will show real-time data, updated once per second, like the following.

PID            COMMAND                  %CPU         IDLEW        POWER
50300          firefox                  12.9         278          26.6
76256          plugin-container         3.4          159          11.3
151            coreaudiod               0.9          68           4.3
76505          top                      1.5          1            1.6 
76354          Activity Monitor         1.0          0            1.0

The PID, COMMAND and %CPU columns are self-explanatory.

The IDLEW column is the number of package idle exit wakeups. These occur when the processor package (containing the cores, GPU, caches, etc.) transitions from a low-power idle state to the active state. This happens when the OS schedules a process to run due to some kind of event. Common causes of wakeups include scheduled timers going off and blocked I/O system calls receiving data.
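
For instance, a timer-based program like the following sketch (illustrative only; the post's actual test program is not shown) will cause roughly N wakeups per second on an otherwise idle machine.

// Fire an empty timer N times per second. Each firing typically forces a
// package idle exit, so the wakeup itself is the power cost, not the work.
var N = 100;
setInterval(function () {
    // intentionally empty
}, 1000 / N);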

What about the POWER column? top is open source, so its meaning can be determined conclusively by reading the powerscore_insert_cell function in the source code. (The POWER measure was added to top in OS X 10.9.0 and the code has remained unchanged all the way through to OS X 10.10.2, which is the most recent version for which the code is available.)

The following is a summary of what the code does, and it’s easier to understand if the %CPU and POWER computations are shown side-by-side.

|elapsed_us| is the length of the sample period
|used_us| is the time this process was running during the sample period

  %CPU = (used_us * 100.0) / elapsed_us

  POWER = if is_a_kernel_process()
            0
          else
            ((used_us + IDLEW * 500) * 100.0) / elapsed_us
          

The %CPU computation is as expected.

The POWER computation is a function of CPU and IDLEW. It’s basically the same as %CPU but with a “tax” of 500 microseconds for each wakeup and an exception for kernel processes. The value of this function can easily exceed 100 — e.g. a program with zero CPU usage and 3,000 wakeups per second will have a POWER score of 150 — so it is not a percentage. In fact, POWER is a unitless measure because it is a semi-arbitrary combination of two measures with incompatible units.
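
Restated as runnable JavaScript (the function and variable names are mine), the summary above becomes:

// top's POWER formula, per the powerscore_insert_cell summary above.
function powerScore(usedUs, idleWakeups, elapsedUs, isKernelProcess) {
    if (isKernelProcess) {
        return 0; // kernel processes are always reported as 0
    }
    var WAKEUP_TAX_US = 500; // a 500 microsecond "tax" per idle wakeup
    return ((usedUs + idleWakeups * WAKEUP_TAX_US) * 100.0) / elapsedUs;
}

// Zero CPU usage and 3,000 wakeups over a 1-second (1e6 us) sample:
powerScore(0, 3000, 1e6, false); // 150, matching the example above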

Back to Activity Monitor and “Energy Impact”

MacBook Pro running Mac OS X 10.9.5

First, I did some measurements with a MacBook Pro with an i7-4960HQ processor running Mac OS X 10.9.5.

I did extensive testing with a range of programs: ones that trigger 100% CPU usage; ones that trigger controllable numbers of idle wakeups; ones that stress the memory system heavily; ones that perform frequent disk operations; and ones that perform frequent network operations.

In every case, Activity Monitor’s “Energy Impact” was the same as top‘s POWER measure. Every indication is that the two are computed identically on this machine.

For example, consider the data in the following table. The data was gathered with a small test program that fires a timer N times per second; other than in extreme cases (see below), each timer firing causes an idle platform wakeup.

-----------------------------------------------------------------------------
Hz     CPU ms/s   Intr        Pkg Idle   Pkg Power  Act.Mon. top
-----------------------------------------------------------------------------
     2     0.14        2.00       1.80     2.30W     0.1    0.1
   100     4.52      100.13      95.14     3.29W       5      5
   500     9.26      499.66     483.87     3.50W      25     25
  1000    19.89     1000.15     978.77     5.23W      50     50
  5000    17.87     4993.10    4907.54    14.50W     240    240
 10000    32.63     9976.38    9194.70    17.61W     485    480
 20000    66.66    19970.95   17849.55    21.81W     910    910
 30000    99.62    28332.79   25899.13    23.89W    1300   1300
 40000   132.08    37255.47   33070.19    24.43W    1610   1650
 50000   160.79    46170.83   42665.61    27.31W    2100   2100
 60000   281.19    58871.47   32062.39    29.92W    1600   1650
 70000   276.43    67023.00   14782.03    31.86W     780    750
 80000   304.16    81624.60     258.22    35.72W      43     45
 90000   333.20    90100.26     153.13    37.93W      40     42
100000   363.94    98789.49      44.18    39.31W      38     38

The table shows a variety of measurements for this program for different values of N. Columns 2–5 are from powermetrics, and show CPU usage, interrupt frequency, package idle wakeup frequency, and package power, respectively. Column 6 is Activity Monitor’s “Energy Impact”, and column 7 is top‘s POWER measurement. Columns 6 and 7 (which are approximate measurements) are identical, modulo small variations due to the noisiness of these measurements.

MacBook Air running Mac OS X 10.10.4

I also tested a MacBook Air with an i5-4250U processor running Mac OS X 10.10.4. The results were substantially different.

-----------------------------------------------------------------------------
Hz     CPU ms/s   Intr        Pkg Idle   Pkg Power Act.Mon. top
-----------------------------------------------------------------------------
     2     0.21        2.00       2.00     0.63W   0.0     0.1
   100     6.75       99.29      96.69     0.81W   2.4     5.2
   500    22.52      499.40     475.04     1.15W   10       25
  1000    44.07      998.93     960.59     1.67W   21       48
  3000   109.71     3001.05    2917.54     3.80W   60      145
  5000    65.02     4996.13    4781.43     3.79W   90      230
  7500   107.53     7483.57    7083.90     4.31W   140     350
 10000   144.00     9981.25    9381.06     4.37W   190     460

The results from top are very similar to those from the other machine. But Activity Monitor’s “Energy Impact” no longer matches top‘s POWER measure. As a result it is much harder to say with confidence what “Energy Impact” represents on this machine. I tried tweaking the previous formula so that the idle wakeup “tax” drops from 500 microseconds to 180 or 200 microseconds, and that gives results that appear to be in the ballpark but don’t match exactly. I am also not sure whether Activity Monitor takes all its measurements at exactly the same time. But it’s also quite possible that other inputs have been added to the function that computes “Energy Impact”.

What about “Avg Energy Impact”?

What about the “Avg Energy Impact”? It seems reasonable to assume it is computed in the same way as “Energy Impact”, but averaged over a longer period. In fact, we already know that period from the Apple documentation that says it is the “average energy impact for the past 8 hours or since the Mac started up, whichever is shorter.”

Indeed, when the Energy tab of Activity Monitor is first opened, the “Avg Energy Impact” column is empty and the title bar says “Activity Monitor (Processing…)”. After a few seconds the “Avg Energy Impact” column is populated with values and the title bar changes to “Activity Monitor (Applications in last 8 hours)”. If you have top open during those 5–10 seconds you can see that systemstats is running and using a lot of CPU, and so presumably the measurements are obtained from it.

systemstats is a program that runs all the time and periodically measures, among other things, CPU usage and idle wakeups for each running process (visible in the “Processes” section of its output.) I’ve done further tests that indicate that the “Avg Energy Impact” is almost certainly computed using the same formula as “Energy Impact”. The difference is that the measurements are from the past 8 hours of wake time — i.e. if a laptop is closed for several hours and then reopened, those hours are not included in the calculation — as opposed to the 1, 2 or 5 seconds of wake time used for “Energy Impact”.

battery status menu

Even more prominent than Activity Monitor is OS X’s battery status menu. When you click on the battery icon in the OS X menu bar you get a drop-down menu which includes a list of “Apps Using Significant Energy”.

Screenshot of the OS X battery status menu

How is this determined? When you open this menu for the first time in a while it says “Collecting Power Usage Information” for a few seconds, and if you have top open during that time you see that, once again, systemstats is running and using a lot of CPU. Furthermore, if you click on an application name in the menu Activity Monitor will be opened and that application’s entry will be highlighted. Based on these facts it seems reasonable to assume that “Energy Impact” is again being used to determine which applications show up in the battery status menu.

I did some more tests (on my MacBook Pro running 10.9.5) and it appears that once an energy-intensive application is started it takes about 20 or 30 seconds for it to show up in the battery status menu. And once the application stops using high amounts of energy I’ve seen it take between 4 and 10 minutes to disappear. The exception is if the application is closed, in which case it disappears immediately.

Finally, I tried to determine the significance threshold. It appears that a program with an “Energy Impact” of roughly 20 or more will eventually show up as significant, and programs that have much higher “Energy Impact” values tend to show up more quickly.

All of these battery status menu observations are difficult to make reliably and so should be treated with caution. They may also be different in OS X 10.10. It is clear, however, that the window used by the battery status menu is measured in seconds or minutes, which is much less than the 8 hour window used for “Avg Energy Impact”.

An aside: systemstats is always running on OS X. The particular invocation used for the long-running instance — the one used by both Activity Monitor and the battery status menu — takes the undocumented --xpc flag. When I tried running it with that flag I got an error message saying “This mode should only be invoked by launchd”. So it’s hard to know how often it’s making measurements. The output from vanilla command-line invocations indicates it’s about every 10 minutes.

But it’s worth noting that systemstats has a -J option which causes the CPU usage and wakeups for child processes to be attributed to their parents. It seems likely that the --xpc option triggers the same behaviour because the Activity Monitor does not show “Avg Energy Impact” for child processes (as can be seen in the screenshot above for the login, bash and vim processes that are children of the Terminal process). This hypothesis also matches up with the battery status menu, which never shows child processes. One consequence of this is that if you ssh into a Mac and run a power-intensive program from the command line it will not show up in Activity Monitor’s energy tab or the battery status menu, because it’s not attributable to a top-level process such as Terminal! Such processes will show up in top and in Activity Monitor’s CPU tab, however.

How good a measure is “Energy Impact”?

We’ve now seen that “Energy Impact” is used widely throughout OS X. How good a measure is it?

The best way to measure power consumption is to actually measure power consumption. One way to do this is to use an ammeter, but this is difficult. Another way is to measure how long it takes for the battery to drain, which is easier but slow and requires steady workloads. Alternatively, recent Intel hardware provides high-quality estimates of processor and memory power consumption that are relatively easy to obtain.

These approaches all have the virtue of measuring or estimating actual power consumption (i.e. Watts). But the big problem is that they are machine-wide measures that cannot be used on a per-process basis. This is why Activity Monitor uses several proxy measures — ones that correlate with power consumption — which can be measured on a per-process basis. “Energy Impact” is a hybrid of at least two different proxy measures: CPU usage and wakeup frequency.

The main problem with this is that “Energy Impact” is an exaggerated measure. Look at the first table above, with data from the 10.9.5 machine. The variation in the “Pkg Power” column — which shows the package power from the above-mentioned Intel hardware estimates — is vastly smaller than the variation in the “Energy Impact” measurements. For example, going from 1,000 to 10,000 wakeups per second increases the package power by 3.4x, but the “Energy Impact” increases by 9.7x, and the skew gets even worse at higher wakeup frequencies. “Energy Impact” clearly weights wakeups too heavily. (In the second table, with data from the 10.10.4 machine, the weight given to wakeups is less, but still too high.)

Also, in the first table “Energy Impact” actually decreases when the timer frequency gets high enough. Presumably this is because the timer interval is so short that the OS has trouble putting the package into an idle power state. This leads to the absurd result that firing a timer at 1,000 Hz has about the same “Energy Impact” value as firing one at 100,000 Hz, when the package power of the latter is about 7.5x higher.

Having said all that, it’s understandable why Apple uses formulations of this kind for “Energy Impact”.

  • CPU usage and wakeup frequency are probably the two most important factors affecting a process’s power consumption, and they are factors that can be measured on a per-process basis.
  • Having a single measure makes things easy for users; evaluating the relative importance of multiple measures is more difficult.
  • The exception for kernel processes (which always have an “Energy Impact” of 0) avoids OS X itself being blamed for high power consumption. This makes a certain amount of sense — it’s not like users can close the kernel — while also being somewhat misleading.

If I were in charge of Apple’s Activity Monitor product, I’d do two things.

  1. I would compute a new formula for “Energy Impact”. I would measure the CPU usage, wakeup frequency (and any other inputs) and actual power consumption for a range of real-world programs, on a range of different Apple machines. From this data, hopefully a reasonably accurate model could be constructed. It wouldn’t be perfect, and it wouldn’t need to be perfect, but it should be possible to come up with something that reflects actual power consumption better than the existing formulations. Once formulated, I would then test the new version against synthetic microbenchmarks, like the ones I used above, to see how it holds up. Given the choice between accurately modelling real-world applications and accurately modelling synthetic microbenchmarks, I would definitely favour the former.
  2. I would publicly document the formula that is used so that developers can actually tell how their applications are being evaluated, and can optimize for that measure. You may think “but then developers will be optimizing for a synthetic measure rather than a real one” and you’d be right. That’s an inevitable consequence of giving a synthetic measure such prominence, and all the more reason for improving it.

Conclusion

“Energy Impact” is a flawed measure of an application’s power consumption. Nonetheless, it’s what many people use at this moment to evaluate the power consumption of OS X applications, so it’s worth understanding. And if you are an OS X application developer who wants to reduce the “Energy Impact” of your application, it’s clear that it’s best to focus first on reducing wakeup frequency, and then on reducing CPU usage.

Because Activity Monitor is closed source, I don’t know if I’ve characterized “Energy Impact” exactly correctly. The evidence given above indicates that I am close on 10.9.5, but not as close on 10.10.4. I’d love to hear if anybody has evidence that either corroborates or contradicts the conclusions I’ve made here. Thank you.

Update

A commenter named comex has done some great detective work and found that, on 10.10 and 10.11, Activity Monitor consults a Mac model-specific file in the /usr/share/pmenergy/ directory. (Thank you, comex.)

For example, my MacBook Air has a model number 7DF21CB3ED6977E5 and the file Mac-7DF21CB3ED6977E5.plist has the following list of key/value pairs under the heading “energy_constants”.

kcpu_time               1.0
kcpu_wakeups            2.0e-4

This matches the previously seen formula, but with the wakeups “tax” being 200 microseconds, which matches what I hypothesized above.

kqos_default            1.0e+00
kqos_background         5.2e-01
kqos_utility            1.0e+00
kqos_legacy             1.0e+00         
kqos_user_initiated     1.0e+00
kqos_user_interactive   1.0e+00

“QoS” refers to quality of service classes which allow an application to mark some of its own work as lower priority. I’m not sure exactly how this is factored in, but from the numbers above it appears that operations done in the lowest-priority “background” class are considered to have an energy impact of about half that of the other classes.

kdiskio_bytesread       0.0
kdiskio_byteswritten    5.3e-10

These ones are straightforward. Note that the “tax” for disk reads is zero, and for disk writes it’s a very small number. I wrote a small program that wrote endlessly to disk and saw that the “Energy Impact” was slightly higher than the CPU percentage alone, which matches expectations.

kgpu_time               3.0e+00

It makes sense that GPU usage is included in the formula. It’s not clear if this refers to the integrated GPU or the separate (higher performance, higher power) GPU. It’s also interesting that the weighting is 3x.

knetwork_recv_bytes     0.0 
knetwork_recv_packets   4.0e-6
knetwork_sent_bytes     0.0
knetwork_sent_packets   4.0e-6

These are also straightforward. In this case the number of bytes sent and received is ignored; only the number of packets matters, and the cost of receiving and sending packets is considered equal.

So, in conclusion, on 10.10 and 10.11, the formula used to compute “Energy Impact” is machine model-specific, and includes the following factors: CPU usage, wakeup frequency, quality of service class usage, and disk, GPU, and network activity.
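
Putting the published constants together, a speculative sketch of the computation might look like the following JavaScript. To be clear, this is my extrapolation from the 10.9 formula and the plist above: the field names are mine, and exactly how the QoS weights and the overall x100 scaling are applied is a guess, not anything Apple documents.

// Hypothesized 10.10/10.11 "Energy Impact" formula, per the constants above.
var k = {
    cpuTime: 1.0,              // per second of CPU time (weighted by QoS?)
    cpuWakeups: 2.0e-4,        // i.e. a 200 microsecond "tax" per wakeup
    gpuTime: 3.0,              // per second of GPU time
    diskBytesWritten: 5.3e-10, // disk reads are free
    networkPackets: 4.0e-6     // per packet, sent or received; bytes are free
};

// All sample values are per second of wall time; qosWeight would be e.g.
// 0.52 for work done in the "background" QoS class, 1.0 otherwise.
function energyImpact(s) {
    return 100 * (k.cpuTime * s.cpuSeconds * s.qosWeight +
                  k.cpuWakeups * s.wakeups +
                  k.gpuTime * s.gpuSeconds +
                  k.diskBytesWritten * s.bytesWritten +
                  k.networkPackets * s.packets);
}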

This is definitely an improvement over the formula used in 10.9, which is great to see. The parameters are also visible, if you know where to look! It would be wonderful if all these inputs, along with their relative weightings, could be seen at once in Activity Monitor. That way developers would have a much better sense of exactly how their application’s “Energy Impact” is determined.

Planet MozillaEngineering Productivity Update, August 26, 2015

It’s PTO season and many people have taken a few days or a week off.  While they’re away, the team continues making progress on a variety of fronts.  Planning also continues for GoFaster and addon-signing, which will both likely be significant projects for the team in Q4.

Highlights

Treeherder: camd rolled out a change which collapses chunked jobs on Treeherder, reducing visual noise.  In the future, we plan on significantly increasing the number of chunks of many jobs in order to reduce runtimes, so this change makes that work more practical.  See camd’s blog post.  emorley has landed a change which allows TaskCluster job errors that occur outside of mozharness to be properly handled by Treeherder.

Automatic Starring: jgraham has developed a basic backend which supports recognizing simple intermittent failures, and is working on integrating that into Treeherder; mdoglio is landing some related database changes. ekyle has received sheriff training from RyanVM, and plans to use this to help improve the automated failure recognition algorithm.

Perfherder and Performance Testing: Datazilla has finally been decommissioned (R.I.P.), in favor of our newest performance analysis tool, Perfherder.  A lot of Talos documentation updates have been made at https://wiki.mozilla.org/Buildbot/Talos, including details about how we perform calculations on data produced by Talos.  wlach performed a useful post-mortem of Eideticker, with several takeaways which should be applicable to many other projects.

MozReview and Autoland: There’s a MozReview meetup underway, so expect some cool updates next time!

TaskCluster Support: ted has made a successful cross-compiled OSX build using TaskCluster!  Take it for a spin.  More work is needed before we can move OSX builds from the mac mini builders to the cloud.

Mobile Automation: gbrown continues to make improvements on the new |mach emulator| command which makes running Android tests locally on emulator very simple.

General Automation: run-by-dir is live on opt mochitest-plain; debug and ASAN coming soon.  This reduces test “bleed-through” and makes it easier to change chunking.  adusca, our Outreachy intern, is working to integrate the try extender into Treeherder.  And ahal has merged the mozharness “in-tree” configs with the regular mozharness config files, now that mozharness lives in the tree.

Firefox Automation: YouTube ad detection has been improved for firefox-media-tests by maja, which fixes the source of the top intermittent failure in this suite.

Bughunter: bc has got asan-opt builds running in production, and is working on gtk3 support.

hg.mozilla.org: gps has enabled syntax highlighting in hgweb, and has added a new JSON API as well.  See gps’ blog post.

The Details

bugzilla.mozilla.org
Treeherder
Perfherder/Performance Testing
  • talos cleanup and preparation to move in-tree
  • perfherder database cleanup in progress for simpler and more optimized queries. This is mainly preparatory work for making perfherder capable of managing/starring performance alerts, but as a bonus perfherder compare view should load virtually instantly once this is finished. 
  • most talos wiki docs are updated: https://wiki.mozilla.org/Buildbot/Talos
TaskCluster Support
Mobile Automation
  •  [gbrown] Working on “mach emulator” support: wip can download and run 2.3, 4.3, or x86 emulator images. Integrating with other mach commands like “install” and “mochitest”.
  •  [gbrown] Updated mochitest manifests to run most dom/media mochitests on Android 4.3 (under review, bug 1189784)
Firefox and Media Automation
  • [maja_zf] Improved ad detection on YouTube for firefox-media-tests, which fixes our top intermittent failure for long-running playback tests.
General Automation
  •  run-by-dir is live for mochitest-plain (opt only); debug is coming soon, followed by ASAN.
  • Mozilla CI tools is moving from using BuildAPI as the scheduling entry point to using TaskCluster’s scheduling. This work will allow us to schedule a graph of buildbot jobs and their dependencies in one shot. https://bugzil.la/1194264
  • adusca is integrating into treeherder the ability to extend the jobs run for any push. This is based on the http://try-extender.herokuapp.com prototype. Follow along in https://bugzil.la/1194830
  • Git was deployed to the test machines. This is necessary to make the Firefox UI update tests work on them.
  • [ahal] merge mozharness in-tree configs with the main mozharness configs
ActiveData
  • Bug fixes to the ETL – fix bad lookups on hg repo, mostly l10n builds 
  • More error reporting on ETL – Structured logging has changed a little; handle the new variations, be more graceful when it comes to unknowns, and complain when there is non-conformance.
  • Some work on adding hg `repo` table – acts as a cache for ETL, but can be used to calculate ‘per push’ statistics on OrangeFactor data.
  • Added Talos to the `perf` table – used the old Datazilla ETL code to fill the ES cluster.  This may speed up extracting the replicates, for exploring the behaviour of a test.
  • Enable deep queries – Effectively performing SQL join on Elasticsearch – first attempt did too much refactoring.  Second attempt is simpler, but still slogging through all the resulting test breakage
hg.mozilla.org
WebDriver
  • Updated 
Marionette
  • [ahal] helped review and finish contributor patch for switching marionette_client from optparse to argparse
  • Corrected UUID used for session ID and element IDs
  • Updated dispatching of various marionette calls in Gecko
bughunter
  • [bc] Have asan-opt builds running in production. Finalizing patch. Still need to build gtk3 for rhel6 32bit in order to stop using custom builds and support opt in addition to debug.
charts.mozilla.org
  • Updated the hierarchical burndowns to EPM’s important metabugs that track features 
  • More config changes

Planet MozillaSUMO Questions Day this Thursday, 27 August 2015

The summer holidays are now over, so it’s time to start organizing a new SUMO Day!

What are SUMO Days?

A SUMO Day is that time of the week when everybody who loves doing support (contributors, admins, moderators) gathers to try and answer all the incoming questions on the Mozilla support forums. This is a 24-hour event: we will start early during European mornings and finish late during US Pacific evenings.

We are also hanging out having fun and helping each other in #sumo on IRC.

I want to participate! Where do I start?

Just create an account and then take some time to help with unanswered questions. We have an etherpad ready with all the details plus additional tips and resources.

If you get stuck with questions that are too difficult feel free to ping us on IRC #sumo or ask for help on the contributors forums.

Moderators

SUMO Day will be moderated by madasan (EU morning/afternoon), marksc (EU afternoon/US morning), guigs (US morning/afternoon). We can always use more people to help moderate through the day so if you would like to do this just add your name in the etherpad!

What does it mean to be a SUMO Day moderator?

It’s easy! Just check out the forums and monitor incoming questions. Don’t forget to hang out on IRC on #sumo and the contributor forums and chat with the other SUMO Day participants about possible solutions to questions. As a moderator you also help out contributors who are stuck with difficult questions and need help.

Screensharing experiment

During this SUMO Day some of us will experiment with helping users via screen sharing. This is only open to senior contributors and forum moderators so if you’re one of them and you would like to participate please PM madasan.

 We’re trying to answer each and every incoming question on the support forum on Thursday so please join us. The more the merrier!

 

See you online and happy SUMO Day!

Planet MozillaProject Beehive: A HW/SW co-designed stack for runtime and architectural research.

Project Beehive: A HW/SW co-designed stack for runtime and architectural research. In this talk we will present an overview of our recent research efforts focusing on a HW/SW co-designed platform for heterogeneous many-core architectural research. The presented...

Planet Mozillapytest-wholenodeid addon: v0.2 released!

What is it?

pytest-wholenodeid is a pytest addon that shows the whole node id on failure rather than just the domain part. This makes it a lot easier to copy and paste the entire node id and re-run the test.
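
For example (the test names here are hypothetical), a failure summary shows the whole node id, which you can paste straight back into pytest to re-run just that test:

FAILED tests/test_parser.py::TestParser::test_empty_input

pytest tests/test_parser.py::TestParser::test_empty_input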

v0.2 released!

I wrote it in an hour today to make it easier to deal with test failures. Then I figured I'd turn it into a real project so friends could use it. Now you can use it, too!

I originally released v0.1 (the first release) and then noticed on PyPI that the description was a mess, so I fixed that and released v0.2.

To install:

pip install pytest-wholenodeid

It runs automatically. If you want to disable it temporarily, pass the --nowholeid argument to pytest.

More details on exactly what it does on the PyPI page.

If you use it and find issues, write up an issue in the issue tracker.

Planet MozillaWeekly Update 2015-08-26

Discourse

Discourse UX improvements (@Leo, @yousef)

There are some changes to Discourse that should be made to make it more suitable to Mozillians’ needs

  • Status [In Progress]: See SSO update below. We can still use help researching and building the plugins that we need.
SSO (@Leo)

To improve the login experience for people using Discourse within Mozilla, bridge the gap in various ways between our different instances (e.g. single username across instances), and integrate better with Mozilla more widely (with Mozillians integration, etc.)

  • Status [In Progress]: Still working on initial version of SSO server, currently working on finishing touches
Discourse Documentation (@Kensie)

To make Discourse more user friendly for Mozillians, we need some good documentation on how to use it

  • Status [In Progress]: Added a couple docs based on questions that came up during the week. Still need people to ask questions so we can answer them.
MECHADISCOURSE (@Yousef)

Putting all Discourse instances on one infrastructure, automated with Ansible and CloudFormation. This will help us keep the many Discourse instances we have secure, up to date, and running common plugins easily, at scale. Also saves $$$ while allowing all of our instances to be HA.

  • Status [In Progress]: Turns out this isn’t quite production ready so we’re going to use our staging servers as a test-bed to iron out issues
MoFo Discourse migrations (@Yousef)

Migrating the Webmaker, Science and Hive Discourse instances to MECHADISCOURSE. This provides the teams with more stable Infra for their Discourse instances.

  • Status [In Progress]: Leo is currently implementing Webmaker login for the Teach The Web Discourse

Ansible (@Tanner)

Config management, initializes servers, will be used with MECHADISCOURSE as its first “big” project. Makes it 100x easier to set up servers.

  • Status [Done!]: Production-ready, Jenkins has been set up so jobs can be triggered on-demand.

Monitoring (@Tanner)

We need to set up a robust monitoring solution for our sites.

  • Status [In Progress]: Will be using Nagios. Need to write checks and config for Nagios, and then deploy the NRPE agent to servers.

Community Hosting (@Tanner, @yousef)

Audit

We need to understand which sites are being actively used and which no longer need hosting, or need different hosting than they currently have

  • Status [In Progress]: Michael Buluma has started work on defining an MVP (minimum viable product) for a community website.
Migration

We will be moving away from OVH to simplify community hosting and save money.

  • Status [Stalled]: Waiting for progress on Participation Infrastructure side

Documentation (@Kensie)

Discourse documentation (see above)
Wiki update

Our wiki pages are out of date, and shouldn’t be under IT anymore

  • Status [In Progress]: Michael Buluma has started working on this.
Confluence (@Kensie)

Links to JIRA; we will use it to help with project management and decision tracking.

  • Status [In Progress]: Help from Atlassian experts would be very welcome!

Matrix (@Leo)

Communication protocol which attempts to bind various different ones together – could possibly be used by us as a Telegram-esque IRC bouncer. Discussion and link to planning pad here.

  • Status [In Progress]: Started to investigate it, finding answers to various questions

MozFest Participation (@Kensie)

We are looking at ways our team can support MozFest, and planning session proposals that would be interesting to MozFest

Online Forum for Participants (@Tanner)

We are offering our services to host a Discourse instance for MozFest

  • Status [In Progress]: Putting up a Discourse instance as a PoC
Session Proposals (@Kensie)
  • Status [In Progress]: We have several proposals to submit, ideally by Friday (deadline is Monday).

Miscellaneous

  • Crowd didn’t work as hoped. It messed with a lot of Jenkins plugins that relied on usernames, so LDAP might work better.

Contribution Opportunities

Recap of contribution opportunities from status updates and ongoing contribution opportunities:

  • Discourse
    • Research/coding customizations
    • Documenting how to use Discourse/need questions to answer
    • Ansible expertise welcome
  • Monitoring
    • Nagios experts/mentors welcome
  • Community Hosting
    • Research MVP for community sites
  • Documentation
    • Discourse (see above)
    • Need writers to help drive wiki update
    • Atlassian experts welcome to help with Confluence/JIRA organization

Planet MozillaProduct Coordination Meeting

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Internet Explorer blogHow Microsoft Edge and Internet Explorer 11 on Windows 10 work better together in the Enterprise

Last month, we launched a brand new browser built for Windows 10, Microsoft Edge.  Microsoft Edge has been built from the ground up to correctly, quickly, and reliably render web pages, and improve productivity. We understand that many Enterprise customers may have line of business web apps and services that depend on Internet Explorer and the proprietary technologies that it supports. To help support these customers, Windows 10 includes Internet Explorer 11 with Enterprise Mode, the same version supported on Windows 7 and 8.1. Starting on January 12, 2016, Internet Explorer 11 will be the minimum supported version to continue to receive security updates and technical support on Windows 7 and Windows 8.1.

Today, we’re going to show you how you can use Enterprise Mode with Microsoft Edge to open Internet Explorer 11 for your business’s sites that require IE’s proprietary technologies. This approach enables your users to run a modern browser designed for better productivity, security, and rendering web pages—without sacrificing compatibility with legacy line of business applications.

Many of our customers who depend on legacy features only available in older versions of Internet Explorer are familiar with our Enterprise Mode tools for IE11. Today, we’re extending Enterprise Mode support to Microsoft Edge by opening any site specified on the Enterprise Mode Site List in IE11. IT Pros can use their existing IE11 Enterprise Mode Site List or they can create a new one specifically for Microsoft Edge. By keeping Microsoft Edge as the default browser in Windows 10 and only opening legacy line of business sites in IE11 when necessary, you can help keep newer development projects on track, using the latest web standards on Microsoft Edge.  For customers that have significant legacy content, we are also providing the ability to configure any Intranet site to open in IE11 when a user browses to it using Microsoft Edge.  This functionality is available as part of Windows 10 and has no additional installation requirements. Find out more about how to use Enterprise Mode to improve compatibility in Microsoft Edge on TechNet.

User Experience

Any sites specified on the Enterprise Mode Site List will open in IE11 automatically—even when navigated to in Microsoft Edge—unless you specifically exclude that site on the list. This action happens without any user input.  For example, if Contoso Travel, a line of business application that requires IE’s legacy proprietary technologies, was on the Enterprise Mode Site List or in the Intranet Zone, a user would see the following page when navigating to it in Microsoft Edge while the site is automatically opened in IE11:

Screen capture showing the interstitial transitioning to Internet Explorer

The user is prompted to open the website in Internet Explorer 11 (with the option to default to Internet Explorer in the future), but can choose to continue in Microsoft Edge.  Selecting “Open with Internet Explorer” will start Internet Explorer 11 and navigate to the current page in a new window (or a new tab if the browser is already running).

Screen capture showing Microsoft Edge handing navigation to Internet Explorer 11

Using the Enterprise Mode Site list

For Internet Explorer 11, Enterprise Mode is configured by enabling the “Use the Enterprise Mode IE website list” Group Policy. You must then specify the location (URL) of the Enterprise Mode Site List under Options. You can configure Microsoft Edge in a similar way using the “Allows you to configure the Enterprise Site list” Group Policy for Microsoft Edge. That Group Policy specifies the location of the Enterprise Mode Site List for Microsoft Edge. If you enable this policy but do not specify a location for the Enterprise Mode Site List, Microsoft Edge will automatically use the IE11 Enterprise Mode Site List if one exists.

"Allows you to configure the Enterprise Site list" Group Policy option

Location: Administrative Templates\Windows Components\Microsoft Edge\Allows you to configure the Enterprise Mode Site list

By default, any site that is in the <emie> or <docmode> section of the Enterprise Mode Site List will automatically open in Internet Explorer 11.  Additionally, we are introducing a new attribute “DoNotTransition” which allows explicit control over whether Microsoft Edge will open a site in Internet Explorer 11 or not.

The doNotTransition attribute takes three values:

  • True: this site on the Enterprise Mode Site List will not transition to Internet Explorer 11 when opened in Microsoft Edge. Code sample:

    <emie>
    <domain doNotTransition="true">foo.com</domain>
    </emie>

  • False: this site on the Enterprise Mode Site List will transition to Internet Explorer 11 when opened in Microsoft Edge. Code sample:

    <emie>
    <domain doNotTransition="false">foo.com</domain>
    </emie>

  • Undefined (default): by default, any site on the Enterprise Mode Site List will transition to Internet Explorer 11 when opened in Microsoft Edge. Code sample:

    <emie>
    <domain>foo.com</domain>
    </emie>

The doNotTransition attribute can also be set using the latest version of the Enterprise Site List Manager.  When an existing site list is imported into the latest version, each entry will receive an explicit DoNotTransition="False" setting.  This is shown in the UI via the “Open in IE/Internet Explorer” checkboxes.  These are found in both the Site Detail and Site List views.

Screen capture of Enterprise Mode Site List view

Enterprise Mode Site List view

Screen capture of Enterprise Mode Site detail view

Enterprise Mode Site detail view

Microsoft Edge and IE11 can share the same Site List, or you can specify separate lists.

Opening Intranet sites in IE11

In addition to using the Enterprise Mode Site List, Microsoft Edge can be configured to send all Intranet sites to Internet Explorer 11.  You can do this via the “Sends all intranet traffic over to Internet Explorer” Group Policy. If this policy is configured, it will send all intranet sites to IE11, not just the sites listed in the Enterprise Mode site list. This option provides the same user experience covered earlier. We recommend IT Pros use the Enterprise Mode Site List—and not this policy—to configure just the minimum set of sites that need to open in IE11, as it will help enable you to modernize your intranet sites for Microsoft Edge more quickly.

Screenshot of "Sends all intranet traffic over to Internet Explorer" Group Policy

Location: Administrative Templates\Windows Components\Microsoft Edge\Sends all intranet traffic over to Internet Explorer

When configured, any site that is identified as being on the company Intranet will be automatically opened in Internet Explorer 11 when visited in Microsoft Edge.

For customers that have significant line of business application dependencies on IE11 and legacy IE technologies, you can set IE11 as your default browser on Windows 10 using the Set a default associations configuration file Group Policy. However, we recommend against this approach to ensure that sites which don’t rely on legacy technologies get the most modern rendering via Microsoft Edge.

How to get started

This feature is available in Windows 10 Build 10240+, so no additional updates are needed.  If you are already using Enterprise Mode to address compatibility issues in Internet Explorer 11, all you need to do is configure Microsoft Edge to use your existing Site List.  Below are some additional resources.

  • Download the new Enterprise Site List Manager Tool – v4.0 (Summer 2015).
  • Learn more about Enterprise Mode and how to turn it on.
  • Read up on setting up and configuring this functionality on TechNet.

While we hope that administrators are able to quickly transition to modern web standards and Microsoft Edge, we’re committed to help ease the transition using Internet Explorer 11 and the Enterprise Mode Site List. We’re excited about these new improvements and encourage you to try them out! Let us know your feedback on Twitter @MSEdgeDev or on Connect.

— Deen King-Smith, Program Manager, Microsoft Edge
— Swathi Ganapathi, Program Manager, Microsoft Edge

Planet MozillaBugzilla Development Meeting

Bugzilla Development Meeting Help define, plan, design, and implement Bugzilla's future!

Planet MozillaVacation Mode @Yahoo? How About Evening Mode, Original Content Mode, and Walkie Talkies With Texting?

Called it. I recently asked “When did you last eat without using digital devices at all?” and proposed a “dumb camera mode” where you could only take/review photos/videos and perhaps also write/edit text notes on your otherwise “smart” phone that usually made you dumber through notification distractions.

Five days later @Yahoo published a news article titled: “The One Feature Every Smartphone Needs: Vacation Mode” — something I’m quite familiar with, having recently completed a one week Alaska cruise during which I was nearly completely off the grid.

Evening Mode rather than Vacation Mode

Despite the proposals in the Yahoo article, I still think a “dumb” capture/view mode would be better on a vacation, where all you could do with your device was capture photos/text/GPS and subsequently view/edit what you captured. Even limited notifications distract and detract from a vacation.

However, the idea of “social media updates only from people you’re close to, either geographically or emotionally” would be useful when not on vacation. I'd use that as an Evening Mode most nights.

Original content rather than “shares”

In addition, the ability to filter and only see “original content — no shared news stories on Facebook, no retweets on Twitter” would be great as reading prioritization — I only have a minute, show me only original content, or show me original content from the past 24h before any (re)shares/bookmarks etc.

This strong preference to prioritize viewing original content is I think what has moved me to read my Instagram feed, and in contrast nearly ignore my Twitter feed / home page, as well as actively avoid Facebook’s News Feed.

Ideally I’d use an IndieWeb reader, but they too have yet to find a way to distinguish original content posts in contrast to bookmarks or brief quotes / commentary / shares of “news” articles.

Tame your inbox? No, vacation should mean no inbox

The Yahoo article suggests: “tame your inbox in the same fashion, showing messages from your important contacts as they arrive but hiding everything else” and completely misses the point of disconnecting from all inbox stress while on vacation.

SMS smart phone texting frustrations vs stress-free iPod

While I was on the Alaska cruise, other members of my family did txt/SMS each other a bit, but due to the unreliability of the shipboard cell tower, it was more frustrating to them than not.

With my iPod, I completely opted out of all such electronic text comms, and thus never stressed about checking my device to coordinate.

IRL coordination FTW

Instead I coordinated as I remember doing as a kid (and even teenager) — we made plans when we were together, about the next time and place we would meetup, and our general plans for the day. Then we’d adjust our plans by having *in-person* conversations whenever we next saw each other.

Or if we needed to find each other, we would wander around the ship, to our staterooms, the pool decks, the buffet, the gym, knowing that it was a small enough world that we’d likely run into each other, which we did several times.

During the entire trip there was only one time that I lost touch with everyone and actually got frustrated. But even that just took a bit longer of a ship search. Of course even for that situation there are solutions.

Walkie Talkies!

My nephews and niece used walkie-talkies that their father brought on board, and that actually worked in many ways better than anyone’s fancy smart phones.

Except walkie-talkies can be a bit intrusive.

Walkie Texting?

My question is:

If walkie-talkies can send high quality audio back and forth in broadcast mode, why can’t they broadcast short text messages to everyone else on that same “channel” as well?

Then I found this on Amazon: TriSquare eXRS TSX300-2VP 900MHz FHSS Digital Two-Way Radio Two 2-way radios

  • Digital Two-Way Radio
  • spread spectrum and encrypted
  • text messaging between radios

(Discontinued by Manufacturer)

Anybody have one or a similar two-way radio that also supports texting?

Or would it be possible to do peer-to-peer audio/texting purely in software on smart “phones” over Bluetooth or WiFi, without having to go through a central router/tower?

That would seem ideal for a weekend road trip, say to Tahoe, or to the desert, or perhaps even for camping, again, maybe in the desert, like when you choose to escape from the rest of civilization for a week or more.

Planet MozillaThe Seasonal Blog Redux

It's that time of year again! The weeds are growing, the air is thick and stagnant, and I just deployed another refactoring of my blog. "Why does he keep working on his blog," you're thinking, "when I could do all of that with a static-site generator like Jekyll?"

Writing my own blogging engine has been one of the best decisions I've made. Having a side project that I actually use and get value from is a great place to implement my own ideas, or try out new libraries. Every now and then it's fun to throw it back in the furnance, get it hot, and start shaping it with new ideas.

A blog is a great litmus test for new libraries (remember, I have an admin site behind this). You have to deal with routing, forms, interfacing with things like the CodeMirror editor, server-side rendering, async data fetching, and more. I feel like it really hits most of the pain-points of big client-side apps, even if it's a relatively small project. The only thing it doesn't stress is a complex shape of data: the data I get back from the server is pretty simple, and more complex apps would need something better to handle complex data.

But even then, contrasting my simple code with more complex solutions makes it really clear why they are solved that way. Take GraphQL for example; I definitely don't need it, but there are a few places in my code that would obviously be way more complex if my data was more complex, and it's clear what GraphQL is trying to solve.

Last time I completely rewrote my blog, I learned about react-router, Webpack (with babel integration), server-side rendering (universal apps), Docker, and various aspects of React.

This time, I learned about Redux, immutable-js, and having a fully snapshot-able app state.

What do I mean by snapshot? My entire app state (even component local state) lives as a nested tree with a single root. I can simply serialize that root, and load it in later to see the app exactly how it was at that point in time. Here's a fun trick to show you what I mean: copy all of this text, press cmd+shift+k and paste it in. That's my admin interface with 2 errors; you're seeing it exactly at that point (may not work in all browsers, Chrome is known to truncate prompt inputs. I'll make my own modal at some point).

Redux What?

Redux is a library that complements React and manages application state. It provides a simple workflow for updating application state and allowing React components to subscribe to state changes. While it borrows ideas from Elm, Flux, and various fancy-sounding abstractions, it's actually quite simple.

It embraces an idea currently bubbling up in the UI community: make state explicit and immutable, use pure functions as much as possible, and push all side effects to the edge of your app. In fact, the entire state exists as a single atom: a deeply nested JS object that contains everything you need to render the current UI.
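
To make that concrete, the single atom for a blog like this one might look roughly like this (the exact shape here is made up):

const appState = {
  route: { path: "/the-seasonal-blog-redux" },
  posts: {
    "the-seasonal-blog-redux": { title: "The Seasonal Blog Redux", published: true }
  },
  // even "local" component state lives in the tree (more on that below)
  editor: { saving: false, errors: [] }
};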

This seems radical, but it's the right way to do things.

  1. Your frontend is made up of simple pure functions that take inputs and return outputs. This makes it extremely easy to test, reason about, and do things like hot-reloading. Separating state from code just makes things simpler.

  2. Your state exists as a single object that is never mutated. Normally it's a JS object, but it could be an immutable.js object or even a client-side database. That's right, putting state in one place means you could even use a database for state. That's not even the best part: with a single atom and immutability, you can easily snapshot and resume the app at any point in time!

Redux provides the ability for the UI to subscribe to changes to specific parts of the app state. Generally only top-level components in the UI select state from the global app state atom, and most components are pure: they simply receive data and render it.

The library has roots in Flux, Facebook's original library for handling state. The main similarity is that you dispatch actions to change state. An action is simply a JavaScript object with a type field and any other fields as arguments. These actions are dispatched across all registered "reducers", which are functions that take state and an action and return new state: (state, action) -> newState. All new states are grouped together into a new single atom app state.
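
A minimal sketch of that (state, action) -> newState shape, using plain Redux — the reducer name and action type are made up for illustration:

// A reducer: a pure function that never mutates the old state.
function posts(state = [], action) {
  switch (action.type) {
    case "ADD_POST":
      return state.concat([action.post]); // return a *new* array
    default:
      return state;
  }
}

const { createStore, combineReducers } = require("redux");
const store = createStore(combineReducers({ posts }));

store.subscribe(() => console.log("new state:", store.getState()));
store.dispatch({ type: "ADD_POST", post: { title: "Hello" } });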

The real world is grey and misty like a London street. You can't use pure functions and a global app state atom for everything. Asynchronous code is inherently side-effecting, but by isolating it to a specific part of your app, the rest of the world doesn't have to be bothered with things such as promises or observables. Updating the app state and rendering the UI is completely synchronous, but "async action creators" are functions which have the ability to dispatch multiple actions over time.
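
Here's roughly what an async action creator can look like, assuming something like the redux-thunk middleware (queryServer is a hypothetical helper):

// An async action creator: a function that dispatches plain, synchronous
// actions over time as the side-effecting work progresses.
function fetchPosts() {
  return (dispatch) => {
    dispatch({ type: "FETCH_POSTS_START" });
    return queryServer("/api/posts").then(
      (posts) => dispatch({ type: "FETCH_POSTS_DONE", posts }),
      (error) => dispatch({ type: "FETCH_POSTS_FAILED", error })
    );
  };
}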

Local state is obviously desirable in certain situations, although it's less important than you think. UIs tend to require global state: many different parts of the UI need access to the same data. However, local state is important mainly for performance reasons. We are not out of luck though: we can get local state back by scoping part of the global app state atom to single components, as CircleCI did.
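
A sketch of what that scoping can look like — the shape is hypothetical, loosely in the spirit of the CircleCI approach:

// "Local" state lives under a per-component key in the global atom,
// so it stays snapshot-able while remaining private to one component.
function localState(state = {}, action) {
  if (action.type === "SET_LOCAL") {
    return Object.assign({}, state, { [action.componentId]: action.value });
  }
  return state;
}

// A component with id "commentBox" then selects state.localState.commentBox
// instead of keeping the value in this.state.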

The frontend space is super interesting these days, and there's a lot to talk about. Follow me as I blog more about what I learned rewriting my blog with these ideas. I'll walk through specific techniques in my blog's code dealing with:

  • Using immutable.js for app state
  • Integrating Redux with react-router
  • Data fetching and asynchronous action creators
  • Server-side rendering
  • Local state

Feel free to peruse my blog's code in the meantime.

Planet MozillaWebExtensions FAQ

WebExtensions are making some people happy and some people angry, and many people are asking questions. Some of the answers can be found here, with more to come as add-on developers keep discussing this hot topic.
My favourite one: No, your add-ons' ability and your own creativity won't be limited by the new API.

Planet MozillaUsing Hidden UI in the CCK2

One of the questions I get asked the most is how to hide certain UI elements of Firefox. I implemented the Hidden UI feature of the CCK2 specifically to address this problem. Using it can be a little daunting, though, so I wanted to take some time to give folks the basics.

The Hidden UI feature relies on CSS selectors. We can use CSS selectors to specify any element in the Firefox user interface, and then that element will be hidden. The trick is figuring out the selectors. To accomplish this, my primary tool is the DOM Inspector. With the DOM Inspector, I can look at any element in the Firefox user interface and determine its ID. Once I have its ID, I can usually specify the CSS selector as #ID and I can hide that element. Let's walk through using the DOM Inspector to figure out the ID of the home button.

  • Install the DOM Inspector
  • Go to Developer Tools and select DOM Inspector
  • From the DOM Inspector Window, select File->Inspect Chrome Document and select the first window
  • In the DOM Inspector Window, click on the Node Finder.
  • Click on the Home button in the Firefox Window.
  • You'll see results in the DOM Inspector that look like this:

  • This gives us something unique we can use - an ID. So #home-button in Hidden UI will hide the home button.

You can use this method for just about every aspect of the Firefox UI except for menus and the Australis panel. For these items, I turn to the Firefox source code.

If you want to hide anything on the Australis panel, you can look for IDs here. If you want to hide anything on the Firefox context menu, you can look here. If you want to hide anything in the menu bar, you can look here.

As a last resort, you can simply hide menuitems based on their text. For instance, if you wanted to hide the Customize menu that appears when you right click on a toolbar, you could specify a selector of menuitem[label^='Customize']. This says "Hide any menu item whose label begins with the word Customize." Don't try to include the ellipsis in your selector because in most cases it's not ..., it's the unicode ellipsis (…). (Incidentally, that menu is defined here, along with the rest of the toolbar popup menu. Because it doesn't have an ID, you'll have to use menuitem.viewCustomizeToolbar.)
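
Putting that all together, a Hidden UI configuration is just a list of selectors like these, one per element you want hidden (the last two are the examples from above):

#home-button
menuitem[label^='Customize']
menuitem.viewCustomizeToolbar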

Hopefully this should get everyone started. If there's something you can't figure out how to hide, let me know. And if you're trying to hide everything, you should probably be looking at a kiosk solution, not the CCK2...

Planet MozillaMozilla Learning Community Call Aug 25

Mozilla Learning Community Call Aug 25 Mozilla Learning community calls are open to all. The goal: work on the Mozilla Learning plan together.

Planet MozillaSetting up for Android and Firefox OS Development

This post is a follow-up to an earlier article I wrote about setting up a FirefoxOS development environment.

I’m going to set up a Sony Z3C as the target device for Mobile OS software development. The Sony Z3C (also known as Aries or aosp_d5803) is a nice device for Mobile OS hacking as it’s an AOSP device with good support for building the OS binaries. I’ve set the phone up for both FirefoxOS and Android OS development, to compare and see what’s common across both environments.

Please note that if you got your Sony Z3C from the Mozilla Foxfooding program, then this article isn’t for you. Those phones are already flashed and automatically updated with specific FirefoxOS builds that Mozilla staff selected for your testing. Please don’t replace those builds unless you’re actively developing for these phones and have a device set aside for that purpose.

My development host is a Mac (OSX 10.10) laptop already set up to build the Firefox for Macintosh product. It’s also set up to build the Firefox OS binaries for the Flame device.

Most of the development environment for the Flame is also used for the Aries device. In particular, the case-sensitive disk partition is required for both FirefoxOS and Android OS development. You’ll want this partition to be at least 100GB in size if you want to build both operating systems. Set this up before downloading FirefoxOS or Android source code to avoid ‘include file not found’ errors.

The next step to developing OS code for the Aries is to root the device. This will void your warranty, so tread carefully.

For most Gecko and Gaia developers, you’ll want to start from the base image for the Aries. The easiest way to flash your device with a known-good FirefoxOS build is to run flash.sh in the expanded aries.zip file from the official builds. You can then flash the phone with just Gecko or Gaia from your local source code.

The Aries binaries from a FirefoxOS build:

[screenshot: aries_firefoxos_images]

The Aries binaries in an Android Lollipop build:

[screenshot: aries_android_images]

If you want to build Android OS for the Aries, then read these docs from Sony, and these Mac-specific steps for building Android Lollipop. Note that the Android Lollipop SDK requires Xcode 5.1.1 and Java 7 (JRE and JDK). Both versions of Xcode and Java are older than the latest available, so you’ll need to install the downgrades before building the Android OS.

When it comes time to configure your Android OS build via the lunch command, select aosp_d5803-userdebug as your device. From the root of your source checkout, the whole configure-and-build sequence looks roughly like this (the standard AOSP workflow; the -j8 parallelism is just an example):
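
source build/envsetup.sh
lunch aosp_d5803-userdebug
make -j8

Once the build is finished (after about 2 hours on my Mac), use these commands to flash your phone with the Android OS you just built: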

fastboot flash boot out/target/product/aries/boot.img
fastboot flash system out/target/product/aries/system.img
fastboot flash userdata out/target/product/aries/userdata.img

Planet MozillaThunderbird and end-to-end email encryption – should this be a priority?

In the last few weeks, I’ve had several interesting conversations concerning email encryption. I’m also trying to develop some concept of what areas Thunderbird should view as our special emphases as we look forward. The question is, with our limited resources, should we strive to make better support of end-to-end email encryption a vital Thunderbird priority? I’d appreciate comments on that question, either on this Thunderbird blog posting or the email list tb-planning@mozilla.org.

"I took an oath to defend the constitution, and I felt the Constitution was being violated on a massive scale" SnowdenIn one conversation, at the “Open Messaging Day” at OSCON 2015, I brought up the issue of whether, in a post-Snowden world, support for end-to-end encryption was important for emerging open messaging protocols such as JMAP. The overwhelming consensus was that this is a non-issue. “Anyone who can access your files using interception technology can more easily just grab your computer from your house. The loss of functionality in encryption (such as online search of your webmail, or loss of email content if certificates are lost) will give an unacceptable user experience to the vast majority of users” was the sense of the majority.

In a second conversation, I was having dinner with a friend who works as a lawyer for a state agency involved in white-collar crime prosecution. This friend also thought the whole Snowden/NSA/metadata thing had been blown out of proportion, but for a very different reason. Paraphrasing my friend’s comments, “Our agency has enormous powers to subpoena all kinds of records – bank statements, emails – and most organizations will silently hand them over to me without you ever knowing about it. We can always get metadata from email accounts and phones, e.g. e-mail addresses of people corresponded with, calls made, dates and times, etc. There is a lot that other government employees (non-NSA) have access to just by asking for it, so some of the outrage about the NSA’s power and specifically the lack of judicial oversight is misplaced and out of proportion, precisely because the public is mostly ignorant about the scope of what is already available to the government.”

So in summary, the problem is much bigger than the average person realizes, and other email vendors don’t care about it.

There are several projects out there trying to make encryption a more realistic option. In order to change internet communications to make end-to-end encryption ubiquitous, any protocol proposal needs wide adoption by key players in the email world, particularly by client apps (as opposed to webmail solutions, where the encryption problem is virtually intractable). As Thunderbird is currently the dominant multi-platform open-source email client, we are sometimes approached by people in the privacy movement to cooperate with them in making email encryption simple and ubiquitous. Most recently, I’ve had some interesting conversations with Volker Birk of Pretty Easy Privacy about working with them.

Should this be a focus for Thunderbird development?

Planet Mozillahappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1195362] Quicksearch error pages (“foo is not a field” and friends) should still fill in search into quicksearch box
  • [1190476] set Comment field in GPG email to the URL of the bug
  • [1195645] don’t create a new session for every authenticated REST/BzAPI call
  • [1197084] No mail sent when bugs added to or removed from *-core-security groups
  • [1196614] restrict the ability for users with editusers/creategroups to alter admins and the admin group
  • [1196092] Switch logincookies primary key to auto_incremented id, make cookie a secondary UNIQUE key
  • [1197699] always store the ip address in the logincookies table
  • [1197696] group_members report doesn’t display nested inherited groups
  • [1196134] add ability for admins to force a user to change their password on next login
  • [1192687] add the ability for users to view and revoke existing sessions
  • [1195836] Remove install-module.pl from bmo
  • [1180733] “An invalid state parameter was passed to the GitHub OAuth2 callback” error when logging in with github

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Planet MozillaRock, Meats, JavaScript – BrazilJS 2015

BrazilJS audience

I just got back from a 4-day round trip to Brazil to attend BrazilJS. I was humbled and very happy to give the opening keynote, seeing that the closing was meant to be by Brendan Eich and Andreas Gal – so, no pressure.

The keynote

In my keynote, I asked for more harmony in our community, and more ownership of the future of JavaScript by those who use it in production.

Keynote time

For quite a while now, I have been confused as to whom we are serving as browser makers, standards writers and library creators. All of the excellent solutions we have seem to fall through the cracks somewhere when you see what goes live.

That’s why I wanted to remind the audience that whatever amazing, inspiring and clever thing they’ll hear about at the conference is theirs to take to fruition. We have too much frustration in our market, and too much trying to one-up one another instead of trying to solve problems and making the solutions easily and readily available. The slides are on Slideshare, and a video will become available soon.

About Brazil

There are a few things to remember when you are going to Brazil:

  • When people are excited about something, they are really excited about it. There’s a lot of passion.
  • Personal space is as rare as an affordable flat in central London – people will affectionately touch strangers and there is a lot of body language. If that’s not your thing, make it obvious!
  • You will eat your body weight in amazing meat, and food is a social gathering, not just fuel. So bring some time.
  • Everybody will apologise for their bad English before having a perfectly comprehensible conversation with you
  • People of all ages and backgrounds are into heavy music (rock, metal, hardcore…)

About the event

VR ride about the history of JavaScript

BrazilJS was a ridiculously ambitious attempt at creating the biggest JavaScript event, with 1,300 people. And it was a 100% success at that. I am fascinated by the professionalism, the venue, the AV setup and all the things that were done for speakers and attendees alike. Here are just a few things that happened:

  • There was a very strong message about diversity and a sensible and enforced code of conduct. This should not be a surprise, but when you consider Brazilian culture and reputation (think Carnival) it takes pride and conviction in those matters to stand up for them the way the organisers did.
  • The AV setup was huge and worked fine. There were no glitches in the audio and every presentation was live translated from English to Brazilian Portuguese and vice versa. The translation crew did a great job and we as presenters should do more to support them. I will write a post soon about this.
  • Wireless was flaky, but available when you needed it. It is pretty ridiculous to assume in a country where connectivity isn’t cheap and over a thousand people with two devices each try to connect that you’d have a good connection. As a presenter, I never rely on availability – neither should you.
  • There was always enough coffee, snacks and even a huge cake celebrating JavaScript (made by the mom of one of the organisers – the cake, not JavaScript)
  • The overall theme was geek – as geek as it can get. The organisers dressed up as Power Rangers, in between talks we saw animated 90s TV series, there was a Virtual Reality ride covering the history of JavaScript built with Arduinos, and there were old-school arcade machines and consoles to play with.
  • It was a single track conference over two days with lots of high-class speakers and very interesting topics.
  • As a speaker, everything was organised for me. We all took a hired bus from and to the venue and we had lunch catered for us.
  • The conference also had a minority/diversity scholarship program where people who couldn’t afford to come got a sponsored ticket. These people weren’t paraded around or singled out; they simply became part of the crowd. I was lucky to chat with a few and learned quite a few things.
  • The after party was a big “foot in mouth” moment for me, as I have often spoken out against bands at such events. However, in Brazil, with a band covering lots of rock anthems, it very much worked. I never thought I’d see an inclusive, non-aggressive mosh pit and people stage diving at a JavaScript event – I was wrong.

Me, stagediving at the BrazilJS after party – photo by @orapouso

So, all I can say is thank you to everyone involved. This was a conference to remember and the enthusiasm of the people I met and talked to is a testament to how much this worked!

Personal/professional notes

BrazilJS was an interesting opportunity for me as I wanted to connect with my Microsoft colleagues in the country. I was amazed by how well-organised our participation was and loved the enthusiasm people had for us. Even when one of our other speakers couldn’t show up, we simply ran an impromptu Q&A on stage about Edge. Instead of a sales booth we had technical evangelists at hand, who also helped with translating. Quite a few people came to the booth to fix their web sites for Microsoft Edge’s standards-compliant rendering. It’s fun to see when fixing things yields quick results.

Other short impressions:

  • I had no idea what a powerhouse my colleague Jonathan Sampson is on stage. His talk in adventurous Portuguese had the audience in stitches and I was amazed by the well-structured content. I will pester him to re-record this in English.
  • Ju Gonçalves (@cyberglot) gave a great, detailed talk about reduce(). If you are a conference organiser, check her out as a new Speaker() option – she is now based in Copenhagen.
  • It was fun to catch up with Laurie Voss after a few years (we worked at Yahoo together) and it was great of him to point to his LGBTQ Slack group, inviting people to learn more about that facet of diversity in our community.
  • It warmed me to see the Mozilla Brazil community still kicking butt. Warm, affectionate and knowledgeable people like the ones you could meet at the booth there are the reason why I became a Mozillian in the first place.

And that’s that

Organisers on stage

Thank you to everyone involved. Thank you to everybody asking me lots of technical questions and giving unfiltered feedback. Thank you for showing that a lot of geeks can also be very human and warm. Thank you for embracing someone who doesn’t speak your language. I met quite a few people I need to follow up with, and I even had a BBQ with the family of two of the attendees I met before I went to my plane back home. You rock!

Always bet on JavaScript cake

Planet MozillaVancouver Trip Summary

I spent Thursday and Friday of last week with my lovely colleagues in Vancouver. Some things to note:

  • The Vancouver office is awesome, especially the art (h/t David Ascher’s wife)
  • Thanks to Jennie and the rest of the YVR team for making me feel welcome around the lunch table!
  • Luke promised to play guitar but he never did :(

Here’s how the two days went down:

  • Sabrina and I started off by having a morning meeting with Michelle via Vidyo. This produced several clarifying insights, including the use of “portfolio” as the key metaphor for Clubs pages in the MLN Directory. This helped shape our conversations during the rest of my visit.
  • Sabrina and I then reviewed what we already know about our audience, our programs and offerings, and value adds for the user.
  • We then sketched out a model for an engagement funnel

[photo: engagement funnel sketch]

    • Then we got to work on the MLN Directory model. We came up with streamlined sketches for the various content types, thinking in terms of mobile-first.
      • Member profile:
        • See field listing
        • Implied functionality: certain Leadership roles might be auto-applied (e.g. if the user owns an approved Club page, the system can apply the “Club Captain” role), while others might require an admin interface (e.g. Regional Coordinator, Hive Member). We’d like to allow for flexible Role names, to accommodate local flavor (e.g. Hive Chicago has specific role names they give to members).
      • Club and Hive pages:
        • Club page field listing
        • Hive page field listing
        • A key insight was that we should treat each distinct entity differently. That is, Club pages and Hive pages might be quite different, and we don’t need to try to force them into the same treatment. We also recognized that our MVP can simply address these two specific types of groups, since this is where our programs are focused.
        • We decided that focusing on Reporting for Clubs would be the highest value functionality, so we spec’ed out what that would look like (wireframes coming soon)
        • For Hive pages, we want to re-create the org listings and contact cards that the current Hive Directories have
  • We also met with Laura de Reynal and David Ascher to hash out plans for the audience research project. More on that soon, but you can see our “most important questions” at the top of this pad.
  • The issue of badges came up a few times. First, because we found that the plan for “Club Captain” and “Regional Coordinator” badges felt a little redundant given the concept of “roles.” Second, because we saw an opportunity to incentivize and reward participation by providing levels of badges (more like an “achievements” model). Seems like our colleagues were thinking along the same lines.

All in all, it was a really productive couple of days. We’ll be getting wireframes and then mockups out to various stakeholders over the next heartbeat, along with hashing out the technical issues with our engineering team.

Feel free to share any comments and questions.


Planet MozillaRecent Fennec platform changes

There has been a series of recent changes to the Fennec platform code (under widget/android). Most of the changes were refactoring in preparation for supporting multiple GeckoViews.

Currently, only one GeckoView is supported at a time in an Android app. This is the case for Fennec, where all tabs are shown within one GeckoView in the main activity. However, we'd like to eventually support having multiple GeckoViews at the same time, which would not only make GeckoView more usable and make more features possible, but also reduce a lot of technical debt that we have accumulated over the years.

The simplest way to support multiple GeckoViews is to open multiple nsWindows on the platform side, and associate each GeckoView with a new nsWindow. Right now, we open a new nsWindow in our command line handler (CLH) during startup, and never worry about having to open another window again. In fact, we quit Fennec by closing our only window. This assumption of having only one window will change for multiple GeckoView support.

Next, we needed a way of associating a Java GeckoView with a C++ nsWindow. For example, if a GeckoView sends a request to perform an operation, Gecko would need to know which nsWindow corresponds to that GeckoView. However, Java and platform would need to coordinate GeckoView and nsWindow creation somehow so that a match can be made.

Lastly, existing messaging systems would need to change. Over the years, GeckoAppShell has been the go-to place for platform-to-Java calls, and GeckoEvent has been the go-to for Java-to-platform calls. Over time, the two classes became a big mess of unrelated code stuffed together. Having multiple GeckoViews would make it even harder to maintain these two classes.

But there's hope! The recent refactoring introduced a new mechanism of implementing Java native methods using C++ class members 1). Using the new mechanism, calls on a Java object instance are automatically forwarded to calls on a C++ object instance, and everything in-between is auto-generated. This new mechanism provides a powerful tool to solve the problems mentioned above. Association between GeckoView and nsWindow is now a built-in part of the auto-generated code – a native call on a GeckoView instance can now be transparently forwarded to a call on an nsWindow instance, without writing extra code. In addition, events in GeckoEvent can now be implemented as native methods. For example, preference events can become native methods inside PrefHelper, and the goal is to eventually eliminate GeckoEvent altogether 2).

Effort is underway to move away from using the CLH to open nsWindows, which doesn't give an easy way to establish an association between a GeckoView and an nsWindow 3). Instead, nsWindow creation would move into a native method inside GeckoView that is called during GeckoView creation. As part of moving away from using the CLH, making a speculative connection was moved out of the CLH into its own native method inside GeckoThread 4). That also had the benefit of letting us make the speculative connection much earlier in the startup process.

This post provides some background on the on-going work in Fennec platform code. I plan to write another follow-up post that will include more of the technical details behind the new mechanism to implement native calls.

1) Bug 1178850 (Direct native Java method calls to C++ classes), bug 1186530 (Implement per-instance forwarding of native Java methods), bug 1187552 (Support direct ownership of C++ objects by Java objects), bug 1191083 (Add mechanism to handle native calls before Gecko is loaded), bug 1192043 (Add mechanism to proxy native calls to Gecko thread)
2) Bug 1188959 ([meta] Convert GeckoEvent to native methods)
3) Bug 1197957 (Let GeckoView control nsWindow creation)
4) Bug 1195496 (Start speculative connection earlier in startup)

Planet MozillaChris Beard: Community Participation Guidelines

Chris Beard: Community Participation Guidelines Mozilla CEO Chris Beard talks about the Mozilla Project's Community Participation Guidelines in a recent Monday Project Meeting.

Planet Mozilla“we are all remote” at Cultivate NYC

It’s official!!

I’ll be speaking about remoties at the O’Reilly Cultivate conference in NYC!

Cultivate logoCultivate is being held on 28-29 Sept 2015, in the Javits conference center, in New York City. This is intentionally the same week, and same location, as the O’Reilly Strata+Hadoop World conference, so if you lead others in your organization, and are coming to Strata anyways, you should come a couple of days early to focus on cultivate-ing (!) your leadership skills. For more background on O’Reilly’s series of Cultivate conferences, check out this great post by Mike Loukides. I attended the Cultivate Portland conference last month, when it was co-located with OSCON, and found it insightful edge-of-my-seat stuff. I expect Cultivate NYC to be just as exciting.

Meanwhile, of course, I’m still writing like crazy on my book (and writing code when no-one is looking!), so have to run. As always, if you work remotely, or are part of a distributed team, I’d love to hear what does/doesn’t work for you and any wishes you have for topics to include in the book – just let me know.

Hope to see you in NYC next month.

John.
=====

Planet MozillaOkay, you want WebExtensions API suggestions? Here's three.

Not to bring out the "lurkers support me in E-mail" argument, but the public blog comments are rather different in opinion and tenor from the E-mail I got regarding our last post about my supreme concern and displeasure over the eventual end of XPCOM/XUL add-ons. I'm not sure why that should be, but never let it be said that MoFo leadership doesn't stick to their (foot)guns.

With that in mind, let me extend, as the author of a niche add-on that I and a number of dedicated users employ regularly for legacy protocols, an attempt at an olive branch. Here's the tl;dr: I need a raw socket API, I need a protocol handler API, and I need some means of dynamically writing a document/data stream and handing it to the docshell. Are you willing?

When Mozilla decommissioned Gopher support in Firefox 4, the almost universal response was "this shouldn't be in core" and the followup was "if you want it, it should be an add-on, maintained by the community." So I did, and XPCOM let me do this. With OverbiteFF, Gopher menus (and, through an analogous method, whois and ph) are now first-class citizens in Firefox. You can type a Gopher URL and it "just works." You can bookmark them. You can interact with them. They appear no differently than any other web page. I created XPCOM components for a protocol object and a channel object, and because they're XPCOM-based they interact with the docshell just like every other native core component in Necko.

More to the point, I didn't need anyone's permission to do it. I just created a component and loaded it, and it became as "native" as anything else in the browser. Now I need "permission." I need APIs to do what I could do all by myself beforehand.

What I worry about is that Mozilla leadership is going to tick the top 10 add-ons or so off as working and call it a day, leaving me and other niche authors no way of getting ours to work. I don't think these three APIs are technically unrealistic, nor do they lack broad applicability; they're foundational for getting new types of protocol access into the browser, not just old legacy ones. You could innovate nearly anything network-based with these three proposals.

So how about it? I know you're reading. Are you going to make good on your promises to us little guys, or are we just screwed?

Planet MozillaMozilla Weekly Project Meeting

Mozilla Weekly Project Meeting The Monday Project Meeting
