Planet Mozilla: Rust for C++ programmers - part 2: control flow

If

The `if` statement is pretty much the same in Rust as in C++. One difference is that the braces are mandatory, but brackets around the expression being tested are not. Another is that `if` is an expression, so you can use it the same way as the ternary `?` operator in C++ (remember from last time that if the last expression in a block is not terminated by a semi-colon, then it becomes the value of the block). There is no ternary `?` in Rust. So, the following two functions do the same thing:
fn foo(x: int) -> &'static str {
    let mut result: &'static str;
    if x < 10 {
        result = "less than 10";
    } else {
        result = "10 or more";
    }
    return result;
}

fn bar(x: int) -> &'static str {
    if x < 10 {
        "less than 10"
    } else {
        "10 or more"
    }
}
The first is a fairly literal translation of what you might write in C++. The second is in better Rust style.

You can also write `let x = if ...`, etc.
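For example, here is a hypothetical `baz` (not from the original post) that binds the value of an `if` expression directly to a variable:
fn baz(x: int) -> &'static str {
    let result = if x < 10 {
        "less than 10"
    } else {
        "10 or more"
    };
    result
}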

Loops

Rust has while loops, again just like C++:
fn main() {
    let mut x = 10;
    while x > 0 {
        println!("Current value: {}", x);
        x -= 1;
    }
}
There is no do...while loop in Rust, but we do have the `loop` statement, which just loops forever:
fn main() {
    loop {
        println!("Just looping");   
    }
}
Rust has `break` and `continue` just like C++.
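As a small sketch (again, not from the original post), `break` turns the infinite `loop` above into a bounded one, and `continue` skips to the next iteration:
fn main() {
    let mut x = 0;
    loop {
        x += 1;
        if x > 10 {
            break;     // leave the loop entirely
        }
        if x % 2 == 0 {
            continue;  // skip the rest of this iteration
        }
        println!("Odd value: {}", x);
    }
}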

For loops

Rust also has `for` loops, but these are a bit different. Let's say you have a vector of ints and you want to print them all (we'll cover vectors/arrays, iterators, and generics in more detail in the future. For now, know that a `Vec<T>` is a sequence of `T`s and `iter()` returns an iterator from anything you might reasonably want to iterate over). A simple `for` loop would look like:
fn print_all(all: Vec<int>) {
    for a in all.iter() {
        println!("{}", a);
    }
}
If we want to loop over the indices of `all` (a bit more like a standard C++ for loop over an array), you could do
fn print_all(all: Vec<int>) {
    for i in range(0, all.len()) {
        println!("{}: {}", i, all.get(i));
    }
}
Hopefully, it is obvious what the `range` and `len` functions do.

Switch/Match

Rust has a match expression which is similar to a C++ switch statement, but much more powerful. This simple version should look pretty familiar:
fn print_some(x: int) {
    match x {
        0 => println!("x is zero"),
        1 => println!("x is one"),
        10 => println!("x is ten"),
        y => println!("x is something else {}", y),
    }
}
There are some syntactic differences - we use `=>` to go from the matched value to the expression to execute, and the match arms are separated by `,` (that last `,` is optional). There are also some semantic differences which are not so obvious: the matched patterns must be exhaustive, that is, all possible values of the matched expression (`x` in the above example) must be covered. Try removing the `y => ...` line and see what happens: the compiler rejects the program, because we only have matches for 0, 1, and 10 and obviously there are lots of other ints which don't get matched. In that last arm, `y` is bound to the value being matched (`x` in this case). We could also write:
fn print_some(x: int) {
    match x {
        x => println!("x is something else {}", x)
    }
}
Here the `x` in the match arm introduces a new variable which hides the argument `x`, just like declaring a variable in an inner scope.

If we don't want to name the variable, we can use `_` for an unnamed variable, which is like having a wildcard match. If we don't want to do anything, we can provide an empty branch:
fn print_some(x: int) {
    match x {
        0 => println!("x is zero"),
        1 => println!("x is one"),
        10 => println!("x is ten"),
        _ => {}
    }
}
Another semantic difference is that there is no fall through from one arm to the next.
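If an arm needs to execute several statements, you group them in a block, and execution still ends at that arm; a minimal sketch (the function name `print_zero` is made up for illustration):
fn print_zero(x: int) {
    match x {
        0 => {
            println!("x is zero");
            println!("and this arm does not fall through to the next");
        }
        _ => println!("x is not zero"),
    }
}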

We'll see in later posts that match is extremely powerful. For now I want to introduce just a couple more features - the 'or' operator for values and `if` clauses on arms. Hopefully an example is self-explanatory:
fn print_some_more(x: int) {
    match x {
        0 | 1 | 10 => println!("x is one of zero, one, or ten"),
        y if y < 20 => println!("x is less than 20, but not zero, one, or ten"),
        y if y == 200 => println!("x is 200 (but this is not very stylish)"),
        _ => {}
    }
}
Just like `if` expressions, `match` statements are actually expressions so we could re-write the last example as:
fn print_some_more(x: int) {
    let msg = match x {
        0 | 1 | 10 => "one of zero, one, or ten",
        y if y < 20 => "less than 20, but not zero, one, or ten",
        y if y == 200 => "200 (but this is not very stylish)",
        _ => "something else"
    };

    println!("x is {}", msg);
}
Note the semi-colon after the closing brace; that is because the `let` statement is a statement and must take the form `let msg = ...;`. We fill the rhs with a match expression (which doesn't usually need a semi-colon), but the `let` statement does. This catches me out all the time.

Motivation: Rust match statements avoid the common bugs with C++ switch statements - you can't forget a `break` and unintentionally fall through; if you add a case to an enum (more later on) the compiler will make sure it is covered by your `match` statement.

Method call

Finally, just a quick note that methods exist in Rust, similarly to C++. They are always called via the `.` operator (no `->`, more on this in another post). We saw a few examples above (`len`, `iter`). We'll go into more detail in the future about how they are defined and called. Most assumptions you might make from C++ or Java are probably correct.
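For example, a tiny sketch of what a method call looks like (using the `len` method we saw above, here on a string):
fn main() {
    let greeting = "Hello world!";
    println!("{} is {} bytes long", greeting, greeting.len());
}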

Planet Mozilla: Tnotify, a Terminal Notification

Hello guys, this is a short post to tell you about tnotify, the easiest, most attractive way to fire off notifications from your terminal. This application is available in the Ruby gem repository, you know why...

Planet Mozilla: 10 Year Blogaversary

10 years ago today, I started blogging.

At the time, I thought I was rather late to the blogging party. In hindsight, it seems not. Since then, through one change of host (thanks, Mozillazine!), there have been 1,463 posts (including this one), and 11,486 comments. Huge thanks to everyone who’s read and/or commented (well, if you commented without reading, not so much) in the last 10 years. It’s uncertain whether I or the blog will last another 10 (the Lord is in control of that) but here’s to however long it is!

Planet Mozilla: Firefox 29 beta 8 to beta 9

As the last beta for Desktop, this beta is smaller than the previous one. It mostly fixes the remaining critical bugs, including a critical Netflix bug with Silverlight. Some search configuration bugs with Bing and Yahoo have also been fixed.

  • 34 changesets
  • 56 files changed
  • 490 insertions
  • 270 deletions

Extension: Occurrences
cpp: 16
h: 10
js: 8
html: 8
xml: 3
jsm: 2
in: 2
css: 2
xul: 1
sjs: 1
nsi: 1
ini: 1
idl: 1

Module: Occurrences
content: 13
browser: 10
gfx: 7
js: 5
widget: 4
extensions: 4
toolkit: 3
security: 3
mobile: 3
xpcom: 2
netwerk: 1
ipc: 1

List of changesets:

Tim Taubert: Bug 966843 - Fix intermittent browser_615394-SSWindowState_events.js hangs. r=smacleod, a=test-only - 7938ad1529f9
Jeff Gilbert: Bug 892990 - Cap waiting time for wait-for-composite in WebGL reftests. r=bjacob, a=test-only - a27d9f49afac
Matt Woodrow: Bug 988309 - Add nsRenderingContext constructor for a DrawTarget. r=roc, a=sledru - 843dc71f729b
Matt Woodrow: Bug 991767 - Use Moz2D for printing surfaces. r=roc, a=sledru - 55422890fb8f
Nicholas Hurley: Bug 994344 - Prevent access of null mDB in seer. r=mcmanus, a=sledru - e24aafe4dffd
Nathan Froyd: Bug 993546 - Refactor malloc-wrapping memory reporter implementations. r=njn, a=abillings, ba=noidlchanges - c8f1a4f5ca4d
Mike Conley: Bug 996186 - Customize mode briefly makes titlebar transparent during transition. r=Gijs, a=sylvestre - caf2147c04f4
Gijs Kruitbosch: Bug 987792 - increase panel buttons' labels' clip rect height, r=jaws, a=sylvestre - 23311c5022ea
Mike Conley: Bug 996148 - With tabs in titlebar disabled, the private browsing mask can be incorrectly positioned in certain modes. r=MattN. a=gavin. - 5a2fd00c18e0
Richard Newman: Bug 941744 - Remove Send Tab intent filter from RELEASE_BUILDs. r=nalexander, a=sylvestre - b30d5dfe0421
Dan Glastonbury: Bug 966630 - Clamp level to TexImage operations to [0..31]. r=jgilbert, a=abillings - 17432cc5074d
Jan de Mooij: Bug 995607 - Fix an AutoDebugModeInvalidation issue. r=shu, a=abillings - 838a0ac967ae
Marty Rosenberg: Bug 982398 - Make sure a script isn't lazy before calling it. r=jandem, a=sledru - d5a00d84b8d6
Mike Connor: Bug 958883 - Use HTTPS for Yahoo searches, part 1. r=gavin, a=sledru - c5d91a0d3a2e
Mike Connor: Bug 958883 - Use HTTPS for Yahoo searches, part 2. r=mfinkle, a=sledru - 38f6b630f2cd
Mike Connor: Bug 984530 - Bing search tags are not working properly. r=mfinkle, a=sledru - 148684fea6d4
Mike Connor: Bug 983723 - Yahoo search tags are not working properly. r=mfinkle, a=sledru - 4873bad57984
Sean Stangl: Bug 856796 - Attempt detection of YARR bug. r=till, a=sledru - 51375c4deaf6
Matt Woodrow: Bug 995065 - Backout Bug 947227 on the release branches. r=jgilbert, a=sledru - cea0324a3ac6
Robert Strong: Bug 996960 - Add new RTL languages to locales.nsi - Uyghur ug and Urdu ur. r=ehsan, a=sledru - 92cae49290ae
Camilo Viecco: Bug 987816 - verifying with certificateUsageVerifyCA always return failure. r=dkeeler a=lsblakk - b192379d4d65
Masayuki Nakano: Bug 964718 - Don't dispatch DOM event if internal event doesn't need that. Otherwise, implement Duplicate() method. r=smaug, a=sylvestre - aaf5030cbf98
Olli Pettay: Bug 957666 - Send back a huge retry reconnection time in delayedServerEvents.sjs. r=ehsan, a=test-only - 3051963e5ec3
Nicolas B. Pierron: Bug 992968. r=efaust, a=sledru - df4f53346f73
Tim Taubert: Bug 947570 - Disable browser_597071.js until rewritten for Marionette. r=smacleod, a=test-only - 85e9ba2c9371
Patrick Brosset: Bug 993190 - Use the outline highlighter on fennec. r=miker, a=sledru - ddbb75065940
Mike Connor: Bug 997179 - Bing is not present in default search engines. r=mfinkle, a=sledru - bd52a0b3dc58
Tim Taubert: Bug 824021 - Don't clear set of windows to resurrect on write when receiving messages. r=yoric, a=test-only - 5f78c1ba85e3
Ralph Giles: Bug 995289 - Use fmod to wrap custom waveform phase. r=padenot, a=sledru - 3b65e1a1c52a
Monica Chew: Bug 997759 - Prefs for phishing and malware tables are comma-sep lists. r=gcp, ba=sledru - 034a63535df0
Sebastian Hengst: Bug 997625 - Sync panel: Checkbox for syncing passwords enabled (not disabled) while sync credentials need reauth. r=ttaubert, a=sylvestre - 0add478cf3b4
Bobby Holley: Bug 994335 - Null-check aProtoDoc in nsXULPrototypeScript::Serialize. r=smaug, a=sledru - 7f0030f9aac0
Matthew Noorenberghe: Bug 988156 - Backout fd0d0f6002dc (Bug 846566) for causing fullscreen videos on Netflix to appear black. r=smichaud, a=sylvestre - 9ce31b9cd02d
Tim Taubert: Bug 996159 - Fix sync config section on Windows with 125% DPI. r=gavin, a=sylvestre - bd6fdc0dcac0

r= means reviewed by
a= means uplift approved by

Previous changelogs:

Planet Mozilla: Rust for C++ programmers - an intermission - why Rust

I realise that in terms of learning Rust, I had jumped straight to the 'how' and skipped the 'why'. I guess I am in enough of a Rust bubble that I can't imagine why you wouldn't want to learn it. So, I will make a bit more of an effort to explain why things are how they are. Here I will try to give a bit of an overview/motivation.

If you are using C or C++, it is probably because you have to - either you need low-level access to the system, or need every last drop of performance, or both. Rust aims to offer the same level of abstraction around memory and the same performance, but be safer and make you more productive.

Concretely, there are many languages out there that you might prefer to C++: Java, Scala, Haskell, Python, and so forth, but you can't use them because either the level of abstraction is too high - you don't get direct access to memory, you are forced to use garbage collection, etc. - or there are performance issues - either performance is unpredictable or it's simply not fast enough. Rust does not force you to use garbage collection, and as in C++, you get raw pointers to memory to play with. Rust subscribes to the 'pay for what you use' philosophy of C++. If you don't use a feature, then you don't pay any performance overhead for its existence. Furthermore, all language features in Rust have predictable (and usually small) cost.

Whilst these constraints make Rust a (rare) viable alternative to C++, Rust also has benefits: it is memory safe - Rust's type system ensures that you don't get the kind of memory errors which are common in C++ - memory leaks, accessing un-initialised memory, dangling pointers - all are impossible in Rust. Furthermore, whenever other constraints allow, Rust strives to prevent other safety issues too - for example, all array indexing is bounds checked. Of course, if you want to avoid the cost, you can do so at the expense of safety - Rust allows you to do this in unsafe blocks, along with many other unsafe things. Crucially, Rust ensures that unsafety in unsafe blocks stays in unsafe blocks and can't affect the rest of your program. Finally, Rust takes many concepts from modern programming languages and introduces them to the systems language space. Hopefully, that makes programming in Rust more productive, efficient, and enjoyable.

I would like to motivate some of the language features from part 1. Local type inference is convenient and useful without sacrificing safety or performance (it's even in modern versions of C++ now). A minor convenience is that language items are consistently denoted by keyword (`fn`, `let`, etc.); this makes scanning by eye or by tools easier. In general, the syntax of Rust is simpler and more consistent than C++'s. The `println!` macro is safer than printf - the number of arguments is statically checked against the number of 'holes' in the string, and the arguments are type checked. This means you can't make the printf mistakes of printing memory as if it had a different type or addressing memory further down the stack by mistake. These are fairly minor things, but I hope they illustrate the philosophy behind the design of Rust.

Planet Mozilla: Grymt - because I didn't invent Grunt here

grymt is a python tool that takes a directory full of .html, .css and .js and prepares the html for optimal production use.

For a teaser:

  1. Look at the "input"

  2. Look at the "output" (Note! You have to right-click and view source)

So why did I write my own tool and not use Grunt?!

Glad you asked! The reason is simple: I couldn't get Grunt to work.

Grunt is a framework. It's a place where you say which "recipes" to execute and how. It's effectively a common config framework. Like make.
However, I tried to set up a bunch of recipes in my Gruntfile.js and most of them worked well individually but it was a hellish nightmare to get it all to work together just the way I want it.

For example, the grunt-contrib-uglify is fine for doing the minification but it doesn't work with concatenation and it doesn't deal with taking one input file and outputting to a different file.
Basically, I spent two evenings getting things to work but I could never get exactly what I wanted. So I wrote my own, and because I'm quite familiar with this kind of stuff, I did it in Python. Not because it's better than Node but just because I had it nearby and was able to build something more quickly.

So what sweet features do you get out of grymt?

  1. You can easily make an output file have a hash in the filename. E.g. vendor-$hash.min.js becomes vendor-64f7425.min.js and thus the filename is always unique but doesn't change in between deployments unless you change the files.

  2. It automatically notices which files already have been minified. E.g. no need to minify somelib.min.js but do minify otherlib.js.

  3. You can put $git_revision anywhere in your HTML and this gets expanded automatically. For example, view the source of buggy.peterbe.com and look at the first 20 lines.

  4. Images inside CSS get rewritten to have unique names (based on files' modified time) so they can be far-future cached aggressively too.

  5. You never have to write down any lists of file names in some Gruntfile.js equivalent file.

  6. It copies ALL files from a source directory. This is important in case you have something like this inside your javascript code: $('<img>').attr('src', 'picture.jpg') for example.

  7. You can choose to inline all the minified and concatenated CSS or javascript. Inlining CSS is neat for single page apps where you have a majority of primed cache hits. Instead of one .html and one .css you get just one .html and the amount of bytes is the same. Not having to do another HTTP request can save a lot of time on web performance.

  8. The generated directory (aka the "dist" directory) contains everything you need. It does not refer back to the source directory in any way. This means you can set up your apache/nginx to point directly at the root of your "dist" directory.

So what's the catch?

  1. It's not Grunt. It's not a framework. It does only what it does and if you want it to do more you have to work on grymt itself.

  2. The files you want to analyze, process and output all have to be in a sub-directory.
    Look at how I've laid out the files here in this project for example. ALL the files that you need are in one sub-directory called app. So, to run grymt I simply run: grymt app.

  3. The HTML files you throw into it have to be plain HTML files. No templates for server-side code.

How do you use it?

pip install grymt

Then you need a directory it can process, e.g. ./client/ (assumed to contain one or more .html files).

grymt ./client

For more options, check out

grymt --help

What's in the future of grymt?

If people like it and want to add features, I'm more than happy to accept pull requests. Some future potential feature work:

  • I haven't needed it immediately, yet, myself, but it would be nice to add things like coffeescript, less, sass etc into pre-processing hooks.

  • It would be easy to automatically generate and insert a reference to an appcache manifest. Since every file used and mentioned is noticed, we could very accurately generate an appcache file that is less prone to human error.

  • Spitting out some stats about the number of bytes saved and the number of files reduced.

Planet Mozilla: PoV Firefox

A nice project by Antonio Ospite:

It’s a device made of a programmable LED strip attached to a jump rope, which can be used with high exposure photography for Light Painting. The principle behind it is the same as those cool POV (Persistence of vision) displays.

Very cool!

(via Hack A Day)


Planet Mozilla: Web Components and you – dangers to avoid

Legos by C Slack

Web Components are a hot topic now. Creating widgets on the web that are part of the browser’s rendering flow is amazing. So is inheriting from and enhancing existing ones. Don’t like how a SELECT looks or works? Get it and override what you don’t like. With the web consumed on mobile devices performance is the main goal. Anything we can do to save on battery consumption and to keep interfaces responsive without being sluggish is a good thing to do.

Web Components are a natural evolution of HTML. HTML is too basic to allow us to create App interfaces. When we defined HTML5 we missed the opportunity to create the semantic widgets that exist in other UI libraries. Instead of looking at the class names people used in HTML, it might have been more prudent to look at what other RIA environments did. We limited the scope of new elements to what people already hacked together using JS and the DOM. Instead we should have aimed for parity with richer environments or desktop apps. But hey, hindsight is easy.

What I am more worried about right now is that there is a high chance that we could mess up Web Components. It is important for every web developer to speak up now and talk to the people who build browsers. We need to make this happen in a way that our end users benefit from Web Components the most. We need to ensure that we focus our excitement on the long-term goal of Web Components, not on how to use them right now when the platforms they run on aren’t quite ready yet.

What are the chances of messing up? There are a few. From what I gathered at several events and from various talks, I see the following dangers:

  • One browser solutions
  • Dependency on filler libraries
  • Creating inaccessible solutions
  • Hiding complex and inadequate solutions behind an element
  • Repeating the “just another plugin doing $x” mistakes

One browser solutions

This should be pretty obvious: things that only work in one browser are only good for that browser. They can only be used when this browser is the only one available in that environment. There is nothing wrong with pursuing this as a tech company. Apple shows that when you control the software and the environment you can create superb products people love. It is, however, a loss for the web as a whole as we just can not force people to have a certain browser or environment. This is against the whole concept of the web. Luckily enough, different browsers support Web Components (granted, at various levels of support). We should be diligent about asking for this to go on and go further. We need this, and a great concept like Web Components shouldn’t be reliant on one company supporting them. A lot of other web innovation that was heralded as a great solution for everything went away quickly when only one browser supported it. Shared technology is safer technology. Whilst it is true that more people having a stake in something makes it harder to deliver, it also means more eyeballs to predict issues. Overall, sharing efforts prevents an open technology from becoming a vehicle for a certain product.

Dependency on filler libraries

A few years ago we had a great and – at the same time – terrible idea: let’s fix the problems in browsers with JavaScript. Let’s fix the weirdness of the DOM by creating libraries like jQuery, prototype, mootools and others. Let’s fix layout quirks with CSS libraries. Let’s extend the functionality of CSS with preprocessors. Let’s simulate functionality of modern browsers in older browsers with polyfills.

All these aim at a simple goal: gloss over the differences in browsers and allow people to use future technologies right now. This is on the one hand a great concept: it empowers new developers to do things without having to worry about browser issues. It also allows any developer to play with up and coming technology before its release date. This means we can learn from developers what they want and need by monitoring how they implement interfaces.

But we seem to forget that these solutions were built to be stop-gaps, and we have become reliant on them. Developers don’t want to go back to a standard interface of DOM interaction once they got used to $(). What people don’t use, browser makers can cross off their already full schedules. That’s why a lot of standard proposals and even basic HTML5 features are missing in them. Why put effort into something developers don’t use? We fall into the trap of “this works now, we have this”, which fails to help us once performance becomes an issue. Many jQuery solutions on the desktop fail to perform well on mobile devices. Not because of jQuery itself, but because of how we used it.

Which leads me to Web Components solutions like X-Tags, Polymer and Brick. These are great as they make Web Components available right now and across various browsers. Using them gives us a glimpse of how amazing the future for us is. We need to ensure that we don’t become dependent on them. Instead we need to keep our eye on moving on with implementing the core functionality in browsers. Libraries are tools to get things done now. We should allow them to become redundant.

For now, these frameworks are small, nimble and perform well. That can change as all software tends to grow over time. In an environment strapped for resources like a $25 smartphone or embedded systems in a TV set every byte is a prisoner. Any code that is there to support IE8 is nothing but dead weight.

Creating inaccessible solutions

Let’s face facts: the average web developer is more confused about accessibility than excited by it. There are many reasons for this, none of which are worth bringing up here. The fact remains that an inaccessible interface doesn’t help anyone. We tout Flash as being evil as it blocks out people. Yet we build widgets that are not keyboard accessible. We fail to provide proper labeling. We make things too hard to use and expect the steady hand of a brain surgeon as we create tight interaction boundaries. Luckily enough, there is a new excitement about accessibility and Web Components. We have the chance to do something new and do it right this time. This means we should communicate with people of different abilities and experts in the field. Let’s not just convert our jQuery plugins to Web Components verbatim. Let’s start fresh.

Hiding complex and inadequate solutions behind an element

In essence, Web Components allow you to write custom elements that do a lot more than HTML allows you to do now. This is great, as it makes HTML extensible (and not in the weird XHTML2 way). It can also be dangerous, as it is simple to hide a lot of inefficient code in a component, much like any abstraction does. Just because we can make everything into an element now, doesn’t mean we should. What goes into a component should be exceptional code. It should perform exceptionally well and have the least dependencies. Let’s not create lots of great looking components full of great features that under the hood are slow and hard to maintain. Just because you can’t see it doesn’t mean the rules don’t apply.

Repeating the “just another plugin doing $x” mistake

You can create your own carousel using Web Components. That doesn’t mean though that you have to. Chances are that someone already built one and the inheritance model of Web Components allows you to re-use this work. Just take it and tweak it to your personal needs. If you look for jQuery plugins that are image carousels right now, you had better set aside some time. There are a lot out there – in various states of support and maintenance. It is simple to write one, but hard to maintain.

Writing a good widget is much harder than it looks. Let’s not create a lot of components because we can. Instead let’s pool our research and findings and build a few that do the job and override features as needed. Core components will have to change over time to cater for different environmental needs. That can only happen when we have a set of them, tested, proven and well architected.

Summary

I am super excited about this and I can see a bright future for the web ahead. This involves all of us and I would love Flex developers to take a look at what we do here and bring their experience in. We need a rich web, and I don’t see creating DOM based widgets to be the solution for that for much longer with the diversity of devices ahead.

Planet Mozilla: Django Eadred v0.3 released! Django app for generating sample data.

Django Eadred gives you some scaffolding for generating sample data to make it easier for new contributors to get up and running quickly, bootstrapping required database data, and generating large amounts of random data for testing graphs and things like that.

The v0.3 release is a small one, but good:

  • Added support for Python 3.3 and later (Thank you Trey Hunner!)
  • Fixed test infrastructure and added tox support.
  • Fixed documentation.

There are no backwards-compatibility problems with previous versions.

To update, do:

pip install -U eadred

Planet Mozilla: Rust for C++ programmers - part 1: Hello world

This is the first in a series of blog posts (none written yet) which aim to help experienced C++ programmers learn Rust. Expect updates to be sporadic at best. In this first blog post we'll just get setup and do a few super basic things. Much better resources are at the tutorial and reference manual.

First you need to install Rust. You can download a nightly build from http://www.rust-lang.org/install.html (I recommend the nightlies rather than 'stable' versions - the nightlies are stable in that they won't crash too much (no more than the stable versions) and you're going to have to get used to Rust evolving under you sooner or later anyway). Assuming you manage to install things properly, you should then have a `rustc` command available to you. Test it with `rustc -v`.

Now for our first program. Create a file, copy and paste the following into it and save it as `hello.rs` or something equally imaginative.
fn main() {
    println!("Hello world!");
}
Compile this using `rustc hello.rs`, and then run `./hello`. It should display the expected greeting \o/

Two compiler options you should know are `-o ex_name` to specify the name of the executable and `-g` to output debug info; you can then debug as expected using gdb or lldb, etc. Use `-h` to show other options.

OK, back to the code. A few interesting points - we use `fn` to define a function or method. `main()` is the default entry point for our programs (we'll leave program args for later). There are no separate declarations or header files as with C++. `println!` is Rust's equivalent of printf. The `!` means that it is a macro; for now you can just treat it like a regular function. A subset of the standard library is available without needing to be explicitly imported/included (we'll talk about that later). The `println!` macro is included as part of that subset.

Let's change our example a little bit:
fn main() {
    let world = "world";
    println!("Hello {}!", world);
}
`let` is used to introduce a variable; `world` is the variable name and it is a string (technically the type is `&'static str`, but more on that in a later post). We don't need to specify the type, it will be inferred for us.

Using `{}` in the `println!` statement is like using `%s` in printf. In fact, it is a bit more general than that because Rust will try to convert the variable to a string if it is not one already*. You can easily play around with this sort of thing - try multiple strings and using numbers (integer and float literals will work).
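For example, a few variations to try (a small sketch in the same vein, with made-up variable names):
fn main() {
    let name = "world";
    let count = 3;
    println!("Hello {}, {} times!", name, count); // a string and an integer
    println!("Pi is roughly {}", 3.14);           // a float literal
}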

If you like, you can explicitly give the type of `world`:
    let world: &'static str = "world";
In C++ we write `T x` to declare a variable `x` with type `T`. In Rust we write `x: T`, whether in `let` statements or function signatures, etc. Mostly we omit explicit types in `let` statements, but they are required for function arguments. Let's add another function to see it work:
fn foo(_x: &'static str) -> &'static str {
    "world"
}

fn main() {
    println!("Hello {}!", foo("bar"));
}
The function `foo` has a single argument `_x` which is a string literal (we pass it "bar" from `main`). We don't actually use that argument in `foo`. Usually, Rust will warn us about this. By prefixing the argument name with `_` we avoid these warnings. In fact, we don't need to name the argument at all, we could just use `_`.

The return type for a function is given after `->`. If the function doesn't return anything (a void function in C++), we don't need to give a return type at all (as in `main`). If you want to be super-explicit, you can write `-> ()`, `()` is the void type in Rust. `foo` returns a string literal.

You don't need the `return` keyword in Rust: if the last expression in a function body (or any other body, we'll see more of this later) is not finished with a semicolon, then it is the return value. So `foo` will always return "world". The `return` keyword still exists so we can do early returns. You can replace `"world"` with `return "world";` and it will have the same effect.
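For example, an early return in a hypothetical variant of `foo`:
fn foo(x: int) -> &'static str {
    if x < 10 {
        return "less than 10"; // early return uses the keyword and a semi-colon
    }
    "10 or more" // the last expression, no semi-colon: this is the return value
}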



* This is a programmer specified conversion which uses the `Show` trait, which works a bit like `toString` in Java. You can also use `{:?}` which gives a compiler generated representation which is sometimes useful for debugging. As with printf, there are many other options.

Planet Mozilla: Mozilla's pushes - March 2014

Here's March's monthly analysis of the pushes to our Mozilla development trees (read about Gaia merges at the end of the blog post).
You can load the data as an HTML page or as a json file.

TRENDS

March (just as February did before it) set a new record for the highest number of pushes EVER.
We will soon have 8,000 pushes/month as our norm.
The only noticeable change in the distribution of pushes is that non-integration trees had a higher share of the cake (17.80% on Mar. vs 14.60% on Feb.).

HIGHLIGHTS

  • 7,939 pushes
    • NEW RECORD
  • 284 pushes/day (average)
    • NEW RECORD
  • Highest number of pushes/day: 435 pushes on March 4th
    • NEW RECORD
  • 16.07 pushes/hour (average)

GENERAL REMARKS

Try keeps on having around 50% of all the pushes.
The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 30% of all the pushes.

RECORDS

  • March 2014 was the month with most pushes (7,939 pushes)
  • March 2014 has the highest pushes/day average with 284 pushes/day
  • February 2014 has the highest "pushes-per-hour" average with 16.57 pushes/hour
  • March 4th, 2014 had the highest number of pushes in one day with 435 pushes



DISCLAIMERS

  • The data collected prior to 2014 could be slightly off since different data collection methods were used
  • Gaia pushes are more or less counted. I will write a blog post about it in the near future.

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Planet Mozilla: This week in Mozilla RelEng – April 17th, 2014

Major Highlights:

Completed work (resolution is ‘FIXED’):

In progress work (unresolved and not assigned to nobody):

Planet Mozilla: Firefox OS and Academic Programs

Although Mozilla feels almost like a household name at this point, it is a relatively small organization – tiny, actually – compared to the companies that ship similar types of software [1]. We must, however, have the impact of a much larger entity in order to ensure that the internet stays an open platform accessible to all.

Producing consumer software which influences the browser and smartphone OS markets in specific ways is how we make that impact. Shipping that software requires teams of people to design, build and test it, and all the countless other aspects of the release process. We can hire some of these people, but remember: we’re relatively tiny. We cannot compete with multi-billion-dollar mega-companies while operating in traditional ways. Mozilla has to be far more than the sum of its paid-employee parts in order to accomplish audaciously ambitious things.

Open source code and open communication allow participation and contribution from people that are not employees. And this is where opportunities lie for both Mozilla and universities.

For universities, undergraduates can earn credit, get real world experience, and ship software to hundreds of millions of people. Graduate researchers can break ground in areas where their findings can be applied to real world problems, and where they can see the impact of their work in the hands of people around the world. And students of any kind can participate before, during and after any involvement that is formally part of their school program.

For Mozilla, we receive contributions that help move our products and projects forward, often in areas that aren’t getting enough attention only because we don’t have the resources to do so. We get an influx of new ideas and new directions. We gain awesome contributors and can educate tomorrow’s technology workers about our mission.

I’ve been working with a few different programs recently to increase student involvement in Firefox OS:

  • Portland State University:  The PSU CS Capstone program, run by Prof. Warren Harrison, has teams of students tackling projects for open source groups. The teams are responsible for all parts of the software life-cycle during the project. In the spring of 2013, a group of five students implemented an example messaging app using Persona and Firebase, documenting the challenges of Web platform development and the Firefox OS  development/debugging workflow. This year’s group will implement a feature inside Firefox OS itself.
  • Facebook Open Academy: This is a program coordinated by Stanford and Facebook, that puts teams from multiple universities together to build something proposed by an existing open source project. The Firefox OS team includes students from Carnegie-Mellon, Purdue, Harvard, Columbia in the US, and Tampere UT in Finland. They’re adding a new feature to Firefox OS which allows you to share apps directly between devices using NFC and Bluetooth. With 14 members across five universities, this team is collaborating via Github, Google Groups, IRC and weekly meetings for both the front-end and back-end parts, providing experience with remote working, group coordination and cross-team collaboration.
  • University of Michigan: Prof. Z. Morley Mao’s mobile research group has started looking at device and network performance in Firefox OS. They’ve got a stack of phones and SIM cards, and we’re working with them to find ways to improve battery life and network efficiency on our devices. They’ve started a collection of focus areas and related research on the Mozilla wiki.

If you’re at an academic institution and would like to learn more about how to get your department or your students involved, or if you’re a Mozillian who wants to coordinate a project with your alma mater, email me!

 

1. Mozilla has ~1000 employees. According to Wikipedia, Google has ~50,000 employees, Apple ~80,000 and Microsoft ~100,000.


Planet Mozilla: The day Santa died

Today, just as we prepare for an Easter break, I heard my youngest daughter arguing with her older sister, and then the younger came running to me.

Dad, does the Easter Bunny and Santa exist?

As a parent, or anyone speaking to a child, you don’t want to take away their dreams or hopes. At the same time, it’s a rough world, so you see it as your responsibility to teach them as much as possible about how the world works so they both know more but also, to be honest, avoid the risk of them being ridiculed for not knowing something.

So should I leave it be and then she’ll eventually find out? Or should I tell her the truth, with the risk of making her sad? I do believe in honesty but at the same time I don’t want to be cynical.

I thought a bit about it, what to say and how to approach the situation. After some thinking, I decided to ask her what she thought:

What do you think? Do you believe they exist?

Yes. Or… I think so.

Have you ever seen any of them in real life?

Well… We had a bunny at the kindergarten once. Who wore shoes. And then you were Santa once, daddy.

Right. So do you believe they exist?

Hmm… No, not really, I guess.
sounding more like she wished they existed than actually thinking they do

And then she bounced off, having learned another lesson about life.

Planet Mozilla: Why doing visual refreshes of Firefox is hard

We’re getting closer and closer to releasing Australis with Firefox 29, and that gives me more time to write something that’s been on my mind the last couple of weeks/months. Extra impetus was provided by sentiment along the lines of “how did you possibly miss this / think fix X was a good idea?” from some people outside the core development team, responding to some of our changes.

In this post, I’d like to give you an idea of the number of combinations of options, configurations, themes, add-ons, fonts and styles. It is enormous. Firefox generally tries to fit in with your operating system as best it can, and that means we have to pay attention to lots of things. And yes, that means sometimes we miss things. Here’s a breakdown of some of the things I’ve seen fly by as we made Australis, all linked to bugs specific to particular scenarios (there are 54 individual bugs linked, the majority of which were fixed for 29).

Firefox is available on three main (tier-1) platforms: Windows, Linux, and OS X.

All three platforms support lightweight themes, of which we support light and dark text variants. On light text lightweight themes, we invert the text and icons to be bright (which usually means the theme itself has a dark background). Interactions between these themes and the OS are not always the same everywhere, which leads to bugs.

Different toolbars like the menubar and the bookmarks toolbar can be toggled on and off (which sometimes makes certain ideas more difficult), and the menubar has an autohide state, which is new on Linux and caused specific bugs there.

And although we normally always show the tabbar and the navbar, there are popup windows where we don’t (toolbar=no), which, you guessed it, causes bugs.

Then we have per-window private browsing, which has an indicator which makes things ‘fun’ (and soon private browsing will look even more different from normal browsing).

Of course, while we stick to English, layout direction is more or less fixed, but we ship both LTR and RTL locales on all platforms, which changes the order of things, which frequently leads to bugs.

Then there’s the padding that we added for “customize mode”, which affects layout of the toolbars and the (‘fake’) titlebar, which had its own problems.

Some issues are specific to pinned or overflowing tabs (sometimes even particular tabs), as well as panorama/tab groups.

Beyond that, styling is somewhat platform-specific, each with their own quirks:

OS X

OS X is, in a certain sense, “easiest” because the OS doesn’t have a lot of options that mess with things (font size, for example, isn’t easily configurable). But there’s still some variation:

  • Lion vs. pre-lion: 10.6, which we still support, has no fullscreen button in the titlebar (unlike 10.7-10.9) and has no concept of “Lion fullscreen”.
  • Spaces: causes odd bugs with panels.
  • HiDPI (“retina”): this causes bugs / missed cases. Add external displays which might not be hidpi, and you get even odder bugs (this one’s 10.9-only, too, it seems!).
  • RTL. Coupled with retina –> more bugs.
  • 10.9 broke more stuff.
  • Titlebar can be turned on/off now: cue more bugs.

Linux

Linux really means “Unix that has GTK”, as far as theming is concerned. Unfortunately that ends up being a wide spectrum of cases:

Windows

Windows really means “Windows XP, Windows Vista, Windows 7, Windows 8(.1) and all the corresponding Windows Server versions”. Which then means:

This list doesn’t include bugs caused or revealed by add-ons, but of course those also add interesting behaviour to the mix.

All in all, it’s been an interesting first year as an employee at Mozilla (I started April 1st, 2013), and I can’t wait to see all our changes ship: Firefox 29 is scheduled for release on April 29th.

Planet Mozilla: The Case for the Ubiquitous Mobile Web

As we use more and more mobile devices in our lives, an open platform is becoming more, not less important.

In an article "The decline of the Mobile Web", Chris Dixon worries about the future of the Web, as despite the dramatic uptake in mobile device usage, mobile Web usage is rapidly declining in favor of apps.

John Gruber of Daring Fireball argues that this is a success of good over bad user experiences, and suggests we celebrate all this as an evolution of the Web towards a dumb pipe delivering data to whatever device and platform provides the user experience.

Apples and Oranges

Unfortunately, Gruber is drawing the wrong conclusions from a good premise. I think no one would disagree that users do (and should) gravitate towards good experiences. But when he weighs the UX of apps against not-mobile-oriented websites accessed via dedicated browsers, he's comparing apples to oranges and draws the conclusion that the Web's underlying technologies are therefore inherently inferior.

And besides, Web technology doesn't matter to him, as long as the user-facing app uses something akin to HTTP in the background he'll still count it as the Web:

"App Stores are walled gardens, but the apps themselves are just clients to the open web/internet."

This is a very strange use of the word "open". There's nothing open about relegating the Web to acting as a "dumb pipe" like the underlying communication protocols are supposed to be.

As it stands, Gruber is splitting hairs - calling it the "Web" just because HTTP is involved - but he's missing the point. Users don't care about what the data pipe looks like, they care about their window into that data.

The Ubiquitous Walled Garden

Why it is harmful to redefine the term Open Web like this becomes clear when we start our discussion from a level playing field. We should ask:

Assuming the same good user experience, is an application written on a proprietary platform just as good for the user as one written on an open stack?

Consider Evernote CEO Phil Libin's recent prediction that we're moving towards a network of connected devices, where the experiences are not enclosed in apps, but are "just there".1

That's a wonderful world, just so long as you only use devices blessed by a company (iPhone, iWatch, iFridge and iTV). The walled garden is beautiful on the inside, if you enjoy the exact experiences they deem suitable for you (and that don't interfere with their revenue models, etc. etc.) Who owns the platform, owns the user. And while "users flock to the best experiences", the worst part is the owner of the platform can choke off innovation whenever they feel threatened by it, and the users may never know what they're missing out on.

The only way around this is by embracing a shared development platform that is not owned by any one competitor (hence, open). And this platform is the Web.

Will it suffice if all those services run on closed platforms, just so long as they speak HTTP in the background? No. What makes the Web open is that you can connect to its resources from anywhere, with any device, so long as a browser exists for the platform2. The Web, if it merely acts as a delivery vehicle for data, is not open anymore and cannot act as the level playing field for innovation and choice that it is meant to be.

Time for more, not less openness

This is, in a nutshell, why Firefox OS is such an important project. It's not yet another proprietary mobile app platform. In Firefox OS, the open Web is the platform.

Firefox OS is about weaving the open Web into the very fabric of the mobile landscape. It's about enabling the next generation of makers to hack their devices to their heart's content. It's about providing users with a platform that fosters actual innovation rather than giving them the illusion of choice.

And that's a user experience worth fighting for.


  1. He is, of course, not the first to predict such a thing, but with “smart watches” and such being released left and right, we’re certainly closer than ever to this reality.

  2. Or the platform is the browser engine, as is the case with Firefox OS.

Planet Mozilla: Kiss our old Mac Mini test pool goodbye

Today we have stopped running test jobs on our old Revision 3 Mac Mini test pool (see previous announcement).

There's a very, very long list of people that have been involved in this project (see bug 864866).
I want to thank ahal, fgomes, jgriffin, jmaher, jrmuizel and rail for their help on the last mile.

We're very happy to have finally decommissioned this non-datacenter-friendly infrastructure.

A bit of history

These minis were purchased back in early 2010 and we bought more than 300 of them.
At first, we ran Fedora 12, Fedora 12 x64, Windows XP, Windows 7 and Mac 10.5 on them. Later on we also added 10.6 to the mix (if my memory doesn't fail me).

Somewhere in 2012, we moved the Mac 10.6 testing to the new revision 4 Mac server minis and deprecated the 10.5 rev3 testing pool. We then re-purposed those machines to increase the Windows and the Fedora pools.

By May of 2013, we stopped running Windows on them.
During 2013, we moved a lot of the Fedora testing to EC2.
Now, we managed to move the B2G reftests and Firefox debug mochitest-browser-chrome to EC2.

NOTE: I hope my memory does not fail me

Delivery of the Mac minis (photo credit to joduinn)
Racked at the datacenter (photo credit to joduinn)



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Planet Mozilla: Community and Volunteers

It was suggested that I cross-post this from mozilla.dev.planning onto my blog. This is in reply to a thread entitled "Proposal: Move Thunderbird and SeaMonkey to mozilla-central" about (essentially) merging comm-central back into mozilla-central. There have been many technical concerns raised in the thread (that I'm not going to rehash here). What I'm more interested in is the lack of community feeling there. As Nicholas Nethercote said in that thread:
"I am surprised [...] by how heartless the discussion has been."
I should note that I did have some help editing this down from my original post. Turns out I tend to write inflammatory statements that don't help get my point across. Who knew? Anyway, thanks to all of you who helped me out there!

My full post is below (with a few links added and plaintext formatting converted to HTML formatting):
On Monday, April 14, 2014 4:52:53 PM UTC-4, Nicholas Nethercote wrote:
> The technical aspects of this decision have been discussed to death,
> so I won't say anything about that. I am surprised, however, by how
> heartless the discussion has been.

I agree, the technical bits here seem to have solutions suggested by Joshua and others, but the non-technical parts of this discussion have left me feeling disheartened and confused with the Mozilla community.

I find it ironic/amusing/sad/upsetting that a few threads above this is a thread entitled "Contributor pathways, engagement points and bug mentoring" while in this thread I see community contributors being blocked at every turn!

Here I don't see people attempting to foster a community by putting their best foot forward. I see people trying to get their job done; with an attitude of "if this doesn't help me, get it outta my way!" I don't think this is the right way to grow a community. I don't think this is how Mozilla HAS grown its community. I don't think it's in line with what Mozilla expects from its community members (both employees and volunteers!)

Personally, I dislike the amount of focus on Mozilla Corporation goals in this thread. Can we have a discussion as part of a larger community? Why must it focus on Corporate goals? I'm not part of the corporation, I don't really care what its goals are or are not. I care about Mozilla, I care about providing high-quality, free, open source software to improve the experience of the Internet for everyone. And no, I'm not talking about Firefox. I'm talking about Thunderbird. I understand that Mozilla's goals are currently Firefox and Firefox OS, but these are not my personal goals.

At the Summit I had a few conversations with people about "on-boarding" new employees and getting them to understand how the community works and that interacting with the community in a positive manner is an important part of Mozilla. I don't remember the exact context, but part of it was that it is important that new employees don't think of it as "How can I use the community?", for that implies taking advantage of them, but "How can I work with the community?"

Please don't see this as an "employees vs. volunteers" argument. I believe that I'm expected to live up to these same goals. If I, as a volunteer, can help an employee achieve his goals, I'm more than willing - no, I'm EXPECTED - to do that. I think this is a two-way relationship that must be fostered. It has seemed to me that, over the couple of years I've been hanging around here, there's been less and less focus on the community and more and more on the Corporation.

I understand Thunderbird and SeaMonkey may not be important to you, but it is important to me! (And others who contribute to the Thunderbird/SeaMonkey community, including employees who contribute on their spare time.) When Mozilla stopped directly supporting development of Thunderbird it was widely announced that "Thunderbird is dead!". We, as part of the Mozilla community, have been fighting to prove this wrong. Could you please respect our efforts? Merging c-c into m-c will help us focus our efforts on building a great product instead of spending significant effort on keeping a dying one on life-support. (And prove to all that "Thunderbird is dead!" was just a sensational headline.)
I don't have much else to say beyond that (besides thanks for reading this far!)

Planet Mozilla: Who We Are And How We Should Be

“Every kingdom divided against itself will be ruined, and every city or household divided against itself will not stand.” — Jesus

It has been said that “Mozilla has a long history of gathering people with a wide diversity of political, social, and religious beliefs to work with Mozilla.” This is very true (although perhaps not all beliefs are represented in the proportions they are in the wider world). And so, like any collection of people who agree on some things and disagree on others, we have historically needed to figure out how that works in practice, and how we can avoid being a “kingdom divided”.

Our most recent attempt to write this down was the Community Participation Guidelines. As I see it, the principle behind the CPGs was, in regard to non-mission things: leave it outside. We agreed to agree on the mission, and agreed to disagree on everything else. And, the hope was, that created a safe space for everyone to collaborate on what we agreed on, and put our combined efforts into keeping the Internet open and free.

That principle has taken a few knocks recently, and from more than one direction.

I suggest that, to move forward, we need to again figure out, as Debbie Cohen describes it, “how we are going to be, together”. In TRIBE terms, we need a Designed Alliance. And we need to understand its consequences, commit to it as a united community, and back it up forcefully when challenged. Is that CPG principle still the right one? Are the CPGs the best expression of it?

But before we figure out how to be, we need to figure out who we are. What is the mission around which we are uniting? What’s included, and what’s excluded? Does Mozilla have a strict or expansive interpretation of the Mozilla Manifesto? I have read many articles over the past few weeks which simply assume the answer to this question – and go on to draw quite far-reaching conclusions. But the assumptions made in various quarters have been significantly different, and therefore so have the conclusions.

Now everyone has had a chance to take a breath after recent events, and with an interim MoCo CEO in place and Mozilla moving forward, I think it’s time to start this conversation. I hope to post more over the next few days about who I think we are and how I think we should be, and I encourage others to do the same.

Planet Mozilla: Weekly review 2014-04-16

Accomplishments & status:

Planet Mozilla: Browser inconsistencies: animated GIF and drawImage()

I just got asked why Firefox doesn’t do the same thing as Chrome does when you copy a GIF into a canvas element using drawImage(). The short answer is: Chrome’s behaviour is not according to the spec. Chrome copies the currently visible frame of the GIF whereas Firefox copies the first frame. The latter is consistent with the spec.

You can see the behaviour at this demo page:
animated GIF on canvas

Here’s the bug on Firefox and the bug request in WebKit to make it consistent; thanks to Peter Kasting, there is also a bug filed for Blink.

The only way to make this work across browsers seems to be to convert the GIF into its frames and play them in a canvas, much like jsGIF does.

Planet Mozilla: Changes Firefox 29 beta7 to beta8

A bigger changelog than I would have liked. However, it is mainly about top crashes, polishing Australis and some sync bugs/improvements.

A few webapp bugs have also been fixed.

  • 57 changesets
  • 93 files changed
  • 1892 insertions
  • 409 deletions

Extension: Occurrences
js: 25
css: 13
cpp: 8
xml: 6
jsm: 6
java: 5
xul: 4
ini: 3
mn: 2
html: 2
h: 2
json: 1
inc: 1
c: 1

Module: Occurrences
browser: 36
services: 14
mobile: 11
content: 3
js: 2
image: 2
gfx: 2
dom: 2
toolkit: 1
testing: 1
netwerk: 1
mozglue: 1
modules: 1
layout: 1
accessible: 1

List of changesets:

Gavin Sharp - Bug 995041 - Properly disable the problematic portions of browser_aboutHome.js. a=test-only - b1e9827af66f
Matthew Noorenberghe - Bug 992270 - ignoreAllUncaughtExceptions in the about:home test of browser_google_behavior.js. r=gavin, a=test-only - 0e107cfcd3fd
Tim Taubert - No Bug - Fix browser_net_timing-division.js leak when run as the last test on a CLOSED TREE. rs=past, a=test-only - f91bdb05883b
Ryan VanderMeulen - Bug 994798 - Disable browser_frameworker.js on Linux debug for frequent timeouts. a=test-only - e7806ccfe24f
JW Wang - Bug 945475 - Clear |mVideoFrameContainer| to stop staled callbacks which give incorrect videoWidth/videoHeight. r=roc, a=sledru - eaf92a872145
Matt Woodrow - Bug 991767 - Use Moz2D for printing surfaces. r=roc, a=sledru - 5be8148fea1f
Ryan VanderMeulen - Backed out changeset 5be8148fea1f (Bug 991767) for bustage. - 8d21ce8b440a
Chris Karlof - Bug 989549 - Call signOut() in FxAccountsClient.jsm from signOut() in FxAccounts.jsm. r=markh, a=sledru - 8b66928d0515
Mark Hammond - Bug 986636 - Use icon instead of [?] on sync prefs when master password locked. r=ttaubert, a=sledru - 785fb5b58ae5
Mark Hammond - Bug 985145 - Make node reassignment work correctly with FxA. r=rnewman, a=sledru - dcbe04c7a069
Ryan VanderMeulen - Bug 994798 - Disable browser_frameworker_sandbox.js on Linux debug for frequent timeouts. a=test-only - 66f9ad218574
Mike de Boer - [Australis] Bug 477948: Keyhole back/ forward button for Linux. r=jaws, a=sledru. - 871c60982cac
Mike de Boer - [Australis] Bug 477948: update tests for new keyhole on Linux. r=jaws, a=sledru. - ff6f90421768
Dão Gottwald - Bug 989701 - Set -moz-box-align:center for #urlbar-container and reduce the url bar's vertical margin in order to correctly align the urlbar-back-button-clip-path. r=mdeboer, a=sledru. - 4b355a2745cd
Jared Wein - Bug 971034 - Australis - [Windows] Zoom reset button isn't as tall as other zoom buttons in toolbar. r=mikedeboer, a=sledru. - 9d3d5c2225aa
Jared Wein - Bug 967110 - Add an inverted help icon and arrow to show on the menu panel anchor when the Help subview is open. r=mconley, a=sledru. - 5dcc734a5736
Mike Conley - Bug 989609 - Dynamically added toolbars with API-created widgets should not break customize mode. r=Unfocused,mdeboer, a=sledru. - 9dd4a9d6739a
Jared Wein - Bug 971630 - Australis: Far right/left selected overflow tabs look bad on session restore. r=mconley, a=sledru. - 9798420b26fc
Jared Wein - Bug 993421 - Only set the position:relative on the PanelUI-footer-inner when a subview is showing. r=Gijs, a=sledru. - 5aa6eb09fe62
Mike Conley - Bug 992373 - Items in the panel jump up slightly when the customization transition finishes. r=Gijs, mikedeboer, a=sledru. - 7c5fb4327c30
Jared Wein - Bug 993299 - Australis - There is no minimum width set in customization mode. r=mconley, a=sledru. - 3537a7b4b992
Mike Conley - Bug 990218 - Simplify OS X's titlebar styling rules for tabs in titlebar. r=MattN, feedback=Gijs, a=sledru. - 163d2250a03e
Mike Conley - Bug 994758 - Rename tabHeight define to tabMinHeight, which is more accurate. r=dao, a=sledru. - 9eca66ee5b10
Mike de Boer - Bug 989466: revert clip-path change made in Bug 893661 to fix font scaling. r=dao, a=sledru. - f1c211a4714d
Timothy Nikkel - Bug 944353. If we've encountered an error while decoding an image and the main thread has asked to do more decoding of that image before the main thread has acknowledged the error then refuse to decode more. r=seth a=sledru - 1cf083a2ffe7
Timothy Nikkel - Bug 944353. If we've had a decoder error then the image is not usable. r=seth a=sledru - d27facd3d35d
Blair McBride - Bug 990979 - about:welcomeback is missing a CSS file on aero variant of the Windows theme. r=MattN a=sylvestre - 31aca79a5126
Matthew Noorenberghe - Bug 946987 - Add 2x tab images for Windows and use them for 1.25dppx and higher. r=mconley a=Sylvestre - 975b76d0b1c0
Matthew Noorenberghe - Bug 980220 - UITour: [Linux] Change the highlight style to have better fallback without an X compositor. r=Unfocused, ui-r=mmaslaney a=sylvestre - 8e6041de3ce7
Gijs Kruitbosch - Bug 989289 - only migrate builtin toolbars, also migrate toolbox, r=mconley a=sylvestre - e946bc71ae2b
Gijs Kruitbosch - Backed out changeset 9d3d5c2225aa (Bug 971034) for stretching all the icons when navbar includes add-ons with overly large toolbar icons, a=backout, rs=mconley,MattN,gijs,jaws,Unfocused - 915700dc5198
Mark Hammond - Bug 990834 (part 1) - minor refactor of hawk tests. r=ckarlof, a=sylvestre - deb83f2f75fa
Mark Hammond - Bug 990834 (part 2) - Add support/tweak retry and backoff header support to hawk and tokenserverclient. r=rnewman, a=sylvestre - 329a2a180a8b
Mark Hammond - Bug 990834 (part 3) - Fix handling of hawk errors. r=ckarlof, a=sylvestre - b074e386a410
Jonathan Watt - Bug 991400 - Prevent gfxPath instances from being created on the stack (they're refcounted). r=dholbert, a=sylvestre - 679aa869f39f
Gijs Kruitbosch - Bug 971034 - adjust min-height of zoom control reset button only, r=jaws, a=sylvestre - c6f80ae1ee23
Gijs Kruitbosch - Bug 992747 - toolbar visibility setting doesn't work for non-customizable toolbars, r=jaws, a=sylvestre - 04e63b14de25
Gijs Kruitbosch - Bug 977572 - catch drag end and drop events from bookmarks drag handler so we can clean up, r=mak, a=sylvestre - 5eb91b9f89ec
Gijs Kruitbosch - Bug 993322 - fix widgets not showing up in toolbox, r=mconley, a=sylvestre - c1bbbe2e1309
Mike Conley - Bug 973694 - Fix glitchy-looking private browsing indicator on OS X when tabs in titlebar are disabled. r=MattN, a=sylvestre - d20804c31f61
Jeff Muizelaar - Bug 969226 - Check if there is enough data to read u32 to avoid buffer overflow. r=bgirard, a=abillings - 05c933823ad8
Valentin Gosu - Bug 991471 - Fix offset when setting host on URL. r=mcmanus, a=abillings - 1be8ef9bf661
Myk Melez - Bug 989294 - Release index when app is uninstalled. r=mfinkle, a=sledru - 7872e02410a7
Nick Alexander - Bug 981827 - Make Android and Desktop FxAccounts client use same key parameters. r=rnewman, a=sledru - 13a97e892449
Wes Johnston - Bug 994456 - Add a preview surface for webrtc in webapps. r=gcp, a=sledru - 4dd58172981c
David Major - Bug 970362 - Block F-Secure on Windows XP. r=bsmedberg, a=sledru - 756b592c869f
Mike de Boer - Bug 993932: remove border-color transition to remedy TART regression. r=dao, a=sylvestre. - 8855f67b592c
Mike Conley - Bug 995161 - Customize mode can still break after bootstrapped add-on with custom legacy:true toolbar restarts. r=Gijs, a=sylvestre. - 27221179c8b0
Gijs Kruitbosch - Bug 989683 - restrict how we inherit the menubar text color to tabsintitlebar cases on non-aero, r=dao, a=sylvestre. - bf8adf5a7040
Matthew Noorenberghe - Bug 987407 - Set the pref startup.homepage_override_url in-product for beta 29. r+a=gavin - a7433dd3312a
Bobby Holley - Bug 993918 - Shut down CAPS and XPConnect after imagelib and gfx. r=Ms2ger,bsmedberg, a=sylvestre - afc5f648e247
Tim Taubert - Bug 995266 - Prevent mochitest-browser harness from leaking due to SimpleTest overrides. r=ted, a=test-only - f11f4dda1cde
Alexander Surkov - Bug 977668 - Firefox hangs on Facebook text entry when inline lookups pop up. r=jwei, a=sledru - 109cc0131968
Wes Johnston - Bug 990395 - Use a url to find browser apps rather than a scheme. r=mfinkle, a=sledru - 2ab3be04316a
Rick Eyre - Bug 981280 - Disable WebVTT support on 29 r=bz,cpearce a=sledru - 3a3224245147
Myk Melez - Bug 990125 - Ignore automatic update checks in webapp processes. r=mfinkle, a=sledru - 3960907890b7
Jan de Mooij - Bug 991457 - Don't DCE MLoadElement if it needs a hole check. r=h4writer, a=sledru - 3437e5663d9e


r= means reviewed by
a= means uplift approved by

Previous changelogs:

Planet MozillaWelcome to the new Toolkit peers – Paolo, Matt, Jared and Irving

Slightly belated in some cases but I’d like to formally welcome four new toolkit peers. Paolo Amadini, Matthew Noorenberghe, Jared Wein and Irving Reid have all shown themselves to be well capable of reviewing patches in any of the toolkit code. Paolo, Matt and Jared actually got added a few months ago but apparently I failed to make an announcement at the time. Irving was added just last week. Please congratulate them all and don’t go too hard on their review queues!

Also if you think there are others who should be peers of Toolkit (or current peers that are no longer relevant) then please let me know.

Planet MozillaGetting the number of lines of text in an Element

One of the biggest problems I faced when developing vtt.js is that a lot of the layout algorithm depends on being able to know the line height of the subtitle text. This boils down to being able to know the line height of the div within which the subtitle text sits. A lot of the time this is easy to get:

  var lineHeight = div.style.lineHeight;

But, what if you haven't set a line height? Then you would need to get the computed value of the line height:

  var lineHeight = window.getComputedStyle(div, null).getPropertyValue("line-height");

This works... some of the time. On some browsers, if you try to get the computed value of the line height and you haven't explicitly set a line height, the computed property will come back as the value normal. That's helpful...

After much searching I found out that if you use getClientRects on an inline element it will return a TextRectangle box for each line of text in the inline element. At that point you can either assume that each line has the same height and just use the height property of the first TextRectangle, or, to get a somewhat more accurate number, you can take the height of the inline element and divide it by the number of TextRectangles you have.

  var inlineElement = document.getElementById("myInlineElement"),
      textRectangles = inlineElement.getClientRects(),
      container = inlineElement.getBoundingClientRect(),
      lineHeight = container.height / textRectangles.length;

  alert("The average line height is: " + lineHeight);

This works really well for the amount of actual code you need to write. I've read about more accurate methods, but they take some serious coding. Serious as in walking through each character in the text and tracking when overflow happens.

Now back to my original question, which was how to get the number of lines of text in a div (block level) element. The way I did this was to wrap the div which has my content in another div, and set the inner div's display property to inline. Then you can calculate the line height/number of lines of text of the inner div since it has inline display. This way you retain your content's block-level layout while being able to figure out how many lines of text it is.

This is it all put together:

  <div>
    <div id="content" style="display:inline;">
      This is all my content in here. I wonder how many lines it is?
    </div>
  </div>

  var inlineElement = document.getElementById("content"),
      textRectangles = inlineElement.getClientRects(),
      container = inlineElement.getBoundingClientRect(),
      lineHeight = container.height / textRectangles.length;

  alert("The average line height is: " + lineHeight);

Planet MozillaIf I had a million dollars

Armen has a blog post up about the cost savings Mozilla has been able to realize in its continuous integration infrastructure in Amazon over just the last 3 months. This has been a bit of a sea change for release engineering, who have historically been conservative with regards to changing core infrastructure and practices. We’re all coming to grips with the new world order, but I’m quite excited about the possibilities.

Some quick back-of-the-envelope calculations based on other recent numbers from Armen:

  • starting with a low-ball estimate of 7,000 pushes/month, if we project the rate of spending from December ($19/push) over an entire year, we end up with $1,596,000.
  • at the new rate ($6/push), a year of AWS time will cost only $504,000.
  • that’s a yearly savings of $1,092,000.

If history has taught us anything, continued growth will eat into at least part of that savings, but think of what Mozilla could do with an extra million dollars. Depending on where we hire them, that money could easily buy 5-10 more engineers to continue driving the mission forward.
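The arithmetic is easy to sanity-check (a quick sketch using the same figures as above):

  var pushesPerMonth = 7000;

  function yearlyCost(costPerPush) {
      return pushesPerMonth * costPerPush * 12;
  }

  yearlyCost(19) - yearlyCost(6); // 1596000 - 504000 = 1092000 saved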

Planet MozillaIch bin ein Xamarin(er) ♥

My new home office

As of the beginning of the April I am a Xamarin (that is what Xamarin employees call themselves).

At Xummit I met the rest of the Xamarins and I had an incredible time there (dare I say magical ♥).
I met old friends like Rodrigo Moya, Jason Smith, David Siegel, Cody Russell, Neil Patel, Connor Curran, Gord Allot and others, but also made new friends:

  • Zack Gramana: The right amount of crazy and creative. He is helping me with my new pet project.
  • Seth Rosetter: SF chilled out hacker with an ear for techno and extreme positive attitude, a delight to hang out with.
  • Mike Krüger: One of the friendliest people I got to meet and know with exactly my kind of humour.
  • Victoria Grothey: Incredibly nice person with lots of energy and always smiling.
  • Marek Safar: The most passionate beer expert I know I guess. Also rumour has it that either I am stalking him or he is stalking me.
  • Václav Vančura: An awesome designer who motivated me to start drawing again. Thanks for that. And many many more.

One thing I believe in is that interpersonal relationships between co-workers are a must for a community or a company to be productive and successful. Xamarin promoted (and still promotes) this positive habit, achieved it, and more. The upbeat attitude and enthusiasm at Xamarin is infectious, and combined with the diversity in culture, as well as in the stuff/tasks to do, it brings the best out of Xamarins. I will not forget the bus ride to the venue: 8 people with 7 different nationalities, but all happy and psyched about what they are doing and what others are doing ♥.

Since I joined Xamarin I started doing more Mono in my free time too. Currently I am porting Synapse to Mac (since I loved the interface and some of the functionalities I couldn’t find in Alfred and Quicksilver). Here is a small, very early sneak peek :)

Synapse for Mac in the making

I am loving Xamarin and all its stands for and brings to the table.

P.S: Hylke Bons has a fan base here at Xamarin :)

Planet MozillaIn which I demonstrate Supreme Court fitness in property law comparable to that of Justice Breyer

I said previously that I had two law posts to make. Here’s the non-Mozilla-related post.

Introduction

I’ve blogged about visiting the Supreme Court for oral arguments before. I had the opportunity to do so again for the extremely interesting week of January 13 earlier this year. I attended oral arguments concerning the Appointments Clause, assembly restrictions in Massachusetts, bankruptcy shenanigans, and railroad property law. A month ago, the first decision, in the property law case, Marvin M. Brandt Revocable Trust v. United States, was announced. I’m going to blog about it a little, because I think it’s cool and because of its impact on rail trails.

Before I do that, I’d like to note that the Marvin M. Brandt Revocable Trust v. United States article on Wikipedia is entirely my work (and my mistakes :-) ). (At present. Release the vandals in 3, 2, 1….) It’s the first article I’ve written start to finish. I’m more than a bit proud of that. And I’m particularly excited to have done it in such a cool area of law. :-)

Background

Back in the 1800s, as the United States expanded toward the Pacific Ocean, it needed to be able to efficiently transport goods and people across that distance. At the time, the solution was railroads. So Congress passed acts incenting railroad creation by granting rights of way across federal land. After initially granting rights of way to specific, named railroads in separate bills, Congress streamlined the process in the General Railroad Right-of-Way Act of 1875. Under this act, any railroad meeting certain conditions could get a right of way, until those provisions’ repeal in 1976.

The facts

Fast-forward to (coincidentally) 1976. The United States granted a land patent (that is, a document making clear — “patent” — title to land) to Melvin Brandt for 83 acres in Wyoming, as part of a land swap. One limitation on the grant was that it was subject to a railroad right-of-way originally granted to the Laramie Hahn’s Peak & Pacific Railway Company under the 1875 Act. The grant mentioned no other limitations on the right-of-way.

LHP&P never really worked as a railroad, and it passed through several hands. In 2004 the ultimate owners legally abandoned it. What happened to the right-of-way? This is where things got complicated.

The United States wanted the right-of-way land, so it filed suit to quiet title in its favor to clear up ownership. The United States resolved claims with everyone along the way — except for Marvin Brandt, Melvin’s son.

Brandt’s position

Brandt argued that the right of way was an easement. An easement is a restriction on your ownership of land, that says some other person can enter into and (perhaps) use it for some particular purpose. So your house’s land may have an easement across it for a sidewalk, that allows people to go on the sidewalk, walk through, and briefly stop on it, and you have to accept that. You still own the land; you just don’t quite have free rein over it. (This is why you’re usually responsible for clearing snow off your sidewalk. It’s your land, your fault if someone slips and twists an ankle and it was reasonably foreseeable.) When an easement terminates, the land is unburdened by the easement. No physical property changes hands, the easement just doesn’t exist, and the land owner can again prevent entry and use of his land.

Brandt buttressed this argument by pointing to Great Northern Railway Company v. United States. In this 1942 case, the Supreme Court decided whether Great Northern could drill for oil and gas on an 1875 Act right-of-way. The United States said no, it couldn’t — the right-of-way was in the nature of an easement, only an easement had been granted, all signs (language, legislative history, early interpretation, Congress’s construction of it in subsequent acts) said it was an easement. The 1942 Court agreed. Open and shut case for Brandt, right? Yes and no.

The United States’s position

The United States argued that 1875 Act rights of way were a “limited fee made on implied condition of reverter”. Let’s unpack this gibberish. “fee” is roughly “ownership”, and “reverter” refers to what happens to the property after some condition (here, abandonment) holds. The United States thought railroad rights of way were an unusual sort of easement. Easements don’t typically let you come in and tear things up, but it’s necessary for railroads to dig, bore, build up, lay track, and so on. So these “railroad easements” were a fee in those regards. And in regard to reversion after abandonment, ownership reverted to the United States.

In light of Great Northern, this may sound ridiculous. But the United States found language in earlier cases, and to an extent in Great Northern, saying that railroad easements had “attributes of the fee”. And two cases predating Great Northern had treated 1875 Act rights of way as limited fees. The problem was, in those cases the Supreme Court had conflated 1875 Act rights-of-way with rights-of-way under acts before 1871. In 1871, Congress changed policy from basically giving railroads land, to only letting them lay tracks on it. Congress wanted to encourage settlement, not just the arbitrary enrichment of railroads (who had become incredibly huge land owners in the West). The Court conflated the two because, in at least one of the cases, neither side had filed briefs, and the Court made a legal mistake.

The United States argued that Great Northern didn’t really say 1875 Act rights of way were easements.

Oral argument

Oral argument was pretty interesting. I read half a dozen briefs and the lower court opinion in the case, so I was moderately prepared to follow argument. In some ways I was almost on par with the justices. Justice Breyer candidly admitted to fumbling with his recollections of A. James Casner‘s property law class, about which he briefly rambled (as is his wont — he’s known for rambling :-) ).

Oral argument generally trended against the United States. Sparks flew when the United States attorney began argument. Justice Alito bluntly told him the United States should receive a “prize for understatement” for “acknowledg[ing in its brief] that there is language in [] Great Northern and in the government’s brief in that case that lends some support to [Brandt's] argument.” Alito recited the brief’s subject headings, all forcefully arguing that the right-of-way was an easement and only an easement.

The argument didn’t go much better from there on for the United States. Various justices wanted to know how much land would be affected by a judgment that these rights-of-way were easements — permitting takings claims for just compensation, especially when the land had already been taken by the United States. No answer was forthcoming, because the records had been taken so long ago and were so geographically distributed. Breyer in particular repeatedly asked if there were any other easement-but-not-always constructs in the common law of property.

Opinions

The Court announced an opinion on March 10, just under two months after oral argument. Fast turnarounds typically indicate uncomplicated cases, and this was such a case. The justices divided 8-1 for Brandt, uncritically adopting his position. Chief Justice Roberts wrote the opinion, which began with a half-dozen pages of history of the West and particularly of LHP&P. (Definitely give it a read if you like Western history.) Roberts emphasized that the United States lost because it had won in Great Northern and faulted it for its “stark change in position”. He also asserted that 1875 Act railroad rights of way must be analyzed as common law easements — not a strange amalgam as the United States had argued.

Justice Sotomayor dissented alone. She argued that Great Northern had decided only one aspect of the property interest in railroad rights of way, and it hadn’t decided how reversion should play out. She also thought that railroad rights of way shouldn’t be analyzed under the common law, because of the extent to which they went beyond what normal easements allowed.

In the end the United States was roundly rebuked and defeated. Sometimes 8-1 decisions are a matter of some recognized, fundamental disagreement; see for example many of Justice Thomas’s solo dissents. But when a decision goes this way, in a case barely implicating deep jurisprudential disputes, you have to second-guess yourself a bit when you’re on the losing side. It’s one thing to lose with others agreeing with you. But when no one else sees it as you do, perhaps you’re the one who’s wrong.

Why did the United States pursue the case to a resounding loss? This particular case arose a bit weirdly. It was pushed by various property-rights groups, at the start. And for where it was raised, in the Tenth Circuit, existing circuit precedent said Brandt’s argument would lose, which it did. Brandt appealed to the Supreme Court, citing the circuit split: a good way to get your case heard, but no guarantee. What possibly tipped the balance was that the United States, despite winning, agreed the Court should hear the case. Why?

It looks to me like the United States got greedy. It saw an opportunity to wipe out the other circuits’ bad precedents, and it blinded itself to the weakness of its argument.

Consequences

What happens to Brandt specifically? The case returns to the Tenth Circuit to respond to the decision, but it’s unclear to me what’s supposed to happen there. I’d think they’d just quiet title in Brandt and be done, but the Rails-to-Trails Conservancy says it’ll keep working in the Tenth Circuit to “narrow the ultimate impact of the Supreme Court’s ruling”. How they can work against a predetermined quiet title action, I don’t know. (It’s possible this is just a face-saving claim on their part.) And it’s possible the United States might just acquire the right of way using eminent domain. (Why not do that and avoid suit? Money, of course. If it owns the land, no just compensation to pay. If not, that’s money out of the government’s pocket.) So Brandt’s not quite out of the woods yet, pun probably intended.

But Brandt’s particular plight isn’t the important thing here. It’s all the other places where suddenly takings claims can go forward. No one knows how many of these there are. Statutes of limitations and estoppel will preclude many claims, but not all of them. It’s still an unresolved mess.

Lessons

This touches a deeper concern. The United States acted here because it wanted to create rail trails, converting useless railroad corridors into bike trails. I like bikes. I like bike trails. But the law authorizing rail trails was enacted with flagrant disregard for the actual ownership of railroads in disuse. The CBO estimated the law wouldn’t cost a penny, but it now could cost $500 million, maybe more after this decision. We should demand a higher standard of Congress in the laws it passes.

Planet MozillaIterating a number sequence for lulz and jail time

Hello, readers! Today I bring you two posts about law: one Mozilla-related, one not. This is the Mozilla-related post. Mozillians may already know this background, but I’ll review for those who don’t.

The “hack”

In 2010 Goatse Security (don’t look them up) discovered a flaw in AT&T’s website. AT&T’s site detected accesses from iPads, extracted a unique account number sent by the iPad, then replied with a private account email address. Account numbers were guessable, so if someone “spoofed” their UA to look like the iPad browser, they could harvest private email addresses using their guesses.

The lulz

Andrew Auernheimer, i.e. weev, wearing an old-school AT&T baseball cap (photo CC-BY-SA)

The people who figured this out were classic Internet trolls interested (to a degree) in minor mayhem (“lulz”) because they could, and they scraped 114,000+ email addresses. Eventually Andrew Auernheimer (known online as “weev”) sent the list to Gawker for an exclusive.

The sky is falling!

AT&T, Apple, the people whose addresses had been scraped, and/or the government panicked and freaked out. The government argued that Auernheimer violated the Computer Fraud and Abuse Act, “exceeding authorized access” by UA-spoofing and loading pages using guessed account numbers.

This is a broad interpretation of “authorized access”. Auernheimer evaded no security measures, only accessed public, non-login-protected pages using common techniques. Anyone who could guess the address could view those pages using common browser addons. People guess at the existence of web addresses all the time. This site’s addresses appear of the form “/year/month/day/post-title/”. The monthly archive links to the side on my site have the form “/year/month/”. It’s a good guess that changing these components does what you expect: no dastardly hacking skills required, just logical guesses and experimentation. And automation’s hardly nefarious.
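To make concrete how mundane that is, here's roughly what "iterating a number sequence" amounts to (a hypothetical sketch against this site's archive scheme, not anyone's actual code):

  // Guess the monthly archive addresses for a year.
  for (var month = 1; month <= 12; month++) {
      var url = "/2014/" + (month < 10 ? "0" + month : month) + "/";
      console.log("Trying " + url);
  }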

So what’s Mozilla’s brief with this?

Developers UA-spoof all the time for a variety of innocuous reasons. Newspapers have UA-spoofed during online price discrimination investigations. If UA spoofing is a crime, many people not out for lulz are in trouble, subject to a federal attorney’s whims.

The same is true for constructing addresses by modifying embedded numbers. I’ve provided one example. Jesse once wrote a generic implementation of the technique. Wikipedia uses these tactics internally, for example in the Supreme Court infobox template to linkify docket numbers.

Mozilla thus signed onto an amicus brief in the case. The brief laid out the reasons why the actions the government considered criminal, were “commonplace, legitimate techniques”.

The cool part of the brief

I read the brief last summer through one of Auernheimer’s attorneys at the inestimable Volokh Conspiracy. I’ve been lightly meaning to blog about this discussion of number-changing ever since:

Changing the value of X in the AT&T webpage address is trivial to do. For example, to visit this Court’s homepage, one might type the address “http://www.ca3.uscourts.gov/” into the address bar of the browser window. The browser sends an HTTP request to the Court website, which will respond with this Court’s homepage. Changing the “3” to “4” by typing in the browser window address bar returns the Court of Appeals for the Fourth Circuit’s homepage. Changing the “3” to a “12” returns an error message.

Illustrating the number-guessing technique (and implying its limitations in the “12” part) via the circuit courts’ own websites? Brilliant.

Back to Auernheimer

The court recently threw out Auernheimer’s conviction. Not on CFAA grounds — on more esoteric matters of filing the case in the wrong court. But the opinion contains dicta implying that breaching a password gate or code-based barrier may be necessary to achieve a conviction. The government could bring the case in the right court, but with the implied warning here, it seems risky.

Sympathy

Auernheimer isn’t necessarily a sympathetic defendant. It’s arguably impolite and discourteous to publicly disclose a site vulnerability without giving the site notice and time to fix the issue. It may be “hard to feel sorry for them being handed federal criminal charges” as Ars Technica suggested.

But that doesn’t mean he committed a crime or shouldn’t be defended for doing things web developers often do. Justice means defending people who have broken no laws, when they are threatened with prosecution. It doesn’t mean failing to defend someone just because you don’t like his (legal) actions. Prosecution here was wrong.

One final note

I heard about the AT&T issue and the brief outside Mozilla. I’m unsure what Mozilla channel I should have followed, to observe or discuss the decision to sign onto this brief. Mozilla was right to sign on here. But our input processes for that decision could be better.

Planet MozillaWelcome cbeard

Among Mozillians, there is a small (not too small, in fact...) group of people who were already here before 15-jul-2003. After that date, we saw old-time contributors rejoin Mozilla one by one, and new hires too, something we had forgotten about since the 2002 Netscape layoffs. Chris Beard was one of them, at the end of 2004 IIRC (time flies, holy cow, time flies...). If old-time Mozillians saw a necessary little shift in the local culture because of these new hires, it was clearly not the case with cbeard, who adapted so well to Mozilla that we immediately used his IRC nick to mention him. He has a vision, deals very well with the community, is always open to discussion, leads new projects, and is highly respected. I'm glad he was appointed interim CEO. Welcome Chris!

Planet MozillaContributing to FirefoxOS Cordova initiative

After the project I was working on got cancelled, I started contributing to the Firefox OS Cordova project. Cordova is an open source framework for writing multi-platform native mobile applications using web technology. Cordova provides you with javascript APIs and the plumbing necessary to access the device's internals, such as battery status, GPS and camera. Neat stuff. Each mobile operating system has its own platform implementation for doing the communication between cordova's javascript API and the native OS code.

This post will focus on how to get started writing the Firefox OS platform and plugins. To get a better understanding of how to use cordova to write a Firefox OS app, I highly recommend the mozilla hacks post on the subject.

Cordova is written in node.js; you just need to understand javascript to work on it. It took me much code digging and asking around to get started, but you won't have to!

The repositories

Cordova code is organized into multiple repositories. The main ones you need to be aware of for Firefox OS development are cordova-cli, cordova-firefoxos and cordova-plugin-*. Here is a brief description of them:

  • cordova-cli - is where the code for the command line tools is located. There is some platform-specific code under src/metadata, which contains config parsers. Firefox OS uses it to get the initial version of the manifest with the correct app name and other values.
  • cordova-firefoxos - is the repository for the Firefox OS platform tools. The code here is responsible for handling Firefox OS cordova commands and for the initial skeletal app.
  • cordova-plugin-* - are repositories for plugins. A plugin repository contains code for each supported platform too.

Running it locally

To work on the platform, you need to run the latest code from the repositories. It's super helpful to run cordova entirely from local files so that you can edit code and see the effects. With the multiple repository organization used by cordova, this can be tricky. Make sure you have git and node.js installed. A github account will be handy if you plan to send us your changes. The prompt samples below are using bash.

First lets get cordova-cli from mozilla-cordova github account and install the dependencies. From the directory you'd like to keep cordova code run:

$ git clone https://github.com/mozilla-cordova/cordova-cli.git
$ cd cordova-cli
$ npm install
$ cd ..

The cordova binary is located at cordova-cli/bin/cordova. From now on this is the binary we'll use for all our cordova command line needs. You can add it to your PATH if you want; I'll use the relative path for clarity. Next let's clone the Firefox OS platform bits from the cordova-firefoxos repository:

$ git clone https://github.com/mozilla-cordova/cordova-firefoxos.git

No need to install dependencies for cordova-firefoxos, they're already part of the repository. Before creating an app, there's a little trick to tell cordova to use the local platform code we just downloaded. Create a file named firefoxos.json with the following contents:

{
    "lib": {
        "firefoxos": {
            "uri": "/<FULL PATH TO>/cordova-firefoxos",
            "version": "dev",
            "id": "cordova-firefoxos-dev"
        }
    }
}

Make sure to set the full path to the cordova-firefoxos folder under uri. We can now create a new cordova app by running create. Let's create the app in a myapp folder, give it the even more original project name of io.myapp, and name it myapp. The fourth parameter to create is the json config file we just created, as a string. To create the app run:

$ cordova-cli/bin/cordova create myapp io.myapp myapp "$(cat firefoxos.json)"
$ cd myapp

Alternatively, to use a local copy of cordova-firefoxos platform code on a cordova app that already exists, you can create a json file with the same content as above under yourapp/.cordova/config.json. In fact, that fourth parameter created that file for you. Go check.

To add the platform, all you need to run is:

$ ../cordova-cli/bin/cordova platform add firefoxos

That's it. If you make any changes to cordova-firefoxos, remove and add the platform again to make sure you have the latest.
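In practice that refresh is just (assuming the same relative paths as above):

$ ../cordova-cli/bin/cordova platform remove firefoxos
$ ../cordova-cli/bin/cordova platform add firefoxos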

Adding a plugin

Working with local plugins is much simpler. Let's download the contacts plugin as an example:

$ cd ..
$ git clone https://github.com/mozilla-cordova/cordova-plugin-contacts.git

Adding a local version is pretty simple, just add the path as parameter to plugin add command:

$ cd myapp
$ ../cordova-cli/bin/cordova plugin add ../cordova-plugin-contacts

NOTE: if at this point you hit a ReferenceError: xml_helpers is not defined error, don't despair. It's a bug in cordova-plugman code, which is responsible for plugin management. We can fix it by getting the latest version of cordova-plugman, and making sure cordova-cli uses it too. Here's how:

$ cd ..
$ git clone https://github.com/apache/cordova-plugman.git
$ cd cordova-cli
$ npm install ../cordova-plugman
$ cd ../myapp
$ ../cordova-cli/bin/cordova plugin add ../cordova-plugin-contacts

To see changes you made to plugin code you have to remove then add the plugin again. To remove the plugin you need to use the plugin name, not the path. Running ../cordova-cli/bin/cordova plugin ls will show you the names of installed plugins. For example, to remove the contacts plugin run ../cordova-cli/bin/cordova plugin remove org.apache.cordova.contacts.
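Putting those commands together, a typical edit-test cycle for the contacts plugin looks like this:

$ ../cordova-cli/bin/cordova plugin ls
$ ../cordova-cli/bin/cordova plugin remove org.apache.cordova.contacts
$ ../cordova-cli/bin/cordova plugin add ../cordova-plugin-contacts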

That's it, you are now running the latest and greatest versions of it all!

Firefox OS plugin development: from javascript to javascript

Cordova provides you with a javascript API. They try to follow standards when possible. Firefox OS is built on web standards too. Sometimes they use the same API. How can a plugin developer access the Firefox OS API when they clash?

Cordova provides us with a modulemapper library to access the original values of overwritten properties. Let's take a look at how the battery-status plugin uses modulemapper:

var mozBattery = cordova.require('cordova/modulemapper').getOriginalSymbol(window, 'navigator.battery');

The variable mozBattery now points to the original navigator.battery. The first parameter to getOriginalSymbol is the context, pretty much always window. The second is the value you want to get. To find out what value to use for the second parameter, check the <js-module> element in the plugin's plugin.xml configuration file. For the battery-status plugin it is:

<js-module src="www/battery.js" name="battery">
    <clobbers target="navigator.battery" />
</js-module>

The <clobbers> element's target attribute has the value that was overwritten.
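With that original symbol in hand, the platform-specific part of a plugin can simply delegate to the native object. A rough sketch of the idea (not the plugin's actual source; the level and charging properties follow the B2G battery API, so treat the details as assumptions):

  var mozBattery = cordova.require('cordova/modulemapper')
          .getOriginalSymbol(window, 'navigator.battery');

  // Report the native battery state through a cordova success callback.
  function getBatteryStatus(successCallback) {
      successCallback({
          level: Math.round(mozBattery.level * 100), // native level is 0..1
          isPlugged: mozBattery.charging
      });
  }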

Contributing

If you got this far, you're ready to get started! Open up your favorite editor and hack on. If you want to help with Firefox OS support, check out our status site and the project's wiki.

While writing this post I got news that I'm joining the team. Super excited to improve cordova support for Firefox OS! If you want to chat with us, we hang out on the #cordova channel on mozilla's irc server.

Planet MozillaChris Beard Stories

You may have heard that Chris Beard came back (he never really left) to Mozilla as interim CEO. I have many Chris Beard stories, but here are just a couple of personal ones.

The first was back in 2006 when I first contracted for Mozilla writing an add-on. Chris was product managing the add-on and we were on an early call with others trying to wrap up and get a first version out the door. I forget the details, but the general tone of the conversation changed for me when Chris said something to the effect of “let’s ship something we are proud of and that users will love”. Up until that time I had volunteered for many years for Mozilla with a carefree attitude. This was Chris’ way of saying that what we are doing is important, and we have to do it well. After that I contracted on other projects but also put in a lot of volunteer time. It never lost the fun aspect, but I knew what we were doing was serious and making an impact.

Fast forward to 2010, to the Mozilla Balkans Meeting in Ljubljana. We gathered in the center of the city at a typical Slovenian ‘gostilna’ (restaurant) and were told a special guest was coming. Everyone was expecting a famous Balkans singer. Instead our brand new CEO at that time Gary Kovacs walked in, accompanied by Chris and a few others. After all the excitement, we settled down to eat and I was sitting beside Chris. We talked about many things, but throughout he was passionate and sharing his big ideas both for what I was working on and the opportunities that Mozilla had moving forward. Every encounter with Chris was a piece of advice, inspiration, a big idea or all wrapped up in one.

Balkans Mozillians

Chris in Ljubljana with the team. Picture by Tristan Nitot on Flickr.

Somehow I feel the best Chris Beard stories are to come.

Planet MozillaAn easy way to test the New Firefox Beta look and feel before it's released

Screenshot of the new Firefox UI on Windows 7 with the menu panel open

The new Firefox Beta is faster, simplified and easier to customize and we need your help to test it out before it gets released in a few weeks' time. There is now an easy way to review Firefox user interface changes without even installing the new version. "How is that possible?" you might ask. We have a collection of hundreds of screenshots of the new Firefox in various different configurations (affecting features such as tabs, toolbars, themes, customization mode, and the new menu) that are ready for you to review. It's fast and easy to do in three simple steps:
  1. Open up the screenshot review tool and enter a nickname to log in (there may be a small reward for the most valuable contributions so keep that in mind when choosing)
  2. A random screenshot will be displayed where you can simply identify any visual issues related to the new user interface that you may see. Simply drag to select the region of the image and add a comment (and optionally a bug number). See an example.
  3. When you're done reviewing that image, simply click the button to get another and go to step 2. Endless fun ensues!
Of course, installing Firefox Beta, testing the functionality and filing bugs is still really valuable and encouraged. Areas to focus on include the new customization mode, menu panel, tabs, and Firefox Account Sync. So, what are you waiting for? Start reviewing now.

Internet Explorer blogWhat's new in F12 with Windows 8.1 Update

We are excited to announce a set of substantial updates to the developer tools in IE (F12), as well as in Visual Studio 2013 Update 2. The updates in F12 accompany the new releases in IE11.

Previously, we described the capabilities of F12, and its focus on a fast, iterative workflow, on providing accurate data in the DOM Explorer and on providing actionable data in the memory and performance tools.

As you use F12 with this update, you will notice enhancements for the following:

  • A tighter, iterative workflow with change tracking in the CSS tools
  • An ability to debug in the code you wrote such as CoffeeScript or TypeScript, with sourcemaps support, and debug “just-my-code,” if you are using libraries from other developers
  • Enhancements that get you to solutions faster, for example when finding memory leaks, with the enhanced JS snapshot tools and filtering improvements.

Let’s take a look at these improvements to the F12 tools.

Tracking Changes you make in CSS with Change-bars

One of the great strengths of tools like F12 is that they allow you to edit the look of any Web site directly within the browser, without needing access to the source code. However, if you spend time making edits, it is difficult to keep track of all the changes you make across your CSS and then apply those final edits back to your origin sources. To improve this situation we have introduced the ability to record and track changes, both visually with ‘change-bars’ and through a new ‘Changes’ CSS panel in the DOM Explorer.

Any change you make in the Styles pane in the DOM Explorer to CSS rules and properties will have a visual clue in the left-margin next to the property or rule that you edited. These are the ‘change-bars’, and they show green for new properties, yellow for changed properties and values and red for deleted properties. These change-bars will be retained, even if you begin to look at other DOM nodes.

Changebars on the Styles tab

As you may make multiple edits across many nodes, we added a new tab in the CSS panel that lists all the changes in the current F12 session, as a ‘diff’ view so that you can use it as a checklist when you make manual edits to your source code. It also supports the ability to copy and revert changes through a context menu.

The new Changes tab

Debugging your App with “Just My Code”

If you are developing Web sites and apps, then you are likely using 3rd party libraries like jQuery or Angular, and these libraries are typically minified. We often see developers who are debugging their code step into library code and get buried in the depths of that library before being able to get back to their own code to debug their issue.

Visual Studio has supported a feature called “Just My Code” (JMC) for managed languages for a while, and with Visual Studio 2012 it is also enabled for JavaScript. The idea behind JMC is that we keep the debugger in your code, in the code you want to debug, and not in code you can’t really change.

With this feature now in F12, there are two key things you will see when you debug:

  • You will never ‘step into’ a file (library) that is marked as library code. You can mark a library through the file picker in the debugger (see below), even if you have stepped into the file. Once marked, any step operation will take you to your code.
  • If you enable “break on all exceptions” you will never break on an exception thrown and handled in library code.

Marking files as library code

By default F12 will recognize files that match the URL *.min.js as library code. You can modify this behavior easily however, by marking libraries in the debugger’s file picker or in the context menu of the file tab if you have the file open in the debugger.

We will show much more about this feature and the workflow in a follow-on blog post.

Debugging your App written with other Languages using Source Maps (v3)

As JavaScript apps have become more and more complex, there’s an increasing trend to author in another language that compiles to JavaScript (the F12 tools are written in TypeScript and compiled to JavaScript, for example). Similarly, you may have minified JavaScript code that is not the origin source that you wrote the app in. This compilation process means the JavaScript that executes in the browser, and that you debug, isn’t the code you have in your editor, which makes debugging more challenging.

To solve this problem, there’s a community-driven format that maps between origin source and compiled files, called “Source Maps” (spec), which has gained a lot of traction with browsers and Visual Studio. These maps are generated during compilation, and with this release of F12 we’ve added support for v3 of the Source Map spec.

When a compiled JavaScript file defines a valid source map, F12 will default to loading the original source file, rather than the running JS file, when ‘source maps’ are turned on (a button on the debugger). You will have the following capabilities:

  • Files in the file picker use their origin source name, rather than the running documents.
  • The file(s) you open in the debugger, and use to step through your code are your origin source files, and for TypeScript, CoffeeScript and Script # these files will be colorized appropriately (as you can see with the TypeScript file below).
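By way of illustration, a compiled file opts in to this by ending with a comment that points at its map. A hypothetical app.js compiled from app.ts might end with:

  //# sourceMappingURL=app.js.map

The map file itself is JSON along these lines (the mappings string is elided here):

  {
    "version": 3,
    "file": "app.js",
    "sources": ["app.ts"],
    "names": [],
    "mappings": "AAAA,..."
  }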

As with Just-My-Code, we will go into more depth on source maps in a follow-on blog post.

Three way snapshots

When debugging a memory leak, you will typically be presented with a large amount of data (even if filtered) that memory profilers create, which makes finding that leak complicated. In F12 we had previously taken steps for the tools to summarize the state of the app in the snapshot tiles, and to opportunistically suggest potential issues with the detached DOM information indicators. However, in this update we wanted to take this further, by helping developers zero in on issues even more easily.

The F12 Memory tool now includes the ability to compare three snapshots (and get a scoping view of those snapshots), which is a cleaner process for determining a leak. The snapshots are:

  • The initial state of your app before you start the scenario that provides the baseline for objects in your app.
  • The state once your scenario is completed; this may augment the baseline for your app, which is why you take a further snapshot once you execute the scenario again.
  • The ‘back to normal’ state of your application, once you try the scenario again. In this state all of the objects from your scenario should have been freed (unless you expect them to stay alive).

With these snapshots you can compare them and use the new “Scope” filter dropdown to select the “objects left over from snapshot #2,” which represents your scenario end state and potentially a set of objects that shouldn’t be around anymore, as shown below.

Scope filter

The types view that you see above shows the list of objects and a gutter indicator of where potential issues may reside.

Tooling for IE on Windows Phone in Visual Studio 2013 Update 2

If you have spent time trying to make a compelling mobile version of your Web site you have likely hit issues with it not looking or working correctly on mobile browsers.  To help with this on Windows Phone, we’re excited to announce that in Visual Studio 2013 Update 2 we have enabled the use of Visual Studio’s debugging and performance tools for Internet Explorer on Windows Phone 8.1. For more details on this have a read of the Visual Studio ALM Blog on this topic.

Many other improvements…

In this release of F12 we’ve tried to address many customer requests, as well as fix bugs, including several crashes caused by the network inspector and F12 not respecting conditional comments when emulating lower document modes. Rather than list the bugs here, they will be updated on the Microsoft Connect site for IE (https://connect.microsoft.com/IE/Feedback).

Here’s a more detailed list of the changes you’ll see in this release of F12.

Shell

  • Ctrl+[ and ctrl+] to navigate between tools

Console

  • Dropdown to enumerate execution targets.
  • Inspect objects that are logged via console.log including printf-style formatting.
  • Locals (at breakpoint) in intellisense for the console.
  • Console $_ shortcut to access the result of the last evaluation in the console.
  • Always record console messages – before F12 launch (via Internet Options -> Advanced -> (check) Always record developer console messages).

Debugger

  • Persist BPs, Watches, Tabs and the like so I don't keep losing my state.
  • Debug my Typescript/compiled code inside F12 using Source Maps.
  • Debug just my code and not be bothered by library code (JMC).
  • Name eval code via //#sourceUrl=<url> comment.
  • Keyboard shortcut to abort and refresh page when broken via ctrl+shift+f5.
  • Fully qualified function names (e.g. a.b.c) in call stack and profiler views.

DOM Explorer

  • CSS pseudo states - set the pseudo state for an element to test out my pseudo styles.
  • CSS Changebars - see what values have changed in the styles view.
  • CSS Changes View - see final applied CSS changes and copy to clipboard.
  • Combined Computed Style Panel - view the CSS styles in a cohesive and CSS pane adding editing and links to source.
  • Ctrl+b in DOM Explorer for select element.

Emulation

  • Doc Mode Awareness - see why my page is in a certain doc mode in order to understand my compatibility scenarios better.

UI Responsiveness

  • Sort all levels of events in the timeline details view by duration.
  • Simplify the timeline details view by filtering out events within certain categories (e.g. GC, Image decoding).
  • Easily zoom the graph and detail views to the duration of time for a specific event instance (via context menu).
  • Identify the exact property that was changed (and the value it was changed to) when a DOM element's inline styles are programmatically modified.

Memory

  • Identify the line of code that was responsible for allocating a specific function, so that I can correlate memory with source.
  • Context menu item to show object in dominators view (and view retained size etc.).
  • Updates to the types views to show which type(s) accounted for the majority growth in a diff, so that you can rationalize object churn more easily.
  • Gridlines on table UI.
  • Settings UI (show built ins, circular refs, object ids).

Summary

With this update to IE11 and F12, we’re updating developer tools more often, getting you the latest features and bug fixes as soon as we can. Expect to see and hear from us more, and if you’d like to provide feedback or ask for new features, simply reach out on Twitter @IEDevChat, through the IE11 Send Feedback tool, or on Connect.

This post just scratches the surface of what’s new in F12. Over the next few weeks we’ll be publishing blog entries that go into more depth and show you how to use F12. There’s also our complete F12 developer tools documentation on MSDN.

In the meantime, we look forward to your feedback. You can reach out to the team on Twitter @IEDevChat, through the IE11 Send Feedback tool or on Connect.

— Andy Sterland, Program Manager, F12 Tools

— Jonathan Carter, Program Manager, F12 Tools

— Simon Calvert, Lead Program Manager, F12 Tools

Planet MozillaMozilla is all of us

Ten years ago, a scrappy group of ten Mozilla staff, and thousands of volunteer Mozillians, broke up Microsoft’s monopoly on accessing the web with the release of Firefox 1.0. No single mastermind can claim credit for this achievement. Instead, it was a wildly diverse and global community brought together through their shared commitment to a singular goal: to protect and build the open web. They achieved something that seemed impossible. That’s what Mozillians can do when we’re at our best.

Over the last few years, we’ve taken on another huge challenge: building a smartphone incorporating the technology and values of the open web. In a few short years, we’ve taken Boot to Gecko, an idea for an open source operating system for mobile, all the way to the release of Firefox OS phones in 15+ countries. It was thousands of Mozillians — coders, localizers, partners, evangelists and others — that made this journey possible. These Mozillians, and the many more who will join us, will play a key role in achieving the audacious goal of putting the full power and potential of the web into the hands of the next two billion people who come online.

Over the last few weeks, the media and critics have jumped to the conclusion that our CEO defines who Mozilla is. But, that’s not the reality.

The reality is this: Mozilla is all of us. We are not one or two leaders, and we never have been. Mozilla is a global community of people building tools for a free and open web that we can’t build anywhere else. We’re people solving the tough problems on the web that most need solving. Mozilla is all of us taking action every day, wherever we are. Building. Teaching. Empowering. We all define who Mozilla is together. It’s the things we choose to build and teach and do every day that add up to ‘Mozilla’.

While hard, the past few weeks have been a reminder of that. The attention, boycotts, ire from across the political spectrum, and departure of an original founder like Brendan would have devastated most companies, leaving them wounded and floundering with their leadership gone. But, Mozilla is not like most companies. Instead, we’re a global community that rolls up our sleeves to work on a common cause, not a company with a single leader. Mozilla is all of us. As Mozillians, we need to remember this. And live it.

That’s one of the reasons I’m happy Chris Beard agreed to step in as interim CEO at the Mozilla Corporation today. Certainly, he knows technology and products, having played a key role in everything from the early success of Firefox to unveiling Firefox OS at the Mobile World Congress. But, more importantly right now, Chris is one of the best leaders I know at gathering people around Mozilla in a way that lets them have impact.

Just one example of where Chris has done this: the famous Firefox 1.0 ad in the New York Times.

Firefox 1.0 New York Times Ad

The notable thing about this ad is not its size or reach, but that Mozilla neither placed nor even paid for it. The ad was a grassroots effort, dreamed up and paid for by roughly 10,000 people who’d been using Firefox in beta and wanted the world to know that there was a real choice in how people could access the web. Chris was running marketing for Mozilla at the time. As he saw community momentum growing around the idea, he jumped in to help, bringing in more resources to make sure the ad actually made it into the Times. He did what Mozilla leaders do at their best: empower Mozillians to take concrete action to move our cause forward.

Mozilla has a tremendous amount of momentum right now. We’ve just shipped Firefox OS in 15 countries and released a $25 open source smartphone that will bring the web to tens of millions of people for the first time. We’re about to unleash the next round of events for our grassroots Maker Party campaign, which will bring in thousands of new volunteers and teach people around the world about how the web works. And we’re becoming a bigger — and more necessary — voice for trust and for privacy on the web at a time when online security is facing unprecedented threats. The things we are all working on together are exciting, and they’re important.

In all honesty, the past few weeks have taken their toll. But, as they say, never waste a good crisis. We’re already seizing the opportunity to become even better and stronger than we were a month ago.

This starts with reminding ourselves that Mozilla is at its best when we all see ourselves as leaders, when we all bring our passion and our talent full bore to building Mozilla every single day. Chris has a role in making this happen. So do people like Mitchell and me. The members of our boards play a role, too. But, it is only when all of us roll up our sleeves to lead, act and inspire that we unlock the full potential of Mozilla. That is what we need to do right now.

 


Filed under: mozilla, openweb, webmakers

Planet MozillaLearn To Teach Programming – Software Carpentry

Today, post PyCon conference, I spent the entire day immersed in an incredibly dynamic and educational workshop by Software Carpentry, “Learn to Teach Programming”. I’m going to do a mix of dumping my notes in a play-by-play fashion with possible sidebars commenting on what I experienced personally, so that I have a record of this to look back on as I move forward with Ascend Project planning and execution.

Meet Your Neighbours

The event started off, as they always do, with a go-round of people introducing themselves in short form.  As we started taking turns our teacher, Greg Wilson, asked for the person who just spoke to tap the next person to speak before sitting down.  This proved to be our first of many small applications of the science behind learning and how it can play out in real life.  While it apparently takes a room of kindergarten children 3 reminders to do this extra step during intros, it took this room of ~25 adults 14 requests before we mostly started doing so without prompting from Greg.  By the way, during the intros I learned about Dames Making Games which I can now add to my mental list of awesome women-in-tech groups and if you’re reading this and are in Toronto, check them out!

Teaching Is Performance

It raises your adrenaline, brings out your nervousness, and it’s something you need to work at. A few quick tips from Greg on preparing for your ‘performance’ as teacher: always bring cough drops, and figure out what your ‘tell’ is. Like with poker, everyone has at least one thing they do when they are nervous. I suspect my ‘tell’ is talking fast and/or having trouble not smiling too much (at least in poker, it is). This was our first introduction to how we should be reflective about our teaching – even go so far as to record yourself if you can’t get honest feedback from people around you – so that you can spot these things about your manner and work on adjusting them to ‘perform’ teaching in a more confident and reliable manner.

Improv came up as a way to work on this where you can get feedback on how you perform and also learn to keep other people engaged.  I used to do improv when I was an awkward teenager and didn’t feel like I was a superstar at it but I wonder what it could be like now that I have more confidence.  I’ll be looking for classes in SF to try it out.  What’s there to lose?

Why Don’t We Teach In Teams?

Greg pointed out how teaching, unlike music and comedy, is such a solo activity.  Musicians typically build up their experience and skills by playing with others.  The best comedians by and large spent a significant amount of time in some sort of comedy troupe before striking out on their own as a stand-up or as major film stars.  Teachers though?  Often alone in their classrooms and if my partner is an example of the ‘norm’, definitely alone while grading and preparing lessons.  This is something worth exploring: what could teaching be like for the teacher if there was team teaching?  What could we do with more feedback, more often, and with someone helping us track measurable progress towards our goals as agents inspiring learning?  Finland has an excellent system of teacher feedback and peer/mentoring for their educators.  Teacher’s college is harder to get into there than medical school (not sure that’s a good thing, but it’s what Greg told us).

Key Points About Teaching & Learning

  • People have two kinds of memory layers – short and long term – and short term memory (which is what we are working with in classroom environments) can hold ~7 items +/- 2 so really we should aim for 5 in order to teach to our students’ capacity

 

  • We have to balance on/off time – we lose some time switching between tasks or concepts in the teaching, but given the memory limitations mentioned above, we must let people take breaks to reset & refresh

 

  • The average person can take in info for about 45 minutes before their attention wanes from exhaustion. For me, this is more like 30 minutes. Hearing this from Greg reminds me that I want to propose that all meetings I’m involved with at work move to a default length of 30 minutes and that we have a set of rules for how to deal with ‘overage’: either an email or mailing list post, an etherpad, a follow-up meeting, or a proposal with a request for feedback – so that we are not taking an hour because we *have* an hour.

 

  • Apparently the military has a lot of research and effective solutions for human performance. Greg mentioned being at a naval academy where the grad students he was lecturing to dropped into doing pushups when a bell sounded on the hour. This sounds like a great practice for anyone trying to learn and be engaged with others – get your blood pumping and change your position. Reminds me to get that automated rest-taking app running on my laptop again and to actually pay attention to it for a while instead of dismissing it over and over.

 

  • Continuous ‘flow’ – oh, that elusive state for programmers. There was some sort of quote about coffee but I missed the first part; the gist was that when we are immersed in something and truly engaged we can override that 45 minute intake limitation from before, but if we do more than pause (without switching contexts) we could end up breaking flow, and it takes at least 5-10 minutes to get back into it. This is key for people who work in environments full of distractions and interruptions. I’ve been thinking a lot about this one lately as I’d like to work on breaking my very unproductive cycle of checking IRC and email in a loop as though I am event-driven. I need to make time to get into ‘flow’ and do bigger tasks with more focus.

 

  • A sidebar of the distraction mention was the fact that, in programming, syntax can be the distraction – that is, errors in syntax. When you get stuck trying to figure out where your semi-colon or indentation is off, you break out of ‘flow’. In a language/framework like Scratch this is not possible, as the blocks cannot be dragged and dropped into any order that creates errors, except in ways that are related to logic and program flow – worth stopping to think about (and it keeps you in your engagement ‘flow’)

 

  • There are roughly three types of minds out there to work with in teaching: a) Novice b) Competent c) Expert. The Novice doesn’t know what they don’t know, so the most important thing to do when trying to teach a Novice is to make sure their mental model of the concept you are teaching is correct. This became a lot of the focus in the rest of the day – methods of determining if our concept is getting across correctly. The Expert is such because they have more connections between all the facts they know about the concept/skill, so they can leap from point A to point J in one move where a Competent mind has to work through all the dots in between – executed well, but with thought and intention. It is *as hard* to get Novices to become Competent as it is to get Experts to see the concept they are trying to teach as a Competent person does. Think about something you might be an Expert at and see if you can tell what steps you assume other people will know.

 

  • Another key point about the Expert is the idea of reflection. Being able to reflect on your skill is huge for honing it. An example would be the hockey skating workshop I attended, where they videotaped us skating our fastest. When I saw that video and how knock-kneed I was – how my internal map of using wide leg strokes did not match the tape – I was a) horrified, but also b) reminded of how far I have to go and how much more work I need to do in order to reach a higher level of expertise, such as that reflected to me by the instructors.

Accepting Feedback and Critique

We spent some time talking about critique. In architecture, art, music, and many other disciplines there is a built-in system for critique.  It helps the student to build up their sense of self, to know their strengths and weaknesses.  We do not always have this in teaching.  In our workshop, Greg had people write down one piece of positive and one negative feedback on two sticky notes (yellow for positive, pink for negative) and he asked us to put them on a piece of paper at the front of the room before we headed out on our first break (just over an hour of instruction had occurred).  When we returned we discussed what the anonymous feedback had provided Greg with and what he could actually work on in the moment vs. what was useful for later.  He mentioned doing this, and letting it be anonymous, was a great way to build trust with your students. Also we talked about how to get better at accepting feedback, working with it, not letting it paralyze you or derail your lesson.

One of the key takeaways for me here was the idea that the most senior leader/teacher should model this for others.  Show that you can hear feedback, both good and negative (hopefully constructive), and be able to move forward without crumbling under the pressure.  While I’m nervous about feedback, I will do my best to ‘fake it till I make it’ on this point because it’s definitely more important to correct course and create a better experience for students than to be proud and lose their interest and especially, trust.

Concept Maps

Our next major concept was the concept map.  This is a way to help yourself understand what you are trying to teach. It’s also a way to check yourself for the 7 items +/- 2 factor. If you have more than 5 main concepts in the concept map, it’s time to evaluate it for what can be put aside for now or what can become the next lesson.  The concept map can also be shared with students as a way to make sure everyone is on the same page or at least starting with the same page.  Greg recommended handing out a printout of the concept map so that students could doodle and expand it in ways he might not have thought of.

We learned how the concept map should never be used for grading. It’s mostly a tool for the teacher to know if they have managed to get across the mental model well enough for the novice to reflect back a matching map and feel comfortable moving on to the next concept. It’s also a way of preventing the “blank screen”, where students can be frozen trying to come up with what to put down (in programming or in writing); having a scaffolding there in the form of a map, hints, or any other form of guidance can basically jump-start the student and hold their hand until they need less and less of it to self-start, self-direct, and truly *learn* autonomously.

We did an exercise where we drew up concept maps for how to teach a for loop.  This was my first time doing a concept map and it was hard.  Definitely will take practice and likely some more reading/looking at other concept maps to drive home the concept for myself.

<figure class="wp-caption aligncenter" id="attachment_1542" style="width: 304px;">concept map explaining a for loopThis is an attempt to map out the concepts required to understand a for loop – note we went over 5 items</figure>

Key points from Greg:

  • Make your concept map look ‘cheap’ so that people aren’t afraid to give you honest feedback
  • Write and share maps with each other – try this with your team at work on a project you’re starting – you might see that others have a *very* different sense of what is being attempted
  • Try not to need things in your concept map that you will “explain later” – if you can’t explain it now you’re going to disrupt the ‘flow’ of maximizing the short term memory limits
  • Transfer your map into a list of bullet points as it will help you put the most important concepts first
  • Think of concept mapping like couples dances. You both want to be doing the same dance or there will be a lot of bruised shins :)

Sticky Notes as Invaluable Teaching Tool

We used sticky notes at several points in this workshop.  While we only had two colours today, Greg recommends three colours to be used as follows:

  • Green:  Students can put this up in a visible place when they have completed the exercise currently being done
  • Yellow: Students can put this up when they have a question. This is also a great tool for ensuring more participation in the classroom setting. Some people talk more than others, and there are definitely certain types of people who take up more space, so the deal with the yellow stickies was: you get two; when you ask a question, put one aside. Another question? Put the other aside. Now you have no more questions until EVERYONE in the class has used at least one of their yellow stickies.
  • Red:  Students can pop this up in a visible place when they need help on something.  This is great for two reasons: 1) the student can keep *trying* instead of worrying about holding a hand up and waiting for eye contact with a teacher and 2) the student can request help without drawing too much attention to themselves.  This is great for classes with people who might have learned it’s best not to speak up, ask questions, or draw attention to themselves out of fear and/or shame.

Know Your End Goal

This probably shouldn’t have *blown my mind* but it did.  It’s so obvious yet I’ve never once designed curriculum with this approach. You can bet that’s all changed now.  Here’s the key point:

DESIGN YOUR LESSON BY WRITING THE ‘EXAM’ FIRST

Ya. It’s maybe obvious. You want to make sure the students leave knowing what you intended to teach them? Well, figure out how you’re going to measure that success *first*, then build your lesson up to that. “They understand the for loop” is not enough. Be specific. Have a multiple choice question that tests the output of a for loop and gives 3 plausible answers and one right answer. Use this to check if you are teaching well – their failure to choose the right answer is your failure to teach the concept correctly. This doesn’t have to be for actual grading (unless you want to grade yourself). Think of this like Test Driven Development for curriculum. Teach to the goal. You will develop lessons faster and more efficiently. Your learners will appreciate it. They can tell when they are learning vs. having a lecturer do a brain dump on them that goes nowhere in particular. Backwards design works. Greg’s book plug related to this section: “Seeing Like a State”.

Another tip? Create one or more user profiles for your lesson. In our workshop we created Dawn: a 15 year old girl who is good at science and math, learning programming in a one-day workshop. Then we did an exercise in crafting a question that would confirm whether we had successfully taught her how functions work.

We learned about Allison Elliott Tew‘s work and about “Concept Inventory”, which is a way to use common mistakes in mental modeling to create multiple choice questions where the incorrect answers can help you understand *how* someone has misunderstood the concept you are trying to teach. Multiple choice is great because it gets you an assessment quickly (in teacher grading time).

Peer Instruction

Related to multiple-choice as test of understanding is Peer Instruction.  This is a method that uses a multiple choice question in a really interesting, and engaging fashion.

Developed by Eric Mazur in the 1990s, this method expects students to have done some pre-work on the material before coming to class, so that the entirety of the lesson can be used to compare and correct conceptual maps and understanding of the material. It goes like this (at least in Greg’s interpretation – Wikipedia differs as to how Eric designed it):

  1. Provide a multiple choice question based on the pre-work content.  Ensure 3 plausible answers and one correct
  2. Students select and *commit* to an answer (there is not yet software for this, though there are clickers) – you can also ask people to hold up the number of fingers for their choice and have classroom helpers count
  3. If everyone picks the right answer you can move on; otherwise you ask people to talk in groups with their neighbours to examine each other’s choices and discuss what the correct answer might be and why. This is great for having people explain their mental model/map
  4. Vote again and have students commit to the answer
  5. Instructor reveals the answer, perhaps along with a single sentence explaining why
  6. Groups discuss again, this time they can explore their understanding with the correct answer alongside people who, likely, had the correct model

This teaching technique was proven in 1989 but is still widely unused (especially in MOOCs). Greg told us that he can usually do about 10 of these types of questions in a 1 hour class. We did an example of one in the workshop to test out the method and it was a lively exercise. This was also an opportunity for Greg to help us notice how noise in the room helps a teacher determine when a good time is to check in, continue the lesson, or make sure people aren’t stuck. Active, engaged learning is boisterous and noticeably relaxed. Quiet can mean focus, and then as people complete the exercise you can hear some discussions start up as those who are done talk with each other about it. I look forward to getting a bit of expertise at this level of listening and was impressed by Greg’s skill at reading classroom energy levels.

F*ck It, I’m Outta Here

I have several more pages of notes but it’s getting late and this is a long post. There’s one more part of the workshop that I’d like to write about:  The moment when you decided you didn’t want to learn something anymore.

This is a really great piece of advice for teachers.  Greg started by saying that he used to ask students what motivated them to learn, what great experience in learning they had so he could tap into that motivation as a teacher.  Now?  He asks people what DE-motivated them.  You get a lot out of people this way.  Ask someone (or think of your own experiences): “What was something you were curious about, working on, getting into, and what happened that made you say ‘f*ck it’ and drop it? If you could go back in time what would you change?”.

For my example I spoke about returning to gym class at 12 years of age after recovering for many months from a very physically traumatic incident where I was hit by a car while on my bike (15 bones broken, 6 months in a wheelchair). Being immobilized *and* being a pre-teen caused me to put on a fair amount of weight, and I was no longer very physically active or able. I also had yet-to-be-diagnosed asthma. Not only did I have to endure a gym class where those with natural talents were held up while the rest of us were discarded, but I also continued to fail tremendously at getting more than a “Participation” certificate for the Canada Fitness Test (every other result got a very nice badge!).

My “F*ck it” moment was when I got so frustrated with never getting a badge that I stole someone’s gold badge when no one was watching.  I also ended up eschewing all sports and athletic pursuits for many years if there was any hint of tryouts or actual talent needed.  Years later, at 29, I taught myself how to run by using a couch-to-10K program that did repetitions of running and walking in order to build up endurance.  Not only did I succeed at that but I learned to *love* running and feeling healthier in my body.  If I could go back in time I would become a Physical Education teacher and make sure every kid in my class knew that it’s not about natural talent at anything. It’s about setting achievable goals for yourself and comparing your results against your OWN RESULTS.  Never mind some test, and other kids. We’re all very different but no one should be denied a sense of accomplishment.  It’s what keeps you coming back to learn & build on what you’ve learned.

<figure class="wp-caption aligncenter" id="attachment_1544" style="width: 290px;">Badges awarded to Canada Fitness Test ParticipantsThe coveted badges.</figure>

 

Now Go Read More: Keep Learning How to Teach

It was an amazing day.  I have more notes to transcribe for myself but I think I’ve managed to capture the major concepts I learned today that will all be invaluable in my work on Ascend and beyond. Greg is an experienced, passionate, driven teacher and his enthusiasm for *knowing* what works in education is contagious.  I want to be a better scientist and educator too. The Software Carpentry movement is picking up momentum.  Look for workshops, blog posts, and opportunities to participate in a town near you.   See their site for up to date information and also check out their materials page for additional resources.  I’ve got a few new books to read on the plane home tomorrow.

Planet MozillaNever Walk - A Talk About Entrepreneurship And Running

Part 1 - Roger

2011-10 Startup Week Presentation Never Walk.001.jpg

This is one of the most inspired moments in the history of athletics: Roger Bannister crossing the finish line on 6 May 1954 during a meet between British AAA and Oxford University at Iffley Road Track in Oxford, United Kingdom, where he became the first human to run the mile in less than four minutes. An extraordinary achievement which was, at the time, considered impossible. Seeing the picture of Roger crossing the line gives me goose bumps. Each and every time. This picture evokes so many emotions in me - in a lot of ways it’s the perfect capture of the perfect moment.

But we are getting ahead of ourselves. For now - keep Roger in mind, we will meet him again later.

2011-10 Startup Week Presentation Never Walk.002.jpg

“Reaching the finish line, never walking, enjoying the race. These three, in this order, are my goals.” — Haruki Murakami

This presentation is a story about running, running a business and running through life at large - and how all these things can be treated the same. A story about lessons learned. A story about failures, perseverance, winning and the sheer joy of accomplishment, large and small. And it is a story about why we should never walk in life.

Let’s get ready… toe to the starting line.

Part 2 - A True Story

2011-10 Startup Week Presentation Never Walk.003.jpg

“We embrace pain. Pain is the purifier.” — Runner’s Proverb

2011-10 Startup Week Presentation Never Walk.004.jpg

In 2008 I found myself with pretty severe depression, a condition and feeling which I had never experienced before. I felt helpless. I didn’t know what to do. And I didn’t know how to get out of it.

Over the course of some months I first talked with friends and family and tried to fix it myself. Thought I could figure out what it was, mend it and move on. But it didn’t work.

Eventually, I knew that I needed help. So I searched for help. And found a fantastic therapist. She worked with me through a lot of issues in my past - but more importantly she asked me why I had stopped doing sports years ago, having spent most of my youth engaging in one sport or another. I didn’t know the answer. Life just got in the way.

2011-10 Startup Week Presentation Never Walk.005.jpg

My therapist asked me which sport I enjoyed most. The answer was immediately clear to me - running. Running is primal. It’s hardwired into our brains. Humans are born to run.

So I started running again. I ran for life. For my life.

2011-10 Startup Week Presentation Never Walk.006.jpg

About 10,000 miles later, after endless hours on the roads and trails in every place I have lived in or visited since, running with and without company - I learned something. I learned that the fundamental lessons which running taught me hold true for running a startup. And for running through your life.

They are the essential rules for any entrepreneur. They are the essence of living life. At least if you want to do the impossible - and break your own four minute mile.

Part 3 - Ten Principles

2011-10 Startup Week Presentation Never Walk.007.jpg

“Somebody may beat me, but they are going to have to bleed to do it.” — Steve Prefontaine

2011-10 Startup Week Presentation Never Walk.008.jpg

Train hard. There is no way around it. It’s the foundation. Everything else will depend on it.

When I built my first startup, fresh out of university, I didn’t know anything. I had a huge ego and thought that I knew everything there is to know about building and running a startup. But I didn’t. I went into the race without training. It was ugly. I learned on the fly - which is fine. But I had people relying on me. And they suffered from my level of unpreparedness.

Train hard. If you want to race, you need to pour your heart and soul into the preparation. This is where races are lost and won.

2011-10 Startup Week Presentation Never Walk.009.jpg

Make sacrifices. Emil Zátopek is one of the greatest runners of all time. Emil wasn’t terribly talented or genetically gifted to run. But he made sacrifices. More than anyone else. And he won.

Building a startup requires huge sacrifices. I slept on the floor in my company when I worked through the night. I blew up a long-term relationship. I lost friends as I didn’t have the time to see them anymore. My first startup was a financial disaster. It was a sacrifice which, in the end, made me a better entrepreneur. And my following ventures so much better.

2011-10 Startup Week Presentation Never Walk.010.jpg

Make positive choices. Your life will be full of decision making points. Make sure you choose wisely. Choose the ones which will have a positive impact on you.

I made a choice in my startup which I paid dearly for - against my gut I chose the investor with the better term sheet. I wanted the money. When the company went downhill, it turned ugly. I didn’t make a positive choice - and paid dearly.

2011-10 Startup Week Presentation Never Walk.011.jpg

Seek your potential. I recently read that, unless you are an ultra-elite runner, you always have the ability to run faster. Always. I believe this is true for everything we do. Only very few people tap their whole potential.

Seek out your potential. Figure out what you’re good at and get better at it. Don’t waste time getting mediocre at something you’re bad at. It’s not worth it. I learned so much about myself doing startups, working at big, fast-growing companies and helping other entrepreneurs. I think I know my strengths now - and I am sure I haven’t reached the limits of my potential. Keep pushing. Become Muhammad Ali.

2011-10 Startup Week Presentation Never Walk.012.jpg

Set high goals. Remember Roger? When Roger set out to break the four-minute mile, people believed that the human body would never be able to run that fast. Doctors were of the opinion that the heart would explode if you ran that fast. And despite all this, Roger knew that it was possible - he set his goal that high. And only weeks after he broke the four-minute mark, a handful of other runners broke the same barrier. The barrier was only in their heads.

You can’t change the world if you don’t set out to do so. Be bold. Dream big. Who would have thought that we could put a man on the moon? Or that a little social network for Stanford students could become the largest website on the planet?

2011-10 Startup Week Presentation Never Walk.013.jpg

Relax under pressure. Look closely at Shalane Flanagan’s facial expression in this photo. Shalane is the American record holder for the 3000m. And she is completely relaxed and in the zone while racing.

You can’t perform to the best of your abilities if you are tense. You will annoy the people around you. I know - I was tense when I did my first company. I yelled at people. It wasn’t nice - and it didn’t help. Learn to relax under pressure. Breathe deep - it will help you.

2011-10 Startup Week Presentation Never Walk.014.jpg

Attack pain. Pain is inevitable. You will feel pain. You can choose to let it dominate you, or choose to attack it, ignore it, grind through it. In the end pain is just a neuro-signal. You can will your way through it. Pain is the purifier. Be Arnold.

I can’t count the number of times I came to a point where I just wanted to stop. Wanted to give in to the pain. Or just take a break. In running, in life and in running my businesses. Ignore the feeling. Grind through. It’s just a neuro-signal. If it’s worth it - push on.

2011-10 Startup Week Presentation Never Walk.015.jpg

Push the pace. Go out and don’t hold back. Don’t be the guy who races in the shadow of others and tries to sneak by in the last few meters. Keep on pushing the pace. To this day Steve Prefontaine remains the most courageous runner in the world. He kept pushing the pace. Always.

You chose to start a company. Now do it properly - with every fibre of your body, continuously pushing the pace. Be bold. It’s the only way to succeed as a true leader.

2011-10 Startup Week Presentation Never Walk.016.jpg

Work as a team. From the outset, running looks like a very solitary sport. It is not. Roger had two good friends pace him through the first two laps and the third lap of his four-lap record run. Your team is everything. Without them you are nothing.

Embrace the spirit of the team in your organization. There is no room for anything else - you have to work as one, for a common goal. Even the brilliant Steve Jobs couldn’t make things happen without his team.

2011-10 Startup Week Presentation Never Walk.017.jpg

Run to win. History has it that Pheidippides died after running to Athens to report the Greek victory over Persia in the Battle of Marathon. Treat the marathon with respect. Run to win. Every time.

Don’t get into business if you aren’t in it for the win. And do what it takes to win. Honor Pheidippides. And run like Usain Bolt.

Encore - Two More

2011-10 Startup Week Presentation Never Walk.018.jpg

“I’m going to work so that it’s a pure guts race at the end, and if it is, I am the only one who can win it.” — Steve Prefontaine

2011-10 Startup Week Presentation Never Walk.019.jpg

Defeat the wall. When you run a marathon you will hit the wall. After 21 miles of running your body simply runs out of glycogen and wants to shut down. This is the point where your will is tested most. You push through it. You force carbohydrates into your body although your stomach started cramping up at mile 15. But deep down you always knew it was possible. So you persevere and set one foot in front of the other. Repeat. And repeat.

In every venture, I hit the wall. There always was the day when I didn’t want to get out of bed. Where I just wanted to throw it all down the drain and give up. Persevere. Get dressed, get to work, get going. Force yourself through it. It won’t last. You can defeat the wall.

2011-10 Startup Week Presentation Never Walk.020.jpg

Relentless Focus & Boring Consistency. Running is all about spending hours and hours doing the same thing - running. You need to have laser-sharp focus and be consistent. There is no way around it.

In your company there is nothing more important than making the main thing the main thing and then executing on it. It’s not flashy & glamorous - but it is how you will get to your goal. I ignored this piece of advice in my first company. I kept chasing the next new thing. And failed.

Again

2011-10 Startup Week Presentation Never Walk.021.jpg

  1. Train Hard
  2. Make Sacrifices
  3. Make Positive Choices
  4. Seek Your Potential
  5. Set High Goals
  6. Relax Under Pressure
  7. Attack Pain
  8. Push The Pace
  9. Work As A Team
  10. Run To Win
  11. Defeat The Wall
  12. Relentless Focus & Boring Consistency

One Rule

2011-10 Startup Week Presentation Never Walk.022.jpg

“The man who can drive himself further once the effort gets painful is the man who will win.” — Roger Bannister

2011-10 Startup Week Presentation Never Walk.023.jpg

The first rule is actually the first and second rule of everything you do.

2011-10 Startup Week Presentation Never Walk.024.jpg

If you don’t have a big, fat grin on your face when you run, don’t do it. Have fun while you’re out there. It is your race.

Remember

2011-10 Startup Week Presentation Never Walk.025.jpg

“Nach dem Spiel ist vor dem Spiel. – After the game is before the game.” — Sepp Herberger

2011-10 Startup Week Presentation Never Walk.026.jpg

“The only good race pace is suicide pace, and today looks like a good day to die.” — Steve Prefontaine

NEVER WALK.

Planet Mozillabrowser-chrome is greener and in many chunks

On Friday we rolled out a big change to split up our browser-chrome tests. It started out as a great idea to split the devtools out into their own suite; then, after testing, we ended up chunking the remaining browser-chrome tests into 3 chunks.

No more 200 minute wait times; in fact, we are probably running too many chunks now. A lot of heavy lifting took place, much of it in releng from Armen and Ben, and much work from Gavin and RyanVM, who pushed hard and proposed great ideas to see this through.

What is next?

There are a few more test cases to fix, and all these changes still need to land on Aurora. We also have more (lower priority) work we want to do on running the tests differently to help isolate issues where one test affects another test.

In the next few weeks I want to put together a list of projects and bugs that we can work on to make our tests more useful and reliable.  Stay tuned!

 


Planet MozillaWeird network problem - help!

I know that strictly speaking I'm posting this to Planet Mozilla, and it's about Chrome/Chromium, but someone here will be able to point me in the right direction.

I'm having odd trouble with Chrome establishing an SSL connection to my webserver. Not only does it not connect, it cuts off any communication to the server for 5 minutes.

Steps to reproduce:

  1. Ping darktrojan.net. It resolves to 64.13.238.140 and responds as you'd expect.
  2. Visit https://www.darktrojan.net/ in Chrome. It gives a cert error in Firefox, but Chrome fails to connect at all.
  3. Ping darktrojan.net again. No response.

This issue has appeared in the last few days - right when I need it to be working most - which suggests it's a (recently released) Chrome 34 problem, except that I can reproduce it in Chromium 33. I don't use either on a regular basis, so I don't know if that has anything to do with anything. I also wonder if it's something to do with Heartbleed, but my webhost has said the site was never vulnerable, so I assume nothing's changed there.

Please email or tweet at me if you have any idea what's going on. I'm tearing my hair out here.

Planet MozillaNotes from UX Immersion Mobile Conference 2014

Last week I was in Denver for a three day conference put on by User Interface Engineering. I met lots of great people, and the workshops and talks were fantastic. Would highly recommend to anyone looking for a good UX conference to attend.

http://uxim14.uie.com/

Brad Frost


We don’t know what will be under the Christmas tree in two years, but that is what we need to design for.
Principles of Adaptive Design
  • Ubiquity
  • Flexibility
  • Performance
  • Enhancement
  • Future Friendly
Tools
Atomic Design
Break down design elements into reusable components of a system:
  • Atoms
  • Molecules
  • Organisms
  • Templates
  • Pages


More details on Atomic Design here: http://bradfrostweb.com/blog/post/atomic-web-design/

 


 

Ben Callahan


Dissecting Design

Part 1: Establish the Aesthetic

Use tools you are comfortable with to establish the aesthetic

 

Part 2: Solve the Problem
  • Static design tools (photoshop, etc)
  • Responsive design tools
  • html/css

You best solve problems using tools you are fluent with

 

Part 3: Refine the Solution
  • Static tools
  • Instead of static design hand-offs, consider design pairing: one engineer, one designer, working together side by side.

Efficiency is key with refining a design solution

 

Group improvisation


The fact is, there is no one way to design for screens. Every project is different. Every team is different. It’s interesting to look at it as a form of group improvisation, where everyone is contributing in the way that makes this particular project work.

“Group improvisation is a challenge. Aside from the weighty technical problem of collective coherent thinking, there is the very human, even social need for sympathy from all members to bend for the common result.”

Group Improvisation requires individuals on a team to be…

  • fluent
  • humble
  • empathetic

 

Ben’s Theory on Web Process

Create guidelines instead of rigid processes. “The amount of process required is inversely proportional to the skill, humility, and empathy of your team.”

More details on Dissecting Design here: http://seesparkbox.com/foundry/dissecting_design


 

Luke Wroblewski


Mobile Growth

Mobile shopping in US

  • 2011: 14%
  • 2012: 30%
  • 2013: 50%

Paypal mobile payments

  • 2010: $750M
  • 2011: $4B
  • 2012: $14B
  • 2013: $27B

Mobile revenue

  • Yelp: 40%
  • Facebook: 53%
  • Twitter: 75%


We’ve only had about 6 years to figure out mobile design, vs 30 years of figuring out PCs. We have lots to learn. And more importantly, lots to unlearn.

On the hamburger menu
  • Tests showed that a button that reads “MENU” was selected 20% more than when a hamburger icon was used
  • Interesting Polar Mobile case study, where hiding content under a menu vs using a segmented control showed an instant and major drop off in usage as soon as they changed it
  • Measure measure measure
On the importance of good inputs
  • Airport wifi login – 23 steps on mobile to pay money to get online
  • Designers talked to Luke about how they cut it down to 19.
  • Luke’s response – I have an idea that uses *4* inputs.
  • Hotel Tonight — Using a signature gesture to solve the baby booking the hotel room problem. So good.
  • Booking a hotel happens in 3 taps and a swipe, giving them a competitive advantage


On Startups
  • Release – As quick as you can
  • Refine – by observing real use
  • Repeat – design is never done
Idea: Preemptive customer service

They were watching the user logs, and when they saw bugs they fixed them before users complained, and then reached out to let them know they had fixed something. User feedback was 100% positive. Brilliant.


 


 

 

Jared Spool

Designing Designers

Job interview test
  • Present candidate with a messy sketch of a web form
  • A good designer cleans it up
  • A better designer simplifies
  • An even better one asks why we need this info at all

Side comment about unintentional design: What happens when you spend time working with everything in the system *except* the user’s experience

The need for design talent is growing, massively. How do we staff it?

IBM is investing $100M to expand its design business, with 1000+ UX designers going to IBM. This means all the big corporations are going to start hiring UX like crazy. How do we as the design community even staff that? Especially since, today, all design unicorns are self-taught.

How to become a design unicorn <3
  1. Train yourself
  2. Practice your skills
  3. Deconstruct as many designs as you can
  4. Seek out feedback (and listen to it)
  5. Teach others

 

It doesn’t happen like this in school, though.
  • Schools have too many constraints
  • Out of date (3yr accreditation process)
  • There aren’t enough schools to keep up with the new jobs in demand
  • Schools don’t go deep enough
  • The semester / class based school system can’t support the kind of learning designers need to do to develop their skills

Tying the education problem back to Unintentional Design. We focused so much on the system that we forgot what we were actually trying to do.

Changes to education system?

What if design school were more like medical education, which combines theory and craft? That is, the progression of pre-med, medical school, internships, residencies, and finally fellowships.

Changes to our workplace?
  • We are the future managers of this next wave. What can we do?
  • Building a culture of learning
  • Integrating *practice* into our routines (critiques, sketching, what else?)
  • Apply our design skills to design learning

Jared is exploring this idea with The Center Centre — formerly known as the Unicorn Institute


 

Nate Schutta

JQuery Mobile Prototyping Workshop

“If a picture is worth a thousand words, how many meetings is a prototype worth?”

Useful links:


Planet MozillaWhat does the fox (Firefox OS) say in Chile?




Planet MozillaAuthentication with Persona and MySQL in an Express application

Since its beginning I liked Persona (also known as BrowserID), because it:

  • technically supports a more decentralised Internet
  • makes authentication easier for users

Shame on me that I only found time to play with it a few weeks ago. As a proof of concept, I prepared an Express application that connects to MySQL so I could have a better understanding of how this authentication system actually works in practice (from a developer's point of view).

You can find the code here: Express Persona MySQL Example.

The application is essentially based on the Express Persona authentication module, but it separates the client part from the server side and adds a MySQL layer. So, instead of NodeJS Express for the server side, we could also use any other language, let's say Perl Mojolicious, while continuing to use the same code for the client webapp.

An example MySQL dump and an Apache virtual host configuration are provided as well (the latter for proxying requests from the client to the server and for ensuring the 'same origin policy' is respected). We must not forget that Persona takes care only of authentication, so account creation must be handled separately.

One thing that can help when designing an application/service is knowing that custom Persona URLs can also be used. For instance, in the client code: /login/persona/verify is forwarded to http://localhost:4646/persona/verify (via Apache proxy) and this latter URL can also be further customised thanks to the Express-persona module (verifyPath optional parameter).
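
To make that concrete, here is a minimal sketch of what the server side could look like, assuming Express 3-style middleware and the express-persona defaults; the session secret is a placeholder, and the port simply reuses the localhost:4646 origin mentioned above:

var express = require("express"),
    persona = require("express-persona");

var app = express();
app.use(express.bodyParser());   // express-persona expects a parsed JSON body
app.use(express.cookieParser());
app.use(express.session({ secret: "change-me" })); // placeholder secret

persona(app, {
  audience: "http://localhost:4646", // must match the origin the browser sees
  verifyPath: "/persona/verify"      // the customisable URL mentioned above
});

// After a successful verify, express-persona stores the verified address in
// req.session.email; the MySQL account lookup can hang off that value.
app.listen(4646);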

On the other hand, as a reference, the magic at the client side is done by navigator.id.watch.
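
And a sketch of that client-side wiring; the currentUser variable and the logging are illustrative assumptions, while the /login/persona/verify URL and the {status: "okay", email: ...} response shape come from the setup described above:

navigator.id.watch({
  loggedInUser: currentUser, // email your app believes is signed in, or null (assumed variable)
  onlogin: function(assertion) {
    // POST the assertion to the proxied verify URL for server-side validation.
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/login/persona/verify", true);
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.addEventListener("loadend", function() {
      var data = JSON.parse(this.responseText);
      if (data.status === "okay") {
        console.log(data.email + " is now logged in");
      }
    }, false);
    xhr.send(JSON.stringify({ assertion: assertion }));
  },
  onlogout: function() {
    // e.g. POST to the logout URL and refresh the UI.
  }
});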

In the slides below Alina details a bit more (in Spanish) about Persona and how to deploy the code I describe:

Hope this helps to get more people to try Persona!

Planet MozillaCOPYFILE_DISABLE and python distutils in python 2.6

My friend and colleague Jannis (aka jezdez) Leidel saved my bacon today where I had gotten completely stuck.

So, I have this python2.6 virtualenv and whenever I ran python setup.py sdist upload it would upload a really nasty tarball to PyPI. What would happen is that when people did pip install premailer it would fail horribly and look something like this:

...
IOError: [Errno 2] No such file or directory: '/path/to/virtual-env/build/premailer/setup.py'

What?!?! If you download the tarball and unpack it you'll see that there definitely is a setup.py file in there.

Anyway. What happened, which I didn't realize at first, was that within the .tar.gz file there were these strange copies of files. For example, for every file.py there was a ._file.py, etc.

Here's what the file looked like after a tarball had been created:

(premailer26)peterbe@mpb:~/dev/PYTHON/premailer (master)$ tar -zvtf dist/premailer-2.0.2.tar.gz
-rwxr-xr-x  0 peterbe staff     311 Apr 11 15:51 ./._premailer-2.0.2
drwxr-xr-x  0 peterbe staff       0 Apr 11 15:51 premailer-2.0.2/
-rw-r--r--  0 peterbe staff     280 Mar 28 10:13 premailer-2.0.2/._LICENSE
-rw-r--r--  0 peterbe staff    1517 Mar 28 10:13 premailer-2.0.2/LICENSE
-rw-r--r--  0 peterbe staff     280 Apr  9 21:10 premailer-2.0.2/._MANIFEST.in
-rw-r--r--  0 peterbe staff      34 Apr  9 21:10 premailer-2.0.2/MANIFEST.in
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/._PKG-INFO
-rw-r--r--  0 peterbe staff    7226 Apr 11 15:51 premailer-2.0.2/PKG-INFO
-rwxr-xr-x  0 peterbe staff     311 Apr 11 15:51 premailer-2.0.2/._premailer
drwxr-xr-x  0 peterbe staff       0 Apr 11 15:51 premailer-2.0.2/premailer/
-rwxr-xr-x  0 peterbe staff     311 Apr 11 15:51 premailer-2.0.2/._premailer.egg-info
drwxr-xr-x  0 peterbe staff       0 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/
-rw-r--r--  0 peterbe staff     280 Mar 28 10:13 premailer-2.0.2/._README.md
-rw-r--r--  0 peterbe staff    5185 Mar 28 10:13 premailer-2.0.2/README.md
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/._setup.cfg
-rw-r--r--  0 peterbe staff      59 Apr 11 15:51 premailer-2.0.2/setup.cfg
-rw-r--r--  0 peterbe staff     280 Apr  9 21:09 premailer-2.0.2/._setup.py
-rw-r--r--  0 peterbe staff    2079 Apr  9 21:09 premailer-2.0.2/setup.py
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/._dependency_links.txt
-rw-r--r--  0 peterbe staff       1 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/dependency_links.txt
-rw-r--r--  0 peterbe staff     280 Apr  9 21:04 premailer-2.0.2/premailer.egg-info/._not-zip-safe
-rw-r--r--  0 peterbe staff       1 Apr  9 21:04 premailer-2.0.2/premailer.egg-info/not-zip-safe
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/._PKG-INFO
-rw-r--r--  0 peterbe staff    7226 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/PKG-INFO
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/._requires.txt
-rw-r--r--  0 peterbe staff      23 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/requires.txt
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/._SOURCES.txt
-rw-r--r--  0 peterbe staff     329 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/SOURCES.txt
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/._top_level.txt
-rw-r--r--  0 peterbe staff      10 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/top_level.txt
-rw-r--r--  0 peterbe staff     280 Apr  9 21:21 premailer-2.0.2/premailer/.___init__.py
-rw-r--r--  0 peterbe staff      66 Apr  9 21:21 premailer-2.0.2/premailer/__init__.py
-rw-r--r--  0 peterbe staff     280 Apr  9 09:23 premailer-2.0.2/premailer/.___main__.py
-rw-r--r--  0 peterbe staff    3315 Apr  9 09:23 premailer-2.0.2/premailer/__main__.py
-rw-r--r--  0 peterbe staff     280 Apr  8 16:22 premailer-2.0.2/premailer/._premailer.py
-rw-r--r--  0 peterbe staff   15368 Apr  8 16:22 premailer-2.0.2/premailer/premailer.py
-rw-r--r--  0 peterbe staff     280 Apr  8 16:22 premailer-2.0.2/premailer/._test_premailer.py
-rw-r--r--  0 peterbe staff   37184 Apr  8 16:22 premailer-2.0.2/premailer/test_premailer.py

Strangely, this only happened in a Python 2.6 environment. The problem went away when I created a brand new Python 2.7 environment with the latest setuptools.

So basically, the fault lies with OSX and a strange interaction between OSX and tar.
This superuser.com answer does a much better job explaining this "flaw".

So, the solution to the problem is to create the distribution like this instead:

$ COPYFILE_DISABLE=true python setup.py sdist

If you do that, you get a healthy-looking tarball that actually works with pip install. Thanks jezdez for pointing that out!

Planet MozillaFirefox 27 Bug Statistics

I’m writing today to present the bug statistics for Firefox 27. My apologies for the tardiness of this blog post; too many things have got in my way recently. I try to get these posts out at the end of life of the respective Firefox version as that allows me to present the statistics across the entire life-cycle of a Firefox version. For Firefox 27, this should have coincided with Firefox 28′s release a few weeks ago. Again, my apologies for getting this out later than usual.

The first story I want to tell is about the high-level breakdown of all tracked bugs in this release. As you can see below, there was a marked drop in the total bug volume in Firefox 27. Perhaps unsurprisingly, this allowed us to focus a bit more, which resulted in a smaller number of unresolved and unconfirmed bugs being shipped in this release. The numbers are still much higher than we would like, but it is a small victory for the overall quality of Firefox if these numbers continue to trend downward.

Firefox27_TotalBugs

The second story I want to tell is about the percentage of incoming bugs confirmed. This is typically an indication of the effectiveness of our incoming bug triage practices. As the volume of incoming bugs decreases we like to see the number of confirmed bugs increase. Unfortunately we have been trending the opposite direction for some time. Previously I had attributed this to the ever increasing volume of bugs but I can no longer rely on this excuse. Looking forward to Firefox 28 I can say that we’ve made remarkable improvement in this area in an effort to reverse this trend. I’ll share more on that in a few weeks.

Firefox27_Confirmed

The third story I’d like to share is that of when fixes landed for Firefox 27. In the following chart I’ve plotted the average time-line for the past few releases along with Firefox 27′s time-line. In general we expect to see an ever increasing curve through the Nightly cycle, trailing off as we proceed through Aurora and Beta, with spikes in the first half of these cycles.

Firefox 27 appeared to be trending higher than average as we approached the end of each cycle. While these numbers are not completely out of control, they do put a bit of extra strain on QA. After all, the later a fix lands, the less time we have to test it. Ultimately this creates risk to the quality of the product we ship, but as long as we recognize that, we can try to plan for it accordingly.

Firefox27_Fixes-by-Date

The fourth story I want to tell is about the number of bugs reopened. We typically reopen a bug when something is fundamentally flawed with the initial implementation and/or if a patch needs to be backed out. Even in cases where a regression is found, we tend to leave the bug closed and deal with the regression in its own bug report. As such, a high volume of bugs being reopened is usually indicative of a release that saw much churn and may point to quality issues in release.

Unfortunately Firefox 27 continues the story of many of the versions before it and represents a marginal increase in the number of bugs reopened. Of course, the other side of this story may be that testing was more effective. It’s hard to say concretely just looking at the bug numbers.

Firefox27_Reopened

The fifth story I want to tell is one of stability. The following chart shows the number of topcrash bugs reported against Firefox 27 as compared to previous releases. For those unaware, topcrash bugs are those crashes which show up most frequently in the wild and present the greatest risk to quality and security for our users. The unfortunate story for Firefox 27 is that we’ve seen an end to the downward trend that started with Firefox 25 and continued with Firefox 26. The volume of topcrashes puts Firefox 27 in the same ballpark as the rash of point-releases we saw in Firefox’s teens.

Of course there’s two sides to every story. The other side of this may very well be that we got better at reporting stability issues and that resulted in a higher volume of known bugs. It’s hard to say for sure.

Firefox27_Topcrashes

The final story I want to tell today is about the percentage of regressions reported post-release. As we hone our processes, bring on more engineers, and get assistance from more contributors, we’ve been getting better at finding and fixing regressions. It’s inevitable that more code landing in a release increases the potential for regression. Naturally this leads to an increase in the total number of regressions reported. Firefox 27 was no different, so I thought I’d look at regressions a little differently this time around.

The following chart shows the ratio of regressions reported before release to regressions reported after release. A release with a high volume of post-release regressions is a failure from a QA perspective because it means many bugs slipped through our fingers. I wouldn’t expect the number of post-release regressions to ever be 0, but we need to strive to always be better.

Firefox 27 represents a huge victory on this front. We saw a huge drop in the number of Firefox 27 regressions reported post-release. For months we’ve sought to improve our triage processes, engage more with developers, and work harder to involve volunteers in our day to day efforts. It’s nice to see these efforts finally paying off.

Firefox27_Regressions

That’s Firefox 27, in a nutshell, from a QA perspective. I think it’s useful to be able to reflect on the bug numbers and see what kind of an impact our efforts are having on the product. I really do enjoy visualizing the data and talking about our “victories”, but it’s just as interesting seeing what the data is telling us about where we may have failed. I believe that learning from failures has far more impact than building on successes and acts as a great motivator. What we want to avoid is those crippling failures. I think Firefox 27 is a nice iterative step forward.

Planet MozillaHosting your JavaScript library builds for Bower

A while ago I blogged about the troubles of hosting a pre-built distribution of vtt.js for Bower. The issue was that there is a build step we have to do to get a distributable file that Bower can use, so we couldn't just point Bower at our repo and be done with it, as we weren't checking the builds in. I decided on hosting these builds in a separate repo instead of checking them into the main repo. However, this got troublesome after a while (as you might imagine) since I was building and committing the Bower updates manually instead of writing a script like I should have. It might be a good thing that I didn't end up automating it with a script, since we decided to switch to hosting the builds in the same repo as the source code.

The way I ended up solving this was to build a grunt task that utilizes a number of other tasks to build and commit the files while bumping our library version. This way we're not checking in new dist files with every little change to the code; such files wouldn't even be available through Bower or node because they're not attached to a particular version. We only need to build and check in the dist files when we're ready to make a new release.

I called this grunt task release and it utilizes the grunt-contrib-concat, grunt-contrib-uglify, and grunt-bump modules.

  grunt.registerTask( "build", [ "uglify:dist", "concat:dist" ] );

  grunt.registerTask( "stage-dist", "Stage dist files.", function() {
    exec( "git add dist/*", this.async() );
  });

  grunt.registerTask("release", "Build the distributables and bump the version.", function(arg) {
    grunt.task.run( "build", "stage-dist", "bump:" + arg );
  });

I've also separated out builds into dev builds and dist builds. This way, in the normal course of development, we don't build dist files which are tracked by git and then have to worry about not committing those changes, which would otherwise be the case because our test suite needs to build the library in order to test it.

  grunt.registerTask( "build", [ "uglify:dist", "concat:dist" ] );
  grunt.registerTask( "dev-build", [ "uglify:dev", "concat:dev" ] );
  grunt.registerTask( "default", [ "jshint", "dev-build" ] );
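
For context, a minimal Gruntfile wiring up dev and dist targets like the ones referenced above might look something like the sketch below. The file paths, targets, and options here are guesses for illustration, not vtt.js's actual configuration.

  module.exports = function( grunt ) {
    grunt.initConfig({
      // Assumed source/output paths -- illustrative only.
      concat: {
        dist: { src: [ "lib/*.js" ], dest: "dist/vtt.js" },
        dev:  { src: [ "lib/*.js" ], dest: "build/vtt.js" }
      },
      uglify: {
        dist: { files: { "dist/vtt.min.js": [ "lib/*.js" ] } },
        dev:  { files: { "build/vtt.min.js": [ "lib/*.js" ] } }
      },
      // grunt-bump rewrites the version field in these files and
      // commits the change (dist files get committed via commitFiles).
      bump: {
        options: { files: [ "package.json", "bower.json" ], commitFiles: [ "-a" ] }
      }
    });

    grunt.loadNpmTasks( "grunt-contrib-concat" );
    grunt.loadNpmTasks( "grunt-contrib-uglify" );
    grunt.loadNpmTasks( "grunt-bump" );
  };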

Then, when we're ready to make a new release with a new dist, we just run:

  grunt release:patch // or release:major / release:minor if we want to

Planet Mozilla: Writing for Webmaker’s new “Explore” page

[Slide: mockup of the new Explore page]

What should this copy say?

We’re shipping a new “explore” page for Webmaker. The goal: help users get their feet wet, quickly grokking what they can do on Webmaker.org. Plus: make it easy to browse through the list of skills in the Web Literacy Standard, finding resources and teaching kits for each.

It’s like an interactive text book for teaching web literacy.

The main writing challenge: what should the top panel say? The main headline and two blurbs that follow.

In my mind, this section should try to do three things:

  1. State what this is. And why you care.
  2. Tell a story about the list of skills at left. When you hit this page, you see a list of rainbow-coloured words that can be confusing or random if you’re here for the first time. “Sharing. Collaborating. Community Participation…. Hmmm…. what does that all actually mean?”
  3. Focus on what users can do here. What does exploring those things do for you? What’s the action or value?

[Slide: draft copy for the Explore page’s top panel]

First draft

Here’s a start:

Teach the web with Webmaker

Explore creative ways to teach digital skills…
through fun making and sharing, backed by the global Mozilla community’s Web Literacy Standard.

Free. Open source. Fun.

Each skill has free resources and teaching kits anyone can use to teach others –
to help create a more web literate world.

Next steps

Planet Mozilla: This week in Mozilla RelEng – April 11th, 2014

Major highlights:

Completed work (resolution is ‘FIXED’):

In progress work (unresolved and not assigned to nobody):

Planet Mozilla: Technology Trends (April 2014)

Earlier this month I was asked to present my thoughts and observations on “Technology Trends” in front of a group of Dutch business leaders. A lot of my thinking these days circles around the notion of “exponential growth” and the disruptive forces which come with this (full credit goes to Singularity University for putting these ideas into my head) and the notion of “ambient/ubiquitous computing” (full credit to my former colleague and friend Allen Wirfs-Brock).

In summary, I believe we are truly in the midst of a new era, with fundamental changes coming at us at an ever-increasing pace.

Here’s my deck – it mostly works standalone.

[Image: Vodafone-WIP.jpg]

Planet Mozilla: Changes Firefox 29 beta6 to beta7

This beta is a bit bigger than beta6. It fixes some UI bugs, two bugs in the Gamepad API, and some top crash bugs like bug 976536 or bug 987248.

  • 32 changesets
  • 50 files changed
  • 1414 insertions
  • 522 deletions

Extension  Occurrences
cpp        21
js         10
h          5
css        4
xul        1
mn         1
mk         1
json       1
jsm        1
java       1
ini        1
in         1
build      1

Module   Occurrences
browser  16
js       8
image    5
layout   4
gfx      4
mobile   3
toolkit  2
xpfe     1
widget   1
view     1
netwerk  1
media    1
hal      1
content  1

List of changesets:

Nick Alexander: Bug 967022 - Fix Gingerbread progressbar animation bustage. r=rnewman, a=sylvestre - 26f9d2df24af
Neil Deakin: Bug 972566, when a window is resized, panels should be repositioned after the view reflow rather than within the webshell listener, r=tn, a=lsblakk. - 1a92004a684f
Mike Conley: Bug 989289 - Forcibly set the 'mode' attribute to 'icons' on toolbar construction. r=jaws, a=sledru. - 85d2c5b844bc
Gijs Kruitbosch: Bug 988191 - change to WCAG algorithm for titlebar font, r=jaws, a=sledru. - 5e0b16fe8951
Mike de Boer: [Australis] Bug 986324: small refactor of urlbar and search field styles. r=dao, a=sledru. - 274d760590d5
Mike Conley: Backed out changeset 9fc38ffaff75 (Bug 986920) - a90a4219b520
Mike Conley: Bug 989761 - Make sure background tabs have the right z-index in relation to the classic theme fog. r=dao, a=sledru. - 552251cb84b9
Mike Conley: Bug 984455 - Bookmarks menu and toolbar context menus can be broken after underflowing from nav-bar chevron. r=mak,mdeboer,Gijs. a=sledru. - 3f2d6f68c415
Jan de Mooij: Bug 986678 - Fix type check in TryAddTypeBarrierForWrite. r=bhackett, a=abillings - c19e0e0a8535
Jon Coppeard: Bug 986843 - Don't sweep empty zones if they contain marked compartments. r=terrence, a=sledru - ed9793adc2c7
Douglas Crosher: Bug 919592 - Ionmonkey (ARM): Guard against branches being out of range and bail out of compilation if so. r=mjrosenb, a=sledru - 7be150811dd8
Richard Marti: Bug 967674 - Port new Fxa sync options work to in-content prefs. r=markh, a=sledru - c8bcfc32f855
Till Schneidereit: Bug 976536 - Don't relazify inlined functions. r=jandem, a=sledru - ee6aea5824b7
Ted Mielczarek: Bug 980876 - Be smarter about sending gamepad updates from the background thread. r=smaug, a=sledru - 7ccc27d5c8f4
Ted Mielczarek: Bug 980876 - Null check GamepadService in case of events still in play during shutdown. r=smaug, a=sledru - 30c45853f8cb
Bobby Holley: Bug 913138 - Release nsLayoutStatics when the layout module is unloaded. r=bsmedberg - 64fcbdc63ed7
Bobby Holley: Bug 913138 - Shut down gfx at the end of layout shutdown. r=bsmedberg - 6899f7b4f57c
Bobby Holley: Bug 913138 - Move imgLoader singleton management out of nsContentUtils. r=bsmedberg - 58786efcdbbb
Bobby Holley: Bug 913138 - Shut down imagelib at the end of layout shutdown. r=bsmedberg a=sylvestre - 968f7b3ff551
Nick Alexander: Bug 988437 - Part 1: Allow unpickling across Android Account types; bump pickle version. r=rnewman, a=sylvestre - 5dfea367b8b9
Nick Alexander: Bug 988437 - Part 2: Make Firefox Account Android Account type unique per package. r=rnewman, a=sylvestre - 47c8852fde22
Matthew Noorenberghe: Bug 972684 - Don't use about:home in browser_findbar.js since it leads to intermittent failures and isn't necessary for the test. r=mikedeboer, a=test-only - b39c5ca49785
Edwin Flores: Bug 812881 - Ensure OMX plugins instantiate only one OMXClient instance. r=sotaro, a=sledru - 14b8222e1a24
Nicholas Hurley: Bug 987248 - Prevent divide-by-zero in seer. r=mcmanus, a=sledru - afdcb5d5d7cc
Tim Chien: Bug 963590 - [Mac] Make sure lightweight themes don't affect fullscreen toolbar height/position. r=MattN, a=sledru - 2d58340206f4
Gijs Kruitbosch: Bug 979653 - Fix dir attribute checks for url field in rtl mode. r=ehsan, a=sledru - 44a94313968a
Jeff Gilbert: Bug 963962 - Fix use of CreateDrawTargetForData in CanvasLayerD3D9/10. r=Bas, a=sledru - 635f912b3164
Gijs Kruitbosch: Backed out changeset 85d2c5b844bc (Bug 989289) because we realized it'd break add-on toolbars, a=backout - 1244d500650c
Blair McBride: Bug 987492 - CustomizableUI.jsm should provide convenience APIs around windows, r=gijs,mconley, a=sledru. - 9c70e4856b3f
Mike de Boer: Bug 990533: use correct toolbar icon for the Home button when placed on the Bookmarks toolbar. r=mak, a=sledru. - 2948b8b5d51d
Mike de Boer: Bug 993265: preserve bookmark folder icons on the Bookmarks toolbar. r=mak, a=sledru. - 32d5b6ea4a64
Matt Woodrow: Bug 988862 - Treat DIRECT2D render mode as GDI when drawing directly to the window through BasicLayers. r=jrmuizel, a=sledru. - f5622633b23f

r= means reviewed by
a= means uplift approved by

Previous changelogs:

Planet Mozilla: What have I been working on? (2014/03)

So it’s April the 11th already and here I am writing about what I did in March. Oh well!

I spent a bunch of time gathering and discussing requirements/feedback for AppManager v2, which involved

  1. thinking about the new ideas we sketched while at the Portland work week in February
  2. thinking about which AppManager questions to ask my team and the Partner Engineering team when we all were at Mountain View – because you can’t show up at a meeting without a set of questions ready to be asked
  3. then summarising the feedback and transmitting it to Darrin, our UX guy, who couldn’t be at the meeting in Mountain View
  4. then discussing the new questions with Paul & team, who are going to implement it

And we were at Mountain View for the quarterly Apps meeting, when those of us in the Apps+Marketplace section of Mozilla get to talk apps and strategy and stuff for two days or more. It’s always funny when you meet UK-based workmates at another office and realise you’ve never spoken before, or you have, but didn’t associate their faces with their IRC nicknames.

It was also the last of the meetings held at the already ex-Mozilla office in Castro Street, so it was sort of sad and bitter to leave Ten Forward (the big meeting room where most of the meetings and announcements have been happening) for the very last time. Mozilla HQ is now somewhere else in Mountain View, but I’ll always remember the Castro St. office with a smile because that’s where my first week at Mozilla happened :-)

But before I flew to San Francisco I attended GinJS, which I had been wanting to attend for ages and couldn’t (because I’m never in town when it happens). I hadn’t even planned to go to this one, but some folks from Telefonica Digital were going and sort of convinced me to attend too. It was funny to sign up for the meeting while walking down Old Street on the way to the pub where GinJS was held. That’s what the mobile internet was designed for! Then at GinJS I met a number of cool people – some I had spoken to before, some I hadn’t. I recommend you attend it, if you can make it :-)

I was also on the Components panel in EdgeConf. I still haven’t written about the experience and the aftermath of the conference because I basically fell ill at the end of it, and was very busy after that, but I’ll do it. I promise!

I also attended the first instalment of TRIBE, a sort of internal personal development program that is run at Mozilla. The first unwritten rule of TRIBE is you don’t talk about TRIBE… nah, I’m just joking! The first session is about “becoming aware of yourself”, and it was quite interesting to observe myself and my reactions in a conscious way rather than in the totally reactive, subconscious-led mode we tend to operate under most of the time. It was also interesting to speak to other attendees and see things from their point of view. I know it sounds tacky but it has led me to consider treating others more compassionately, or at least to try to empathise more rather than instantly judge. This kind of seminar should be mandatory, whether you work in open source or not.

This session was held in Paris, so that gave me the chance to try and find the best croissant place in the morning and have a look at some nice views in the evening when we finished. Also, I went through the most amazingly thrilling and mindblowing-scary experience in a long time: a taxi ride to the airport during rush hour. I thought we were going to die on each of the multiple and violent street turns we took. I saw the Eiffel Tower in the distance and thought “Goodbye Paris, goodbye life” as we sped past a bus, just a few centimeters apart. Or as a bike almost ran over the taxi (and the opposite, too). I’m pretty sure I left a mark on the floor of the taxi as my foot involuntarily tried to brake all the time. And I thought that traffic in Rome was crazy… hah!

In between all this I managed to update a bunch of the existing Mortar templates, help improve some Brick components, publish an article in Mozilla Hacks, and give a ton of feedback on miscellaneous things (code, sites, peers, potential screencasts, conference talks, articles). Did I say a ton? Make it a ton and a half. Oh, and I also interviewed another potential intern. I’m starting to enjoy that – I wonder if it’s bad!

And I took a week of holidays.

I initially planned on hacking with WebMIDI and a KORG nanoKEY2, but my brain wasn’t willing to collaborate, so I just accepted that fact, and tried not to think much about it. The weather has been really warm and sunny so far so I’ve been hiking and staying outside as much as possible, and that’s been good after spending so much time indoors (or in planes).


Planet Mozilla: Fast arrow functions in Firefox 31

Last week I spent some time optimizing ES6 arrow functions. Arrow functions allow you to write function expressions like this:

a.map(s => s.length);

Instead of the much more verbose:

a.map(function(s){ return s.length });

Arrow functions are not just syntactic sugar though, they also bind their this-value lexically. This means that, unlike normal functions, arrow functions use the same this-value as the script in which they are defined. See the documentation for more info.
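
For instance, here is a quick sketch of the difference the lexical this-value makes:

var counter = {
    count: 0,
    start: function() {
        // The arrow function uses the same `this` as start(), so
        // `this.count` refers to counter.count.
        setInterval(() => this.count++, 1000);

        // A normal function expression would get its own `this`
        // (undefined in strict mode, or the global object), so
        // `this.count` would not touch counter.count.
    }
};
counter.start();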

Firefox has had support for arrow functions since Firefox 22, but they used to be slower than normal functions for two reasons:

  1. Bound functions: SpiderMonkey used to do the equivalent of |arrow.bind(this)| whenever it evaluated an arrow expression. This made arrow functions slower than normal functions because calls to bound functions are currently not optimized or inlined in the JITs. It also used more memory because we’d allocate two function objects instead of one for arrow expressions.
    In bug 989204 I changed this so that we treat arrow functions exactly like normal function expressions, except that we also store the lexical this-value in an extended function slot. Then, whenever this is used inside the arrow function, we get it from the function’s extended slot. This means that arrow functions behave a lot more like normal functions now. For instance, the JITs will optimize calls to them and they can be inlined.
  2. Ion compilation: IonMonkey could not compile scripts containing arrow functions. I fixed this in bug 988993.

With these changes, arrow functions are about as fast as normal functions. I verified this with the following micro-benchmark:

function test(arr) {
    var t = new Date;
    arr.reduce((prev, cur) => prev + cur);
    alert(new Date - t);
}
var arr = [];
for (var i=0; i<10000000; i++) {
    arr.push(3);
}
test(arr);

I compared a nightly build from April 1st to today’s nightly and got the following results:
[Chart: arrow function benchmark results, April 1st nightly vs. today’s nightly]

We’re 64x faster because Ion is now able to inline the arrow function directly without going through relatively slow bound function code on every call.

Other browsers don’t support arrow functions yet, so they are not used a lot on the web, but it’s important to offer good performance for new features if we want people to start using them. Also, Firefox frontend developers love arrow functions (grepping for “=>” in browser/ shows hundreds of them) so these changes should also help the browser itself :)

Planet Mozilla: Tiziana Sellitto: Outreach Program For Women a year later

A year has passed and a new summer will begin… a new summer for the women who will be chosen and who will soon start GNOME’s Outreach Program for Women.

This summer Mozilla will participate with three different projects listed here, and among them is the Mozilla Bug Wrangler for Desktop QA, which is the one I applied for last year. It has been a great experience for me and I want to wish good luck to everyone who submitted an application.

I hope you’ll have a wonderful and productive summer :)

Planet Mozilla: Part 2: How to deal with IFFY requirements

My last post was basically a very long winded way of saying, "we have a problem". It kind of did a little dance around "why is there a problem" and "how do we fix it", but I want to explore these two questions in a bit more detail. Specifically, I want to return to the two case studies and explore why our test harnesses don't work and why mozharness does work, even though both have IFFY (in flux for years) requirements. Then I will explore how to use the lessons learned to improve our general test harness design.

### DRY is not everything

I talked a lot about the [DRY principle][1] in the last article. Basically the conclusion about it was that it is very useful, but that we tend to fixate on it to the point where we ignore other equally useful principles. Having reached this conclusion, I did a quick internet search and found [an article][2] by Joel Abrahamsson arguing the exact same point (albeit much more succinctly than me). Through his article I found out about the [SOLID principles][3] of object oriented design (have I been living under a rock?). They are all very useful guidelines, but there are two that immediately made me think of our test harnesses in a bad way. The first is the [single responsibility principle][4] (which I was delighted to find is meant to mitigate requirement changes) and the second is the [open/closed principle][5].

The single responsibility principle states that a class should only be responsible for one thing, and responsibility for that thing should not be shared with other classes. What is a responsibility? A responsibility is defined as a *reason to change*. To use the wikipedia example, a class that prints a block of text can undergo two changes. The content of the text can change, or the format of the text can change. These are two different responsibilities that should be split out into different classes.

The open/closed principle states that software should be open for extension, but closed for modification. In other words, it should be possible to change the behaviour of the software only by adding new code, without needing to modify any existing code. A popular way of implementing this is through abstract base classes. Here the interface is closed for modification, and each new implementation is an extension of that.

Our test harnesses fail miserably at both of these principles. Instead of having several classes each with a well defined responsibility, we have a single class responsible for everything. Instead of being able to add some functionality without worrying about breaking something else, you have to take great pains that your change won't affect some other platform you don't even care about! Mozharness, on the other hand, while not perfect, does a much better job at both principles. The concept of actions makes it easy to extend functionality without modifying existing code. Just add a new action to the list! The core library is also much better separated by responsibility. There is a clear separation between general script, build, and testing related functionality.

### Inheritance is evil

This is probably old news to many people, but this is something that I'm just starting to figure out on my own. I like Zed Shaw's [analogy from *Learn Python the Hard Way*][6] the best. Instead of butchering it, here it is in its entirety.

> In the fairy tales about heroes defeating evil villains there's always a dark forest of some kind. It could be a cave, a forest, another planet, just some place that everyone knows the hero shouldn't go. Of course, shortly after the villain is introduced you find out, yes, the hero has to go to that stupid forest to kill the bad guy. It seems the hero just keeps getting into situations that require him to risk his life in this evil forest.
>
> You rarely read fairy tales about the heroes who are smart enough to just avoid the whole situation entirely. You never hear a hero say, "Wait a minute, if I leave to make my fortunes on the high seas leaving Buttercup behind I could die and then she'd have to marry some ugly prince named Humperdink. Humperdink! I think I'll stay here and start a Farm Boy for Rent business." If he did that there'd be no fire swamp, dying, reanimation, sword fights, giants, or any kind of story really. Because of this, the forest in these stories seems to exist like a black hole that drags the hero in no matter what they do.
>
> In object-oriented programming, Inheritance is the evil forest. Experienced programmers know to avoid this evil because they know that deep inside the Dark Forest Inheritance is the Evil Queen Multiple Inheritance. She likes to eat software and programmers with her massive complexity teeth, chewing on the flesh of the fallen. But the forest is so powerful and so tempting that nearly every programmer has to go into it, and try to make it out alive with the Evil Queen's head before they can call themselves real programmers. You just can't resist the Inheritance Forest's pull, so you go in. After the adventure you learn to just stay out of that stupid forest and bring an army if you are ever forced to go in again.
>
> This is basically a funny way to say that I'm going to teach you something you should avoid called Inheritance. Programmers who are currently in the forest battling the Queen will probably tell you that you have to go in. They say this because they need your help since what they've created is probably too much for them to handle. But you should always remember this:
>
> Most of the uses of inheritance can be simplified or replaced with composition, and multiple inheritance should be avoided at all costs.

I had never heard the (apparently popular) term "composition over inheritance". Basically, unless you really really mean it, always go for "X has a Y" instead of "X is a Y". Never do "X is a Y" for the sole purpose of avoiding code duplication. This is exactly the mistake we made in our test harnesses. The Android and B2G runners just inherited everything from the desktop runner, but oops, turns out all three are actually quite different from one another. Mozharness, while again not perfect, does a better job at avoiding inheritance. While it makes heavy use of the mixin pattern (which, yes, is still inheritance) at least it promotes separation of concerns more than classic inheritance.

### Practical Lessons

So this is all well and great, but how can we apply all of this to our automation code base? A smarter way to approach our test harness design would have been to have most of the shared code between the three platforms in a single (relatively) bare-bones runner that *has a* target environment (e.g. desktop Firefox, Fennec or B2G in this case). In this model there is no inheritance, and no code duplication. It is easy to extend without modifying (just add a new target environment) and there are clear and distinct responsibilities between managing tests/results and actually launching them. In fact this is how the gaia team implemented their [marionette-js-runner][7]. I'm not sure if that pattern is common to node's [mocha runner][8] or something of their design, but I like it.

I'd also like our test harnesses to employ mozharness' concept of actions. Each action could be an as-atomic-as-possible setup step. For example, setting preferences in the profile is a single action. Setting environment is another. Parsing a manifest could be a third. Each target environment would consist of a list of actions that are run in a particular order. If code needs to be shared, simply add the corresponding action to whichever targets need it. If not, just don't include the action in the list for targets that don't need it. (A minimal sketch of this runner/action design follows at the end of this post.)

My dream end state here is that there is no distinction between test runners and mozharness scripts. They are both trying to do the same thing (perform setup, launch some code, collect results) so why bother wrapping one around the other? The test harness should just *be* a mozharness script and vice versa. This would bring actions into test harnesses, and allow mozharness scripts to live in-tree.

### Conclusion

Is it possible to avoid code duplication with a project that has IFFY requirements? I think yes. But I still maintain it is exceptionally hard. It wasn't until after it was too late and I had a lot of time to think about it that I realized the mistakes we made. And even with what I know now, I don't think I would have fared much better given the urgency and time constraints we were under. Though next time, I think I'll at least have a better chance.

[1]: http://en.wikipedia.org/wiki/DRY_principle
[2]: http://joelabrahamsson.com/the-dry-obsession
[3]: http://en.wikipedia.org/wiki/SOLID
[4]: http://en.wikipedia.org/wiki/Single_responsibility_principle
[5]: http://en.wikipedia.org/wiki/Open/closed_principle
[6]: http://learnpythonthehardway.org/book/ex44.html
[7]: https://github.com/mozilla-b2g/marionette-js-runner
[8]: http://visionmedia.github.io/mocha
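
As promised above, here is a minimal sketch of the runner/action design, in JavaScript. Every name here is invented for illustration; the real marionette-js-runner looks nothing like this.

// A hypothetical, minimal runner: composition instead of inheritance.
function setPreferences(context) { /* write prefs into context.profile */ }
function setEnvironment(context) { /* populate context.env */ }
function parseManifest(context)  { /* fill context.tests from a manifest */ }

// A target environment is just data: a name plus an ordered list of actions.
var desktopTarget = { name: "desktop", actions: [ setPreferences, setEnvironment, parseManifest ] };
var b2gTarget     = { name: "b2g",     actions: [ setEnvironment, parseManifest ] };

function Runner(target) {
    this.target = target; // "has a" target, not "is a" desktop/b2g runner
}

Runner.prototype.run = function(context) {
    // Run each setup action in order; launching tests and collecting
    // results would follow.
    this.target.actions.forEach(function(action) {
        action(context);
    });
};

new Runner(b2gTarget).run({ profile: {}, env: {}, tests: [] });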

Planet Mozilla: Firefox Automation report – week 9/10 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 9 and 10. I myself was on vacation for a week. A bit of relaxing before work on the TPS test framework gets started.

Highlights

In preparation for running Mozmill tests for Firefox Metro in our Mozmill-CI system, Andreea has started to get support for Metro builds and the appropriate tests included.

With help from Henrik we got Mozmill 2.0.6 released. It contains a helpful fix for waitForPageLoad(), which lets you know which page is being loaded and its status in case of a failure. This might help us nail down the intermittent failures when loading remote and even local pages. But the most important part of this release is the support for mozcrash. Even though we cannot have full support yet, due to missing symbol files for daily builds on ftp.mozilla.org, we can at least show that a crash happened during a test run, and let the user know about the local minidump files.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 9 and week 10.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 9 and week 10.

Planet Mozilla: 21st Century Nesting

Our neighbours have acquired a 21st century bird’s nest:

Not only is it behind a satellite dish but, if you look closely, large parts of it are constructed from the wire ties that the builders (who are still working on our estate) use for tying layers of bricks together. We believe it belongs to a couple of magpies, and it contains six (low-tech) eggs.

I have no idea what effect this has on their reception…

Planet Mozilla: Preventing heartbleed bugs with safe programming languages

The Heartbleed bug in OpenSSL has resulted in a fair amount of damage across the internet. The bug itself was quite simple and is a textbook case for why programming in unsafe languages like C can be problematic.

As an experiment to see if a safer systems programming language could have prevented the bug, I tried rewriting the problematic function in the ATS programming language. I’ve written about ATS as a safer C before. This gives a real world testcase for it. I used the latest version of ATS, called ATS2.

ATS compiles to C code. The function interfaces it generates can exactly match existing C functions and be callable from C. I used this feature to replace the dtls1_process_heartbeat and tls1_process_heartbeat functions in OpenSSL with ATS versions. These two functions are the ones that were patched to correct the heartbleed bug.

The approach I took was to follow something similar to that outlined by John Skaller on the ATS mailing list:

ATS on the other hand is basically C with a better type system.
You can write very low level C like code without a lot of the scary
dependent typing stuff and then you will have code like C, that
will crash if you make mistakes.

If you use the high level typing stuff coding is a lot more work
and requires more thinking, but you get much stronger assurances
of program correctness, stronger than you can get in Ocaml
or even Haskell, and you can even hope for *better* performance
than C by elision of run time checks otherwise considered mandatory,
due to proof of correctness from the type system. Expect over
50% of your code to be such proofs in critical software and probably
90% of your brain power to go into constructing them rather than
just implementing the algorithm. It's a paradigm shift.

I started with wrapping the C code directly and calling that from ATS. From there I rewrote the C code into unsafe ATS. Once that worked I added types to find errors.

I’ve put the modified OpenSSl code in a github fork. The two branches there, ats and ats_safe, represent the latter two stages in implementing the functions in ATS.

I’ll give a quick overview of the different paths I took then go into some detail about how I used ATS to find the errors.

Wrapping C code

I’ve written about this approach before. ATS allows embedding C directly, so the first step was to embed the dtls1_process_heartbeat C code in an ATS file, call that from an ATS function, and expose that ATS function as the real dtls1_process_heartbeat. The code for this is in my first attempt at d1_both.dats.

Unsafe ATS

The second stage was to write the functions using ATS but unsafely. This code is a direct translation of the C code, with no additional typechecking via ATS features; it uses unsafe ATS code. The rewritten d1_both.dats contains this version.

The code is quite ugly but compiles and matches the C version. When installed on a test system it still exhibits the heartbleed bug. It uses all the same pointer arithmetic and hard-coded offsets as the C code. Here’s a snippet of one branch of the function:

val buffer = OPENSSL_malloc(1 + 2 + $UN.cast2int(payload) + padding)
val bp = buffer

val () = $UN.ptr0_set<uchar> (bp, TLS1_HB_RESPONSE)
val bp = ptr0_succ<uchar> (bp)
val bp = s2n (payload, bp)
val () = unsafe_memcpy (bp, pl, payload)
val bp = ptr_add (bp, payload)
val () = RAND_pseudo_bytes (bp, padding)
val r = dtls1_write_bytes (s, TLS1_RT_HEARTBEAT, buffer, 3 + $UN.cast2int(payload) + padding)
val () = if r >=0 && ptr_isnot_null (get_msg_callback (s)) then
           call_msg_callback (get_msg_callback (s),
                              1, get_version (s), TLS1_RT_HEARTBEAT,
                              buffer, $UN.cast2uint (3 + $UN.cast2int(payload) + padding), s,
                              get_msg_callback_arg (s))
val () = OPENSSL_free (buffer)

It should be pretty easy to follow this by comparing the code to the C version.

Safer ATS

The third stage was adding types to the unsafe ATS version to check that the pointer arithmetic is correct and no bounds errors occur. This version of d1_both.dats fails to compile if certain bounds checks aren’t asserted. If the assertloc at line 123, line 178 or line 193 is removed then a constraint error is produced. This error is effectively preventing the heartbleed bug.

Testable Version

The last stage was to implement the fix to the tls1_process_heartbeat function and factor out some of the helper routines so they could be shared. This is in the ats_safe branch, which is where the newer changes are happening. This version removes the assertloc usage and fails gracefully instead, so it could be tested on a live site.

I tested this version of OpenSSL and heartbleed test programs fail to dump memory.

The approach to safety

The tls_process_heartbeat function obtains a pointer to data provided by the sender and the amount of data sent from one of the OpenSSL internal structures. It expects the data to be in the following format:

 byte = hbtype
 ushort = payload length
 byte[n] = bytes of length 'payload length'
 byte[16]= padding

The existing C code makes the mistake of trusting the ‘payload length’ the sender supplies and passes that to a memcpy. If the actual length of the data is less than the claimed ‘payload length’, then random data from memory gets sent in the response.

In ATS pointers can be manipulated at will but they can’t be dereferenced or used unless there is a view in scope that proves what is located at that memory address. By passing around views, and subsets of views, it becomes possible to check that ATS code doesn’t access memory it shouldn’t. Views become like capabilities. You hand them out when you want code to have the capability to do things with the memory safely and take it back when it’s done.

Views

To model what the C code does I created an ATS view that represents the layout of the data in memory:

dataview record_data_v (addr, int) =
  | {l:agz} {n:nat | n > 16 + 2 + 1} make_record_data_v (l, n) of (ptr l, size_t n)
  | record_data_v_fail (null, 0) of ()

A ‘view’ is like a standard ML datatype but exists at type checking time only. It is erased in the final version of the program so has no runtime overhead. This view has two constructors. The first is for data held at a memory address l of length n. The length is constrained to be greater than 16 + 2 + 1 which is the size of the ‘byte’, ‘ushort’ and ‘padding’ mentioned previously. By putting the constraint here we immediately force anyone creating this view to check the length they pass in. The second constructor, record_data_v_fail, is for the case of an invalid record buffer.

The function that creates this view looks like:

implement get_record (s) = let
  val len = get_record_length (s)
  val data = get_record_data (s)
in
  if len > 16 + 2 + 1 then
    (make_record_data_v (data, len) | data, len)
  else
    (record_data_v_fail () | null_ptr1 (), i2sz 0)
end

Here the len and data are obtained from the SSL structure. The length is checked and the view is created and returned along with the pointer to the data and the length. If the length check is removed there is a compile error due to the constraint we placed earlier on make_record_data_v. Calling code looks like:

val (pf_data | p_data, data_len) = get_record (s)

p_data is a pointer. data_len is an unsigned value and pf_data is our view. In my code the pf_ suffix denotes a proof of some sort (in this case the view) and p_ denotes a pointer.

Proof functions

In ATS we can’t do anything at all with the p_data pointer other than increment, decrement and pass it around. To dereference it we must obtain a view proving what is at that memory address. To get specialized views for the data we want, I created some proof functions that convert the record_data_v view into views that provide access to memory. These are the proof functions:

(* These proof functions extract proofs out of the record_data_v
   to allow access to the data stored in the record. The constants
   for the size of the padding, payload buffer, etc are checked
   within the proofs so that functions that manipulate memory
   are checked that they remain within the correct bounds and
   use the appropriate pointer values
*)
prfun extract_data_proof {l:agz} {n:nat}
               (pf: record_data_v (l, n)):
               (array_v (byte, l, n),
                array_v (byte, l, n) -<lin,prf> record_data_v (l,n))
prfun extract_hbtype_proof {l:agz} {n:nat}
               (pf: record_data_v (l, n)):
               (byte @ l, byte @ l -<lin,prf> record_data_v (l,n))
prfun extract_payload_length_proof {l:agz} {n:nat}
               (pf: record_data_v (l, n)):
               (array_v (byte, l+1, 2),
                array_v (byte, l+1, 2) -<lin,prf> record_data_v (l,n))
prfun extract_payload_data_proof {l:agz} {n:nat}
               (pf: record_data_v (l, n)):
               (array_v (byte, l+1+2, n-16-2-1),
                array_v (byte, l+1+2, n-16-2-1) -<lin,prf> record_data_v (l,n))
prfun extract_padding_proof {l:agz} {n:nat} {n2:nat | n2 <= n - 16 - 2 - 1}
               (pf: record_data_v (l, n), payload_length: size_t n2):
               (array_v (byte, l + n2 + 1 + 2, 16),
                array_v (byte, l + n2 + 1 + 2, 16) -<lin, prf> record_data_v (l, n))

Proof functions are run at type checking time. They manipulate proofs. Let’s break down what the extract_hbtype_proof function does:

prfun extract_hbtype_proof {l:agz} {n:nat}
               (pf: record_data_v (l, n)):
               (byte @ l, byte @ l -<lin,prf> record_data_v (l,n))

This function takes a single argument, pf, that is a record_data_v instance for an address l and length n. This proof argument is consumed: once the function is called, pf cannot be accessed again (it is a linear proof). The function returns two things. The first is a proof byte @ l which says “there is a byte stored at address l”. The second is a linear proof function that takes the first proof we returned, consumes it so it can’t be reused, and returns the original proof we passed in as an argument.

This is a fairly common idiom in ATS. What it does is take a proof, destroy it, return a new one, and provide a way of destroying the new one and bringing back the old one. Here’s how the function is used:

prval (pf, pff) = extract_hbtype_proof (pf_data)
val hbtype = $UN.cast2int (!p_data)
prval pf_data = pff (pf)

prval is a declaration of a proof variable. pf is my idiomatic name for a proof and pff is what I use for proof functions that destroy proofs and return the original.

The !p_data is similar to *p_data in C. It dereferences what is held at the pointer. When this happens in ATS it searches for a proof that we can access some memory at p_data. The pf proof we obtained says we have a byte @ p_data so we get a byte out of it.

A more complicated proof function is:

prfun extract_payload_length_proof {l:agz} {n:nat}
               (pf: record_data_v (l, n)):
               (array_v (byte, l+1, 2),
                array_v (byte, l+1, 2) -<lin,prf> record_data_v (l,n))

The array_v view represents a contiguous array of memory. The three arguments it takes are the type of data stored in the array, the address of the beginning, and the number of elements. So this function consumes the record_data_v and produces a proof saying there is a two-element array of bytes held at a 1 byte offset from the original memory address held by the record view. Someone with access to this proof cannot access the entire memory buffer held by the SSL record. They can only get the 2 bytes at that offset.

Safe memcpy

One more complicated view:

prfun extract_payload_data_proof {l:agz} {n:nat}
               (pf: record_data_v (l, n)):
               (array_v (byte, l+1+2, n-16-2-1),
                array_v (byte, l+1+2, n-16-2-1) -<lin,prf> record_data_v (l,n))

This returns a proof for an array of bytes starting at the 3rd byte of the record buffer. Its length is equal to the length of the record buffer less the sizes of the padding (16 bytes), the payload length field (2 bytes) and the message type (1 byte). It’s used during the memcpy call like so:

prval (pf_dst, pff_dst) = extract_payload_data_proof (pf_response)
prval (pf_src, pff_src) = extract_payload_data_proof (pf_data)
val () = safe_memcpy (pf_dst, pf_src |
           add_ptr1_bsz (p_buffer, i2sz 3),
           add_ptr1_bsz (p_data, i2sz 3),
           payload_length)
prval pf_response = pff_dst(pf_dst)
prval pf_data = pff_src(pf_src)

By having a proof that provides access only to the payload data area, we can be sure that the memcpy cannot copy memory outside those bounds. Even though the code does manual pointer arithmetic (the add_ptr1_bsz function), this is safe. An attempt to use a pointer outside the range of the proof results in a compile error.

The same concept is used when setting the padding to random bytes:

prval (pf, pff) = extract_padding_proof (pf_response, payload_length)
val () = RAND_pseudo_bytes (pf |
                            add_ptr_bsz (p_buffer, payload_length + 1 + 2),
                            padding)
prval pf_response = pff(pf)

Runtime checks

The code does runtime checks that constrain the bounds of various length variables:

if payload_length > 0 then
    if data_len >= payload_length + padding + 1 + 2 then
    ...
...

Without those checks a compile error is produced. The original heartbeat flaw was the absence of similar runtime checks. The code as structured can’t suffer from that flaw and still be compiled.

Testing

This code can be built and tested. First step is to install ATS2:

$ tar xvf ATS2-Postiats-0.0.7.tgz
$ cd ATS2-Postiats-0.0.7
$ ./configure
$ make
$ export PATSHOME=`pwd`
$ export PATH=$PATH:$PATSHOME/bin

Then compile the openssl code with my ATS additions:

$ git clone https://github.com/doublec/openssl
$ cd openssl
$ git checkout -b ats_safe origin/ats_safe
$ ./config
$ make
$ make test

Try changing some of the code, modifying the constraint tests, etc., to get an idea of what it is doing.

For testing in a VM, I installed Ubuntu, set up an nginx instance serving an HTTPS site, and did something like:

$ git clone https://github.com/doublec/openssl
$ cd openssl
$ git diff 5219d3dd350cc74498dd49daef5e6ee8c34d9857 >~/safe.patch
$ cd ..
$ apt-get source openssl
$ cd openssl-1.0.1e/
$ patch -p1 <~/safe.patch
  ...might need to fix merge conflicts here...
$ fakeroot debian/rules build binary
$ cd ..
$ sudo dpkg -i libssl1.0.0_1.0.1e-3ubuntu1.2_amd64.deb \
               libssl-dev_1.0.1e-3ubuntu1.2_amd64.deb 
$ sudo /etc/init.d/nginx restart

You can then use a heartbleed tester against the HTTPS server and the exploit should fail.

Conclusion

I think the approach of converting unsafe C code piecemeal worked quite well in this instance. Being able to combine existing C code and ATS makes this much easier. I only concentrated on detecting this particular programming error but it would be possible to use other ATS features to detect memory leaks, abstraction violations and other things. It’s possible to get very specific in defining safe interfaces at a cost of complexity in code.

Although I’ve used ATS for production code, this is my first time using ATS2. I may have missed idioms and library functions to make things easier, so try not to judge the verbosity or difficulty of the code based on this experiment. The ATS community is helpful in picking up the language. My approach to doing this was basically to add types, then work through the compiler errors, fixing each one until it builds.

One immediate question becomes “how do you trust your proof?” The record_data_v view and the proof functions that manipulate it define the level of checking that occurs. If they are wrong then the constraints checked by the program will be wrong. It comes down to having a trusted kernel of code (in this case the proof and view); users of that kernel can then be trusted to be correct, since incorrect use of the kernel is rejected at compile time, and that is what results in the stronger safety. From an auditing perspective it’s easier to check the small trusted kernel and then know the compiler will make sure pointer manipulations are correct.

The ATS specific additions are in the following files:

Planet Mozilla: Google RMA, or how I finally got to use Firefox OS on Wind Mobile

The last 24 hours have really been quite an adventure in debugging. It all started last week when I decided to order a Nexus 5 from Google. It arrived yesterday, on time, and I couldn’t wait to get home to unbox it. Soon after unboxing my new Nexus 5 I would discover something was not well.

After setting up my Google account and syncing all my data, I did what I usually do first: try out the camera. This did not go very well. I was immediately presented with a “Camera could not connect” error. After rebooting a couple times the error continued to persist.

I then went to the internet to research my problem and got the usual advice: clear the cache, force quit any unnecessary apps, or do a factory reset. Try as I might, all of these efforts would fail. I actually tried a factory reset three times and that’s where things got weirder.

On the third factory reset I decided to opt out of syncing my data and just try the camera with a completely stock install. However, this time the camera icon was completely missing. It was absent from my home screen and the app drawer. It was absent from the Gallery app. The only way I was able to get the Camera app to launch was to select the camera button on my lock screen.

Now that I finally got to the Camera app I noticed it had defaulted to the front camera, so naturally I tried to switch to the rear. However when I tried this, the icon to switch cameras was completely missing. I tried some third party camera apps but they would just crash on startup.

After a couple hours jumping through these hoops between factory resets I was about to give in. I gave it one last-ditch effort and flashed the phone with Google’s stock Android 4.4 factory image. It took me about another hour between getting my environment set up and getting the image flashed to the phone. However the result was the same: missing camera icons and crashing all over the place.

It was now past 1am and I had been at this for hours. I finally gave in and called up Google. They promptly sent me an RMA tag and I shipped the phone back to them this morning for a full refund. And so began the next day of my adventure.

I was now at a point where I had to decide what I wanted to do. Was I going to order another Nexus 5 and trust that one would be fine or would I save myself the hassle and just dig out an old Android phone I had lying around?

I remembered that I still had a Nexus S which was perfectly fine, albeit getting a bit slow. After a bit of research on MDN I decided to try flashing the Nexus S to use B2G. I had never successfully flashed any phone to B2G before and I thought yesterday’s events might have been pushing toward this moment.

I followed the documentation, checked out the source code, sat through the lengthy config and build process (this took about 2 hours), and pushed the bits to my phone. I then swapped in my SIM card and crossed my fingers. It worked! It seemed like magic, but it worked. I can again do all the things I want to: make phone calls, take pictures, check email, and tweet to my heart’s content; all on a phone powered by the web.

I have to say the process was fairly painless (apart from the hours spent troubleshooting the Nexus 5). The only problem I encountered was a small hiccup in the config.sh process. Fortunately, I was able to work around this pretty easily thanks to Bugzilla. I can’t help but recognize my success was largely due to the excellent documentation provided by Mozilla and the work of developers, testers, and contributors alike who came before me.

I’ve found the process to be pretty rewarding. I built B2G, which I’ve never succeeded at before; I flashed my phone, which I’ve never succeeded at before; and I feel like I learned something new today.

I’ve been waiting a long time to be able to test B2G 1.4 on Wind Mobile, and now I can. Sure I’m sleep deprived, and sure it’s not an “official” Firefox OS phone, but that does not diminish the victory for me; not one bit.


Planet Mozilla: Copyright and Software

As part of our discussions on responding to the EU Copyright Consultation, Benjamin Smedberg made an interesting proposal about how copyright should apply to software. With Chris Riley’s help, I expanded that proposal into the text below. Mozilla’s final submission, after review by various parties, argued for a reduced term of copyright for software of 5-10 years, but did not include this full proposal. So I publish it here for comment.

I think the innovation, which came from Benjamin, is the idea that the spirit of copyright law means that proprietary software should not be eligible for copyright protections unless the source code is made freely available to the public by the time the copyright term expires.

We believe copyright terms should be much shorter for software, and that there should be a public benefit tradeoff for receiving legal protection, comparable to other areas of IP.

We start with the premise that the purpose of copyright is to promote new creation by giving authors an exclusive right, but that this right is necessarily time-limited because the public as a whole benefits from the public domain and the free sharing and reproduction of works. Given this premise, copyright policy has failed in the domain of software. All software has a much, much shorter life than the standard copyright term; by the end of the period, there is no longer any public benefit to be gained from the software entering the public domain, unlike virtually all other categories of copyrighted works. There is already more obsolete software out there than anyone can enumerate, and software as a concept is barely even 50 years old, so none is in the public domain. Any which did fall into the public domain after 50 or 70 years would be useful to no-one, as it would have been written for systems long obsolete.

We suggest two ideas to help the spirit of copyright be more effectively realized in the software domain.

Proprietary software (that is, software for which the source code is not immediately available for reuse anyway) should not be eligible for copyright protections unless the source code is made freely available to the public by the time the copyright term expires. Unlike a book, which can be read and copied by anyone at any stage before or after its copyright expires, software is often distributed as binary code which is intelligible to computers but very hard for humans to understand. Therefore, in order for software to properly fall into the public domain at the end of the copyright term, the source code (the human-readable form) needs to be made available at that time – otherwise, the spirit of copyright law is not achieved, because the public cannot truly benefit from the copyrighted material. An escrow system would be ideal to implement this.

This is also similar to the tradeoff between patent law and trade secret protection; you receive a legal protection for your activity in exchange for making it available to be used effectively by the broader public at the end of that period. Failing to take that tradeoff risks the possibility that someone will reverse engineer your methods, at which point they are unprotected.

Separately, the term of software copyright protection should be made much shorter (through international processes as relevant), and fixed for software products. We suggest that 14 years is the most appropriate length. This would mean that, for example, Windows XP would enter the public domain in August 2015, which is a year after Microsoft ceases to support it (and so presumably no longer considers it commercially viable). Members of the public who wish to continue to run Windows XP therefore have an interest in the source code being available so technically-capable companies can support them.

Planet Mozilla: An Explanation of the Heartbleed bug for Regular People

I’ve put this explanation together for those who want to understand the Heartbleed bug, how it fits into the bigger picture of secure internet browsing, and what you can do to mitigate its effects.

HTTPS vs HTTP (padlock vs no padlock)

When you are browsing a site securely, you use https and you see a padlock icon in the url bar. When you are browsing insecurely you use http and you do not see a padlock icon.

[Figure: Firefox url bar for HTTPS site (above) and non-HTTPS (below).]

HTTPS relies on something called SSL/TLS.

Understanding SSL/TLS

SSL stands for Secure Sockets Layer and TLS stands for Transport Layer Security. TLS is the later version of the original, proprietary, SSL protocol developed by Netscape. Today, when people say SSL, they generally mean TLS, the current, standard version of the protocol.

Public and private keys

The TLS protocol relies heavily on public-key or asymmetric cryptography. In this kind of cryptography, two separate but paired keys are required: a public key and a private key. The public key is, as its name suggests, shared with the world and is used to encrypt plain-text data or to verify a digital signature. (A digital signature is a way to authenticate identity.) A matching private key, on the other hand, is used to decrypt data and to generate digital signatures. A private key should be safeguarded and never shared. Many private keys are protected by pass-phrases, but merely having access to the private key means you can likely use it.
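
For the technically curious, here is a tiny sketch of asymmetric encryption using Node’s built-in crypto module. This is purely illustrative; real TLS key exchange is considerably more involved than this.

const crypto = require("crypto");

// Generate a paired public/private RSA key (2048-bit).
const { publicKey, privateKey } = crypto.generateKeyPairSync("rsa", {
    modulusLength: 2048,
});

// Anyone with the public key can encrypt...
const secret = crypto.publicEncrypt(publicKey, Buffer.from("my password"));

// ...but only the holder of the private key can decrypt.
const plain = crypto.privateDecrypt(privateKey, secret);
console.log(plain.toString()); // "my password"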

Authentication and encryption

The purpose of SSL/TLS is to authenticate and encrypt web traffic.

Authenticate in this case means “verify that I am who I say I am.” This is very important because when you visit your bank’s website in your browser, you want to feel confident that you are visiting the web servers of — and thereby giving your information to — your actual bank and not another server claiming to be your bank. This authentication is achieved using something called certificates that are issued by Certificate Authorities (CA). Wikipedia explains thusly:

The digital certificate certifies the ownership of a public key by the named subject of the certificate. This allows others (relying parties) to rely upon signatures or assertions made by the private key that corresponds to the public key that is certified. In this model of trust relationships, a CA is a trusted third party that is trusted by both the subject (owner) of the certificate and the party relying upon the certificate.

In order to obtain a valid certificate from a CA, website owners must submit, at minimum, their server’s public key and demonstrate that they have access to the website (domain).

Encrypt in this case means “encode data such that only authorized parties may decode it.” Encrypting internet traffic is important for sensitive or otherwise private data because it is trivially easy to eavesdrop on internet traffic. Information transmitted without SSL is usually sent in plain-text and as such is clearly readable by anyone. This might be acceptable for general internet browsing. After all, who cares who knows which NY Times article you are reading? But it is not acceptable for a range of private data including user names, passwords and private messages.

Behind the scenes of an SSL/TLS connection

When you visit a website with HTTPS enabled, a multi-step process occurs so that a secure connection can be established. During this process, the server and client (browser) send messages back and forth in order to a) authenticate the server’s (and sometimes the client’s) identity and b) negotiate what encryption scheme, including which cipher and which key, they will use for the session. Identities are authenticated using the digital certificates mentioned previously.

When all of that is complete, the secure connection is established and the server and client send traffic back and forth to each other.

All of this happens without you ever knowing about it. Once you see your bank’s login screen the process is complete, assuming you see the padlock icon in your browser’s url bar.

Keepalives and Heartbeats

Even though establishing an SSL connection happens almost imperceptibly to you, it does have an overhead in terms of computer and network resources. To minimize this overhead, network connections are often kept open and active until a given timeout threshold is exceeded. When that happens, the connection is closed. If the client and server wish to communicate again, they need to re-negotiate the connection and re-incur the overhead of that negotiation.

One way to forestall a connection being closed is via keepalives. A keepalive message is used to tell a server “Hey, I know I haven’t used this connection in a little while, but I’m still here and I’m planning to use it again really soon.”
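
In code, a keepalive can be as simple as a periodic no-op message. The sketch below (Node; the host, port and payload are invented for illustration) shows the general idea:

const net = require("net");

const socket = net.connect(12345, "example.com", () => {
    // Ask the OS to send TCP keepalive probes after 60s of silence,
    // so an idle timeout doesn't tear the connection down.
    socket.setKeepAlive(true, 60000);
});

// Or, at the application level, send a tiny "I'm still here" message
// on a timer (the actual payload is protocol-specific).
const timer = setInterval(() => socket.write("ping\n"), 30000);
socket.on("close", () => clearInterval(timer));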

Keepalive functionality was added to the TLS protocol specification via the Heartbeat Extension. Instead of “Keepalives,” they’re called “Heartbeats,” but they do basically the same thing.

Specification vs Implementation

Let’s pause for a moment to talk about specifications vs implementations. A protocol is a defined way of doing something. In the case of TLS, that something is encrypted network communications. When a protocol is standardized, it means that a lot of people have agreed upon the exact way that protocol should work, and this way is outlined in a specification. The specification for TLS is collaboratively developed, maintained and promoted by the standards body Internet Engineering Task Force (IETF). A specification in and of itself does not do anything. It is a set of documents, not a program. In order for a specification to do something, it must be implemented by programmers.

OpenSSL implementation of TLS

OpenSSL is one implementation of the TLS protocol. There are others, including the open source GnuTLS as well as proprietary implementations. OpenSSL is a library, meaning that it is not a standalone software package, but one that is used by other software packages. These include the very popular webserver Apache.

The Heartbleed bug only applies to webservers with SSL/TLS enabled, and only those using specific versions of the open source OpenSSL library, because the bug relates to an error in the code of that library, specifically the heartbeat extension code. It is not related to any errors in the TLS specification or in any of the underlying cipher suites.

Usually such a narrow scope would be good news. However, because OpenSSL is so widely used, particularly the affected versions, this simple bug has tremendous reach in terms of the number of servers, and therefore the number of users, it potentially affects.

What the heartbeat extension is supposed to do

The heartbeat extension is supposed to work as follows:

  • A client sends a heartbeat message to the server.
  • The message contains two pieces of data: a payload and the size of that payload. The payload can be anything up to 64KB.
  • When the server receives the heartbeat message, it is to add a bit of extra data to it (padding) and send it right back to the client.

Pretty simple, right? Heartbeat isn’t supposed to do anything other than let the server and client know they are each still there and accepting connections.
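
RFC 6520, which defines the extension, lays out the heartbeat message quite simply. As a rough C-style sketch of the wire format (an illustration of the spec, not code from OpenSSL):
/* Simplified sketch of the HeartbeatMessage layout from RFC 6520. */
struct heartbeat_message {
    unsigned char  type;            /* 1 = request, 2 = response           */
    unsigned short payload_length;  /* the size the sender CLAIMS, <= 64KB */
    unsigned char  payload[];       /* the data to be echoed back          */
    /* ...followed by at least 16 bytes of random padding */
};

Keep that payload_length field in mind; it is the whole story of the bug.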

What the heartbeat code actually does

In the code for affected versions (1.0.1-1.0.1f) of the OpenSSL heartbeat extension, the programmer(s) made a simple but horrible mistake: They failed to verify the size of the received payload. Instead, they accepted what the client said was the size of the payload and returned this amount of data from memory, thinking it should be returning the same data it had received. Therefore, a client could send a payload of 1KB, say it was 64KB and receive that amount of data back, all from server memory.
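
In code terms, the affected path boils down to something like the following (condensed and lightly paraphrased from the OpenSSL 1.0.1 source; n2s and s2n are OpenSSL macros that read and write a 16-bit length in network byte order):
hbtype = *p++;                       /* message type                    */
n2s(p, payload);                     /* read the CLAIMED payload length */
pl = p;                              /* pl points at the actual payload */

if (hbtype == TLS1_HB_REQUEST) {
    buffer = OPENSSL_malloc(1 + 2 + payload + padding);
    bp = buffer;
    *bp++ = TLS1_HB_RESPONSE;
    s2n(payload, bp);
    /* BUG: copies 'payload' bytes even if the record actually
       contained far fewer, so the copy runs past the request
       into whatever else happens to be sitting in memory. */
    memcpy(bp, pl, payload);
    /* ...the response is then sent back to the client... */
}

The fix shipped in 1.0.1g is essentially one added check: if 1 + 2 + payload + padding is larger than the length of the record actually received, silently discard the message instead of replying.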

If that’s confusing, try this analogy: Imagine you are my bank. I show up and make a deposit. I say the deposit is $64, but you don’t actually verify this amount. Moments later I request a withdrawal of the $64 I say I deposited. In fact, I really only deposited $1, but since you never checked, you have no choice but to give me $64, $63 of which doesn’t actually belong to me.

And this is exactly how someone could exploit this vulnerability. What comes back from memory doesn’t belong to the client that sent the heartbeat message, but it’s given a copy of it anyway. The data returned is essentially random: whatever the OpenSSL library happened to be storing in memory. That can include pre-encryption (plain-text) data, such as your user names and passwords. It could also technically include your server’s private key (because that is used in the securing process) and/or your server’s certificate (which is also not something you should share).

The ability to retrieve a server’s private key is very bad because that private key could be used to decrypt all past, present and future traffic to the server. The ability to retrieve a server’s certificate is also bad because it gives an attacker the ability to impersonate that server.

This, coupled with the widespread use of OpenSSL, is why this bug is so terribly bad. Oh, and it gets worse…

Taking advantage of this vulnerability leaves no trace

What’s worse is that logging isn’t part of the Heartbeat extension. Why would it be? Keepalives happen all the time and generally do not represent the transmission of any significant data. There’s no reason to take up valuable time accessing the physical disk, or storage space, to record that kind of information.

Because there is no logging, there is no trace left when someone takes advantage of this vulnerability.

The code that introduced this bug has been part of OpenSSL for 2+ years. This means that any data you’ve communicated to servers with this bug since then has the potential to be compromised, but there’s no way to determine definitively whether it was.

This is why most of the internet is collectively freaking out.

What do server administrators need to do?

Server (website) administrators need to, if they haven’t already:

  1. Determine whether or not their systems are affected by the bug. (test; see the version-check sketch after this list)
  2. Patch and/or upgrade affected systems. (This will require a restart)
  3. Revoke and reissue keys and certificates for affected systems.
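
For step 1, among other tests, you can ask the linked OpenSSL library for its version string; releases 1.0.1 through 1.0.1f are affected, while 1.0.1g and the older 1.0.0/0.9.8 branches are not. A minimal sketch (note that distribution vendors sometimes backport fixes without bumping the version, so treat this as a first check, not a final answer):
#include <stdio.h>
#include <openssl/crypto.h>

int main() {
    /* Print the version of the OpenSSL library this program is linked against. */
    printf("%s\n", SSLeay_version(SSLEAY_VERSION));
    return 0;
}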

Furthermore, I strongly recommend you enable Perfect forward secrecy to safeguard data in the event that a private key is compromised:

When an encrypted connection uses perfect forward secrecy, that means that the session keys the server generates are truly ephemeral, and even somebody with access to the secret key can’t later derive the relevant session key that would allow her to decrypt any particular HTTPS session. So intercepted encrypted data is protected from prying eyes long into the future, even if the website’s secret key is later compromised.
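
As a rough illustration of what that means in practice, a server using the OpenSSL API directly can steer connections toward forward secrecy by only offering ephemeral (ECDHE) cipher suites, so that session keys are never derivable from the server’s long-term private key. A sketch, with a placeholder cipher string (most administrators would instead set the equivalent in their web server’s configuration):
#include <openssl/ssl.h>
#include <openssl/ec.h>
#include <openssl/obj_mac.h>

/* Sketch: build a context that prefers ephemeral ECDHE suites, so every
   session negotiates its own short-lived key. Call SSL_library_init()
   once at startup before using this. */
SSL_CTX *make_pfs_ctx(void) {
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());

    /* Disable legacy protocols and honor the server's cipher order. */
    SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 |
                             SSL_OP_CIPHER_SERVER_PREFERENCE);

    /* Placeholder cipher string: ECDHE only, nothing without PFS. */
    SSL_CTX_set_cipher_list(ctx, "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384");

    /* OpenSSL 1.0.1 needs an explicit curve for ECDHE key agreement. */
    EC_KEY *ecdh = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
    SSL_CTX_set_tmp_ecdh(ctx, ecdh);
    EC_KEY_free(ecdh);

    return ctx;
}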

What do users (like me) need to do?

The most important thing regular users need to do is change your passwords on critical sites that were vulnerable (but only after they’ve been patched). Do you need to change all of your passwords everywhere? Probably not. Read You don’t need to change all your passwords for some good tips.

Additionally, if you’re not already using a password manager, I highly recommend LastPass, which is cross-platform and works on pretty much every device. Yesterday LastPass announced they are helping users to know which passwords they need to update and when it is safe to do so.

If you do end up trying LastPass, check out my guide for setting it up with two-factor auth.

Further Reading


If you like visuals, check out this great video showing how the Heartbleed exploit works.

If you’re interested in learning more about networking, I highly recommend Ilya Grigorik‘s High Performance Browser Networking, which you can also read online for free.

If you want some additional technical details about Heartbleed (including actual code!), check out these posts:

Oh, and you can listen to Kevin and me talk about Heartbleed on In Beta episode 96, “A Series of Mathy Things.”

Conclusion

Planet MozillaFirefox Automation report – week 7/8 2014

My current workload is still cutting into the time I have for getting our automation status reports out. These updates are a bit old, but still worth mentioning, so let’s get them out.

Highlights

As mentioned in my last report, we had issues with Mozmill on Windows while running our restart tests. During a restart of Firefox, Mozmill wasn’t waiting long enough to detect that the old process was gone, so a new instance was started immediately. Sadly, that process failed with the “profile already in use” error. The reason was broken handling of process.poll() in mozprocess, which Henrik fixed in bug 947248. Thankfully no other Mozmill release was necessary; we only re-created our Mozmill environments with the new mozprocess version.

With the ongoing pressure of getting automated tests implemented for the Metro mode of Firefox, our Softvision automation team members concentrated their work on adding new library modules and enhancing existing ones. Another big step was the addition of Metro support to our Mozmill dashboard by Andrei. With that, we got new filter settings to show results for Metro builds only.

When Henrik noticed one morning that our Mozmill-CI staging instance was running out of disk space, he did some investigation and found that we were using the identical build retention policy settings as production. With no more than 40GB of disk space, we were trying to keep the latest 100 builds for each job. That certainly won’t work, so Cosmin reduced the number of retained builds to 5. Besides this, we were also able to fix our l10n testrun for localized builds of Firefox Aurora.

Given that we stopped supporting Mozmill 1.5 and moved on to Mozmill 2.0 a while ago, we completely missed updating our Mozmill tests documentation on MDN. Henrik worked on getting all of it up to date, so new community members won’t struggle anymore.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 7 and week 8.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda and notes from the Firefox Automation meetings of week 7 and week 8.
