Planet Mozilla: Firefox 33 beta7 to beta8

  • 46 changesets
  • 110 files changed
  • 1976 insertions
  • 805 deletions

Extension   Occurrences
cpp         34
h           14
html        13
js          11
jsm         6
css         4
xul         3
xml         3
ini         3
cc          3
c           2
xhtml       1
webidl      1
svg         1
py          1
properties  1
nsh         1
list        1
in          1
idl         1
dtd         1

Module      Occurrences
dom         18
gfx         16
browser     14
layout      12
toolkit     9
security    6
js          6
content     6
netwerk     5
mobile      3
media       3
widget      2
xpfe        1
xpcom       1
modules     1
ipc         1
embedding   1

List of changesets:

David Keeler - Bug 1057123 - mozilla::pkix: Certificates with Key Usage asserting the keyCertSign bit may act as end-entities. r=briansmith, a=sledru - 599ae9ec1b9c
Robert Strong - Bug 1070988 - Windows installer should remove leftover chrome.manifest on pave over install to prevent startup crash with Firefox 32 and above with unpacked omni.ja. r=tabraldes, a=sledru - 9286fb781568
Bobby Holley - Bug 1072174 - Handle all the cases XrayWrapper.cpp. r=peterv, a=abillings - bb4423c0da47
Brian Nicholson - Bug 1067429 - Alphabetize theme styles. r=lucasr, a=sledru - f29b8812b6d0
Brian Nicholson - Bug 1067429 - Create GeckoAppBase as the parent for Gecko.App. r=lucasr, a=sledru - 112a9fe148d2
Brian Nicholson - Bug 1067429 - Add values-v14, removing v14-only styles from values-v11. r=lucasr, a=sledru - 89d93cece9fd
David Keeler - Bug 1060929 - mozilla::pkix: Allow explicit encodings of default-valued BOOLEANs because lol standards. r=briansmith, a=sledru - 008eb429e655
Tim Taubert - Bug 1067173 - Bail out early if _resizeGrid() is called before the page has loaded. f=Mardak, r=adw, a=sledru - c043fec932a6
Markus Stange - Bug 1011166 - Improve the workarounds cairo does when rendering large gradients with pixman. r=roc, r=jrmuizel, a=sledru - a703ff0c7861
Edwin Flores - Bug 976023 - Fix crash in AppleMP3Reader. r=rillian, a=sledru - f2933e32b654
Nicolas Silva - Bug 1066139 - Put stereo video behind a pref (off by default). r=Bas, a=sledru - e60e089a7904
Nicholas Nethercote - Bug 1070251 - Anonymize non-chrome inProcessTabChildGlobal URLs in memory reports when necessary. r=khuey, a=sledru - 09dcf9d94d33
Andrea Marchesini - Bug 1060621 - WorkerScope should CC mLocation and mNavigator. r=bz, a=sledru - 32d5ee00c3ab
Andrea Marchesini - Bug 1062920 - WorkerNavigator strings should honor general.*.override prefs. r=khuey, a=sledru - 6d53cfba12f0
Andrea Marchesini - Bug 1069401 - UserAgent cannot be changed for specific websites in workers, r=khuey, r=bz, a=sledru - e178848e43d1
Gijs Kruitbosch - Bug 1065998 - Empty-check Windows8WindowFrameColor's customizationColor in case its registry value is gone. r=jaws, a=sledru - 12a5b8d685b2
Richard Barnes - Bug 1045973 - sec_error_extension_value_invalid: mozilla::pkix does not accept certificates with x509v3 extensions in x509v1 or x509v2 certificates. r=keeler, a=sledru - a4697303afa6
Branislav Rankov - Bug 1058024 - IonMonkey: (ARM) Fix jsapi-tests/testJitMoveEmitterCycles. r=mjrosenb, a=sledru - 371e802df4dc
Rik Cabanier - Bug 1072100 - mix-blend-mode doesn't work when set in JS. r=dbaron, a=sledru - badc5be25cc1
Jim Chen - Bug 1067018 - Make sure calloc/malloc/free usages match in Tools.h. r=jwatt, a=sledru - cf8866bd741f
Bill McCloskey - Bug 1071003 - Fix null crash in XULDocument::ExecuteScript. r=smaug, a=sledru - b57f0af03f78
Felipe Gomes - Bug 1063848 - Disable e10s in safe mode. r=bsmedberg, r=ally, a=sledru, ba=jorgev - 2b061899d368
Gijs Kruitbosch - Bug 1069300 - strings for panic/privacy/forget-button for beta, r=jaws,shorlander, a=dolske, l10n=pike, DONTBUILD=strings-only - 16e19b9cec72
Valentin Gosu - Bug 1011354 - Use a mutex to guard access to nsHttpTransaction::mConnection. r=mcmanus, r=honzab, a=abillings - ac926de428c3
Terrence Cole - Bug 1064346 - JSFunction's extended attributes expect POD-style initialization. r=billm, a=abillings - fd4720dd6a46
Marty Rosenberg - Bug 1073771 - Add namespaces and whatnot to make JitMoveEmitterCycles compile. r=dougc, a=test-only - 97feda79279e
Ed Lee - Bug 1058971 - [Legal]: text for sponsored tiles needs to be localized for Firefox 33 [r=adw a=sylvestre] - deaa75a553ac
Ed Lee - Bug 1064515 - update learn more link for sponsored tiles overlay [r=adw a=sylvestre] - b58a231c328c
Ed Lee - Bug 1071822 - update the learn more link in the tiles intro popup [r=adw a=sylvestre] - 0217719f20c5
Ed Lee - Bug 1059591 - Incorrectly formatted remotely hosted links causes new tab to be empty [r=adw a=sylvestre] - d34488e06177
Ed Lee - Bug 1070022 - Improve Contrast of Text on New Tab Page [r=adw a=sylvestre] - 8dd30191477e
Ed Lee - Bug 1068181 - NEW Indicator for Pinned Tiles on New Tab Page [r=ttaubert a=sylvestre] - 02da3cf36508
Ed Lee - Bug 1062256 - Improve the design of the »What is this« bubble on about:newtab [r=adw a=sylvestre] - 2a8947c986ed
Bas Schouten - Bug 1072404: Firefox may crash when the D3D device is removed while rendering. r=mattwoodrow a=sylvestre - 3d41bbe16481
Bas Schouten - Bug 1074045: Turn OMTC back on on beta. r=nical a=sylvestre - b9e8ce2a141b
Jim Mathies - Bug 1068189 - Force disable browser.tabs.remote.autostart in non-nightly builds. r=felipe, a=sledru - d41af0c7fdaf
Randell Jesup - Bug 1033066 - Never let AudioSegments underflow mDuration and cause OOM allocation. r=karlt, a=sledru - 82f4086ba2c7
Georg Fritzsche - Bug 1070036 - Catch NS_ERROR_NOT_AVAILABLE during OpenH264Provider startup. r=irving, a=sledru - b6985e15046b
Nicolas Silva - Bug 1061712 - Don't crash in DrawTargetDual::CreateSimilar if allocation fails. r=Bas, a=sledru - 69047a750833
Nicolas Silva - Bug 1061699 - Only crash debug builds if BorrowDrawTarget is called on an unlocked TextureClient. r=Bas, a=sledru - 4020480a6741
Aaron Klotz - Bug 1072752 - Make Chromium UI message loops for Windows call into WinUtils::WaitForMessage. r=jimm, a=sledru - 737fbc0e3df4
Florian Quèze - Bug 1067367 - Tapping the icon of a second doorhanger reopens the first doorhanger if it was already open. r=Enn, a=sledru - 3ff9831143fd
Robert Longson - Bug 1073924 - Hovering over links in SVG does not cause cursor to change. r=jwatt, a=sledru - 19338c25065c
Ryan VanderMeulen - Backed out changeset d41af0c7fdaf (Bug 1068189) for reftest-ipc crashes/failures. - dabbfa2c0eac
Randell Jesup - Bug 1069646 - Scale frame rate initialization in webrtc media_opimization. r=gcp, a=sledru - bc5451d18901
David Keeler - Bug 1053565 - Update minimum system NSS requirement in configure.in (it is now 3.17.1). r=glandium, a=sledru - 0780dce35e25

Planet Mozilla: ReMo Camp 2014: Impact through action

For the last 3 years the council, peers and mentors of the Mozilla Reps program have been meeting annually at ReMo Camp, a 3-day meetup to check the temperature of the program and plan for the next 12 months. This year’s Camp was particularly special because for the first time, Mitchell Baker, Mark Surman and Mary Ellen Muckerman participated in it. With such a great mix of leadership both at the program level and at the organizational level, it was clear this ReMo Camp would be our most interesting and productive one.
The meeting spanned 3 days:
Day 1:
The Council and Peers got together to add the finishing touches and tweaks to the program content and schedule but also to discuss the program’s governance structure. Council and Peers defined the different roles in the program that allow the Reps to keep each leadership body accountable and made sure there was general alignment. We will post a separate blog post on governance explaining the exact functions of the module owner, the peers, the council, mentors and Reps.
Day 2
The second day was very exciting and was coined the “challenges” day, where we had Mitchell, Mark and Mary Ellen joining the Reps to work on 6 “contribution challenges”. These challenges are designed to be concrete initiatives that aim to have quick and concrete impact on Mozilla’s product goals with large-scale volunteer participation. Mozillians around the globe work tirelessly to push the Mozilla mission forward, and one of the most powerful ways of doing so is by improving our products. We worked on 6 specific areas to have an impact and identify the next steps. There’s a lot of excitement already, and the Reps program will play a central role as a platform to mobilize and empower local communities participating in these challenges. More on this shortly…
Day 3
The last day of the camp was entirely dedicated to the Reps program. We had so many things to talk about, so many ideas, and alas the day only has so many hours, so we focused on three thematic pillars: impact, mentorship training and getting stuff done. The council and peers had spent Friday setting those priorities, the rationale being that Mozilla Reps leadership is very good at identifying what needs to get done, and not as good with follow-through. The sessions on “impact” were prioritized over others as we wanted to figure out how to best enable/empower Reps to have an impact and follow up on all the great plans we make. Impact was broken down into three thematic buckets:

Accountability: how do we keep Reps accountable for what they have signed up for?

Impact measurement: how do we measure the impact of all the wonderful things we do?

Recognition: how do we recognize in a more systematic and fair way our volunteers who are going out of their way?

After the impact discussion, we changed gears and moved to the Mentorship training. During the preparations leading to ReMo Camp most of the mentors asked for training. Our mentors are really committed to helping Reps on the ground do a great job, so the council and the peers facilitated a mentorship training divided into 5 different stations. We got a lot of great feedback and we’ll be producing videos with the materials of the training so that any mentor (or interested Rep) has access to this content. We will also be rolling out Q&A sessions for each mentorship station. Stay tuned if you want to learn more about mentorship and the Reps program in general.

The third part of Day 3 was “getting stuff done”, a session where we identified 10 concrete tasks (most of them pending from the last ReMo Camp) that we could actually get done by the end of the day.

The overall take-away from this Camp was that instead of designing grand ambitious plans we need to be more agile and sometimes be more realistic with what work we can get accomplished. Ultimately, it will help us get more stuff done more quickly. That spirit of urgency and agility permeated the entire weekend, and we hope to be able to transmit this feeling to each and every Rep.

There wasn’t enough time, but we spent it in the best possible way. Having the Mozilla leadership with us was incredibly empowering and inspiring. The Reps have organized themselves and created this powerful platform. Now it’s time to focus our efforts. The weekend in Berlin proved that the Reps are a cohesive group of volunteer leaders with a lot of experience and the eyes and ears of Mozilla in every corner of the world. Now let’s get together and commit to doing everything we set ourselves to do before ReMo Camp 2015.

Planet Mozilla: Telemetry meets Clojure.

tl;dr: Data-related telemetry alerts (e.g. histograms or main-thread IO) are now aggregated by medusa, which allows devs to post, view and filter alerts. The dashboard allows users to subscribe to search criteria or individual metrics.

As mentioned in my previous post, we recently switched to a dashboard generator “iacomus” to visualize the data produced by some of our periodic map-reduce jobs. Given that each dashboard has some metadata that describes the datasets it handles, it became possible to write a regression detection algorithm for all our dashboards that use the iacomus data-format.

The algorithm generates a time-series for each possible combination of the filtering and sorting criteria of a dashboard, compares the latest data-point to the distribution of the previous N, and generates an alert if it detects an outlier. Stats 101.
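
For illustration, here is a rough sketch in Python of the kind of outlier test described above. The actual detector is written in Clojure, and its exact window size, statistics and threshold are not described in this post, so treat all of those as assumptions:

# Hypothetical sketch only; the real medusa/iacomus detector may differ.
from statistics import mean, stdev

def is_outlier(history, latest, n=30, threshold=3.0):
    """Compare the latest data point against the previous n points."""
    window = history[-n:]
    if len(window) < 2:
        return False  # not enough data to estimate a distribution
    mu, sigma = mean(window), stdev(window)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) > threshold * sigma

# One such test would run per time-series, i.e. per combination of a
# dashboard's filtering and sorting criteria; a True result raises an alert.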

Alerts are collected and aggregated by medusa, which provides a RESTful API to submit alerts and exposes a dashboard that allows users to view and filter alerts using regular expressions and subscribe to alerts.

Coding the aggregator and regression detector in Clojure[script] has been a lot of fun. I found particularly attractive the fact that Clojure doesn’t have any big web framework à la Ruby or Python that forces you into one specific mindset. Instead, one can roll their own using a wide set of libraries, like:

  • HTTP-Kit, an event-driven HTTP client/server
  • Compojure, a routing library
  • Korma, a SQL DSL
  • Liberator, RESTful resource handlers
  • om, React.js interface for Clojurescript
  • secretary, a client-side routing library

The ability to easily compose functionality from different libraries is exceptionally well expressed by Alan Perlis: “It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures”. And so, as it happens, instead of each library having its own set of independent abstractions and data structures, Clojure libraries tend to use mostly just lists, vectors, sets and maps, which greatly simplifies interoperability.

Lisp gets criticized for its syntax, or lack thereof, but I don’t feel that’s fair. Using any editor that inserts and balances parentheses for you does the trick. I also feel like I didn’t have to run a background thread in my mind to think about whether what I was writing would please the compiler or not, unlike in Scala for instance. Not to mention the ability to use macros, which allows one to easily extend the compiler with user-defined code. The expressiveness of Clojure also means that more thought is required per LOC, but that might just be a side-effect of not being a full-time functional programmer.

What I do miss in the Clojure ecosystem is a strong set of tools for statistics and machine learning. Incanter is a wonderful library, but coming from an R and Python/SciPy background there is still a lot of catching up to do.


Planet Mozilla: What David Did During Q3

September is ending, and with it Q3 of 2014. It’s time for a brief report, so here is what happened during the summer.

Session Restore

After ~18 months working on Session Restore, I am progressively switching away from that topic. Most of the main performance issues that we set out to solve have been solved already, we have considerably improved safety, cleaned up lots of the code, and added plenty of measurements.

During this quarter, I have been working on various attempts to optimize both loading speed and saving speed. Unfortunately, both ongoing works were delayed by external factors and postponed to a yet undetermined date. I have also been hard at work on trying to pin down performance regressions (which turned out to be external to Session Restore) and safety bugs (which were eventually found and fixed by Tim Taubert).

In the next quarter, I plan to work on Session Restore only in a support role, for the purpose of reviewing and mentoring.

Also, a rant: the work on Session Restore has relied heavily on collaboration between the Perf team and the FxTeam. Unfortunately, the resources were not always available to make this collaboration work. I imagine that the FxTeam is spread too thin across too many tasks, with too many fires to fight. Regardless, the symptom I experienced is that during the course of this work, low-priority, high-priority and safety-critical patches alike were left to rot without reviews, despite my repeated requests, for 6, 8 or 10 weeks, much to the dismay of everyone involved. This means man·months of work thrown to /dev/null, along with quarterly objectives, morale, opportunities, contributors and good ideas.

I will try and blog about this, eventually. But please, in the future, everyone: remember that in the long run, the priority of getting reviews done (or explaining that you’re not going to) is quite a bit higher than the priority of writing code.

Async Tooling

Many improvements to Async Tooling landed during Q3. We now have the PromiseWorker, which considerably simplifies interactions between the main thread and workers, for both Firefox and add-on developers. I hear that the first add-on to make use of this new feature is currently being developed. New features, bugfixes and optimizations landed for OS.File. We have also landed the ability to watch for changes in a directory (under Windows only, for the time being).

Sadly, my work on interactions between Promise and the Test Suite is currently blocked until the DevTools team manages to get all the uncaught asynchronous errors under control. It’s hard work, and I can understand that it is not a high priority for them, so in Q4, I will try to find a way to land my work and activate it only for a subset of the mochitest suites.

Places

I have recently joined the newly restarted effort to improve the performance of Places, the subsystem that handles our bookmarks, history, etc. For the moment, I am still getting warmed up, but I expect that most of my work during Q4 will be related to Places.

Shutdown

Most of my effort during Q3 was spent improving the Shutdown of Firefox. Where we already had support for asynchronously shutting down JavaScript services/consumers, we now also have support for native services and consumers. Also, I am in the process of landing Telemetry that will let us find out the duration of the various stages of shutdown, information that we could not access until now.

As it turns out, we had many crashes during asynchronous shutdown, a few of them safety-critical. At the time, we did not have the necessary tools to prioritize our efforts or to find out whether our patches had effectively fixed bugs, so I built a dashboard to extract and display the relevant information on such crashes. This proved a wise investment, as we spent plenty of time fighting AsyncShutdown-related fires using this dashboard.

In addition to the “clean shutdown” mechanism provided by AsyncShutdown, we also now have the Shutdown Terminator. This is a watchdog subsystem, launched during shutdown, and it ensures that, no matter what, Firefox always eventually shuts down. I am waiting for data from our Crash Scene Investigators to tell us how often we need this watchdog in practice.

Community

I lost track of how many code contributors I interacted with during the quarter, but that represents hundreds of e-mails, as well as countless hours on IRC and Bugzilla, and a few hours on ask.mozilla.org. This year’s mozEdu teaching is also looking good.

We also launched FirefoxOS in France, to great success. I found myself in a supermarket, presenting the ZTE Open C and the activities of Mozilla to the crowds, and this was a pleasant experience.

For Q4, expect more mozEdu, more mentoring, and more sleepless hours helping contributors debug their patches :)


Planet Mozilla: Multi- and conditional dispatch in traits

I’ve been working on a branch that implements both multidispatch (selecting the impl for a trait based on more than one input type) and conditional dispatch (selecting the impl for a trait based on where clauses). I wound up taking a direction that is slightly different from what is described in the trait reform RFC, and I wanted to take a chance to explain what I did and why. The main difference is that in the branch we move away from the crate concatenability property in exchange for better inference and less complexity.

The various kinds of dispatch

The first thing to explain is what the difference is between these various kinds of dispatch.

Single dispatch. Let’s imagine that we have a conversion trait:

trait Convert<Target> {
    fn convert(&self) -> Target;
}

This trait just has one method. It’s about as simple as it gets. It converts from the (implicit) Self type to the Target type. If we wanted to permit conversion between int and uint, we might implement Convert like so:

impl Convert<uint> for int { ... } // int -> uint
impl Convert<int> for uint { ... } // uint -> int

Now, in the background here, Rust has this check we call coherence. The idea is (at least as implemented in the master branch at the moment) to guarantee that, for any given Self type, there is at most one impl that applies. In the case of these two impls, that’s satisfied. The first impl has a Self of int, and the second has a Self of uint. So whether we have a Self of int or uint, there is at most one impl we can use (and if we don’t have a Self of int or uint, there are zero impls, that’s fine too).

Multidispatch. Now imagine we wanted to go further and allow int to be converted to some other type MyInt. We might try writing an impl like this:

struct MyInt { i: int }
impl Convert<MyInt> for int { ... } // int -> MyInt

Unfortunately, now we have a problem. If Self is int, we now have two applicable conversions: one to uint and one to MyInt. In a purely single dispatch world, this is a coherence violation.

The idea of multidispatch is to say that it’s ok to have multiple impls with the same Self type as long as at least one of their other type parameters is different. So this second impl is ok, because the Target type parameter is MyInt and not uint.

Conditional dispatch. So far we have dealt only in concrete types like int and MyInt. But sometimes we want to have impls that apply to a category of types. For example, we might want to have a conversion from any type T into a uint, as long as that type supports a MyGet trait:

trait MyGet {
    fn get(&self) -> MyInt;
}

impl<T> Convert<MyInt> for T
    where T:MyGet
{
    fn convert(&self) -> MyInt {
        self.get()
    }
}

We call impls like this, which apply to a broad group of types, blanket impls. So how do blanket impls interact with the coherence rules? In particular, does the conversion from T to MyInt conflict with the impl we saw before that converted from int to MyInt? In my branch, the answer is “only if int implements the MyGet trait”. This seems obvious but turns out to have a surprising amount of subtlety to it.

Crate concatenability and inference

In the trait reform RFC, I mentioned a desire to support crate concatenability, which basically means that you could take two crates (Rust compilation units), concatenate them into one crate, and everything would keep building. It turns out that the coherence rules already basically guarantee this without any further thought – except when it comes to inference. That’s where things get interesting.

To see what I mean, let’s look at a small example. Here we’ll use the same Convert trait as we saw before, but with just the original set of impls that convert between int and uint. Now imagine that I have some code which starts with an int and tries to call convert() on it:

trait Convert<T> { fn convert(&self) -> T; }
impl Convert<uint> for int { ... }
impl Convert<int> for uint { ... }
...
let x: int = ...;
let y = x.convert();

What can we say about the type of y here? Clearly the user did not specify it and hence the compiler must infer it. If we look at the set of impls, you might think that we can infer that y is of type uint, since the only thing you can convert an int into is a uint. And that is true – at least as far as this particular crate goes.

However, if we consider beyond a single crate, then it is possible that some other crate comes along and adds more impls. For example, perhaps another crate adds the conversion to the MyInt type that we saw before:

struct MyInt { i: int }
impl Convert<MyInt> for int { ... } // int -> MyInt

Now, if we were to concatenate those two crates together, then this type inference step wouldn’t work anymore, because int can now be converted to either uint or MyInt. This means that the snippet of code we saw before would probably require a type annotation to clarify what the user wanted:

let x: int = ...;
let y: uint = x.convert();

Crate concatenation and conditional impls

I just showed that the crate concatenability principle interferes with inference in the case of multidispatch, but that is not necessarily bad. It may not seem so harmful to clarify both the type you are converting from and the type you are converting to, even if there is only one type you could legally choose. Also, multidispatch is fairly rare; most traits have a single type that decides on the impl and then all other types are uniquely determined. Moreover, with the associated types RFC, there is even a syntactic way to express this.

However, when you start trying to implement conditional dispatch (that is, dispatch predicated on where clauses), crate concatenability becomes a real problem. To see why, let’s look at a different trait called Push. The purpose of the Push trait is to describe collection types that can be appended to. It has one associated type Elem that describes the element type of the collection:

trait Push {
    type Elem;

    fn push(&mut self, elem: Elem);
}

We might implement Push for a vector like so:

impl<T> Push for Vec<T> {
    type Elem = T;

    fn push(&mut self, elem: T) { ... }
}

(This is not how the actual standard library works, since push is an inherent method, but the principles are all the same and I didn’t want to go into inherent methods at the moment.) OK, now imagine I have some code that is trying to construct a vector of char:

let mut v = Vec::new();
v.push('a');
v.push('b');
v.push('c');

The question is, can the compiler resolve the calls to push() here? That is, can it figure out which impl is being invoked? (At least in the current system, we must be able to resolve a method call to a specific impl or type bound at the point of the call – this is a consequence of having type-based dispatch.) Somewhat surprisingly, if we’re strict about crate concatenability, the answer is no.

The reason has to do with DST. The impl for Push that we saw before in fact has an implicit where clause:

impl<T> Push for Vec<T>
    where T : Sized
{ ... }

This implies that some other crate could come along and implement Push for an unsized type:

impl<T> Push for Vec<[T]> { ... }

Now, when we consider a call like v.push('a'), the compiler must pick the impl based solely on the type of the receiver v. At the point of calling push, all we know is that the type of v is a vector, but we don’t know what it’s a vector of – to infer the element type, we must first resolve the very call to push that we are looking at right now.

Clearly, not being able to call push without specifying the type of elements in the vector is very limiting. There are a couple of ways to resolve this problem. I’m not going to go into detail on these solutions, because they are not what I ultimately opted to do. But briefly:

  • We could introduce some new syntax for distinguishing conditional dispatch vs other where clauses (basically the input/output distinction that we use for type parameters vs associated types). Perhaps a when clause, used to select the impl, versus a where clause, used to indicate conditions that must hold once the impl is selected, but which are not checked beforehand. Hard to understand the difference? Yeah, I know, I know.
  • We could use an ad-hoc rule to distinguish the input/output clauses. For example, all predicates applied to type parameters that are directly used as an input type. Limiting, though, and non-obvious.
  • We could create a much more involved reasoning system (e.g., in this case, Vec::new() in fact yields a vector whose types are known to be sized, but we don’t take this into account when resolving the call to push()). Very complicated, unclear how well it will work and what the surprising edge cases will be.

Or… we could just abandon crate concatenability. But wait, you ask, isn’t it important?

Limits of crate concatenability

So we’ve seen that crate concatenability conflicts with inference and it also interacts negatively with conditional dispatch. I now want to call into question just how valuable it is in the first place. Another way to phrase crate concatenability is to say that it allows you to always add new impls without disturbing existing code using that trait. This is actually a fairly limited guarantee. It is still possible for adding impls to break downstream code across two different traits, for example. Consider the following example:

struct Player { ... }
trait Cowboy {
    // draw your gun!
    fn draw(&self);
}
impl Cowboy for Player { ... }

struct Polygon { ... }
trait Image {
    // draw yourself (onto a canvas...?)
    fn draw(&self);
}
impl Image for Polygon { ... }

Here you have two traits with the same method name (draw). However, the first trait is implemented only on Player and the other on Polygon. So the two never actually come into conflict. In particular, if I have a player player and I write player.draw(), it could only be referring to the draw method of the Cowboy trait.

But what happens if I add another impl for Image?

impl Image for Player { ... }

Now suddenly a call to player.draw() is ambiguous, and we need to use so-called “UFCS” notation to disambiguate (e.g., Player::draw(&player)).

(Incidentally, this ability to have type-based dispatch is a great strength of the Rust design, in my opinion. It’s useful to be able to define method names that overlap and where the meaning is determined by the type of the receiver.)

Conclusion: drop crate concatenability

So I’ve been turning these problems over for a while. After some discussions with others, aturon in particular, I feel the best fix is to abandon crate concatenability. This means that the algorithm for picking an impl can be summarized as:

  1. Search the impls in scope and determine those whose types can be unified with the current types in question and hence could possibly apply.
  2. If there is more than one impl in that set, start evaluating where clauses to narrow it down.

This is different from the current master in two ways. First of all, to decide whether an impl is applicable, we use simple unification rather than a one-way match. Basically this means that we allow impl matching to affect inference, so if there is at most one impl that can match the types, it’s ok for the compiler to take that into account. This covers the let y = x.convert() case. Second, we don’t consider the where clauses unless they are needed to remove ambiguity.

I feel pretty good about this design. It is somewhat less pure, in that it blends the role of inputs and outputs in the impl selection process, but it seems very usable. Basically it is guided only by the ambiguities that really exist, not those that could theoretically exist in the future, when selecting types. This avoids forcing the user to classify everything, and in particular avoids the classification of where clauses according to when they are evaluated in the impl selection process. Moreover, I don’t believe it introduces any significant compatibility hazards that were not already present in some form or another.

Planet Mozilla: Mozilla Mercurial Statistics

I recently gained SSH access to Mozilla's Mercurial servers. This allows me to run some custom queries directly against the data. I was interested in some high-level numbers and thought I'd share the results.

hg.mozilla.org hosts a total of 3,445 repositories. Of these, there are 1,223 distinct root commits (i.e. distinct graphs). Altogether, there are 32,123,211 commits. Of those, there are 865,594 distinct commits (not double counting commits that appear in multiple repositories).
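
The post doesn't show the exact queries, but numbers like these could plausibly be gathered from hg's template output; the following Python sketch is only an illustration of the idea (a real run across 3,445 repositories would want something much faster than a naive per-repo scan):

# Hypothetical sketch; not the actual queries run against hg.mozilla.org.
import subprocess

def commit_hashes(repo_path):
    out = subprocess.check_output(
        ["hg", "log", "-R", repo_path, "--template", "{node}\n"])
    return out.decode().splitlines()

def summarize(repo_paths):
    total, distinct = 0, set()
    for path in repo_paths:
        hashes = commit_hashes(path)
        total += len(hashes)
        distinct.update(hashes)
    # total / len(distinct) gives the ~37:1 duplication ratio discussed below
    return total, len(distinct)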

We have a high ratio of total commits to distinct commits (about 37:1). This means we have high duplication of data on disk. This basically means a lot of repos are clones/forks of existing ones. No big surprise there.

What is surprising to me is the low number of total distinct commits. I was expecting the number to run into the millions. (Firefox itself accounts for ~240,000 commits.) Perhaps a lot of the data is sitting in Git, Bitbucket, and GitHub. Sounds like a good data mining expedition...

Planet Mozilla: My Q2-2014 report

Summary of what I did last quarter (regular l10n-drivers work such as patch reviews, pushes to production, meetings and past projects maintenance excluded).

Australis release

At the end of April, we shipped Firefox 29 which was our first major redesign of the Firefox user interface since Firefox 4 (released in 2011). The code name for that was Australis and that meant replacing a lot of content on mozilla.org to introduce this new UI and the new features that go with it. That also means that we were able to delete a lot of old content that now had become really obsolete or that was now duplicated on our support site.

Since this was a major UI change, we decided to show an interactive tour of the new UI to both new users and existing users upgrading to the new version. That tour was fully localized in a few weeks’ time in close to 70 languages, which represents 97.5% of our user base. For the last locales not ready on time, we either decided to show them a partially translated site (some locales had translated almost everything, or some of the non-translated strings were not very visible to most users, such as alternative content to images for screen readers) or to let the page fall back to the best language available (like Occitan falling back to French, for example).

Mozilla.org was also updated with 6 new product pages replacing a lot of old content as well as updates to several existing pages. The whole site was fully ready for the launch with 60 languages 100% ready and 20 partially ready, all that done in a bit less than 4 weeks, parallel to the webdev integration work.

I am happy to say that thanks to our webdev team, our amazing l10n community and the help of my colleagues Francesco Lodolo (also Italian localizer) and my intern Théo Chevalier (also French localizer), we were able not only to offer a great upgrading experience for the quasi-totality of our user base, but also to clean up a lot of old content, fix many bugs and prepare the site from an l10n perspective for the upcoming releases of our products.

Today, for a big locale spanning all of our products and activities, mozilla.org represents about 2,000 strings to translate and maintain (+500 since Q1); for a smaller locale, this is about 800 strings (+200 since Q1). This quarter saw a significant bump in terms of strings added across all locales, but this was closely related to the Australis launch; we shouldn’t have such a rise in strings impacting all locales in the next quarters.

Transvision releases

Last quarter we did 2 releases of Transvision with several features targeting our 3 audiences: localizers, localization tools, and current and potential Transvision developers.

For our localizers, I worked on a couple of features, one is quick filtering of search results per component for Desktop repositories (you search for 'home' and with one click, you can filter the results for the browser, for mail or for calendar for example). The other one is providing search suggestions when your search yields no results with the best similar matches ("your search for 'lookmark' yielded no result, maybe you were searching for 'Bookmark'?").

For the localization tools community (software or web apps like Pontoon, Mozilla Translator, Babelzilla, OmegaT plugins...), I rewrote our old JSON API entirely and extended it to provide more services. Our old API was initially created for our own purposes and basically just made it possible to get our search results as a JSON feed on our most popular views. Tools started using it a couple of years ago and we also got requests for API changes from those tool makers, so it was time to rewrite it entirely to make it scalable. Since we don’t want to break anybody’s workflow, we now redirect all the old API calls to the new API ones. One of the significant new services in the API is a translation memory query that gives you results and a quality index based on the Levenshtein distance with the searched terms. You can get more information on the new API in our documentation.
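
The post doesn’t spell out the exact formula, but a Levenshtein-based quality index typically turns the edit distance into a percentage; here is a rough Python illustration (the real Transvision implementation is in PHP and may compute the score differently):

# Hypothetical illustration; not the actual Transvision scoring code.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def quality_index(search, result):
    """Return a 0-100 score; 100 means the strings are identical."""
    longest = max(len(search), len(result), 1)
    return 100 * (1 - levenshtein(search, result) / longest)

print(quality_index("lookmark", "Bookmark"))  # 87.5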

I also worked on improving our internal workflow and making it easier for potential developers wanting to hack on Transvision to install and run it locally. That means that we now do continuous integration with Travis CI (all of our unit tests are run on each commit and pull request on PHP 5.4 and 5.5 environments), we have made a lot of improvements to our unit test suite and coverage, we expose to developers peak memory usage and time per request on all views so as to catch performance problems early, and we also now have a "dev" mode that allows getting Transvision installed and running on the PHP development server in a matter of minutes, instead of hours for a real production mode. One of the blockers for new developers was the time required to install Transvision locally. Since it is a spidering tool looking for localized strings in Mozilla source repositories, it needed to first clone all the repositories it indexes (mercurial/git/svn), which is about 20GB of data and takes hours even with a fast connection. We are now providing a snapshot of the final extracted data (still 400MB ;)) every 6 hours that is used by the dev install mode.

Check the release notes for 3.3 and 3.4 to see what other features were added by the team (e.g. on-demand TMX generation or the dynamic Gaia comparison view added by Théo, my intern).

Web dashboard / Langchecker

The main improvement I brought to the web dashboard this quarter is probably the deadline field added to all of our .lang files, which allows us to better communicate the urgency of projects and gives localizers an extra parameter to prioritize their work.

Théo’s first project for his internship was to build a 'project' view on the web dashboard that we can use to get an overview of the translation of a set of pages/files. This was used for the Australis release (ex: http://l10n.mozilla-community.org/webdashboard/?project=australis_all) but can be used for any other project we want to define; here is an example for the localization of two Android add-ons for the World Cup that we did and tracked with .lang files.

We brought other improvements to our maintenance scripts, for example the ability to "bulk activate" a page for all the locales that are ready; we improved our locamotion import scripts, started adding unit tests, etc. Generally speaking, the Web dashboard keeps improving regularly since I rewrote it last quarter and we regularly experiment with using it for more projects, especially for projects which don't fit in the usual web/product categories and that also need tracking. I am pretty happy too that now I co-own the dashboard with Francesco, who brings his own ideas and code to streamline our processes.

Théo's internship

I mentioned it before: our main French localizer, Théo Chevalier, is doing an internship with me and Delphine Lebédel as mentors; this is the internship that ends his 3rd year of engineering (in a 5-year curriculum). He is based in Mountain View, started early April and will be with us until late July.

He is basically working on almost all of the projects I, Delphine and Flod work on.

So far, apart from regular work as an l10n-driver, he has worked for me on 3 projects: the Web Dashboard projects view, building TMX files on demand on Transvision, and the Firefox Nightly localized page on mozilla.org. I haven't talked about this last project yet, but he blogged about it recently. In short, the first page that is shown to users of localized builds of Firefox Nightly can now be localized, and by localized we don't just mean translated; we mean that we have a community block managed by the local community proposing that Nightly users join their local team "on the ground". So far, we have this page in French, Italian, German and Czech; if your locale workflow is to translate mozilla-central first, this is a good tool for you to reach a potential technical audience to grow your community.

Community

This quarter, I found 7 new localizers (2 French, 1 Marathi, 2 Portuguese/Portugal, 1 Greek, 1 Albanian) to work with me essentially on mozilla.org content. One of them, Nicolas Delebeque, took the lead on the Australis launch and coordinated the French l10n team, since Théo, our locale leader for French, was starting his internship at Mozilla.

For Transvision, 4 people in the French community (after all, Transvision was created initially by them ;)) expressed interest or sent small patches to the project; maybe all the efforts we put into making the application easy to install and hack are starting to pay off. We'll probably see in Q3/Q4 :)

I spent some time trying to help rebuild the Portugal community, which is now 5 people (instead of 2 before). We recently resurrected the mozilla.pt domain name to actually point to a server, the MozFR one already hosting the French community and WoMoz (having the French community help the Portuguese one is cool BTW). A mailing list for Portugal was created (accessible also as nntp and via Google Groups) and the #mozilla-portugal IRC channel was created. This is a start; I hope to have time in Q3 to help launch a real Portugal site and help them grow beyond localization, because I think that communities focused on only one activity have no room to grow or renew themselves (you also need coding, QA, events, marketing...).

I also started looking at the Babelzilla new platform rewrite project to replace the current aging platform (https://github.com/BabelZilla/WTS/), to see if I can help Jürgen, the only Babelzilla dev, with building a community around his project. Maybe some of the experience I gained through Transvision will be transferable to Babelzilla (it was a one-man effort, now 4 people commit regularly out of 10 committers). We'll see in the next quarters if I can help somehow; I only had time so far to install the app locally.

In terms of events, this was a quiet quarter; apart from our l10n-drivers work week, the only localization event I was at was the localization sprint over a whole weekend in the Paris office. Clarista, the main organizer, blogged about it in French. Many thanks to her and the whole community that came over; it was very productive, and we will definitely do it again and maybe make it a recurring event.

Summary

This quarter was a good balance between shipping, tooling and community building. The beginning of the quarter was really focused on shipping Australis and as usual with big releases, we created scripts and tools that will help us ship better and faster in the future. Tooling and in particular Transvision work which is probably now my main project, took most of my time in the second part of the quarter.

Community building was as usual a constant in my work. The one thing that I find more difficult now in this area is finding time for it in the evenings/weekends (when most potential volunteers are available for synchronous communication), basically because it conflicts with my family life a bit. I am trying to be more efficient at recruiting using asynchronous communication tools (email, forums…), but as long as I can get 5 to 10 additional people per quarter to work with me, it should be fine for scaling our projects.

Planet Mozilla: Jetpack Pro Tip - JPM --prefs

JPM allows you to dynamically set preferences which can be used when an add-on developer uses jpm run or jpm test. I added this new --prefs feature yesterday, because Firefox DevTools requested it, and it is part of JPM 0.0.16.

With --prefs you can point to a JSON file, which should contain an object with a key for each pref that you want set, where each key's value is the desired value for that pref. Here is a JSON file example, used with jpm test --prefs ~/firefox-prefs.json:

{
  "extensions.test.pref": true
}

This is the static way to add prefs. If you want dynamic prefs, then you can use a CommonJS file, with jpm test --prefs ~/firefox-prefs.js, where ~/firefox-prefs.js looks something like this:

var prefs = {};
prefs["extensions.test.time"] = Date.now();
module.exports = prefs;

Planet Mozilla: Processing Jetpack

This is the first post of my new Jetpack Labs series, which is a project that I am working on in my personal time.

I think Processing is a great language because it is very simple and good at what it was meant to do. I’ve only had a little time to try hacking on some Processing art projects, and it’s been a lot of fun (I will post those scripts when they are done). However, using the Java Processing client was not such a pleasant experience, and I thought that making a Firefox add-on, using Processing-js with the same features, would not be hard. This is partly what led me to write about my Art Tools for Firefox idea in February.

This week I found some time to hack a prototype together, and it’s working pretty well now, you can find the source on Github at jetpack-labs/processing-jetpack.

At the moment this add-on is using Scratchpad as an editor, but in the future I want to use the WebIDE. Also, at the moment I’ve only added a “Processing Start” menuitem; there should also be pause and stop menuitems, and there should be corresponding buttons for these actions. All of this and more are features that need to be added, and on top of that I would like to integrate this add-on with openprocessing.org, so if you’re interested in the project, this is my request for contributors :)

There is a lot of work to do here still.

Planet Mozilla: A day in the curl project

I maintain curl and lead the development there. This is how I spend my time on an ordinary day in the project. Maybe I don’t do all of these things every single day, but sometimes I do and sometimes I just do a subset of them. I just want to give you a look into what I do and why I don’t add new stuff more often or faster… I spend about one to three hours on the project every day. Let me also stress that curl is a tiny little project in comparison with many other open source projects. I’m certainly not saying otherwise.

the new bug

Someone submits a new bug in the bug tracker or on one of the mailing lists. Most initial bug reports lack sufficient details, so the first thing I do is ask for more info and possibly ask the submitter to try a recent version, as very often we get bugs reported on very old versions. Many bug reports take several demands for more info before the necessary details have been provided. I don’t really start to investigate a problem until I feel I have a sufficient amount of details. We’re a very small core team that acts on other people’s bugs.

the question by a newbie in the project

A new person shows up with a question. The question is usually similar to a FAQ entry or an example but not exactly. It deserves a proper response. This kind of question can often be answered by anyone, but also most people involved in the project don’t feel the need or “familiarity” to respond to such questions and therefore remain quiet.

the old mail I haven’t responded to yet

I want every serious email that reaches the mailing lists to get a response, so all mails that neither I nor anyone else responds to I keep around in my inbox and when I have idle time over I go back and catch up on old mails. Some of them can then of course result in a new bug or patch or whatever. Occasionally I have to resort to simply saving away the old mail without responding in order to catch up, just to cut the list of outstanding things to do a little.

the TODO list for my own sake, things I’d like to get working on

There are always things I really want to see done in the project, and I work on them far too little really. But every once in a while I ignore everything else in my life for a couple of hours and spend them on adding a new feature or fixing something I’ve been missing. Actual development of new features is a very small fraction of all time I spend on this project.

the list of open bug reports

I regularly revisit this list to see what I can do to push the open ones forward. Follow-up questions, deep dives into source code and specifications, or just the sad realization that a particular issue won’t be fixed within the nearest time (year?) so that I close it as “future” and add the problem to our KNOWN_BUGS document. I strive to keep the bug list clean and only keep relevant bugs open. Those issues that are not reproducible, are left without the proper attention from the reporter, or otherwise stall will get closed. In general I feel quite lonely as responder in the bug tracker…

the mailing list threads that are sort of dying but I do want some progress or feedback on

In my primary email inbox I usually keep ongoing threads around. Lots of discussions just silently stop getting more posts and thus slowly wither away further up the list to become forgotten and ignored. With some interval I go back to see if the posters are still around, if there’s any more feedback or whatever in order to figure out how to proceed with the subject. Very often this makes me get nothing at all back and instead I just save away the entire conversation thread, forget about it and move on.

the blog post I want to do about a recent change or fix I did I’d like to highlight

I try to explain some changes to the world in blog posts. Not all changes, but the ones that are somehow noteworthy as they perhaps change the way things have been or introduce new fun features perhaps not that easily spotted. Of course all features are always documented etc, but sometimes I feel I need to put some extra attention and focus on things in a more free-form style. Or I just write about meta stuff, like this very posting.

the reviewing and merging of patches

One of the most important tasks I have is to review patches. I’m basically the only person in the project who volunteers to review patches against any angle or corner of the project. When people have spent time and effort and gallantly send the results of their labor our way in the best possible format (a patch!), the submitter deserves a good review and proper feedback. Also, paving the road for more patches is one of the best way to scale the project. Helping newcomers become productive is important.

Patches are preferably posted on the mailing lists but there’s also some coming in via pull requests on github and while I strongly discourage that (due to them not getting the same attention and possible scrutiny on the list like the others) I sometimes let them through anyway just to be smooth.

When the patch looks good (or sometimes good enough and I just edit some minor detail), I merge it.

the non-disclosed discussions about a potential security problem

We’re a small project with a wide reach and security problems can potentially have grave impact on users. We take security seriously, and we very often have at least one non-public discussion going on about a problem in curl that may have security implications. We then often work on phrasing security advisories, working down exactly which versions that are vulnerable, producing patches for at least the most recent ones of those affected versions and so on.

tame stackoverflow

stackoverflow.com has become almost like a wikipedia for source code and programming related issues (although it isn’t a wiki), and that site is one of the primary referrers to curl’s web site these days. I tend to glance over the curl and libcurl related questions and offer my answers at times. If nothing else, it is good to help keeping the amount of disinformation at low levels.

I strongly disapprove of people filing bug reports on such places or even very detailed (lib)curl core questions that should’ve been asked on the curl-library list.

there are idle times too

Yeah. Not very often, but sometimes I actually just need a day off all this. Sometimes I just don’t find motivation or energy enough to dig into that terrible seldom-happening bug on a platform I’ve never seen personally. A project like this never ends. The same day we release a new release, we just reset our clocks and we’re back on improving curl, fixing bugs and cleaning up things for the next release. Forever and ever until the end of time.


Planet Mozilla: New to Bugzilla

I believe it was a few years ago, possibly more, when someone (was it Josh Matthews? David Eaves?) added a feature to Bugzilla that indicated when a person was “New to Bugzilla”. It was a visual cue next to their username and its purpose was to help others remember that not everyone in the Bugzilla soup is a veteran, accustomed to our jargon, customs, and best practices. This visual cue came in handy three weeks ago when I encouraged 20 new contributors to sign up for Bugzilla. 20 people who have only recently begun their journey towards becoming Mozilla contributors, and open source mavens. In setting them loose upon our bug tracker I’ve observed two things:

ONE: The “New to Bugzilla” flag does not stay up long enough. I’ll file a bug on this and look into how long it currently does stay up, and recommend that if possible we should have it stay up until the following criteria are met:
* The person has made at least 10 comments
* The person has put up at least one attachment
* The person has either reported, resolved, been assigned to, or verified at least one bug
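
For what it’s worth, the combined check could be expressed as a simple predicate, roughly like this hypothetical Python sketch (the field names are made up for illustration and do not correspond to actual Bugzilla schema or API fields):

# Hypothetical sketch only; not actual Bugzilla code or fields.
def is_new_to_bugzilla(user):
    has_enough_comments = user.comment_count >= 10
    has_attachment = user.attachment_count >= 1
    has_touched_a_bug = (user.bugs_reported + user.bugs_resolved
                         + user.bugs_assigned + user.bugs_verified) >= 1
    # The flag would stay up until all three criteria are met.
    return not (has_enough_comments and has_attachment and has_touched_a_bug)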

TWO: This one is a little harder – it involves more social engineering. Sometimes people might be immune to the “New to Bugzilla” cue or overlook it, which has resulted, in some cases, in responses to bugs filed by my cohort of Ascenders where the commenter was neither helpful nor forwarding the issue raised. I’ve been fortunate to be in-person with the Ascend folks and can tell them that if this happens they should let me know, but I can’t fight everyone’s fights for them over the long haul. So instead we should build into the system a way to make sure that when someone who is not New to Bugzilla replies immediately after a “New to Bugzilla” user there is a reminder in the comment field – something along the lines of “You’re about to respond to someone who’s new around here so please remember to be helpful”. Off to file the bugs!

Planet Mozilla: This Week In Releng - Sept 21st, 2014

Major Highlights:

  • shipped 10 products in less than one day

Completed work (resolution is 'FIXED'):


In progress work (unresolved and not assigned to nobody):

Planet Mozilla: This Week In Releng - Sept 7th, 2014

Major Highlights

  • big time saving in releases thanks to:
    • Bug 807289 - Use hardlinks when pushing to mirrors to speed it up

Completed work (resolution is 'FIXED'):


In progress work (unresolved and not assigned to nobody):

Planet Mozilla: The Curtis Report 2014-09-26

So my last report failed to mention something important. There is a lot I do that is not on this report. This only covers noteworthy items outside of run-the-business (RTB) activities. I do a good deal of bug handling, input, triage and routing to get things to the right people, and remove bad/invalid or mistagged items. I answer emails on projects and other items, etc. Just general work stuff. Last week had lots of vendor stuff (as noted below) and, while kind of RTB, it’s usually not this heavy, and we had 2 rush ones so I felt they were worthy of note.

What I did this week

  • kit herder community stuff
  • [vendor redacted] communications
  • [vendor redacted] review followup
  • [vendor 2 redacted] rush review started
  • Tribe pre-planning for next month
  • [vendor redacted] follow ups
  • triage security bugs
  • DerbyCon prep / registration
  • bitcoin vendor prep work
  • SeaSponge mentoring

Meetings Attended

Mon

  • impromptu [vendor redacted] review discussion
  • status meeting for [vendor redacted] security testing
  • Monday meeting

Tue

  • cloud services team (sort of)

Wed

  • impromptu [vendor redacted] standup
  • MWoS SeaSponge Weekly team meeting
  • Cloud Services Show & Tell
  • Mozillians Town Hall – Brand Initiatives (Mozilla + Firefox)
  • Web Bug Triage

Thu

  • security open mic

Fri-Sun

Non Work

  • deal with deer damage to car

Planet Mozilla: 21 emerging themes for Web Literacy Map 2.0

Over the past few weeks I’ve interviewed various people to gain their feedback on the current version of Mozilla’s Web Literacy Map. There was a mix of academics, educational practitioners, industry professionals and community members.* I’ve written up the interviews on a tumblr blog and the audio repository can be found at archive.org.

I wanted to start highlighting some of the things a good number of them talked about in terms of the Web Literacy Map and its relationship with Webmaker (and the wider Mozilla mission).


Introduction

I used five questions to loosely structure the interviews:

  1. Are you currently using the Web Literacy Map (v1.1)? In what kind of context?
  2. What does the Web Literacy Map do well?
  3. What’s missing from the Web Literacy Map?
  4. In what kinds of contexts would you like to use an updated (v2.0) version of the Web Literacy Map?
  5. Who would you like to see use/adopt the Web Literacy Map?

How much we stuck to the questions in this order depended on the interviewee. Some really wanted to talk about their context. Others wanted to dwell on more conceptual aspects. Either way, it was interesting to see some themes emerge.

Emerging themes

I’m still synthesizing the thoughts contained within 18+ hours of audio, but here are the headlines so far…

1. The ‘three strands’ approach works well

The strands currently named Exploring / Building / Connecting seem to resonate with lots of people. Many called it out specifically as a strength of the Web Literacy Map, saying that it enables people to orient themselves reasonably quickly.

2. Without context, newbies can be overwhelmed

While many people talked about how useful the Web Literacy Map is as a ‘map of the territory’ giving an at-a-glance overview, some interviewees mentioned that the Web Literacy Map should really be aimed at mentors, educators, and other people who have already got some kind of mental model. We should be meeting end users where they are with interesting activities rather than immediately presenting them with a map that reinforces their lack of skills/knowledge.

3. Shared vocabulary is important

New literacies can be a contested area. One interviewee in particular talked about how draining it can be to have endless discussions and debates about definitions and scope. Several people, especially those using it in workshops, talked about how useful the Web Literacy Map is in developing a shared vocabulary and getting down to skill/knowledge development.

4. The ‘Connecting’ strand has some issues

Although interviewees agreed there were no ‘giant gaping holes’ in the Web Literacy Map, many commented on the third, ‘Connecting’ strand. Some mentioned that it seemed a bit too surface-level. Some wanted a more in-depth treatment of licensing issues under ‘Open Practices’. Others thought that the name ‘Connecting’ didn’t really capture what the competencies in that column are really about. Realistically, most people will be meeting the competencies in this strand through social media. There isn’t enough focus on this, nor on ‘personal branding’, thought some people.

5. Clear focus on learning through making/doing

Those interested in the pedagogical side of things zeroed in on the verb-based approach to the Web Literacy Map. They appreciated that, along with the Discover / Make / Teach flow on each competency page, users of webmaker.org are encouraged to learn through making and doing, rather than simply being tested on facts.

6. Allows other organizations to see how their work relates to Mozilla’s mission

Those using this out ‘in the field’ (especially those involved in Hive Learning Networks) talked about how the Web Literacy Map is a good conversation-starter. They mentioned the ease with which most other organizations they work with can map their work onto ours, once they’ve seen it. These organizations can then use it as a sense-check to see how they fit into a wider ecosystem. It allows them to quickly understand the difference between the ‘learn to code’ movement and the more nuanced, holistic approach advocated by Mozilla.

7. It doesn’t really look like a ‘map’

Although interviewees were happy with the word ‘Map’ (much more so than the previous ‘Standard’), many thought we may have missed a trick by not actually presenting it as a map. Some thought that the Web Literacy Map is currently presented in a too clear-cut way, and that we should highlight some of the complexity. There were a few ideas how to do so, although one UX designer warned against surfacing this too much, lest we end up with a ‘plate of spaghetti’. Nevertheless, there was a feeling that riffing on the ‘map’ metaphor could lead to more of an ‘exploratory’ approach.

8. Lacking audience definition

There was a generally-positive sentiment about the Web Literacy Map structuring the Webmaker Resource section, although interviewees were a bit unsure about audience definition. The Web Literacy Map seems to be more of a teaching tool rather than a learning tool. It was suggested that we might want to give Mentors and Learners a different view. Mentors could start with the more abstract competencies, whereas the Learners could start with specific, concrete, interest-based activities. Laura Hilliger’s Web Literacy Learning Pathways prototype was mentioned on multiple occasions.

9. Why is this important?

Although the Web Literacy Map makes sense to westerners in developed countries, there was a feeling among some interviewees that we don’t currently ‘make the case’ for the web. Why is it important? Why should people pay to get online? What benefits does it bring? We need to address this question before, or perhaps during, their introduction to the competencies included in the Web Literacy Map.

10. Arbitrary separation of ‘Security’ and ‘Privacy’ competencies

At present, ‘Privacy’ is a competency under the ‘Exploring’ strand, and ‘Security’ is a competency under the ‘Connecting’ strand. However, there’s a lot of interplay, overlap, and connections between the two. Although interviewees thought that they should be addressed explicitly, there was a level of dissatisfaction with the way it’s currently approached in the Web Literacy Map.

11. Better localization required

Those I interviewed from outside North America and the UK expressed some frustration at the lack of transparency around localization. One in particular had tried to get involved, but became demotivated by a lack of response when posing suggestions and questions via Transifex. Another mentioned that it was important not to focus on translation from English to other languages, but to generate local content. The idea of badges for localization work was mentioned on more than one occasion.

12. The Web Literacy Map should be remixable

Although many interviewees approached it from different angles, there was a distinct feeling that the Web Literacy Map should somehow be remixable. Some used a GitHub metaphor to talk of the ‘main branch’ and ‘forks’. Others wanted a ‘Remix’ button next to the map in a similar vein to Thimble and Popcorn Maker resources. This would allow for multiple versions of the map that could be contextualized and localized while still maintaining a shared vocabulary and single point of reference.

13. Tie more closely to the Mozilla Mission

One of the things I wanted to find out through gentle probing during this series of interviews was whether we should consider re-including the fourth ‘Protecting’ strand we jettisoned before reaching v1.0. At the time, we thought that ‘protecting the web’ was too political and Mozilla-specific to include in what was then a Web Literacy ‘Standard’. However, a lot has changed in a year - both with Mozilla and with the web. Although I got the feeling that interviewees were happy to tie the Web Literacy Map more closely to the Mozilla Mission, there wasn’t overall an appetite for an additional column. Instead, people talked about ‘weaving’ it throughout the other competencies.

14. Use cross-cutting themes to connect news events to web literacy

When we developed the first version of the Web Literacy Map, we didn’t include ‘meta-level’ things such as ‘Identity’ and ‘storytelling’. Along with ‘mobile’, these ideas seem too large or nebulous to be distinct competencies. It was interesting, therefore, to hear some interviewees talk of hooking people’s interest via news items or the zeitgeist. The topical example, given the timing of the interviews, tended to be getting people interested in ‘Privacy’ and ‘Security’ via the iCloud celebrity photo leaks.

15. Develop user stories

Some interviewees felt that the Web Literacy Map currently lacks a ‘human’ dimension that we could rectify through the inclusion of some case studies showing real people who have learned a particular skill or competency. These could look similar to the UX Personas work.

16. Improve the ‘flow’ of webmaker.org for users

This is slightly outside the purview of the Web Literacy Map per se, but enough interviewees brought it up to surface it here. The feeling is that the connection between Webmaker Tools, the Web Literacy Map, and Webmaker badges isn’t clear. There should be a direct and obvious link between them. For instance, web literacy badges should be included in each competency page. Some even suggested a learner dashboard similar to the one Jess Klein proposed back in 2012.

17. Bake web literacy into Firefox

This, again, is veering away from the Web Literacy Map itself, but many interviewees mentioned how Mozilla should ‘differentiate’ Firefox within the market by allowing you to develop your web literacy skills ‘in the wild’. Some had specific examples of how this could work (“Hey, you just connected to a website using HTTPS, want to learn more?”) while others just had a feeling we should join things up a bit better.

18. Identify ‘foundational’ competencies

Although we explicitly avoided doing this with the first version of the Web Literacy Map, for some interviewees, having a set of ‘foundational’ competencies would be a plus point. It would give a starting point for those new to the area, and allow us to assume a baseline level from which the other competencies could be developed. We could also save the ‘darker’ aspects of the web for later to avoid scaring people off.

19. Avoid scope creep

Many interviewees warned against ‘scope creep’, or trying to cram too much into the Web Literacy Map. On the whole, there were lots of people I spoke to who like it just the way it is, with one saying that it would be relevant for a ‘good few years yet’. One of the valuable things about the Web Literacy Map is that it has a clear focus and scope. We should ensure we maintain that, was the general feeling. There’s also a feeling that it has a ‘strong understanding of technology’ that should be watered-down.

20. Version control

If we’re updating the Web Literacy Map, users need to know which version they’re viewing - and how to access previous versions. This is so they can know how up-to-date the current version is. We should also allow them to view previous iterations that they may have used to build a curriculum still being used by other organizations.

21. Use as a funnel to wider Mozilla projects

We currently have mozilla.org/contribute and webmaker.org/getinvolved, but some interviewees thought that we could guide people who keep selecting certain competencies towards different Mozilla areas - for example OpenNews or Open Science. The latter is also developing its own version of the Web Literacy Map, so that could be a good link. Also, even more widely, Open Hatch provide Open Source ‘missions’ that we could make use of.


*Although I was limited by my language and geographic location, I’m pretty happy with the range of views collected. Instead of a dry, laboratory-like study looking for statistical significance, I decided to focus on people I knew would have good insights, and with whom I could have meaningful conversations. Over the next couple of weeks I’m going to create a survey for community members to get their thoughts on some of the more concrete proposals I’ll make for Web Literacy Map 2.0.


Comments? Feedback? I’m @dajbelshaw on Twitter, or you can email me: doug@mozillafoundation.org.

Planet MozillaUsing Flexbox in web applications

Over the last few months, I discovered the joy that is CSS Flexbox, which solves the “how do I lay out this set of divs horizontally or vertically?” problem. I’ve used it in three projects so far:

  • Centering the timer interface in my meditation app, so that it scales nicely from a 320×480 FirefoxOS device all the way up to a high definition monitor
  • Laying out the chart / sidebar elements in the Eideticker dashboard so that maximum horizontal space is used
  • Fixing various problems in the Treeherder UI on smaller screens (see bug 1043474 and its dependent bugs)

When I talk to people about their troubles with CSS, layout comes up really high on the list. Historically, basic layout problems like a panel of vertical buttons have been ridiculously difficult, requiring hacks involving floated divs and absolute positioning, or JavaScript layout libraries. This is why people write articles entitled “Give up and use tables”.

Flexbox has pretty much put an end to these problems for me. There’s no longer any need to “give up and use tables” because using flexbox is pretty much just *like* using tables for layout, only with more uniform and predictable behaviour. :) It’s so great. I think we’re pretty close to flexbox being supported across all the major browsers, so it’s fair to start using it for custom web applications where compatibility with (e.g.) IE8 is not an issue.
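
To make that concrete, here is a minimal sketch of the “panel of vertical buttons” case (my illustration, not something from the MDN article). It sets the flexbox properties from script, but they are exactly the declarations you would normally put in a stylesheet, and the #button-panel element is an assumption for the example:

var panel = document.getElementById("button-panel"); // hypothetical container element
panel.style.display = "flex";
panel.style.flexDirection = "column"; // stack the buttons vertically
panel.style.alignItems = "stretch";   // each button fills the panel's width

["Build", "Test", "Deploy"].forEach(function (label) {
  var button = document.createElement("button");
  button.textContent = label;
  panel.appendChild(button);
});

No floats, no absolute positioning, and no JavaScript layout library – the column just works.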

To try and spread the word, I wrote up a howto article on using flexbox for web applications on MDN, covering some of the common use cases I mention above. If you’ve been curious about flexbox but unsure how to use it, please have a look.

Planet MozillaWhy I feel like an Open Source Failure

I presented a version of this talk at the Supporting Cultural Heritage Open Source Software (SCHOSS) Symposium in Atlanta, GA in September 2014. This talk was generously sponsored by LYRASIS and the Andrew Mellon Foundation.


I often feel like an Open Source failure.

I haven’t submitted 500 patches in my free time, I don’t spend my after-work hours rating HTML5 apps, and I was certainly not a 14-year-old Linux user. Unlike the incredible group of teenaged boys with whom I write my Mozilla Communities newsletter and hang out with on IRC, I spent most of my time online at that age chatting with friends on AOL Instant Messenger and doing my homework.

I am a very poor programmer. My Wikipedia contributions are pretty sad. I sometimes use Powerpoint. I never donated my time to Open Source in the traditional sense until I started at Mozilla as a GNOME OPW intern and while the idea of data gets me excited, the thought of spending hours cleaning it is another story.

I was feeling this way the other day and chatting with a friend about how reading celebrity news often feels like a better choice after work than trying to find a new open source project to contribute to or making edits to Wikipedia. A few minutes later, a message popped up in my inbox from an old friend asking me to help him with his application to library school.

I dug up my statement of purpose and I was extremely heartened to read my words from three years ago:

I am particularly interested in the interaction between libraries and open source technology… I am interested in innovative use of physical and virtual space and democratic archival curation, providing free access to primary sources.

It felt good to know that I have always been interested in these topics but I didn’t know what that would look like until I discovered my place in the open source community. I feel like for many of us in the cultural heritage sector the lack of clarity about where we fit in is a major blocker, and I do think it can be associated with contribution to open source more generally. Douglas Atkin, Community Manager at Airbnb, claims that the two main questions people have when joining a community are “Are they like me? And will they like me?”. Of course, joining a community is a lot more complicated than that, but the lack of visibility of open source projects in the cultural heritage sector can make even locating a project a whole lot more complicated.

As we’ve discussed in this working group, the ethics of cultural heritage and Open Source overlap considerably and

the open source community considers those in the cultural heritage sector to be natural allies.

In his article, “Who are you empowering?” Hugh Rundle writes: (I quote this article all the time because I believe it’s one of the best articles written about library tech recently…)

A simple measure that improves privacy and security and saves money is to use open source software instead of proprietary software on public PCs.

Community-driven, non-profit, and not good at making money are just some of the attributes that most cultural heritage organizations and open source projects have in common, and yet, when choosing software for their patrons, most libraries and cultural heritage organizations choose proprietary systems, and cultural heritage professionals are not the strongest open source contributors or advocates.

The main reasons for this are, in my opinion:


1. Many people in cultural heritage don’t know what Open Source is.

In a recent survey I ran of the Code4Lib and UNC SILS listservs, nearly every person surveyed could accurately respond to the prompt “Define Open Source in one sentence” though the responses varied from community-based answers to answers solely about the source code.

My sample was biased toward programmers and young people (and perhaps people who knew how to use Google, because many of the answers were directly lifted from the first line of the Wikipedia article about Open Source, which is definitely survey bias), but I think that it is indicative of one of the larger questions of open source.

Is open source about the community, or is it about the source code?

There have been numerous articles and books written on this subject, many of which I can refer you to (and I am sure that you can refer me to as well!) but this question is fundamental to our work.

Many people, librarians and otherwise, will ask: (I would argue most, but I am operating on anecdotal evidence)

Why should we care about whether or not the code is open if we can’t edit it anyway? We just send our problems to the IT department and they fix it.

Many people in cultural heritage don’t have many feelings about open source because they simply don’t know what it is and cannot articulate the value of one over the other. Proprietary systems don’t advertise as proprietary, but open source constantly advertises as open source, and as I’ll get to later, proprietary systems have cornered the market.

This movement from darkness to clarity brings most to mind a story that Kathy Lussier told about the Evergreen project, where librarians who didn’t consider themselves “techy” jumped into IRC to tentatively ask a technical question and due to the friendliness of the Evergreen community, soon they were writing the documentation for the software themselves and were a vital part of their community, participating in conferences and growing their skills as contributors.

In this story, the Open Source community engaged the user and taught her the valuable skill of technical documentation. She also took control of the software she uses daily and was able to maintain and suggest features that she wanted to see. This situation was really a win-win all around.

What institution doesn’t want to see their staff so well trained on a system that they can write the documentation for it?


2. The majority of the market share in cultural heritage is closed-source, closed-access software and they are way better at advertising than Open Source companies.

Last year, my very wonderful boss in the cataloging and metadata department of the University of North Carolina at Chapel Hill came back from ALA Midwinter with goodies for me: pens and keychains and postits and tote bags and those cute little staplers. “I only took things from vendors we use,” she told me.

Linux and Firefox OS hold 21% of the world’s operating system market share. (Interestingly, this is more globally than iOS, but still half that of Windows. On mobile, iOS and Android are approximately equal.)

Similarly, free, open source systems for cultural heritage are unfortunately not a high percentage of the American market. Wikipedia has a great list of proprietary and open source ILSs and OPACs, the languages they’re written in, and their cost. Marshall Breeding writes that FOSS software is picking up some market share, but it is still “the alternative” for most cultural heritage organizations.

There are so many reasons for this small market share, but I would argue (as my previous anecdote did for me,) that a lot of it has to do with the fact that these proprietary vendors have much more money and are therefore a lot better at marketing to people in cultural heritage who are very focused on their work. We just want to be able to install the thing and then have it do the thing well enough. (An article in Library Journal in 2011 describes open source software as: “A lot of work, but a lot of control.”)

As Jack Reed from Stanford and others have pointed out, most of the cost of FOSS in cultural heritage is developer time, and many cultural heritage institutions believe that they don’t have those resources. (John Brice’s example at the Meadville Public Library proves that communities can come together with limited developers and resources in order to maintain vital and robust open source infrastructures as well as significantly cut costs.)

I learned at this year’s Wikiconference USA that academic publishers had the highest profit margin of any company in the country last year, ahead of Google and Apple.

The academic publishing model is, for more reasons than one, completely antithetical to the ethics of cultural heritage work, and yet they maintain a large portion of the cultural heritage market share in terms of both knowledge acquisition and software. Megan Forbes reminds us that the platform Collection Space was founded as the alternative to the market dominance of “several large, commercial vendors” and that cost put them “out of reach for most small and mid-sized institutions.”

Open source has the chance to reverse this vicious cycle, but institutions have to put their resources in people in order to grow.

While certain companies like OCLC are working toward a more equitable future, with caveats of course, I would argue that the majority of proprietary cultural heritage systems are providing an inferior product to a resource-poor community.


3. People are tired and overworked, particularly in libraries, and to compound that, they don’t think they have the skills to contribute.

These are two separate issues, but they’re not entirely disparate so I am going to tackle them together.

There’s this conception outside of the library world that librarians are secret coders just waiting to emerge from their shells and start categorizing datatypes instead of MARC records (this is perhaps a misconception due to a lot of things, including the sheer diversity of types of jobs that people in cultural heritage fill, but hear me out.)

When surveyed, the skill that entering information science students most want to learn is “programming.” However, the majority of MLIS programs are still teaching Microsoft Word and beginning html as technology skills.

Learning to program computers takes time and instruction and while programs like Women who Code and Girl Develop It can begin educating librarians, we’re still faced with a workforce that’s over 80% female-identified that learned only proprietary systems in their work and a small number of technology skills in their MLIS degrees.

Library jobs, and further, cultural heritage jobs are dwindling. Many trained librarians, art historians, and archivists are working from grant to grant on low salaries with little security and massive amounts of student loans from both undergraduate and graduate school educations. If they’re lucky to get a job, watching television or doing the loads of professional development work they’re expected to do in their free time seems a much better choice after work than continuing to stare at a computer screen for a work-related task or learn something completely new. For reference: an entry-level computer programmer can expect to make over $70,000 per year on average. An entry-level librarian? Under $40,000. I know plenty of people in cultural heritage who have taken two jobs or jobs they hate just to make ends meet, and I am sure you do too.

One can easily say, “Contributing to open source teaches new skills!” but if you don’t know how to make non-code contributions or the project is not set up to accept those kinds of contributions, you don’t see an immediate pay-off in being involved with this project, and you are probably not willing to stay up all night learning to code when you have to be at work the next day or raise a family. Programs like Software Carpentry have proven that librarians, teachers, scientists, and other non-computer scientists are willing to put in that time and grow their skills, so to make any kind of claim without research would be a reach and possibly erroneous, but I would argue that most cultural heritage organizations are not set up in a way to nurture their employees for this kind of professional development. (Not because they don’t want to, necessarily, but because they feel they can’t or they don’t see the immediate value in it.)

I could go on and on about how a lot of these problems are indicative of cultural heritage work being an historically classed and feminized professional grouping, but I will spare you right now, although you’re not safe if you go to the bar with me later.

In addition, many open source projects operate with a “patches welcome!” or “go ahead, jump in!” or “We don’t need a code of conduct because we’re all nice guys here!” mindset, which is not helpful to beginning coders, women, or really, anyone outside of a few open source fanatics.

I’ve identified a lot of problems, but the title of this talk is “Creating the Conditions for Open Source Community” and I would be remiss if I didn’t talk about what works.

Diversification – both in terms of types of tasks and of types of people and skillsets – and a clear invitation to get involved are two absolute conditions for a healthy open source community.

Ask yourself the questions: Are you a tight-knit group with a lot of IRC in-jokes that new people may not understand? Are you all white men? Are you welcoming? Paraphrasing my colleague Sean Bolton: the steps to an inviting community are to build understanding, build connections, build clarity, build trust, and build pilots – which creates a win-win.

As communities grow, it’s important to be able to recognize and support contributors in ways that feel meaningful. That could be a trip to a conference they want to attend, a Linkedin recommendation, a professional badge, or a reference, or best yet: you could ask them what they want. Our network for contributors and staff is adding a “preferred recognition” system. Don’t know what I want? Check out my social profile. (The answer is usually chocolate, but I’m easy.)

Finding diverse contribution opportunities has been difficult for open source since, well, the beginning of open source. Even for us at Mozilla, with our highly diverse international community and hundreds of ways to get involved, we often struggle to bring a diversity of voices into the conversation, and to find meaningful pathways and recognition systems for our 10,000 contributors.

In my mind, education is perhaps the most important part of bringing in first-time contributors. Organizations like Open Hatch and Software Carpentry provide low-cost, high-value workshops for new contributors to locate and become a part of Open Source in a meaningful and sustained manner. Our Webmaker program introduces technical skills in a dynamic and exciting way for every age.

Mentorship is the last very important aspect of creating the conditions for participation. Having a friend or a buddy or a champion from the beginning is perhaps the greatest motivator according to research from a variety of different papers. Personal connection runs deep, and is a major indicator for community health. I’d like to bring mentorship into our conversation today and I hope that we can explore that in greater depth in the next few hours.

With mentorship and 1:1 connection, you may not see an immediate uptick in your project’s contributions, but a friend tells a friend tells a friend and then eventually you have a small army of motivated cultural heritage workers looking to take back their knowledge.

You too can achieve on-the-ground action. You are the change you wish to see.

Are you working in a cultural heritage institution and are about to switch systems? Help your institution switch to the open source solution and point out the benefits of their community. Learning to program? Check out the Open Hatch list of easy bugs to fix! Are you doing patron education? Teach them Libre Office and the values around it. Are you looking for programming for your library? Hold a Wikipedia edit-a-thon. Working in a library? Try working open for a week and see what happens. Already part of an open source community? Mentor a new contributor or open up your functional area for contribution.

It’s more than just “if you build it, they will come.”

If you make open source your mission, people will want to step up to the plate.

To close, I’m going to tell a story that I can’t take credit for, but I will tell it anyway.

We have a lot of ways to contribute at Mozilla. From code to running events to learning and teaching the Web, it can be occasionally overwhelming to find your fit.

A few months ago, my colleague decided to create a module and project around updating the Mozilla Wiki, a long-ignored, frequently used, and under-resourced part of our organization. As an information scientist and former archivist, I was psyched. The space that I called Mozilla’s collective memory was being revived!

We started meeting in April and it became clear that there were other wiki-fanatics in the organization who had been waiting for this opportunity to come up. People throughout the organization were psyched to be a part of it. In August, we held a fantastically successful workweek in London, reskinned the wiki, created a regular release cycle, wrote a manual and a best practice guide, and are still going strong with half contributors and half paid-staff as a regular working group within the organization. Our work has been generally lauded throughout the project, and we’re working hard to make our wiki the resource it can be for contributors and staff.

To me, that was the magic of open source. I met some of my best friends, and at the end of the week, we were a cohesive unit moving forward to share knowledge through our organization and beyond. And isn’t that a basic value of cultural heritage work?

I am still an open source failure. I am not a code fanatic, and I like the ease of use of my used iPhone. I don’t listen to techno and write JavaScript all night, and I would generally rather read a book than go to a hackathon.

And despite all this, I still feel like I’ve found my community.

I am involved with open source because I am ethically committed to it, because I want to educate my community of practice and my local community about what working open can bring to them.

When people ask me how I got involved with open source, my answer is: I had a great mentor, an incredible community and contributor base, and there are many ways to get involved in open source.

While this may feel like a new frontier for cultural heritage, I know we can do more and do better.

Open up your work as much as you can. Draw on the many, many intelligent people doing work in the field. Educate yourself and others about the value that open source can bring to your institution. Mentor someone new, even if you’re shy. Connect with the community and treat your fellow contributors with respect. Who knows?

You may get an open source failure like me to contribute to your project.

Planet Mozillafwunit: Unit Tests for your Network

I find your lack of unit tests ... disturbing

It's an established fact by now that code should be tested. The benefits are many:

  • Exercising the code;
  • Reducing ambiguity by restating the desired behavior (in the implementation, in the tests, and maybe even a third time in the documentation); and
  • Verifying that the desired behavior remains unchanged when the code is refactored.

System administrators are increasingly thinking of infrastructure as code and reaping the benefits of testing, review, version control, collaboration, and so on. In the networking world, this typically implies "software defined networking" (SDN), a substantial change from the typical approach to network system configuration.

At Mozilla, we haven't taken the SDN plunge yet, although there are plans in the works. In the interim, we maintain very complex firewall configurations by hand. Understanding how all of the rules fit together and making manual changes is often difficult and error-prone. Furthermore, after years of piece-by-piece modifications to our flows, the only comprehensive summary of our network flows is the firewall configurations themselves. And those are not very readable for anyone not familiar with firewalls!

The difficulty and errors come from the gap between the request for a flow and the final implementation, perhaps made across several firewalls. If everyone -- requester and requestee -- had access to a single, readable document specifying what the flows should look like, then requests for modification could be more explicit and easier to translate into configuration. If we have a way to verify automatically that the firewall configurations match the document, then we can catch errors early, too.

I set about trying to find a way to implement this. After experimenting with various ways to write down flow definitions and parse them, I realized that the verification tests could be the flow document. The idea is to write a set of tests, in Python since it's the lingua franca of Mozilla, which can be read by both the firewall experts and the users requesting a change to the flows. To change flows, change the tests -- a diff makes the request unambiguous. To verify the result, just run the tests.

fwunit

I designed fwunit to support this: unit tests for flows. The idea is to pull in "live" flow configurations and then write tests that verify properties of those configurations. The tool supports reading Juniper SRX configurations as well as Amazon AWS security groups for EC2 instances, and can be extended easily. It can combine rules from several sources (for example, firewalls for each datacenter and several AWS VPCs) using a simple description of the network topology.

As a simple example, here's a test to make sure that the appropriate VLANs have access to the DeployStudio servers:

def test_install_build():
    rules.assertPermits(
        test_releng_scl3 + try_releng_scl3 + build_releng_scl3,
        deploystudio_servers,
        'deploystudio')

The rules instance there is a compact representation of all allowed network flows, deduced from firewall and AWS configurations with the fwunit command line tool. The assertPermits method asserts that the rules permit traffic from the test, try, and build VLANs to the deploystudio servers, using the "deploystudio" application. That all reads pretty naturally from the Python code.

At Mozilla

We glue the whole thing together with a shell script that pulls the tests from our private git repository, runs fwunit to get the latest configuration information, and then runs the tests. Any failures are reported by email, at which point we know that our document (the tests) doesn't match reality, and can take appropriate action.

We're still working on the details of the process involved in changing configurations -- do we update the tests first, or the configuration? Who is responsible for writing or modifying the tests -- the requester, or the person making the configuration change? Whatever we decide, it needs to maximize the benefits without placing undue load on any of the busy people involved in changing network flows.

Benefits

It's early days, but this approach has already paid off handsomely.

  • As expected, it's a readable, authoritative, verifiable account of our network configuration. Requirements met -- awesome!
  • With all tests in place, netops can easily "refactor" the configurations, using fwunit to verify that no expected behavior has changed. We've deferred a lot of minor cleanups as high-risk with low reward; easy verification should substantially reduce that risk.
  • Just about every test I've written has revealed some subtle misconfiguration -- either a flow that was requested incorrectly, or one that was configured incorrectly. These turn into flow-request bugs that can be dealt with at a "normal" pace, rather than the mad race to debug and fix that would occur later, when they impacted production operations.

Get Involved

I'm a Mozillan, so naturally fwunit is open source and designed to be useful to more than just Mozilla. If this sounds useful, please use it, and I'd love to hear from you about how I might make it work better for you. If you're interested in hacking on the software, there are a number of open issues in the github repo just waiting for a pull request.

Planet MozillaTips on organizing a pgp key signing party

Over the years I’ve organized or tried to organize pgp key signing parties every time I go somewhere. In the last year I’ve organized three that were successful (e.g. with more than 10 attendees).

1. Have a venue

I’ve tried a bunch of times to have people show up in the morning at the hotel where I was staying – that doesn’t work. Having catering at the venue is even better; it will encourage people to come from far away (or do a long-distance commute). Try to mark the path within the venue with signs (paper sheets with “PGP key signing party” and arrows help).

2. Date and time

Meeting in the evening after work works best (after 18:00 or 18:30 works well).

Let people know how long it will take (count about one hour per 30 participants).

3. Make people sign up

That makes people think twice before saying they will attend. It’s also an easy way for you to know how much beer, cola, etc. you’ll need to provide if you cater food.

I’ve been using Eventbrite to manage attendance at my last three meetings; it lets me:

  • know who is coming
  • mass-mail participants
  • give them a calendar reminder

4. Reach out

For such a party you need people to attend so you need to reach out.

I always start with a search on biglumber.com to find the gpg users registered on that site in the area I’m visiting (see below for what I send).

Then I look for local Linux user groups / *BSD groups and send them an announcement with:

  • date
  • venue
  • link to eventbrite and why I use it
  • ask them to forward (they know the area better than you)
  • I also use Lanyrd and Twitter, but I’m not convinced that they work.

My last announcement looked like this:

Subject: GnuPG / PGP key signing party September 26 2014
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="t01Mpe56TgLc7mgHKVMajjwkqQdw8XvI4"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--t01Mpe56TgLc7mgHKVMajjwkqQdw8XvI4
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hello my name is ludovic,

I'm a sysadmins at mozilla working remote from europe. I've been
involved with Thunderbird a lot (and still am). I'm organizing a pgp Key
signing party in the Mozilla san francisco office on September the 26th
2014 from 6PM to 8PM.

For security and assurances reasons I need to count how many people will
attend. I'v setup a eventbrite for that at
https://www.eventbrite.com/e/gnupg-pgp-key-signing-party-making-the-web-of-trust-stronger-tickets-12867542165
(please take one ticket if you think about attending - If you change you
mind cancel so more people can come).

I will use the eventbrite tool to send reminders and I will try to make
a list with keys and fingerprint before the event to make things more
manageable (but I don't promise).

for those using lanyrd you will be able to use http://lanyrd.com/ccckzw.

Ludovic
ps sent to buug.org,nblug.org end penlug.org - please feel free to post
where appropriate ( the more the meerier, the stronger the web of trust).

ps2 I have contacted people listed on biglumber to have more gpg related
people show up.

-- 
[:Usul] MOC Team at Mozilla
QA Lead fof Thunderbird
http://sietch-tabr.tumblr.com/ - http://weusepgp.info/

5. Make it easy to attend

As noted above, making a list of participants to hand out helps a lot (I’ve used http://www.phildev.net/pius/ and my own stuff to make a list). It makes things easier for you and for attendees. Tell people what they need to bring (IDs, a pen, and printed fingerprints if you don’t provide a list).

6. Send reminders

Send people reminders and let them know how many people intend to show up. It boosts attendance.

Planet MozillaJetpack Pro Tip - Using The Add-on Debugger With JPM

Did you know that there is an Add-on Debugger in Firefox? If you did, good for you!

Now with JPM using the Add-on Debugger is even easier. To use the add-on debugger automatically when using jpm you simply need to add a --debug option.

So the typical:

jpm run -b nightly

Would become:

jpm run -b nightly --debug

Planet MozillaMe and Open Badges – Different, but the same

Hi there, if you read this blog it’s probably for one of three things,

1) my investigation of the life of Isham Randolph, the chief engineer of the Chicago Sanitary and Ship canal.
2) you know me and you want to see what I’m doing but you haven’t discovered Twitter or Facebook yet.
3) Open Badges.

This is a quick update for everyone in that third group, the Open Badges crew. I have some news.

When I joined the Open Badges project nearly three years ago, I knew this was something that once I joined, I wouldn’t leave. The idea of Open Badges hits me exactly where I live, at the corner of ‘life long learning’ and ‘appreciating people for who they are’. I’ve been fortunate that my love of life long learning and self-teaching led me down a path where I get to do what I love as my career. Not everyone is that fortunate. I see Open Badges as a way to make my very lucky career path the norm instead of the exception. I believe in the project, I believe in the goals and I’m never going to not work toward bringing that kind of opportunity to everyone regardless of the university they attended or the degree hanging on their wall.

This summer has been very exciting for me. I joined the Badge Alliance, chaired the BA standard working group and helped organize the first BA Technology Council. At the same time, I was a mentor for Chicago’s Tech Stars program and served as an advisor to a few startups in different stages of growth. The Badge Alliance work has been tremendously satisfying, the standard working group is about to release the first cycle report, and it’s been great to see our accomplishments all written in one place. We’ve made a lot of progress in a short amount of time. That said, my role at the Alliance has been focused on standards growth, some evangelism and guiding a small prototyping project. As much as I loved my summer, the projects and work don’t fit the path I was on. I’ve managed engineering teams for a while now, building products and big technology architectures. The process of guiding a standard is something I’m very interested in, but it doesn’t feel like a full-time job now. I like getting my hands dirty (in Emacs), I want to write code and direct some serious engineer workflow.

Let’s cut to the chase – after a bunch of discussions with Sunny Lee and Erin Knight, two of my favorite people in the whole world, I’ve decided to join Earshot, a Chicago big data / realtime geotargeted social media company, as their CTO. I’m not leaving the Badge Alliance. I’ll continue to serve as the BA director of technology, but as a volunteer. Earshot is a fantastic company with a great team. They understand the Open Badges project and want me to continue to support the Badge Alliance. The Badge Alliance is a great team, they understand that I want to build as much as I want to guide. I’m so grateful to everyone involved for being supportive of me here, I can think of dozens of ways this wouldn’t have worked out. Just a bit of life lesson – as much as you can, work with people who really care about you, it leads to situations like this, where everyone gets what they really need.

The demands of a company moving as fast as Earshot will mean that I’ll be less available, but no less involved in the growth of the Badge Alliance and the Open Badges project. From a tactical perspective, Sunny Lee will be taking over as chair of the standard working group. I’ll still be an active member. I’ll also continue to represent the BA (along with Sunny) in the W3C credentials community group.

If you have any questions, please reach out to me! I’ll still have my chris@badgealliance.org email address…use it!

Planet MozillaReconnecting at TEDxLinz – impressions, slides, resources

I just returned from Linz, Austria, where I spoke at TEDxLinz yesterday. After my stint at TEDxThessaloniki earlier in the year I was very proud to be invited to another one and love the variety of talks you encounter there.


The overall topic of the event was “re-connect” and I was very excited to hear all the talks covering a vast range of topics. The conference was bilingual with German (well, Austrian) talks and English ones. Oddly enough, no speaker was a native English speaker.


My favourite parts were:

  • Ingrid Brodnig talking about online hate and how to battle it
  • Andrea Götzelmann talking about re-integrating people into their home countries after emigrating. A heart-warming story of helping people out who moved out and failed just to return and succeed
  • Gergely Teglasy talking about creating a crowd-curated novel written on Facebook
  • Malin Elmlid of The Bread Exchange showing how her love of making your own food got her out into the world to learn about all kinds of different cultures – and how doing an exchange of goods and services beats being paid.
  • Elisabeth Gatt-Iro and Stefan Gatt showing us how to keep our relationships fresh and let love listen.
  • Johanna Schuh explaining how asking herself questions about her own behaviour rather than throwing blame gave her much more peace and the ability to go out and speak to people.
  • Stefan Pawel enlightening us about how far ahead Linz is compared to a lot of other cities when it comes to connectivity (150 open hot spots, webspace for each city dweller)

The location was the convention centre of a steel factory and the stage setup was great and not over the top. The audience was very mixed and very excited and all the speakers did a great job mingling. Despite the impressive track record of all of them there was no sense of diva-ism or “parachute presenting”.
I had a lovely time at the speaker’s dinner and going to and from the hotel.

The hotel was a special case in itself: I felt like I was in an old movie and instead of using my laptop I was tempted to grow a tufty beard and wear layers and layers of clothes and a nice watch on a chain.


My talk was about bringing the social back into social media or – in other words – no longer chasing numbers of likes and inane comments, and going back to a web of user-generated content made by real people. I have made no secret of the fact that I dislike memes and animated GIFs cropped from a TV series or movie with a passion, and this was my chance to grand-stand about it.

I wanted the talk to be a response to the “Look up” and “Look down” videos about social oversharing leading to less human interaction. My goal was to move the conversation into a different direction, explaining that social media is for us to put things we did and wanted to share. The big issue is that the addiction-inducing game mechanisms of social media platforms instead lead us to post as much as we can and try to be the most shared instead of the creators.

This also leads to addiction and thus to strange online behaviour, up to and including over-sharing material that might be used as blackmail against us.

My slides are on Slideshare.

Resources I covered in the talk:

Other than having a lot of fun on stage I also managed to tick some things off my bucket list:


  • Vandalising a TEDx stage
  • Being on stage with my fly open
  • Using the words “sweater pillows” and “dangly bits” in a talk

I had a wonderful time all in all and I want to thank the organisers for having me, the audience for listening, the other speakers for their contribution and the caterers and volunteers for doing a great job to keep everybody happy.


Planet MozillaThe Sheppy Report: September 26, 2014

I can’t believe another week has already gone by. This is that time of the year where time starts to scream along toward the holidays at a frantic pace.

What I did this week

  • I’ve continued working heavily on the server-side sample component system
    • Implemented the startup script support, so that each sample component can start background services and the like as needed.
    • Planned and started implementation work on support for allocating ports each sample needs.
    • Designed tear-down process.
  • Created a new “download-desc” class for use on the Firefox landing page on MDN. This page offers download links for all Firefox channels, and this class is used to correct a visual glitch. The class has not as yet been placed into production on the main server though. See this bug to track landing of this class.
  • Updated the MDN administrators’ guide to include information on the new process for deploying changes to site CSS now that the old CustomCSS macro has been terminated on production.
  • Cleaned up Signing Mozilla apps for Mac OS X.
  • Created Using the Mozilla build VM, based heavily on Tim Taubert’s blog post and linked to it from appropriate landing pages.
  • Copy-edited and revised the Web Bluetooth API page.
  • Deleted a crufty page from the Window API.
  • Meetings about API documentation updates and more.

Wrap up

That’s a short-looking list but a lot of time went into many of the things on that list; between coding and research for the server-side component service and experiments with the excellent build VM (I did in fact download it and use it almost immediately to build a working Nightly), I had a lot to do!

My work continues to be fun and exciting, not to mention outright fascinating. I’m looking forward to more, next week.

Planet MozillaMinor changes are coming to typed arrays in Firefox and ES6

JavaScript has long included typed arrays to efficiently store numeric arrays. Each kind of typed array had its own constructor. Typed arrays inherited from element-type-specific prototypes: Int8Array.prototype, Float64Array.prototype, Uint32Array.prototype, and so on. Each of these prototypes contained useful methods (set, subarray) and properties (buffer, byteOffset, length, byteLength) and inherited from Object.prototype.
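
For instance (my illustration, not an excerpt from the post), set and subarray operate on views, and buffer and byteOffset describe how a view relates to its underlying storage:

var u8 = new Uint8Array(16);
var view = u8.subarray(4, 8);   // a new Uint8Array sharing u8's buffer
view.set([1, 2, 3, 4]);         // writes through to u8[4]..u8[7]
assertEq(u8[4], 1);
assertEq(view.buffer, u8.buffer);
assertEq(view.byteOffset, 4);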

This system is a reasonable way to expose typed arrays. Yet as typed arrays have grown, this system has grown unwieldy. When a new typed array method or property is added, distinct copies must be added to Int8Array.prototype, Float64Array.prototype, Uint32Array.prototype, &c. Likewise for “static” functions like Int8Array.from and Float64Array.from. These distinct copies cost memory: a small amount, but across many tabs, windows, and frames it can add up.

A better system

ES6 changes typed arrays to fix these issues. The typed array functions and properties now work on any typed array.

var f32 = new Float32Array(8); // all zeroes
var u8 = new Uint8Array([0, 1, 2, 3, 4, 5, 6, 7]);
Uint8Array.prototype.set.call(f32, u8); // f32 contains u8's values

ES6 thus only needs one centrally-stored copy of each function. All functions move to a single object, denoted %TypedArray%.prototype. The typed array prototypes then inherit from %TypedArray%.prototype to expose them.

assertEq(Object.getPrototypeOf(Uint8Array.prototype),
         Object.getPrototypeOf(Float64Array.prototype));
assertEq(Object.getPrototypeOf(Object.getPrototypeOf(Int32Array.prototype)),
         Object.prototype);
assertEq(Int16Array.prototype.subarray,
         Float32Array.prototype.subarray);

ES6 also changes the typed array constructors to inherit from the %TypedArray% constructor, on which functions like Float64Array.from and Int32Array.of live. (Neither function is in Firefox yet, but soon!)

assertEq(Object.getPrototypeOf(Uint8Array),
         Object.getPrototypeOf(Float64Array));
assertEq(Object.getPrototypeOf(Object.getPrototypeOf(Int32Array)),
         Function.prototype);
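
As a small illustration (mine, not the post's – and again, neither function had landed in Firefox at the time of writing), both functions are inherited by every typed array constructor:

var evens = Int32Array.of(2, 4, 6, 8);  // like Array.of, but producing an Int32Array
var doubled = Float64Array.from([1, 2, 3], function (x) { return x * 2; });
// doubled is a Float64Array containing 2, 4, 6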

I implemented these changes a few days ago in Firefox. Grab a nightly build and test things out with a new profile.

Conclusion

In practice this won’t affect most typed array code. Unless you depend on the exact [[Prototype]] sequence or expect typed array methods to only work on corresponding typed arrays (and thus you’re deliberately extracting them to call in isolation), you probably won’t notice a thing. But it’s always good to know about language changes. And if you choose to polyfill an ES6 typed array function, you’ll need to understand %TypedArray% to do it correctly.
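
For example, here is a rough sketch (mine, not the post's) of attaching a hypothetical helper once so that every typed array kind inherits it. It assumes an engine that already uses the ES6 prototype layout described above, which is why it checks what Object.getPrototypeOf returns first:

var TypedArrayPrototype = Object.getPrototypeOf(Int8Array.prototype);
// On older engines this is just Object.prototype; a real polyfill would then
// have to add the method to each element-type-specific prototype instead.
if (TypedArrayPrototype !== Object.prototype && !TypedArrayPrototype.sum) {
  TypedArrayPrototype.sum = function () {  // 'sum' is a made-up example method
    var total = 0;
    for (var i = 0; i < this.length; i++)
      total += this[i];
    return total;
  };
}

var u8 = new Uint8Array([1, 2, 3]);
assertEq(u8.sum(), 6); // inherited through %TypedArray%.prototype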

Internet Explorer blogHTML5 Audio and Video Improvements for Windows Phone 8.1

Internet Explorer on Windows Phone 8.1 adds new media features that greatly expand its support for HTML5 audio and video. Audio and video elements are now fully supported, including inline playback of video content, and adaptive streaming based on the latest Web specifications is supported as well. These new features provide Web site developers the tools they need to provide compelling media experiences, and make IE an ideal browser for mobile media applications.

Support for HTML5 Audio Video Specifications

Internet Explorer in Windows Phone 8.1 now provides full support for HTML5 media. Videos can play natively in the browser without plug-ins, and HTML5 media element events, attributes and properties are fully supported as well. Multiple audio elements can play simultaneously on a single page, making it possible to use HTML5 audio with games or interactive Web apps. And video playback works on a 512MB device. This is a mobile browser, after all!

One newly supported attribute is the Controls Attribute defined in the HTML5 specification. By using it, developers may elect to use the standard IE media playback controls rather than create custom ones.

<video src="demo.mp4" controls autoplay>
    HTML5 Video is required for this example
</video>

Selecting Default Media Transport Controls Using the HTML5 Controls Attribute

Inline or Full screen playback

By default, videos in IE on Windows Phone 8.1 will play inline in the Web page. The standard playback controls include a full screen button, so that users can choose to play full screen whenever they want.


Inline Video Playback with Standard Media Controls including Full Screen Button

Inline video is a great fit for Web pages where video is accompanied by other content (article text, user comments, etc…) that users may want to view while the video is playing. Or sites might want to use a video element to play an animation within the context of other content. These scenarios are easily possible on Windows Phone 8.1 using inline video playback coupled with the HTML5 video attributes such as controls, muted, autoplay and loop.

Some site designers might prefer to have video playback go directly to full screen. With IE on Windows Phone 8.1, they can use the Fullscreen API to provide a true full screen-only experience. For example, Vimeo chose this approach in implementing their support for IE’s HTML5 features. They use the Fullscreen API to go directly into full screen video playback when their custom “Play” button is pressed.
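
A rough sketch of that pattern (my illustration, not Vimeo’s actual code) might look like the following; the element ids are assumptions, and the prefixed fallbacks cover engines that had not yet unprefixed the API:

var video = document.getElementById("player");     // hypothetical <video> element
var playButton = document.getElementById("play");  // hypothetical custom play button

playButton.addEventListener("click", function () {
  var requestFullscreen = video.requestFullscreen ||
                          video.msRequestFullscreen ||     // Microsoft-prefixed variant
                          video.mozRequestFullScreen ||    // Gecko-prefixed variant
                          video.webkitRequestFullscreen;   // WebKit-prefixed variant
  if (requestFullscreen) {
    requestFullscreen.call(video);  // go straight to full screen playback
  }
  video.play();
});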

Vimeo inline video
User taps on custom Play button
Video plays in full screen by default

Media Source Extensions and MPEG-DASH

With the release of Windows 8.1 in 2013, Internet Explorer 11 expanded HTML5 media features to provide plug-in free audio and video streaming that achieved what we called Professional Quality Video – Web video that is just as great for personal videos as it is for premium TV and movies. Internet Explorer 11 achieved this support with a combination of new Web APIs (Media Source Extensions and Encrypted Media Extensions) and standardized formats (MPEG-DASH).

Now Internet Explorer on Windows Phone 8.1 supports Media Source Extensions as well. With it, sites will be able to deliver adaptive streaming videos using the same MPEG-DASH framework to Windows Phone 8.1 devices.

You can try adaptive streaming yourself by playing this Big Buck Bunny Video on our Test Drive site using Internet Explorer on Windows Phone 8.1. The video includes a slider control that lets you change video bitrate on the fly and see the effect on the rendered image a few seconds later.

To create your own adaptive streaming solution Web site, consider using the dash.js framework as a starting point.
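As a quick illustration (not from the original post; the codec string is just an example), you can feature-detect Media Source Extensions before wiring up dash.js and fall back to progressive download otherwise:

// Check for MSE support plus a typical MPEG-DASH MP4 codec string
var hasMSE = 'MediaSource' in window &&
             MediaSource.isTypeSupported('video/mp4; codecs="avc1.4d401f"');

if (hasMSE) {
	// initialize the dash.js player here
} else {
	// fall back to a plain progressive-download <video src="...">
}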

Closed Captioning

Last, but not least, Internet Explorer on Windows Phone 8.1 also supports Closed Captioning using the W3C TTML Simple Delivery profile. This is the same caption format introduced with Professional Quality Video on Internet Explorer 11. Captioned videos you target at Internet Explorer 11 will now play on IE on Windows Phone 8.1 as well!

Call to Action

These improvements to the HTML5 audio and video platform for Windows Phone 8.1 make it possible to build great audio and video experiences for both the web browser and web apps.

Here are a few best practices to make sure your videos work correctly on Windows Phone 8.1:

  • Avoid using an <img> overlaid on the video element to provide still preview of the upcoming video. This can work if video is played using an external app, but results in video playing behind the image with inline playback. Instead, use the HTML5 poster attribute to show an image before the video is loaded.
  • Remember to use the controls attribute to use the default media transport controls if you are not building your own. Without these, your users won’t be able to control video playback.
  • Make use of the Full Screen API to give users a full screen option with custom controls, or to launch your videos directly to full screen.
  • Consider implementing MPEG-DASH for adaptive streaming using dash.js as a starting point.

As always, please send feedback to @IEDevChat or file a bug on Connect.

— Jerry D. Smith, Senior Program Manager, Internet Explorer

— Dhruv Chadha, Program Manager, Internet Explorer

Planet MozillaHTML, not just for desktops at Congreso Universitario Móvil

My translator, and me

At the beginning of the month, I was in Mexico to represent Mozilla at Congreso Universitario Móvil, an annual conference at the Universidad Nacional Autónoma de México. Unfortunately, I did not have the time to visit much of Mexico City, but I had an amazing full day at the event. I started the fourth day of the conference with a keynote on Firefox OS.

 

You can also watch the recording of my session. The sound is not that good: the room was echoing a lot, but I promised the attendees that, no matter what, I would publish it online.

 

The attendees had so much interest in the Open Web that instead of taking a couple of minutes for questions, I did a full one-hour Q&A right after the keynote. They were supposed to have a one-hour break before the next session, but they used that time to learn more about Firefox OS, which is great. I think we could have kept going with the questions, but I had to stop as it was time to start the three-hour hackathon. I opened the hackathon with an explanation of how we would spend the next hours together, and of how to build and debug your application.

 

I also recorded that presentation, which runs more than fifteen minutes as it includes an explanation of the hackathon itself. Again, we were in the same room, so the sound is not optimal.

 

Attendees worked hard to port their existing web applications, built with HTML, CSS, and JavaScript, to make them work on our platform. After the hacking part of the day, I did three interviews with local media outlets; one is already online on Excélsior (in Spanish – English version translated by Google). The ones with Reforma and Financiero Bloomberg TV will follow soon. Overall, I had an amazing time again in Mexico, and I was amazed by the interest in HTML, the Open Web, and Firefox OS. Keep the great apps coming, Mexico!


--
HTML, not just for desktops at Congreso Universitario Móvil is a post on Out of Comfort Zone from Frédéric Harper

Planet MozillaPrevent Territoriality

Watch out for participants who try to stake out exclusive ownership of certain areas of the project, and who seem to want to do all the work in those areas, to the extent of aggressively taking over work that others start. Such behavior may even seem healthy at first. After all, on the surface it looks like the person is taking on more responsibility, and showing increased activity within a given area. But in the long run, it is destructive. When people sense a “no trespassing” sign, they stay away. This results in reduced review in that area, and greater fragility, because the lone developer becomes a single point of failure. Worse, it fractures the cooperative, egalitarian spirit of the project. The theory should always be that any developer is welcome to help out on any task at any time.

— Karl Fogel, Producing Open Source Software

Planet MozillaBerlin Web Audio Hack Day 2014

As with the Extensible Web Summit, we wrote some notes collaboratively. Here are the notes for the Web Audio Hackday!

We started the day with me being late because I took a series of badly timed bad decisions and that ended up in me taking the wrong untergrund lines. In short: I don’t know how to metro in Berlin in the mornings and I’m still so sorry.

I finally arrived at SoundCloud’s offices, and it was cool that Jan was still doing the presentations, so Tiffany gave me a giant glass of water and I almost drank it all while they finished. Then I set up my computer and proceeded to give my talk/workshop!

It was an improved and revised version of the beta-talk I gave at Mozilla London the week before last:


Note to self: maybe remove red banners behind me if wearing a red shirt, so as not to blend with them

Sadly it wasn’t recorded and I didn’t screencast it either, so you’ll have to make do with the slides and the code for the slides (which includes the examples). Or maybe wait until I maybe run this workshop again (which I have already been asked to do!)

Jordan Santell and the Web Audio Editor in Firefox Devtools

Then Jordan (of dancer.js and component.fm fame) talked about the fancy new Web Audio Editor which is one of the latest tools to join the Firefox Devtools collection of awesome—and it just appeared in Firefox Stable (32) so you don’t even need to run Beta, Aurora or Nightly to use it! (I talked a bit about it already).

You can use the editor to visualise the audio graph, change values of the nodes and also detect if you have a memory leak when allocating nodes (which is something that is part of the normal workflow of working with Web Audio).

There was a nice plug to Are We Dubstep Yet?, the minisite I am building to keep track of bugs in the Web Audio Editor. Yay plugs!

are we dubstep yet?

Jordan’s slides are here. You can also watch his JSConf talk where he introduced an early version of the tools!

Chris Wilson and the Web MIDI API

Finally the mighty Chris Wilson explained how the Web MIDI API works and made some demos using a few and assorted MIDI devices he had brought with him: a keyboard, pads, a DJ deck controller…!

It’s interesting that most of the development of the Web MIDI implementation seems to be happening in Japan, so they are super original in their examples.

Chris’ slides on Web MIDI and other audio in general slides.

Hacking + Hacks!

I think we had lunch then… and then it was HACK TIME! But before actually getting started, some people pitched their idea to see if someone else wanted to collaborate with them and hack together. I think that was a really neat idea :-)

Myself, I spent the hack time…

  • reconnecting with old acquaintances
  • answering questions! but very few of them and none of them were the usual “but why doesn’t my oscillator start anymore?” but more interesting ones, so that was cool!
  • asking questions! to Chris mostly–one cannot ask questions to a spec editor in person every day!
  • and even started a hack which I didn’t finish: visualising custom periodic waves for use with an Oscillator Node, given the harmonics array. I gave myself the idea while I was doing the workshop, which is a terrible thing to do to myself, as I was distracting myself and wanted to hack on that instead of finishing the workshop. My brain probably hates itself, or me in general.

Also this was really cool:

I’m always super aware that weird sounds might be coming out of any of the devices in my desk when I’m testing web audio stuff, so it was fun to see I’m not the only one feeling that way :D

After hack time, the hacks were presented:

These are the people that submitted a hack, in the same order they appear in the video. Not all of them have published their hack code so if you are one of those, please do and write a comment so I can update this post!

  1. Jelle Akkerman (github, twitter) – NoOsc was an experiment using NoFlo, trying to build something very visual and cool, super suitable for live-acts. I really liked the idea!
  2. Guillaume Marty (github, twitter) – a BPM detection algorithm, using the OfflineAudioContext
  3. Erik Woitschig (twitter) – Using SoundCloud as sample database
  4. Daniel Roth, Jonathan Lundin (twitter, github), Felix Niklas (twitter, github) – Oscillator reacting to mobile phone gyroscope – it sounded really nice and I liked that the same code worked even in iPads. Yay Web Audio!
  5. Chris Greeff (twitter, github), Nick Lockhart (twitter, https://github.com/N1ck) – Beaty Bird – source code (Second prize)
  6. Lisa Passing (github, twitter) – One Hand Soundgame – source code (Third prize)
  7. Thomas Fett (twitter, github) – Remix at once – source code (Fourth prize)
  8. Evan Sonderegger (twitter, github) – Vector Scope in Web Audio API (First prize)

The hardware prizes were sponsored by Mozilla. And the software prizes by Bitwig.

The unofficial/community Web Audio logo

We also publicised a thing that Martin Holzhauer had worked on: the unofficial/community Web Audio logo!

Web Audio logo

Here’s the SVG. Many thanks to Martin for putting it all together!

As far as we know there is/was not an official logo. I totally love this one as it kind of matches the various JS* aesthetics and it is immediately understandable–most of the W3C api icons are just too fancy for anyone to grasp what they actually mean. Sure they look cool, but they do not work as a logo from a purely functional perspective.

And now, what?

Well, the Web Audio Conference is next January in Paris. They’re still accepting submissions for papers until next month, so why don’t you go and submit something? :-)

Hopefully see you there!


Planet MozillaChanging networks with Firefox running

Short recap: I work on network code for Mozilla. Bug 939318 is one of “mine” – yesterday I landed a fix (a patch series with 6 individual patches) for it, and I wanted to explain what goodness should (might?) come from this!

diffstat

diffstat reports this on the complete patch series:

29 files changed, 920 insertions(+), 162 deletions(-)

The change set can be seen in mozilla-central here. But I guess a proper description is easier for most…

The bouncy road to inclusion

This feature set and the problems associated with it have been among the most time-consuming things I’ve developed in recent years, at least in relation to the amount of actual code produced. I’ve had it “landed” in the mozilla-inbound tree five times and backed out again within a few hours, every time because I still had bugs remaining in there. The bugs in this have been really tricky, with a whole bunch of timing-dependent and race-like problems, and me being unfamiliar with a large part of the code base that I’m working on. It has been a highly frustrating journey at times, but I’d like to think that I’ve learned a lot about Firefox internals partly thanks to this resistance.

As I write this, it has not even been 24 hours since it got into m-c so there’s of course still a risk there’s an ugly bug or two left, but then I also hope to fix the pending problems without having to revert and re-apply the whole series…

Many ways to connect to networks

In many network setups today, you get an environment and a network “experience” that is crafted for that particular place. For example you may connect to your work over a VPN where you get your company DNS and you can access sites and services you can’t even see when you connect from the wifi in your favorite coffee shop. The same thing goes for when you connect to that captive portal over wifi until you realize you used the wrong SSID and you switch over to the access point you were supposed to use.

For every one of these setups, you get different DHCP setups passed down and you get a new DNS server and so on.

These days laptop lids are getting closed (and the machine is put to sleep) at one place to be opened at a completely different location and rarely is the machine rebooted or the browser shut down.

Switching between networks

Switching from one of the networks to the next is of course something your operating system handles gracefully. You can even easily be connected to multiple ones simultaneously like if you have both an Ethernet card and wifi.

Enter browsers. Or in this case let’s be specific and talk about Firefox since this is what I work with and on. Firefox – like other browsers – will cache images, it will cache DNS responses, it maintains connections to sites a while even after use, it connects to some sites even before you “go there” and so on. All in the name of giving users as good and as fast an experience as possible.

The combination of keeping things cached and alive, together with the fact that switching networks brings new perspectives and new “truths” offers challenges.

Realizing the situation is new

The changes are not at all mind-bending but are basically these three parts:

  1. Make sure that we detect network changes, even if just the set of available interfaces change. Send an event for this.
  2. Make sure the necessary parts of the code listens and understands this “network topology changed” event and acts on it accordingly
  3. Consider coming back from “sleep” to be a network changed event since we just cannot be sure of the network situation anymore.

The initial work has been done for Windows only, but it allows us to smooth out any rough edges before we continue and add support for this on more platforms.

The network changed event can be disabled by switching off the new “network.notify.changed” preference. If you do end up feeling a need for that, I really hope you file a bug explaining the details so that we can work on fixing it!

Act accordingly

So what is acting properly? What if the network changes in a way so that your active connections suddenly can’t be used anymore due to the new rules and routing and what not? We attack this problem like this: once we get a “network changed” event, we “allow” connections to prove that they are still alive and if not they’re torn down and re-setup when the user tries to reload or whatever. For plain old HTTP(S) this means just seeing if traffic arrives or can be sent off within N seconds, and for websockets, SPDY and HTTP2 connections it involves sending an actual ping frame and checking for a response.

The internal DNS cache was a bit tricky to handle. I initially just flushed all entries, but that turned out nasty as it also killed ongoing name resolves, causing errors to get returned. Now I instead added logic that flushes all the already-resolved names and makes names “in transit” get resolved again, so that they are resolved on the (potentially) new network, which can then return different addresses for the same host name(s).

This should drastically reduce the situation that could happen before when Firefox would basically just freeze and not want to do any requests until you closed and restarted it. (Or waited long enough for other timeouts to trigger.)

The ‘N seconds’ waiting period above is actually 5 seconds by default and there’s a new preference called “network.http.network-changed.timeout” that can be altered at will to allow some experimentation regarding what the perfect interval truly is for you.
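For example, here is a user.js sketch tweaking both preferences (the values are only illustrative, and the timeout is expressed in seconds as described above):

// Disable the network-changed event handling entirely...
user_pref("network.notify.changed", false);
// ...or leave it enabled and give connections 10 seconds instead of the
// default 5 to prove they are still alive after a network change.
user_pref("network.http.network-changed.timeout", 10);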

Initially on Windows only

My initial work has been limited to getting the changed event code done for the Windows back-end only (since the code that figures out if there’s news on the network setup is highly system specific), and now when this step has been taken the plan is to introduce the same back-end logic to the other platforms. The code that acts on the event is pretty much generic and is mostly in place already so it is now a matter of making sure the event can be generated everywhere.

My plan is to start on Firefox OS and then see if I can assist with the same thing in Firefox on Android. Then finally Linux and Mac.

I started on Windows since Windows is one of the platforms with the largest amount of Firefox users and thus one of the most prioritized ones.

More to do

There’s separate work going on for properly detecting captive portals: you know, the annoying things hotels and airports, for example, tend to have that force you to do some login dance before you are allowed to use the internet at that location. When such a captive portal is opened up, that should probably qualify as a network change – but it isn’t yet.

Planet MozillaBuilding Boot2Gecko(B2G) on Ubuntu

Update: This documentation is out-of-date: Please read developer.mozilla.org/en-US/Firefox_OS/Building for latest information

You may have heard about the Mozilla Boot2Gecko (B2G) mobile operating system. Boot2Gecko's Gaia user interface is developed entirely using HTML, CSS and JavaScript web technologies. If you are an experienced web developer, you can contribute to the Gaia UI and develop new Boot2Gecko applications with ease. In this post I'll explain how to set up the Boot2Gecko (B2G) development environment on your personal computer.


You can run Boot2Gecko (B2G) inside an emulator or inside a Firefox web browser. Using Boot2Gecko (B2G) with the QEMU emulator is very resource intensive, so we will focus on the latter in this post. I'll assume you are comfortable with the Mozilla build environment. So, get that pot of coffee brewing and prepare for a long night of hacking.


Building Boot2Gecko(B2G) Firefox App


Before you start, let us make sure that you have all the prerequisites for building Firefox on your computer. Please have a look at the build prerequisites for your Linux, Windows or OS X operating system.




# Let's get the source code 
# Download mozilla-central repository 

$ hg clone http://hg.mozilla.org/mozilla-central mozilla-central

# Download the Gaia source code 

$ git clone https://github.com/mozilla-b2g/gaia gaia

# Change directory and create our profile 

$ cd gaia 

$ make profile 


# Change directory into your mozilla-central directory

$ cd mozilla-central


# Create a .mozconfig file inside your mozilla-central directory: 

$ nano .mozconfig 
mk_add_options MOZ_OBJDIR=../b2g-build
mk_add_options MOZ_MAKE_FLAGS="-j9 -s"
ac_add_options --enable-application=b2g
ac_add_options --disable-libjpeg-turbo
ac_add_options --enable-b2g-ril
ac_add_options --with-ccache=/usr/bin/ccache 

# Build the Firefox B2G app and wait for the build to finish 

$ make -f client.mk build 


# Create a simple b2g bash script to launch the B2G app; change the paths to suit your environment
# Note: I have to use the -safe-mode option due to a bug on my Ubuntu box 
 
#!/bin/sh 
export B2G_HOMESCREEN=http://homescreen.gaiamobile.org/
/home/arky/src/b2g-build/dist/bin/b2g -profile /home/arky/src/gaia/profile




If everything goes well, you should have Boot2Gecko running inside Firefox now.


Boot2Gecko running inside firefox on Ubuntu


Customizing Firefox B2G App

For a better Boot2Gecko (B2G) experience, we will customize Firefox features (offline cache and touch events) using a custom Firefox profile.



Create a Custom Firefox Profile

You can use the dist/bin/b2g -ProfileManager option to launch the Firefox Profile Manager. Create a new profile called 'b2g'. Now we can add customizations to this new profile.


On Linux computers, the profile is created under the ~/.mozilla/b2g/ directory. You can find information about the location of Firefox profiles for your operating system here.



You can launch B2G with your new custom profile using the '-P' option. Modify your B2G bash script and add the custom profile option: dist/bin/b2g -P b2g


Disable offline cache

Create a user.js file inside your custom 'b2g' firefox profile directory. Add the following line to disable offline cache.

user_pref('browser.cache.offline.enable', false);


Enabling Touch events

Add the following line in your user.js file inside your custom 'b2g' Firefox profile directory to enable touch events.

 user_pref('dom.w3c_touch_events.enabled', true);



That's it. You now have a Boot2Gecko(B2G) running inside Firefox on your computer. Happy Hacking!

Planet MozillaShellshock IOC search using MIG

Shellshock is being exploited. People are analyzing malware dropped on systems using the bash vulnerability.

I wrote a MIG Action to check for these IOCs. I will keep updating it with more indicators as we discover them. To run this on your Linux 32- or 64-bit system, download the following archive: mig-shellshock.tar.xz 

Download the archive and run mig-agent as follows:

$ wget https://jve.linuxwall.info/ressources/taf/mig-shellshock.tar.xz
$ sha256sum mig-shellshock.tar.xz 
0b527b86ae4803736c6892deb4b3477c7d6b66c27837b5532fb968705d968822  mig-shellshock.tar.xz
$ tar -xJvf mig-shellshock.tar.xz 
shellshock_iocs.json
mig-agent-linux64
mig-agent-linux32
$ ./mig-agent-linux64 -i shellshock_iocs.json
This will output results in JSON format. If you grep for "foundanything" and both values return "false", it means no IOC was found on your system. If you get "true", look at the detailed results in the JSON output to find out what exactly was found.
$ ./mig-agent-linux64 -i shellshock_iocs.json|grep foundanything
    "foundanything": false,
    "foundanything": false,
The full action for MIG is below. I will keep updating it over time, so I recommend you use the one below instead of the one in the archive.
{
  "name": "Shellshock IOCs (nginx and more)",
  "target": "os='linux' and heartbeattime \u003e NOW() - interval '5 minutes'",
  "threat": {
    "family": "malware",
    "level": "high"
  },
  "operations": [
    {
      "module": "filechecker",
      "parameters": {
        "searches": {
          "iocs": {
            "paths": [
              "/usr/bin",
              "/usr/sbin",
              "/bin",
              "/sbin",
              "/tmp"
            ],
            "sha256": [
              "73b0d95541c84965fa42c3e257bb349957b3be626dec9d55efcc6ebcba6fa489",
              "ae3b4f296957ee0a208003569647f04e585775be1f3992921af996b320cf520b",
              "2d3e0be24ef668b85ed48e81ebb50dce50612fb8dce96879f80306701bc41614",
              "2ff32fcfee5088b14ce6c96ccb47315d7172135b999767296682c368e3d5ccac",
              "1f5f14853819800e740d43c4919cc0cbb889d182cc213b0954251ee714a70e4b"
            ],
            "regexes": [
              "/bin/busybox;echo -e '\\\\147\\\\141\\\\171\\\\146\\\\147\\\\164'"
            ]
          }
        }
      }
    },
    {
      "module": "netstat",
      "parameters": {
        "connectedip": [
          "108.162.197.26",
          "162.253.66.76",
          "89.238.150.154",
          "198.46.135.194",
          "166.78.61.142",
          "23.235.43.31",
          "54.228.25.245",
          "23.235.43.21",
          "23.235.43.27",
          "198.58.106.99",
          "23.235.43.25",
          "23.235.43.23",
          "23.235.43.29",
          "108.174.50.137",
          "201.67.234.45",
          "128.199.216.68",
          "75.127.84.182",
          "82.118.242.223",
          "24.251.197.244",
          "166.78.61.142"
        ]
      }
    }
  ],
  "description": {
    "author": "Julien Vehent",
    "email": "ulfr@mozilla.com",
    "revision": 201409252305
  },
  "syntaxversion": 2
}

Dev.OperaBetter @font-face with Font Load Events

@font-face is an established staple in the diet of almost half of the web. According to the HTTP Archive, 47% of web sites make a request for at least one custom web font. What does this mean for a casual browser of the web? In this article, I make the argument that current implementations of @font-face are actually harmful to the performance and usability of the web. These problems are exacerbated by the fact that developers have started using @font-face for two completely different use cases: content fonts and icon fonts, which should be handled differently. But there is hope. We can make small changes to how these fonts load to mitigate those drawbacks and make the web work better for everyone.

First—let’s discuss what @font-face gets right.

Initiating a Font Download

What happens when you slap a fancy new @font-face custom web font into your CSS? As it turns out—not much. Just including a @font-face block doesn’t actually initiate a download of the remote font file from the server in almost all browsers (except IE8).

/* Does not download */
@font-face {
	font-family: 'open_sansregular';
	src: /* This article does not cover @font-face syntax */;
}

So, how does one go about initiating a font download? Peep your eyes on this source:

<!-- Initiates download in Firefox, IE 9+ -->
<div style="font-family: open_sansregular"></div>

<!-- Initiates download in Chrome, Safari (WebKit/Blink et al) -->
<div style="font-family: open_sansregular">Content.</div>

This means that WebKit and Blink are smart enough to know that even if a node exists in the document that uses our new font-family but the node is empty—the font does not download. This is great!

What if we create the nodes dynamically in JavaScript?

/* Does not download */
var el = document.createElement('div');
el.style.fontFamily = 'open_sansregular';

/* Initiates download in Firefox, IE 9+ */
document.body.appendChild(el);

/* Initiates download in WebKit/Blink */
el.innerHTML = 'Content.';

All but IE8 wait until the new node has been appended into the document (is not detached) and as previously mentioned, WebKit/Blink browsers even wait until the node has text content.

Now that we know what @font-face got right, let’s get our hands dirty.

Request in Flight

What happens to our content while our little @font-face request is in flight? To the elements affected by the new font-family, most browsers actually hide their fallback text. When the request completes, the text is shown with the new font-family. This is sometimes referred to as the Flash of Invisible Text, or FOIT.

Since @font-face is largely used for content fonts, the FOIT seems counterintuitive, given that the alternative has better perceived performance and the web has historically favored progressive rendering. However, this behavior is useful for icon fonts, given that some code points in icon fonts are mapped to existing Unicode glyphs or to the free-for-all Private Use Area. For example, U+F802 is a pencil icon in OS X Safari and Opera, but a generic default Unicode square in Firefox and iOS Safari. Worse, the Private Use Area is chock-full of multicolor emoji on iOS Safari. You don’t want an unpredictable fallback to show while the icon is loading.

Multicolor Emoji Characters in the Private Use Area on iOS Safari

Conversely, Internet Explorer (including Windows Phone 8) just lays all its cards on the table and always shows the fallback font. In my opinion, this is the correct default behavior for content fonts, but is (again) undesirable for icon fonts.

Remember when the text used to load before the images did?

— @aanand May 10, 2014

Timeouts

In order to walk the perceived performance vs. usability tightrope, some browsers decided to introduce a timeout to @font-face requests. This can often result in elements flashing fallback font families after a certain time period. This is commonly referred to as a Flash of Unstyled Text, or FOUT, but might be more accurately referred to as the Flash of Fallback Text.

In Chrome (36+), Opera (23+), and Firefox there is a three second timeout, after which the fallback font is shown. The timeout is a great benefit for use with content fonts, but for icon fonts this can have an undesirable effect.

If the @font-face request doesn’t complete in a browser that doesn’t have a timeout (Mobile Safari, Safari, Android, Blackberry), the content never shows. Never. Worse, in Safari, if the font loads after 60 seconds, the response content is thrown away. Nothing is shown. It’s important to recognize that font requests should not be a single point of failure for your content.

The Stop Button

Ok, so the @font-face request hangs. Can’t the user just press the stop button? Actually, no. In all browsers, hitting the stop button had no positive effect on @font-face requests.

Some browsers (Safari 7, Mobile Safari 7, Firefox) pretend as if the stop button had never been triggered. Chrome is the exception: if you hit the stop button after the three-second timeout, it re-hides the fallback text and waits an additional three seconds.

Worse, other browsers (Mobile Safari 6.1, Blackberry 7, Android 2.3, 4.2) accept the Stop button but don’t show any fallback content, ever. Your only recourse in this situation is to reload the entire page.

Ideally, the fallback font should be immediately shown if the stop button is pressed. Disregarding Internet Explorer which always shows a fallback font, none of the tested web browsers got this right.

Font Loading Events

We need more control over our @font-face requests. The two main use cases, prevailing content fonts and not-to-be-forgotten icon fonts, require very different loading behavior even in the face of increasingly divergent default browser behavior.

One way we can regain control over the loading behavior is to use font load events. The most promising font loading event solution is a native one: the CSS Font Loading Module, which is already implemented and available in Chrome and Opera.

document.fonts.load('1em open_sansregular')
	.then(function() {
		var docEl = document.documentElement;
		docEl.className += ' open-sans-loaded';
	});

By placing a JS-assigned class around any use of our custom @font-face, we regain control over the fallback experience.

.open-sans-loaded h1 {
	font-family: open_sansregular;
}

Using the above CSS and JS for content fonts, we can show the fallback text while the font request is in flight. If you want to use it for icon fonts, you can easily modify the approach to hide the fallback text, avoiding the timeout FOUT as well.
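A minimal sketch of that icon-font variant, using the same native API where it is supported (the class names and the my_icon_font family are assumptions, not part of the article’s example): keep a “loading” class on the document until the font resolves, and have your CSS hide the icon glyphs while that class is present.

var docEl = document.documentElement;
docEl.className += ' icons-loading'; // CSS hides .icons-loading .icon
document.fonts.load('1em my_icon_font')
	.then(function() {
		docEl.className = docEl.className.replace(' icons-loading', '');
		docEl.className += ' icons-loaded';
	});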

If a user hits the stop button while the text is loading, it may not stop the @font-face from loading and triggering the font event, but at least a fallback font is always shown in all supported browsers.

A Cross-Browser Solution

The above solution works great for Chrome and Opera that support the native API, but what about other browsers? Of course, if you’re already using TypeKit’s webfontloader on your page, you could reuse that—but as of the time this article was written it does not reuse the native API where supported (and is somewhat large to use solely for this purpose—currently 7.1 KB after minification and gzip).

Alternatively, you can use the FontFaceOnload utility, which reuses the native API where supported. It is not a one-to-one polyfill for the CSS Font Loading API and as such the syntax is different:

FontFaceOnload('open_sansregular', {
	success: function() {
		var docEl = document.documentElement;
		docEl.className += ' open-sans-loaded';
	}
});

If you’d like a full one-to-one polyfill of the CSS Font Loading API, you can follow along with Bram Stein’s in-progress FontLoader polyfill.

Conclusion

Content fonts and icon fonts must be treated differently in order to effectively use them in our pages. In order to make our content usable as soon as possible to our visitors, we must embrace fallback fonts. In order to remove the confusion from sometimes unpredictable icon fonts, we must hide fallback fonts. I hope you’ll consider these inconsistencies and attempt to solve them in your web pages—your users will be happier for it.

Addendum: Browser Support

When the article mentions “all browsers” above, it includes this list:

  • Firefox 28
  • Internet Explorer 8, 9, 10, 11
  • Windows Phone 8

and WebKit/Blink:

  • Google Chrome 37
  • Opera 23
  • Mobile Safari 6.1, 7
  • Safari 7
  • Android 2.3, 4.2, 4.4
  • Blackberry 7

Web Browsers purposefully excluded: no @font-face support:

Planet MozillaMaking mozharness easier to hack on and try support

Yesterday, we presented a series of proposed changes to Mozharness at the bi-weekly meeting.

We're mainly focused on making it easier for developers and allowing for further flexibility.
We will initially focus on the testing side of the automation and lay the groundwork for other improvements down the line.

The set of changes discussed for this quarter are:

  1. Move remaining set of configs to the tree - bug 1067535
    • This makes it easier to test harness changes on try
  2. Read more information from the in-tree configs - bug 1070041
    • This increases the number of harness parameters we can control from the tree
  3. Use structured output parsing instead of regular where it applies - bug 1068153
    • This is part of a larger goal where we make test reporting more reliable, easy to consume and less burdening on infrastructure
    • It establishes a uniform criterion for setting a job status based on a test result that depends on structured log data (JSON) rather than regex-based output parsing
    • "How does a test turn a job red or orange?" 
    • We will then have a simple answer that is the same for all test harnesses
  4. Mozharness try support - bug 791924
    • This will allow us to lock which repo and revision of mozharness is checked out
    • This isolates mozharness changes to a single commit in the tree
    • This gives us try support for user repos (freedom to experiment with mozharness on try)


Even though we feel the pain of #4, we decided that #1 and #2 give developers immediate value, while for #4 we already know our painful workarounds.
I don't know if we'll complete #4 this quarter; however, we are committed to the first three.

If you want to contribute to the longer term vision on that proposal please let me know.


In the following weeks we will have more updates with regards to implementation details.

Planet MozillaFirefox 33 beta6 to beta7

This beta has been driven by the NSS chemspill. We used this unexpected beta to test the behavior of 33 without OMTC under Windows.

  • 8 changesets
  • 232 files changed
  • 73163 insertions
  • 446 deletions

Extension | Occurrences
cc | 73
h | 45
py | 23
c | 11
vcproj | 8
sh | 7
xcconfig | 6
mn | 6
pump | 4
mk | 4
cpp | 3
cbproj | 3
txt | 2
sln | 2
plist | 2
pbxproj | 2
m4 | 2
html | 2
def | 2
mm | 1
list | 1
+ | 1
js | 1
in | 1
groupproj | 1
dep | 1
cmake | 1
am | 1
ac | 1

Module | Occurrences
security | 151
security | 69
image | 4
widget | 1
modules | 1
+ | 1
js | 1
gfx | 1

List of changesets:

Michael WuBug 1062886 - Fix one color padded drawing path. r=seth, a=sledru - 232c3b4708b9
Michael WuBug 1068230 - Don't use the gfxContext transform in intermediate surface. r=seth, a=sledru - bca0649c9b79
Douglas CrosherBug 1013996 - irregexp: Avoid unaligned accesses in ARM code. r=bhackett, a=sledru - 5e2a5b6c7a0d
Bas SchoutenBug 1030147 - Switch off OMTC on windows. r=milan, a=sylvestre - f631df57b34c
Steven MichaudBug 1056251 - Changing to a Firefox window in a different workspace does not focus automatically. r=masayuki a=lmandel - 7c118b1cf343
Kai EngertBug 1064636, upgrade to NSS 3.17.1 release, r=rrelyea, a=lmandel - fb8ff9258d02
Matt WoodrowBug 1030147 - Release the DrawTarget to drop the surface ref in ThebesLayerD3D9. r=Bas a=lmandel CLOSED TREE - 280407351f1b
L. David BaronBug 1064636 followup: Add new function to config/external/nss/nss.def r=khuey a=bustage CLOSED TREE - 2431af782661

Planet WebKitMeasuring ASM.JS performance

What is ASM.JS?

Now that mobile computers and cloud services have become part of our lives, more and more developers see the potential of the web and online applications. ASM.JS, a strict subset of JavaScript, is a technology that provides a way to achieve near-native speed in browsers, without the need for any plugin or extension. It is also possible to cross-compile C/C++ programs to it and run them directly in your browser.

In this post we will compare the JavaScript and ASM.JS performance in different browsers, trying out various kinds of web applications and benchmarks.
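To give a flavor of what the subset looks like (a hand-written sketch, unrelated to the benchmarks in the post), an asm.js module is plain JavaScript whose "use asm" directive and type coercions let the engine compile it ahead of time:

function MiniModule() {
	"use asm";
	function add(x, y) {
		x = x | 0;          // parameter annotation: 32-bit integer
		y = y | 0;
		return (x + y) | 0; // return annotation: 32-bit integer
	}
	return { add: add };
}

var mini = MiniModule();
mini.add(2, 3); // 5 - still runs as ordinary JavaScript in any browser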


Planet MozillaSo, hum, bash…

So, I guess you heard about the latest bash hole.

What baffles me is that the following still is allowed:

env echo='() { xterm;}' bash -c "echo this is a test"

Interesting replacements for “echo“, “xterm” and “echo this is a test” are left as an exercise to the reader.

Update: Another thing that bugs me: Why is this feature even enabled in posix mode? (the mode you get from bash --posix, or, more importantly, when running bash as sh) After all, export -f is a bashism.

Planet MozillaFrom the Furthest Edge to the Deepest Middle

In my role as Community Building Intern at Mozilla this summer, my goal has been to be explicit about how community building works so that people both internal and external to Mozilla can better understand and build upon this knowledge. This requires one of my favorite talents: connecting what emerges and making it a thing. We all experience this when we’ve been so immersed in something that we begin to notice patterns – our brains like to connect. One of my mentors, Dia Bondi, experienced this with her 21 Things, which she created during her time as a speech coach and still uses today in her work.

I set out to develop a mental model to help thing-ify this seemingly ambiguous concept of community building so that we all could collectively drive the conversation forward. (That might be the philosopher in me.) What emerged was this sort of fascinating overarching story: community building is connecting the furthest edge to the deepest middle (and making the process along that path easier). What I mean here is that the person with the largest of any form of distance must be able to connect to the hardest to reach person in the heart of the formal organization. For example, the 12 year old girl in Brazil who just taught herself some new JavaScript framework needs to be able to connect in some way to the module owner of that new JavaScript framework located in Finland because when they work together we all rise further together.

community building

The edge requires coordination from community. The center requires internal champions. The goal of community building is then to support community by creating structures that bridge community coordinators and internal champions while independently being or supporting the development of both. This structure allows for more action and creativity than no structure at all – a fundamental of design school.

Below is a model of community management. We see this theme of furthest edge to deepest middle. “It’s broken” is the edge. “I can do something about it” approaches the middle. This model shows how to take action and make the pathway from edge to middle easier.

community management

Community building is connecting the furthest edge to the deepest middle. It’s implicit. It’s obvious. But, when we can be explicit and talk about it we can figure out where and how to improve what works and focus less on what does not.


Planet MozillaA better way to input Vietnamese

Earlier this year I had the pleasure of implementing for Firefox OS an input method for Vietnamese (a language I have some familiarity with). After being dissatisfied with the Vietnamese input methods on other smartphones, I was eager to do something better.

I believe Firefox OS is now the easiest smartphone on the market for out-of-the-box typing of Vietnamese.

The Challenge of Vietnamese

Vietnamese uses the Latin alphabet, much like English, but it has an additional 7 letters with diacritics (Ă, Â, Đ, Ê, Ô, Ơ, Ư). In addition, each word can carry one of five tone marks. The combination of diacritics and tone marks means that the character set required for Vietnamese gets quite large. For example, there are 18 different Os (O, Ô, Ơ, Ò, Ồ, Ờ, Ỏ, Ổ, Ở, Õ, Ỗ, Ỡ, Ó, Ố, Ớ, Ọ, Ộ, Ợ). The letters F, J, W, and Z are unused. The language is (orthographically, at least) monosyllabic, so each syllable is written as a separate word.

This makes entering Vietnamese a little more difficult than most other Latin-based languages. Whereas languages like French benefit from dictionary lookup, where the user can type A-R-R-E-T-E and the system can then prompt for the options ARRÊTE or ARRÊTÉ, that is much less useful for Vietnamese, where the letters D-O can correspond to one of 25 different Vietnamese words (do, , , , dỗ, , dở, dỡ, dợ, đo, đò, đỏ, đó, đọ, đô, đồ, đổ, đỗ, đố, độ, đơ, đờ, đỡ, đớ, or đợ).

Other smartphone platforms have not dealt with this situation well. If you’ve tried to enter Vietnamese text on an iPhone, you’ll know how difficult it is. The user has two options. One is to use the Telex input method, which involves memorizing an arbitrary mapping of letters to tone marks. (It was originally designed as an encoding for sending Vietnamese messages over the Telex telegraph network.) It is user-unfriendly in the extreme, and not discoverable. The other option is to hold down a letter key to see variants with diacritics and tone marks. For example, you can hold down A for a second and then scroll through the 18 different As that appear. You do that every time you need to type a vowel, which is painfully slow.

Fortunately, this is not an intractable problem. In fact, it’s an opportunity to do better. (I can only assume that the sorry state of Vietnamese input on the iPhone speaks to a lack of concern about Vietnamese inside Apple’s hallowed walls, which is unfortunate because it’s not like there’s a shortage of Vietnamese people in San José.)

Crafting a Solution

To some degree, this was already a solved problem. Back in the days of typewriters, there was a Vietnamese layout called AĐERTY. It was based on the French AZERTY, but it moved the F, J, W, and Z keys to the periphery and added keys for Ă, Đ, Ơ, and Ư. It also had five dead keys. The dead keys contained:

  • a circumflex diacritic for typing the remaining letters (Â, Ê, and Ô);
  • the five tone marks; and
  • four glyphs each representing the kerned combination of the circumflex diacritic with a tone mark, needed where the two marks would otherwise overlap

Photo of a Vietnamese typewriter

My plan was to make a smartphone version of this typewriter. Already it would be an improvement over the iPhone. But since this is the computer age, there were more improvements I could make.

Firstly, I omitted F, J, W, and Z completely. If the user needs to type them — for a foreign word, perhaps — they can switch layouts to French. (Gaia will automatically switch to a different keyboard if you need to type a web address.) And obviously I could omit the glyphs that represent kerned pairs of diacritic & tone marks, since kerning is no longer a mechanical process.

The biggest change I made is that, rather than having keys for the five tone marks, words with tones appear as candidates after typing the letters. This has numerous benefits. It eliminates five weird-looking keys from the keyboard. It eliminates confusion about when to type the tone mark. (Tone marks are visually positioned in the middle of the word, but when writing Vietnamese by hand, tone marks are usually added last after writing the rest of the word.) It also saves a keystroke too, since we can automatically insert a space after the user selects the candidate. (For a word without a tone mark, the user can just hit the space bar. Think of the space bar as meaning “no tone”.)

This left just 26 letter keys plus one key for the circumflex diacritic. Firefox OS’s existing AZERTY layout had 26 letter keys plus one key for the apostrophe, so I put the circumflex where the apostrophe was. (The apostrophe is unused in Vietnamese.)

Screenshot of Vietnamese input method in use

In order to generate the tone candidates, I had to detect when the user had typed a valid Vietnamese syllable, because I didn’t want to display bizarre-looking nonsense as a candidate. Vietnamese has rules for what constitutes a valid syllable, based on phonotactics. And although the spelling isn’t purely phonetic (in particular, it inherits some peculiarities from Portuguese), it follows strict rules. This was the hardest part of writing the input method. I had to do some research about Vietnamese phonotactics and orthography. A good chunk of my code is dedicated to encoding these rules.

Knowing about the limited set of valid Vietnamese syllables, I was able to add some convenience to the input method. For example, if the user types V-I-E, a circumflex automatically appears on E because VIÊ is a valid sequence of letters in Vietnamese while VIE is not. If the user types T to complete the partial word VIÊT, only two tone candidates appear (VIẾT and VIỆT), because the other three tone marks can’t appear on a word ending with T.
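To give a flavor of the kind of rule involved (an illustrative sketch only, not the actual input method code): syllables ending in a stop consonant can only carry the sắc (acute) or nặng (dot below) tones, so the candidate list can be pruned accordingly.

// Sketch: prune tone candidates for a typed base word. Final stop consonants
// (p, t, c, ch) only combine with the sắc (acute) and nặng (dot below) tones.
var ALL_TONES = ['acute', 'grave', 'hook', 'tilde', 'dot'];

function possibleTones(word) {
	if (/(p|t|ch?)$/i.test(word))
		return ['acute', 'dot'];
	return ALL_TONES;
}

possibleTones('viêt'); // ['acute', 'dot'] -> viết or việt
possibleTones('do');   // all five tones (plus no tone) are possible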

Using it yourself

You can try the keyboard for yourself at Timothy Guan‑tin Chien’s website.

The keyboard is not included in the default Gaia builds. To include it, add the following line to the Makefile in the gaia root directory:

GAIA_KEYBOARD_LAYOUTS=fr,vi-Typewriter

The code is open source. Please steal it for your own open source project.


Planet MozillaJetpack Pro Tip - setImmediate and clearImmediate

Do you know about window.setImmediate or window.clearImmediate?

Did you know that you can use these now with the Add-on SDK?

We’ve managed to keep them a little secret, but they are awesome because setImmediate is much quicker than setTimeout(fn, 0), especially if it is used a lot, as it would be in a loop or when used recursively. This is well described in the notes on MDN about window.setImmediate.

To use these functions with the Add-on SDK, do the following:

const { setImmediate, clearImmediate } = require("sdk/timers");

function doStuff() {}
let timerID = setImmediate(doStuff);  // to run `doStuff` in the next tick
clearImmediate(timerID)               // to cancel `doStuff`
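For instance, here is a small sketch (not from the SDK docs) of why that matters when draining a work queue recursively: with setTimeout(fn, 0) every step would pay the clamped timer minimum, while setImmediate runs each step on the very next tick.

const { setImmediate } = require("sdk/timers");

function drain(queue) {
	if (queue.length === 0)
		return;
	console.log("processing", queue.shift());   // stand-in for real work
	setImmediate(function() { drain(queue); }); // schedule the next item
}

drain([1, 2, 3, 4, 5]);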

Planet MozillaYou should use WebRTC for your 1-on-1 video meetings

Did you know that Firefox 33 (currently in Beta) lets you make a Skype-like video call directly from one running Firefox instance to another without requiring an account with a central service (such as Skype or Vidyo)?

This feature is built on top of Firefox’s WebRTC support, and it’s kind of amazing.

It’s pretty easy to use: just click on the toolbar button that looks like a phone handset or a speech bubble (which one you see depends which version of Firefox you have) and you’ll be given a URL with a call.mozilla.com domain name. [Update: depending on which beta version you have, you might need to set the loop.enabled preference in about:config, and possibly customize your toolbar to make the handset/bubble icon visible.] Send that URL to somebody else — via email, or IRC, or some other means — and when they visit that URL in Firefox 33 (or later) it will initiate a video call with you.

I’ve started using it for 1-on-1 meetings with other Mozilla employees and it works well. It’s nice to finally have an open source implementation of video calling. Give it a try!

Planet MozillaIntra-Paint: A new Daala demo from Jean-Marc Valin

Intra paint is not a technique that's part of the original Daala plan and, as of right now, we're not currently using it in Daala. Jean-Marc envisioned it as a simpler, potentially more effective replacement for intra-prediction. That didn't quite work out-- but it has useful and visually pleasing qualities that, of all things, make it an interesting postprocessing filter, especially for deringing.

Several people have said 'that should be an Instagram filter!' I'm sure Facebook could shake a few million loose for us to make that happen ;-)

Planet MozillaThe Great Add-on Bug Triage

The AMO team is meeting this week to discuss road maps and strategies, and among the topics is our backlog of open bugs. Since mid-2011 we have averaged around 1200 open bugs at any one time.

Currently any interaction with AMO’s bugs is too time consuming: finding good first bugs, triaging existing bugs, organizing a chunk of bugs to fix in a milestone — they all require interacting with a list of 1200 bugs, many of which are years old and full of discussions by people who no longer contribute to the bugs. The small chunks of time I (and others) get to work on AMO are consumed by digging through these old bugs and trying to apply them to the current site.

In an effort to get this list to a manageable size the AMO team is aggressively triaging and closing bugs this week, hopefully ending the week with a realistic list of items we can hope to accomplish. With that list in hand we can prioritize the bugs, divide them into milestones, and begin to lobby for developer time.

Many of the bugs which are being closed are good ideas and we’d like to fix them, but we simply need to be realistic about what we can actually do with the resources we have. If you contribute patches to any of the bugs, please feel free to reopen them.

Thanks for being a part of AMO.

Planet MozillaDaala: Painting Images For Fun (and Profit?)

As a contribution to Monty's Daala demo effort, I decided to demonstrate a technique I've recently been developing for Daala: image painting. The idea is to represent images as directions and 1-D patterns.


Planet MozillaHTML for the Mobile Web at All Things Open

allthingsopen

In about a month, I’ll speak at All Things Open in Raleigh, North Carolina. I’m quite excited: even though I have never attended this event, I hear a lot of good things about it. Funnily enough, I don’t go on stage in the United States very often, so it’s a great opportunity to do so. What could be a better topic than talking about HTML for the Mobile Web at an event like this?

Firefox OS is a new operating system for mobile phones to bring web connectivity to those who can not get top-of-the-line smartphones. By harvesting the principles of what made the web great and giving developers access to the hardware directly through web standards it will be the step we need to make a real open and affordable mobile web a reality. In this talk, Frédéric Harper from Mozilla will show how Firefox OS works, how to build apps for it and how end users will benefit from this open alternative to other platforms.

It’s not too late to register for this event on October 22-23: they still have early bird tickets. See you there to share and discuss open source, open tech and the open web!


--
HTML for the Mobile Web at All Things Open is a post on Out of Comfort Zone from Frédéric Harper

Planet MozillaWeekly review 2014-09-24

Highlights from this week

Until the end of the year, I will be working with Coop on automating as much of Build Duty as possible. Therefore, for the next 3 months, I will be almost full-time on Build Duty (9:00 - 16:30 local time) with a bit of time afterwards for other things.

Bugs I created this week:

Other bugs I updated this week:

Planet MozillaHair today, gone tomorrow

I've been cutting my own hair since like 1991 or so with two exceptions: a professional haircut before my wedding and one before my wife's sister's wedding.

Back in 1991, my parents bought me a set of Wahl clippers. Over the years, I broke two of the combs and a few of the extensions. Plus it has a crack down the side of the plastic body. At one point, I was cutting hair for a bunch of people on my dorm floor in college. It's seen a lot of use in 23 years.

However, a month ago, it started shorting the circuit. There's a loose wire or frayed something or something something. Between that and the crack down the side of the plastic body, I figured it's time to retire them and get a new set. The new set arrived today.

23 years is a long time. I have very few things that I've had for a long time. I bought my bicycle in 1992 or so. I have a clock radio I got in the mid-80s. I have a solar powered calculator from 1990 or so (TI-36). Everything else seems to fail within 5 years: blenders, toaster ovens, rice cookers, drills, computers, etc.

I'll miss those clippers. I hope the new ones last as long.

Planet Mozillahappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1067381] Sorting by ID broken when changing multiple bugs
  • [1065398] Error when using checksetup.pl to create BMO database from scratch when Review extension enabled
  • [1064395] concatenate and slightly minify javascript files
  • [1068014] skip strptime() in datetime_from() if the date is in a standard format
  • [1054141] add the ability to filter on the user that made the change
  • [891199] clicking on needinfo flag/text should scroll you to the comment which set the flag
  • [1069504] Put My Dashboard in the drop down on the top-right
  • [1067410] Modification time wrong for deleted flags in review schema
  • [1067808] Review history page displays cancelled reviews as overdue
  • [1060728] Add perltidyrc that makes it easier to follow existing code standards to BMO repository
  • [1068328] needinfo flag shows up on attachment details page only when not doing “Edit as Comment”
  • [1037663] Make custom bug entry forms more discoverable
  • [1071926] Can’t unmentor a bug

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Planet Mozillaisfirefoxosaccessibleyet.com

isfirefoxosaccessibleyet.com

24 Sep 2014 - Toronto, ON

Eitan from Mozilla's accessibility team was nice enough to reserve an isfirefoxosaccessibleyet.com domain where we could track Firefox OS accessibility status easily and in the open. I am really happy to announce that isfirefoxosaccessibleyet.com is now ready to be seen in public.

There are several bits of information about each app (including the overall system) that you can find there: an overall accessibility score, and the number of open, resolved and currently in-progress bugs. We also provide handy links to the actual bug lists in case you need to dig deeper.

For anyone who wants to help out:

  • If you are a user and want to file a bug, you should be able to find a link inside each app's section.
  • If you are a developer and want to hack on Gaia accessibility, each app section has up-to-date lists of high-priority and good first bugs.

Please feel free to check it out!

yzen

Planet MozillaJetpack Pro Tip - Require JSM

Did you know that you can require("some.jsm")?

This has been possible for some time now, thanks to @jsantell.

With the change above you can replace code like this:

const { Cu } = require("chrome");
const { AddonManager } = Cu.import("resource://gre/modules/AddonManager.jsm", {});

With code like this:

const { AddonManager } = require("resource://gre/modules/AddonManager.jsm");

Also, you can include JSMs in your lib/ folder and use them in chrome code and your add-on scope. So you can add a file like lib/some.jsm, and use it in your commonJS modules like so:

const { Something } = require("./some.jsm");

Then, if you’d like to fetch the URI for lib/some.jsm in order to use it in a XUL document, or for some other reason, you can use one of the techniques that I described yesterday. With JPM you’d do this:

let someURI = require.resolve("./some.jsm");

With this you can import the JSM into XUL documents, for one thing; you could also provide an API to other add-ons this way, and do many other things which I’ll save for another day.

Planet MozillaFirefox 33 beta5 to beta6

  • 21 changesets
  • 40 files changed
  • 570 insertions
  • 530 deletions

Extension | Occurrences
java | 11
cpp | 9
js | 8
cc | 6
html | 2
h | 2
ini | 1
css | 1

Module | Occurrences
mobile | 11
ipc | 7
toolkit | 6
dom | 3
docshell | 2
content | 2
widget | 1
mozglue | 1
modules | 1
layout | 1
image | 1
gfx | 1
browser | 1

List of changesets:

Drew WillcoxonBug 1068852 - Highlight search suggestions on hover/mouseover on about:home/about:newtab. r=MattN, a=sylvestre - a34329afda87
Dave TownsendBacking out Bug 893276 for causing Bug 1058840. a=lmandel - fa3e1469d0f1
Alex BardasBug 1042521 - Drop some cases when backslashes from urlbar input were converted to slashes on windows. r=bz, a=sledru - 73202bfb3f03
dominique vincentBug 1062904 - Null pointer check when saving an image. r=mfinkle, a=lmandel - 4bfa8b78669c
Jim ChenBug 1066175 - Use other means to handle uncaught exception when Gecko is unavailable. r=snorp, a=sledru - e1d77019dda9
Jim ChenBug 1066175 - Only crash when crash reporting annotation succeeds. r=snorp, a=sledru - 0cc0faf4524b
Ethan HuggBug 1049087 - Pre-populate the whitelist for screensharing in Fx33. r=jesup, a=sledru - 90713d332601
Markus StangeBug 1066934 - Don't allow the snapped scrollbar thumb to extend past the scrollbar bounds. r=roc, a=sledru - f14c89b414b6
Simon MontaguBug 1068218 - Don't pass lone surrogates to GetDirectionFromChar. r=ehsan, a=abillings - 389dd23d771c
Chenxia LiuBug 1062257 - Handle HomeFragment deletions by panel/type instead of universally. r=margaret, a=sledru - ae87b325401d
Jim ChenBug 1067513 - Import updated base::LazyInstance from upstream. r=bsmedberg, a=sledru - d6aa05e710f2
Michael ComellaBug 956858 - Make menu inaccessible during editing mode. r=wesj, a=sledru - 91f4e2aed979
JW WangBug 1067858 - Apply |AutoNoJSAPI| before calling mAudioChannelAgent->SetVisibilityState in order not to hit nsContentUtils::IsCallerChrome() in HTMLMediaElement::CanPlayChanged(). r=bz, a=sledru - de5e77b26504
David MajorBug 1046382 - Blocklist dtwxsvc.dll. r=bsmedberg, a=sledru - 68fdd69ee9bb
Milan SreckovicBug 1069582 - Check the signed value for < 0 instead. r=sfowler, a=sledru - f8eec8fe1b2b
Tim TaubertBug 1054099 - Remove use of gradients in new tab page. r=dao, a=lmandel - b73f15e656a1
Jan-Ivar BruaroeyBug 1070076 - Fix createOffer options arg legacy-syntax-warning to not trip on absent arg. r=jesup, a=sledru - 02eaea5dce76
Bobby HolleyBug 1051224 - Find a clever way to work around COW restrictions on beta. r=gabor,ochameau - 6cdc428e3e62
Alexandre PoirotBug 1051224 - Test console's cd() against sandboxed iframes. r=msucan a=sylvestre - 0ae1af037f6e
Matt WoodrowBug 1053934 - Don't use the cairo context to create similar surfaces since it might be in an error state. r=jrmuizel, a=sledru - 9337f5dcf107
Randell JesupBug 1062876 - Refactor window iteration code for MediaManager. r=jib, a=abillings - d508b53c3dee

Planet MozillaStop stripping (OS X builds), it leaves you vulnerable

While investigating some strange update requests on our new update server, I discovered that we have thousands of update requests from Beta users on OS X that aren’t getting an update, but should. After some digging I realized that most, if not all, of these are coming from users who have installed one of our official Beta builds and subsequently stripped out the architecture they do not need. In turn, this causes the stripped builds to report in such a way that we don’t know how to serve updates for them.

We’ll look at ways of addressing this, but the bottom line is that if you want to be secure: Stop stripping Firefox binaries!

Planet MozillaNew Features in Picasso

I’ve always been a big fan of Picasso, the Android image loading library by the Square folks. It provides some powerful features with a rather simple API.

Recently, I started working on a set of new features for Picasso that will make it even more awesome: request handlers, request management, and request priorities. These features have all been merged to the main repo now. Let me give you a quick overview of what they enable you to do.

Request Handlers

Picasso supports a wide variety of image sources, from simple resources to content providers, network, and more. Sometimes though, you need to load images in unconventional ways that are not supported by default in Picasso.

Wouldn’t it be nice if you could easily integrate your custom image loading logic with Picasso? That’s what the new request handlers are about. All you need to do is subclass RequestHandler and implement a couple of methods. For example:

public class PonyRequestHandler extends RequestHandler {
    private static final String PONY_SCHEME = "pony";

    @Override public boolean canHandleRequest(Request data) {
        return PONY_SCHEME.equals(data.uri.getScheme());
    }

    @Override public Result load(Request data) {
        return new Result(somePonyBitmap, MEMORY);
    }
}

Then you register your request handler when instantiating Picasso:

Picasso picasso = new Picasso.Builder(context)
    .addRequestHandler(new PonyRequestHandler())
    .build();

Voilà! Now Picasso can handle pony URIs:

picasso.load("pony://somePonyName")
       .into(someImageView);

This pull request also involved rewriting all built-in bitmap loaders on top of the new API. This means you can also override the built-in request handlers if you need to.

Request Management

Even though Picasso handles view recycling, it does so in an inefficient way. For instance, if you do a fling gesture on a ListView, Picasso will still keep triggering and canceling requests blindly because there was no way to make it pause/resume requests according to the user interaction. Not anymore!

The new request management APIs allow you to tag associated requests that should be managed together. You can then pause, resume, or cancel requests associated with specific tags. The first thing you have to do is tag your requests as follows:

Picasso.with(context)
       .load("http://example.com/image.jpg")
       .tag(someTag)
       .into(someImageView);

Then you can pause and resume requests with this tag based on, say, the scroll state of a ListView. For example, Picasso’s sample app now has the following scroll listener:

public class SampleScrollListener implements AbsListView.OnScrollListener {
    ...
    @Override
    public void onScrollStateChanged(AbsListView view, int scrollState) {
        Picasso picasso = Picasso.with(context);
        if (scrollState == SCROLL_STATE_IDLE ||
            scrollState == SCROLL_STATE_TOUCH_SCROLL) {
            picasso.resumeTag(someTag);
        } else {
            picasso.pauseTag(someTag);
        }
    }
    ...
}

These APIs give you much finer control over your image requests. The scroll listener is just the canonical use case.
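
As a hedged illustration of the cancellation side (not from the post's sample app; the activity and the someTag field are hypothetical), you could cancel everything tagged for a screen when that screen goes away:

// Hypothetical activity: cancel all in-flight requests tagged for this
// screen when it is destroyed, so no bitmaps are delivered to dead views.
@Override
protected void onDestroy() {
    super.onDestroy();
    Picasso.with(this).cancelTag(someTag);
}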

Request Priorities

It’s very common for images in your Android UI to have different priorities. For instance, you may want to give higher priority to the big hero image in your activity relative to other secondary images on the same screen.

Up until now, there was no way to give Picasso a hint about the relative priorities between images. The new priority API allows you to tell Picasso about the intended order of your image requests. You can just do:

Picasso.with(context)
       .load("http://example.com/image.jpg")
       .priority(HIGH)
       .into(someImageView);

These priorities don’t guarantee a specific order; they just tilt the balance towards higher-priority requests.


That’s all for now. Big thanks to Jake Wharton and Dimitris Koutsogiorgas for the prompt code and API reviews!

You can try these new APIs now by fetching the latest Picasso code on GitHub. These features will probably be available in the 2.4 release. Enjoy!

Planet MozillaFileblock

One of the things that I get asked the most is how to prevent a user from accessing the local file system from within Firefox. This generally means preventing file:// URLs from working, as well as removing the most common methods of opening files from the Firefox UI (the open file button, menuitem and shortcut). Because I consider this outside the scope of the CCK2, I wrote an extension to do this and gave it out to anyone who asked. Unfortunately, over time it developed a serious case of feature creep.

Going forward, I've decided to go back to basics and just produce a simple local file blocking extension. The only features that it supports are whitelisting by directory and whitelisting by file extension. I've made that available here. There is a README that gives full information on how to use it.

For the other functionality that used to be a part of FileBlock, I'm going to produce a specific extension for each feature. They will probably be AboutBlock (for blocking specific about pages), ChromeBlock (for preventing the loading of chrome files directly into the browser) and SiteBlock (for doing simple whitelisting).

Hopefully this should cover the most common cases. Let me know if you think there is a case I missed.

Planet MozillaLicensing Policy Change: Tests are Now Public Domain

I’ve updated the Mozilla Foundation License Policy to state that:

PD Test Code is Test Code which is Mozilla Code, which does not carry an explicit license header, and which was either committed to the Mozilla repository on or after 10th September 2014, or was committed before that date but all contributors up to that date were Mozilla employees, contractors or interns. PD Test Code is made available under the Creative Commons Public Domain Dedication. Test Code which has not been demonstrated to be PD Test Code should be considered to be under the MPL 2.

So in other words, new tests are now CC0 (public domain) by default, and some or many old tests can be relicensed as well. (We don’t intend to do explicit relicensing of them ourselves, but people have the ability to do so in their copies if they do the necessary research.) This should help us share our tests with external standards bodies.

This was bug 788511.

Planet MozillaSurvey on FLOSS Contribution Policies

In the “dull but important” category: my friend Allison Randal is doing a survey on people’s attitudes to contribution policies (committer’s agreements, copyright assignment, DCO etc.) in free/libre/open source software projects. I’m rather interested in what she comes up with. So if you have a few minutes (it should take less than 5 – I just did it) to fill in her survey about what you think about such things, she and I would be most grateful:

http://survey.lohutok.net is the link. You want the “FLOSS Developer Contribution Policy Survey” – I’ve done the other one on Mozilla’s behalf.

Incidentally, this survey is notable as I believe it’s the first online multiple-choice survey I’ve ever taken where I didn’t think “my answer doesn’t fit into your narrow categories” about at least one of the questions. So it’s definitely well-designed.

Planet MozillaJetpack Pro Tip - Reusing files for content and chrome

I’ve seen this issue come up a lot: an add-on developer trying to reuse a library file, like underscore, in both their add-on code and their content scripts.

Typically the SDK documentation will say to put all of your content scripts in your add-on’s data/ folder, and that is the best thing to do if the script is only going to be used as a content script. If you want to use the file in your add-on scope too, however, it should not be in the data/ folder; it should be in your lib/ folder instead.

Once this is done, the add-on scope can easily require it, so all that is left is to figure out a URI for the file in your lib/ folder which can be used for content scripts. To do this there are two options, one of which only works with JPM.

JPM

The JPM solution is very simple (thanks to Jordan Santell for implementing this):

let underscoreURI = require.resolve("./lib/underscore");

This works if the file is at lib/underscore.js, but it should only be there if you copied and pasted it there, which pros don’t do. Pros use npm because they know underscore is there, so they just make it a dependency by adding this to package.json:

{
  // ..
  "dependencies": {
    "underscore": "1.6.0"
  }
  //..
}

Then, simply use:

let underscoreURI = require.resolve("underscore");

CFX

With CFX you will have to copy and paste the file into your lib/ folder; then you can get a URL for the file by doing this:

let underscoreURI = module.uri.replace("main.js", "underscore.js");

Assuming that the code above is evaluated in lib/main.js.

You can see an issue with the above code: you have to know the name of the file in which the code is evaluated. So another approach could be:

let underscoreURI = module.uri.replace(/\/[^\/]*$/, "/underscore.js");
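
Either way, once you have the URI, using it is the same. Here is a minimal sketch (the page-mod options and content.js are example names, not from the posts above) of feeding it to a page-mod as a content script alongside a regular script from your data/ folder:

const self = require("sdk/self");
const { PageMod } = require("sdk/page-mod");

// underscoreURI comes from one of the techniques above; content.js is a
// hypothetical content script in data/ that relies on the _ global.
PageMod({
  include: "*",
  contentScriptFile: [underscoreURI, self.data.url("content.js")]
});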

Planet MozillaMy return to HTML5mtl

HTML5mtl

Almost three years ago, Mathieu Chartier, Benoît Piette and I met in a restaurant to talk about HTML5 and Montreal. From that discussion the HTML5mtl group was born; we founded it to promote what is new in HTML, as well as good practices. A little over a year ago, for professional reasons, I had to step down from my organizer duties within the group, leaving Mathieu and Benoît at the helm. My passion for the web has not changed and I still work in the field, so it is with pleasure that I have returned as the group's sole organizer since September: Benoît left the group's organization in the meantime, and Mathieu is also leaving HTML5mtl to work on a new project that has not yet been announced.

With that in mind, I have already made a few changes which, based on my experience with various user groups, events and my own work, will, I hope, prove beneficial to the group's health. The first is a return to basics: one meetup per month around web technologies (HTML, CSS and JavaScript), without necessarily targeting only what is new in HTML5. There is no shortage of topics we can cover, and the demand from developers in the Montreal area is there (almost 1,000 members): the first meetup of the 2014-2015 season generated more than 180 RSVPs for the talk Pierre-Paul Lefebvre gave us on AngularJS, in front of a packed room. I expect no less for the next evening, on Node.js with Rami Sayar, which has just been announced. Next, a visual identity was created to stand out from the generic HTML5 logo we had been using: a very special thank you to Matthew Potter, who created it; you can see it at the top of this post. Two small changes have also been made to how the evenings run: they start and end earlier. That will give you more time to get home to see your kids or to relax before the next day, while still leaving you enough time to finish your workday and grab a bite before joining the group. On top of that, starting in October and for every talk that follows, you will notice that the target audience is now mentioned: no more surprises with content that is too advanced for you, and no more dozing off during an introductory talk that does not interest you. One of the last things I have done is make the group bilingual: I believe it always has been, since we have had talks in English in the past, but it had never been put forward. We are in Montreal, so let's share about the web while being just as open: live the "Montréal style", and welcome to English speakers (I still have a few translations to do on the meetup page).

Next, I still need to find a way to keep people who RSVP from failing to show up: on average, 20 to 40% of the people who say they will attend do not. As a result, people on the waiting list, who could and would have attended the event, miss out on a great evening. People need to take responsibility, but I will try, through trial and error, to minimize the impact of this well-known plague of free events. I will of course get back in touch with the sponsors I recruited in the past, and I am opening the door to anyone who would like to support the group while gaining exceptional visibility. Do not hesitate to send me an email (contact link above) and I will send you the sponsorship document. I am also always looking for new speakers, so whether you are a novice (I can help you) or a master of the art of public speaking, please send me an email if you would like to share your knowledge with the members!

I am very happy to be back at this wonderful group that is HTML5mtl.


--
My return to HTML5mtl (Mon retour à HTML5mtl) is a post on Out of Comfort Zone from Frédéric Harper

Planet MozillaBatch inserts in Go using channels, select and japanese gardening

I was recently looking into the DB layer of MIG, searching for ways to reduce the processing time of an action running on several thousand agents. One area of the code that was blatantly inefficient concerned database insertions.

When MIG's scheduler receives a new action, it pulls a list of eligible agents and creates one command per agent. One action running on 2,000 agents will create 2,000 commands in the "commands" table of the database. The scheduler needs to generate the commands, insert them into Postgres and send them to their respective agents.

MIG uses separate goroutines for the processing of incoming actions, and for the storage and sending of commands. The challenge was to avoid inserting each command into the database individually, and instead group all of the inserts into one big operation.

Go provides a very elegant way to solve this problem.

At a high level, MIG Scheduler works like this:

  1. a new action file in JSON format is loaded from the local spool and passed into the NewAction channel
  2. a goroutine picks up the action from the NewAction channel, validates it, finds a list of target agents and creates a command for each agent, which is passed to the CommandReady channel
  3. a goroutine listens on the CommandReady channel, picks up incoming commands, inserts them into the database and sends them to the agents (plus a few extra things)

The CommandReady goroutine is where the optimization happens. Instead of processing each command as it comes, the goroutine uses a select statement to either pick up a command, or time out after one second of inactivity.

// Goroutine that loads and sends commands dropped in ready state
// it uses a select and a timeout to load a batch of commands instead of
// sending them one by one
go func() {
    ctx.OpID = mig.GenID()
    readyCmd := make(map[float64]mig.Command)
    ctr := 0
    for {
        select {
        case cmd := <-ctx.Channels.CommandReady:
            ctr++
            readyCmd[cmd.ID] = cmd
        case <-time.After(1 * time.Second):
            if ctr > 0 {
                var cmds []mig.Command
                for id, cmd := range readyCmd {
                    cmds = append(cmds, cmd)
                    delete(readyCmd, id)
                }
                err := sendCommands(cmds, ctx)
                if err != nil {
                    ctx.Channels.Log <- mig.Log{OpID: ctx.OpID, Desc: fmt.Sprintf("%v", err)}.Err()
                }
            }
            // reinit
            ctx.OpID = mig.GenID()
            ctr = 0
        }
    }
}()

As long as messages keep coming in, the select statement will take the first case each time a message is received, and store the command in the readyCmd map.

When messages stop coming for one second, the select statement will fall into its second case: time.After(1 * time.Second).

In the second case, the readyCmd map is emptied and all of the commands are sent as one operation. Later in the code, a big INSERT statement that includes all of the commands is executed against the Postgres database.

In essence, this algorithm is very similar to a Japanese Shishi-odoshi.

shishi-odoshi.gif

The current logic is not yet optimal. It does not set a maximum batch size, mostly because it does not currently need to. In my production environment, the scheduler manages about 1,500 agents, and that's not enough to worry about limiting the batch size.
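
For reference, the big INSERT mentioned above is the part I glossed over. As a rough illustration only (not MIG's actual code or schema, assuming a simplified commands table with just an id and a status column, and simplified Command field names), building one multi-row statement for the whole batch could look like this:

// Rough sketch, not MIG's actual code: group the whole batch into a
// single multi-row INSERT instead of one statement per command.
// Imports assumed: "database/sql", "fmt", "strings", and the mig package.
func insertCommands(db *sql.DB, cmds []mig.Command) error {
    if len(cmds) == 0 {
        return nil
    }
    vals := make([]string, 0, len(cmds))
    args := make([]interface{}, 0, len(cmds)*2)
    for i, cmd := range cmds {
        // postgres placeholders: ($1, $2), ($3, $4), ...
        vals = append(vals, fmt.Sprintf("($%d, $%d)", i*2+1, i*2+2))
        args = append(args, cmd.ID, cmd.Status) // field names simplified
    }
    query := "INSERT INTO commands (id, status) VALUES " + strings.Join(vals, ", ")
    _, err := db.Exec(query, args...)
    return err
}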

Planet MozillaThe Sheppy Report: September 19, 2014

I’ve been working on getting a first usable version of my new server-side sample server project (which remains unnamed as yet — let me know if you have ideas) up and running. The goals of this project are to allow MDN to host samples that require a server-side component (for example, demonstrations of how to do XMLHttpRequest or WebSockets), and to provide a place to host samples that require the ability to do things that we don’t allow in an <iframe> on MDN itself. This work is going really well and I think I’ll have something to show off in the next few days.

What I did this week

  • Caught up on bugmail and other messages that came in while I was out after my hospital stay.
  • Played with JSMESS a bit to see extreme uses of some of our new technologies in action.
  • Did some copy-editing work.
  • Wrote up a document for my own reference about the manifest format and architecture for the sample server.
  • Got most of the code for processing the manifests for the sample server modules and running their startup scripts written. It doesn’t quite work yet, but I’m close.
  • Filed a bug about implementing a drawing tool within MDN for creating diagrams and the like in-site. Pointed to draw.io as an example of a possible way to do it. Also created a developer project page for this proposal.
  • Exchanged email with Piotr about the editor revamp. It’s making good progress.

Wrap up

I’m really, really excited about the sample server work. With this up and running (hopefully soon), we’ll be able to create examples for technologies we were never able to properly demonstrate in the past. It’s been a long time coming. It’s also been a fun, fun project!

 

Planet MozillaThe Curtisk report: 2014-09-21

People want to know what I do, so I am going to give this a shot: each Monday I will make a post about the stuff I did in the previous week.

Idea shamelessly stolen from Eric Shepherd

What I did this week

  • MWoS: SeaSponge Project Proposal (Review)
  • Crusty Bugs data digging
  • Mozillians.org security review (move along)
  • Firefox OS Sec discussion
  • sec triage process massaging
  • Firefox OS Security coordination
  • Vendor site review
    • testing plan for vendor site testing
    • testing coordination with team and vendor
  • CBT Training survey
  • security scan of [redacted]

Meetings attended this week

Mon

  • Weekly Project Meeting
  • Web Bounty Triage

Tue

  • SecAutomation
  • Cloud Services Security Team

Wed

  • MWoS team Project meeting
  • Vendor testing call
  • Web Bug Triage

Thu

  • Security Open Mic
  • Grow Mozilla / Community Building
  • Computer Science Teachers Association (guest speaker)

Footnotes
