Planet Mozilla: Handling Read-Only Shared Memory Usage In rr

One of rr's main limitations is that it can't handle memory being shared between recorded processes and not-recorded processes, because writes by a not-recorded process can't be recorded and replayed at the right moment. This mostly hasn't been a problem in practice. On Linux desktops, the most common cases where this occurs (X, pulseaudio) can be disabled via configuration (and rr does this automatically). However, there is another common case --- dconf --- that isn't easily disabled via configuration. When applications read dconf settings for the first time, the dconf daemon hands them a shared-memory buffer containing a one-byte flag. When those settings change, the flag is set. Whenever an application reads cached dconf settings, it checks this flag to see if it should refetch settings from the daemon. This is very efficient but it causes rr replay to diverge if dconf settings change during recording, because we don't replay that flag change.

Fortunately I've been able to extend rr to handle this. When an application maps the dconf memory, we map that memory into the rr supervisor process as well, and then replace the application mapping with a "shadow mapping" that's only shared between rr and the application. Then rr periodically checks to see whether the dconf memory has changed; if it has, we copy the changes to the shadow mapping and record that we did so. Essentially we inject rr into the communication from dconf to the application, and forward memory updates in a controlled manner. This seems to work well.

That "periodic check" is performed every time the recorded process completes a traced system call. That means we'll forward memory updates immediately when any blocking system call completes, which is generally what you'd want. If an application busy-waits on an update, we'll never forward it and the application will deadlock, but that could easily be fixed by also checking for updates on a timeout. I'd really like to have a kernel API that lets a process be notified when some other process has modified a chunk of shared memory, but that doesn't seem to exist yet!

This technique would probably work for other cases where shared memory is used as a one-way channel from not-recorded to recorded processes. Using shared memory as a one-way channel from recorded to not-recorded processes already works, trivially. So this leaves shared memory that's read and written from both sides of the recording boundary as the remaining hard (unsupported) case.

Planet Mozilla: Supported Official

Recently a bug was reported for the Bank of America (BOA) website. The problem is that BOA is alerting Firefox users on OSX that the site may not work with the browser they are using. We have a really nice contact at BOA, so I reached out this morning to find out what’s up. He said that Firefox for OSX has dipped below 1% of their overall traffic, and they have a policy that any browser with traffic that low is not officially supported.

Some random questions came to my head while I scratched it:

OK, so what about Firefox on Windows? Yep that’s supported.
Is Chrome OSX supported? Yeah, obvi.
What could be different between Windows and OSX Firefox versions? Not much.
What does Officially Supported mean? Now we’re getting deep.
Does it require more testing for QA? Probably not, if it works in Windows.
What about customer service? Well they already support browsers in OSX and they know how Firefox works.
Do they prevent Firefox OSX from using the site? Nah.
Will users see that alert as a reason to use another browser, assuming Firefox is less secure? My brain hurts.

At the end of the day Firefox in OSX will continue to work fine on this website, but users will be nagged at every visit. Not a great experience and not great for the web. Let's hope that they reconsider this policy over time.

Planet Mozilla: RelEng & RelOps Weekly highlights - June 27, 2016

Gardens outside of Buckingham Palace

The work week in London has come and gone. So nice to visit England before everything went to hell.

In better news, Anthony Miyaguchi has returned for his second internship with releng. He’ll be working with Jordan on migrating Fennec nightly builds to TaskCluster (TC).

Improve Release Pipeline:

njira fixed a few paper cuts on the Balrog Admin UI.

Ben, Nick, and Benson identified the remaining blockers before we migrate Balrog to CloudOps infrastructure, which is expected to be completed in the next couple of weeks.

Varun completed his GSoC project, which will make nightly build submission to Balrog much more efficient.

Aki got a rough instance of the signing server running in docker: https://github.com/escapewindow/docker-signing-server

Aki got build-tools tests passing in python3. The scripts still need some py3 fixes. https://github.com/escapewindow/build-tools/tree/py3

Callek has stood up Linux l10n try builds in TC. This was one of the few outstanding pieces blocking us from getting nightlies working in TC.

gbrown has ASAN and Android 4.3 tests (both opt and debug) running at tier 2 in TC.

Operational:

Andrei Obreja (aobreja on IRC) has joined the buildduty team at SoftVision. He replaces Vlad who has moved on to another opportunity.

Release:

Callek is in the releaseduty driver seat for the Firefox 48 release cycle. We’re currently at beta 3.

See you next week!

Planet WebKit: Improved Font Loading

Web fonts are becoming increasingly popular all over the web. Using web fonts allows a site author to include and use specific fonts along with the assets of their websites. Browsers download the fonts with the rest of the site and use them when they finish downloading. Unfortunately, not all web fonts download instantaneously, which means there is some time during the load where the browser must show the webpage without the font. Many web fonts are quite large, causing a long delay until the browser can use them. WebKit has mitigated this in two main areas: improving font download policies, and making fonts download faster.

Default Font Loading Policy

The browser should attempt to show a web page in the best way possible despite not being able to use a font due to an ongoing download. WebKit will always end up using the downloaded font when it is available; however, some fonts may require quite some time to complete downloading. For these fonts, WebKit will show elements with invisible text during the delay. Older versions of WebKit would continue to show this invisible text until the font completes downloading. Instead, newer versions of WebKit will show this invisible text for a maximum of 3 seconds, at which point it will switch to a local font chosen from the element’s style before finally switching to the downloaded font when the download completes.

Video demonstration: https://webkit.org/wp-content/uploads/Load2.mov

This new policy minimizes a flash of fallback text for fast-downloading fonts without making long-downloading fonts illegible indefinitely. The best part is that websites don’t have to do anything to adopt this new policy; all font loads will use it by default.

CSS Font Loading API

Different web authors may want different policies for headline fonts or for body fonts. With the CSS Font Loading API, new in WebKit, web authors have complete control over their own font loading policy.

This API exposes two main pieces to JavaScript: the FontFace interface and the FontFaceSet interface. FontFace represents the same concept as a CSS @font-face rule, so it contains information about a font like its url, weight, unicode-range, family, etc. The document’s FontFaceSet contains all the fonts the webpage can use for rendering. Each document has its own FontFaceSet, accessible via document.fonts.

Using these objects to implement a particular font loading policy is straightforward. Once a FontFace has been constructed, the load() method initiates asynchronous downloading of the font data. It returns a Promise that is fulfilled when the download succeeds or rejected when the download fails. By chaining a handler to the returned promise, JavaScript can take action when the download finishes. Therefore, a website could implement a policy of using a fallback font immediately with no timeout by using the following code:

<div id="target" style="font-family: MyFallbackFont;">hamburgefonstiv</div>
let fontFace = new FontFace("MyWebFont", "url('MyWebFont.woff2') format('woff2'), url('MyWebFont.woff') format('woff')");
fontFace.load().then(function(loadedFontFace) {
    document.fonts.add(loadedFontFace);
    document.getElementById("target").style.fontFamily = "MyWebFont";
});

In this example, the element is styled declaratively to use MyFallbackFont. Then, JavaScript runs and starts requesting MyWebFont. If the font download is successful, the newly downloaded font is added to the document’s FontFaceSet and the element’s style is modified to use this new font.
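
The same two objects can express other loading policies as well. As a hedged illustration that is not from the original post (the one-second budget is an arbitrary choice, and the font URL and target element id are reused from the example above), a page could give the download a bounded amount of time before settling for the fallback font:

let fontFace = new FontFace("MyWebFont", "url('MyWebFont.woff2') format('woff2')");
// Reject after one second so the page stops waiting for a slow download.
let timeout = new Promise(function(resolve, reject) {
    setTimeout(function() { reject(new Error("font timeout")); }, 1000);
});
Promise.race([fontFace.load(), timeout]).then(function(loadedFontFace) {
    // The download won the race: switch to the web font.
    document.fonts.add(loadedFontFace);
    document.getElementById("target").style.fontFamily = "MyWebFont";
}, function() {
    // Timed out or failed to download: keep using MyFallbackFont.
});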

WOFF 2

No matter which font loading policy is used, the user’s experience will be best when fonts download quickly. WebKit on macOS Sierra, iOS 10, and GTK supports a new, smaller font file format named “WOFF 2.” The fonts are smaller because they use advanced Brotli compression, which means that WOFF 2 fonts can download significantly faster than the same font represented with other common formats like WOFF, OpenType, and TrueType. WebKit also recognizes the format("woff2") identifier in the src property inside the CSS @font-face rule. Because not all browsers support WOFF 2 yet, it is possible to elegantly fall back to another format if the browser doesn’t support WOFF 2, like so:

@font-face {
    font-family: MyWebFont;
    src: url("MyWebFont.woff2") format("woff2"),
         url("MyWebFont.woff") format("woff");
}

In this example, browsers will use the format CSS function to selectively download whichever fonts they support. Because browsers will prefer fonts appearing earlier in the list over fonts appearing later, browsers encountering this code will always prefer the WOFF 2 font over the WOFF font if they support it.

unicode-range

Bandwidth is expensive (especially to mobile devices), so minimizing or eliminating font downloads is incredibly valuable. WebKit will now honor the unicode-range CSS property for the purposes of font downloading. Consider the following content:

@font-face {
    font-family: MyWebFont;
    src: url("MyWebFont-extended.woff2") format("woff2"),
         url("MyWebFont-extended.woff") format("woff");
    unicode-range: U+110-24F;
}

@font-face {
    font-family: MyWebFont;
    src: url("MyWebFont.woff2") format("woff2"),
         url("MyWebFont.woff") format("woff");
    unicode-range: U+0-FF;
}
<div style="font-family: MyWebFont;">hamburgefonstiv</div>

In this content, none of the characters styled with MyWebFont fall within the unicode-range of the first CSS @font-face rule, which means that there is no reason to download it at all! It’s important to remember, however, that styling any character with a web font will always cause the last matching @font-face to be downloaded in order to correctly size the containing element according to the metrics of the font. Therefore, the most-common CSS @font-face rule should be listed last. Using unicode-range to partition fonts can eliminate font downloads entirely, thereby saving your users’ bandwidth and battery.

Using these techniques can dramatically improve users’ experiences on your website when using web fonts. For more information, contact me at @Litherum, or Jonathan Davis, Apple’s Web Technologies Evangelist, at @jonathandavis or web-evangelist@apple.com.

Planet Mozilla: Firefox 48 Beta 3 Testday Results

Hi everyone!

Last Friday, June 24th, we held Firefox 48 Beta 3 Testday.  It was a successful event (please see the results section below) so a big Thank You goes to everyone involved.

First of all, many thanks to our active contributors: Iryna Thompson, Paarttipaabhalaji, Karthikeya LK, Prasanth P, Ashly Rose Mathew.M, Moin Shaikh, Bhuvana Meenakshi.K, Pushanshu Avinash Sharma, Ilse Macías, Nazir Ahmed Sabbir, Maruf Rahman, Rezaul Huque Nayeem, Md. Rahimul Islam, Akash, Sayed Ibn Masud, Zayed News, Saddam Hossain, Sufi Ahmed Hamim, Md.Majedul islam, Md.Tarikul Islam Oashi, Tanvir Rahman, Sajedul Isalm, Forhad Hossain, Sudipto Krishna Dutta, Mohammad Maruf Islam, Fahim, Khalid Syfullah Zaman, T.M. Sazzad Hossain.

Secondly, a big thank you to all our active moderators.

Results:

  • several testcases executed for the New Awesomebar feature
  • 2 bugs verified: 1217769, 1238181
  • 1 bug triaged: 1281457

We hope to see you all in our next events, all the details will be posted on QMO!

Planet Mozilla: Hieroglyph and Tinkerer Dependencies

In setting up virtualenvs for my slides and blog repos on my new laptop, I’ve been reminded that a variety of Sphinx-based tools require system dependencies as well as the ones in their virtualenvs.

Hieroglyph dependency issues

The error resulting from pip install -r requirements.txt ended with:

Command ".../virtualenv/bin/python2 -u -c
"import setuptools,
tokenize;__file__='/tmp/pip-build-lzbk_r/Pillow/setup.py';exec(compile(getattr(tokenize,
'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))"
install --record /tmp/pip-BNDc_6-record/install-record.txt
--single-version-externally-managed --compile --install-headers
/home/edunham/repos/slides/rustcommunity/v/include/site/python2.7/Pillow"
failed with error code 1 in /tmp/pip-build-lzbk_r/Pillow/

Its fix, from stackoverflow, was:

$ sudo apt-get install libtiff5-dev libjpeg8-dev zlib1g-dev libfreetype6-dev liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev python-tk
$ pip install -r requirements.txt

Tinkerer dependencies, too!

pip install -r requirements.txt over in my blog repo yielded:

Command ".../virtualenv/bin/python2 -u -c "import setuptools,
tokenize;__file__='/tmp/pip-build-NVLSBY/lxml/setup.py';exec(compile(getattr(tokenize,
'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))"
install --record /tmp/pip-qD5QIe-record/install-record.txt
--single-version-externally-managed --compile --install-headers
/home/edunham/repos/site/v/include/site/python2.7/lxml" failed with error code
1 in /tmp/pip-build-NVLSBY/lxml/

The fix is again to install the missing system deps, on Ubuntu:

$ sudo apt-get install libxml2-dev libxslt-dev
$ pip install -r requirements.txt

That’s it! I’m writing this down for SEO on the specific errors at hand, since the first several useful hits are currently stackoverflow.

If you’re a pip developer reading this, please briefly contemplate whether it’d be worthwhile to have some built-in mechanism to translate common dependency errors to the appropriate system package names needed based on the OS on which the command is run.

Planet Mozilla: This Week in Rust 136

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42, llogiq, and brson.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Crate of the Week

User jkcclemens suggested his own bins crate that lets us programmatically create pastebins and is now our Crate of the Week! Thanks, jkcclemens!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

76 pull requests were merged in the last two weeks.

New Contributors

  • Alexander Stocko
  • cgswords
  • Fabian Vogt
  • Joseph Dunne
  • Mitsunori Komatsu
  • Nathan Moos
  • Nikhil Shagrithaya
  • Paul Jarrett

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

<ketralnis> Rust is also really phobic of heap allocations […] <Xion> Yes, Rust encourages everyone to be a full stack developer :)

Thanks to Matt Brubeck for the suggestion.

Submit your quotes for next week!

Planet Mozilla: “Distributed” ER#8 now available!

“Distributed” Early Release #8 is now publicly available, about 6 weeks after the last Early Release came out.

This ER#8 includes a significant reworking and trimming of both Chapter 1 (“The Real Cost of an Office”) and also Chapter 5 (“Organizational Pitfalls to Avoid”). I know that might not sound glamorous but it was a lot of slow, careful, detailed work which I believe makes these chapters better and also helps with the structure of the overall book.

You can buy ER#8 by clicking here, or clicking on the thumbnail of the book cover. Anyone who already bought any of the previous ERs should have already been prompted with a free update to ER#8 – if you didn’t get updated, please let me know so I can investigate! And yes, you’ll get updated when ER#9 comes out.

Thanks again to everyone for their ongoing encouragement and feedback so far. Each piece of great feedback makes me wonder how I missed such obvious errors before and also makes me happy, as each fix helps make this book better. Keep letting me know what you think! It’s important this book be interesting, readable and practical – so if you have any comments, concerns, etc., please email me. Yes, I will read and reply to each email personally! To make sure that any feedback doesn’t get lost or caught in spam filters, please email comments to feedback at oduinn dot com. I track all feedback and review/edit/merge as fast as I can.

Thank you to everyone who has already sent me feedback/opinions/corrections – all really helpful.

John.
=====
ps: For the curious, here is the current list of chapters and their status:

Chapter 1 The Real Cost of an Office – AVAILABLE
Chapter 2 Distributed Teams Are Not New – AVAILABLE
Chapter 3 Disaster Planning – AVAILABLE
Chapter 4 Diversity
Chapter 5 Organizational Pitfalls to Avoid – AVAILABLE
Chapter 6 Physical Setup – AVAILABLE
Chapter 7 Video Etiquette – AVAILABLE
Chapter 8 Own Your Calendar – AVAILABLE
Chapter 9 Meetings – AVAILABLE
Chapter 10 Meeting Moderator – AVAILABLE
Chapter 11 Single Source of Truth
Chapter 12 Email Etiquette – AVAILABLE
Chapter 13 Group Chat Etiquette – AVAILABLE
Chapter 14 Culture, Conflict and Trust
Chapter 15 One-on-Ones and Reviews – AVAILABLE
Chapter 16 Hiring, Onboarding, Firing, Reorgs, Layoffs and other Departures – AVAILABLE
Chapter 17 Bring Humans Together – AVAILABLE
Chapter 18 Career Path – AVAILABLE
Chapter 19 Feed Your Soul – AVAILABLE
Chapter 20 Final Chapter
Appendix A The Bathroom Mirror Test – AVAILABLE
Appendix B How NOT to Work – AVAILABLE
Appendix C Further Reading – AVAILABLE
=====

Planet Mozilla: This Week In Servo 69

In the last week, we landed 84 PRs in the Servo organization’s repositories.

One of our contributors, Florian Duraffourg, landed a patch recently that switched our public domain list matching source and algorithm, resulting in a huge speedup in page load times (~25%!). Shing Lyu tracked down and measured this unexpectedly-large gain through a new automated page load testing performance infrastructure that he has been testing. It compares Servo daily builds against Firefox for page load on a subset of the Alexa Top 1000 sites. Check it out!

We’re excited to announce that Connor Brewster (cbrewster on IRC and ConnorGBrewster on github) is the newest addition to the list of Servo reviewers!

This upcoming week is the final week of June! So, expect the first tech preview of Servo and browser.html to be announced on this blog shortly…

Planning and Status

Our overall roadmap and quarterly goals are available online.

This week’s status updates are here.

Notable Additions

  • nox upgraded our Rust version
  • eddyb landed some great work to migrate us off of use of return_address, which has been removed from Rust
  • nox continued the migration off of syntax plugins to get WebRender building against stable Rust
  • jack fixed linking problems with our CEF build
  • vvuk landed fixes to get more of our Windows dependencies building with cmake and outside of msys
  • SimonSapin has fixed the style crate to nearly build with a stable version of the Rust compiler
  • avadacatavra prevented potential IRCxit by those members of our community with UK keyboards when typing issue numbers in IRC chat
  • jdm fixed a source of undefined behaviour by preventing unwinding from Rust code through C stack frames
  • Ivan Yang added a reload keyboard shortcut
  • carols10cents documented the 0.9 -> 1.0 migration for rust-url users
  • Ms2ger removed the layout->script dependency, enabling better CPU usage while building libscript
  • emilio implemented type-based validation of numerous WebGL values
  • jdm added a signal handler that reports a backtrace if a segfault occurs
  • anesshusa ensured that the packaging-related code doesn’t regress in the future

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

No screenshots this week.

Planet Mozilla: Test

Upgrade test!

Planet Mozilla: Moving forward with Mozilla Participation Leaders

Core mozillians are more important than ever for Mozilla to succeed in 2016, and the Participation team keeps working to provide leadership opportunities and guidance on impactful initiatives. We are centralizing communications in this discourse category.

Leadership cohort in Singapore – Photo by Christos Bacharakis (by-nc-sa)

This is a post originally published on Mozilla Discourse; please add your comments there.

Over the last few months, the Participation team, together with a big group of core mozillians, organized and delivered a set of initiatives and gatherings to support Mozilla’s mission and goals.

Mozfest, Orlando All Hands, Leadership Summit and London All Hands were some of the key events where we worked together, grew as contributors and evolved our leadership and mobilization path inside the community.

Currently we are working on a set of initiatives to keep this work going:

  • RepsNext: The Reps program is evolving to become a volunteer participation platform, really focused on leadership and community mobilization. Being more inclusive and welcoming any core mozillian interested in this growth path is going to be a key focus.
  • We are developing a leadership toolkit to be able to provide concrete resources to improve mozillians’ skills that are going to be important for supporting Mozilla in the coming years.
  • In order to deliver these skills, we are creating a coaching team with the help of new Reps mentors so we can train mozillians to deliver this knowledge to other volunteers and their local communities.
  • Refreshing the participation buffet, in order to provide clear guidance on the focus initiatives that are most relevant to supporting Mozilla this year and how to get involved. This will also be provided through coaching.
  • Systematizing our community gatherings strategy in order to provide skills and action-focused guidance to regional and functional communities around the world, as well as adapting our strategy to take local needs and ideas into consideration.

What’s next?

Beginning in July we are going to start seeing some of the previous ideas being implemented and communicated, and we want core mozillians to be part of it. A lot of mozillians have demonstrated great leadership inside Mozilla and there is no way Mozilla can succeed without their help, ideas and support, and we are inviting them to join the group. We are centralizing communications in this discourse category.

Keep it rocking the free web!

 

Planet Mozilla: ONOS Project Presentation

The event aims to educate the general public about the ONOS project and give them the opportunity to meet members of ON.Lab, the non-profit...

Planet Mozilla: Progress to TenFourFox 45: milestone 2 (plus: get your TALOS on or else, and Let's Engulf Comodo)

After a highly prolonged porting phase, TenFourFox 43 finally starts up, does browsery things and doesn't suck. Changesets are available on SourceForge for your amusement. I am not entirely happy with this release, though; our PowerPC JavaScript JIT required substantial revision, uncovering another bug which will be fixed in 38.10 (this one is an edge case but it's still wrong), and there is some glitch in libnestegg that keeps returning the audio sampling rate on many WebM videos as 0.000. I don't think this is an endian problem because some videos do play and I can't figure out if this is a legitimate bug or a compiler freakout, so right now there is a kludge to assume 22.050kHz when that happens. The video is otherwise parseable and that gets it to play, but I find this solution technically disgusting, so I'm going to ponder it some more in the meantime. We'll see if it persists in 45. On the other hand, we're also able to load the JavaScript Internationalization API now instead of using our compat shim, which should fix several bugs and add-on compatibility issues.

Anyway, the next step is to port 45 using the 43 sets, and that's what I'll be working on over the next several weeks. I'm aiming for the first beta in mid-July, so stay tuned.

For those of you who have been following the Talos POWER8 workstation project (the most powerful and open workstation-class Power Architecture system to date; more info here and here), my contacts inform me that the fish-or-cut-bait deadline is approaching where Raptor needs to determine if the project is financially viable with the interest level so far received. Do not deny me my chance to give them my money for the two machines I am budgeting (a kidneystone) for. Do not forsake me, O my audience. I will find thee and smite thee. Sign up, thou cowards, and make this project a reality. Let's get that Intel crap you don't actually control off thy desks. You can also check out using the Talos to run x86 applications through QEMU, making it the best of both worlds, as demonstrated by a video on their Talos pre-release page.

Last but not least, increasingly sketchy certificate authority and issuer Comodo, already somewhat of a pariah for previously dropping their shorts, has decided to go full scumbag and is trying to trademark "Let's Encrypt." Does that phrase seem familiar to you? It should, because "Let's Encrypt" is (and has been for some time) a Mozilla-sponsored free and automated certificate authority trying to get certificates in the hands of more people so that more websites can be protected by strong encryption. As their FAQ says, "Anyone who owns a domain name can use Let's Encrypt to obtain a trusted certificate at zero cost."

Methinks Comodo is hoping to lawyer Let's Encrypt out of existence because they believe a free certificate issuer will have a huge impact on their business model. Well, yes, that's probably true, which makes me wonder what would happen if Mozilla threatened to pull the Comodo CA root out of Firefox in response. Besides, based on this petulant and almost certainly specious legal action and their previous poor security history, the certificate authority pool could definitely use a little chlorine anyhow.

Planet Mozilla: Project Fear

I’ve been campaigning a bit on the EU Referendum. (If you want to know why I think the UK should leave, here are my thoughts.) Here’s the leaflet my wife and I have been stuffing into letterboxes in our spare moments for the past two weeks:

vote-leave-leaflet

And here’s the leaflet in our area being distributed today by one of the Labour local councillors and the Remain campaign:

remain-leaflet

Says it all.

Planet Mozilla: What’s Up with SUMO – 23rd June

Hello, SUMO Nation!

Did you miss us? WE MISSED YOU! It’s good to be back, even if we had quite a fateful day today… The football fever in Europe reaching new heights, some countries wondering aloud if they want to keep being a part of the EU, and a Platform meeting to inform you about the current state of our explorations. Busy times – let’s dive straight into some updates:

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on the 29th of June!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

Firefox

  • for Android
    • No big news for now.

Once again – it’s good to be back, and we’re looking forward to a great end of June and kick-off in July with you all. Keep rocking the helpful web!

PS. If you’re a football fan, let’s talk about it in our forums!

Planet Mozilla: Web QA Weekly Team Meeting, 23 Jun 2016

They say a Mozilla Web QA team member is the most fearless creature in the world. They say their jaws are powerful enough to crush...

Planet Mozilla: Reps weekly, 23 Jun 2016

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet Mozilla: Implementing Media Queries in an editor

You're happy as a programmer ? You think you know the Web ? You're a browser implementor ? You think dealing with Media Queries is easy ? Do the following first:

Given a totally arbitrary html document with arbitrary stylesheets and arbitrary media constraints, write an algo that gives all h1 a red foreground color when the viewport size is between min and max, where min is a value in pixels (0 indicating no min-width in the media query...) , and max is a value in pixels or infinite (indicating no max-width in the media query).  You can't use inline styles, of course. !important can be used ONLY if it's the only way of adding that style to the document and it's impossible otherwise. Oh, and you have to handle the case where some stylesheets are remote so you're not allowed to modify them because you could not reserialize them :-)

What's hard? Eh:

  • media attributes on stylesheet owner node
  • badly ordered MQs
  • intersecting (but not contained) media queries
  • default styles between MQs
  • remote stylesheets
  • so funny to deal with the CSS OM where you can't insert a rule before or after another rule but have to deal with rule indices...
  • etc.
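
To make the task concrete, here is a deliberately naive CSS OM sketch of the simplest approach (a hedged illustration, not the algorithm BlueGriffon uses): append a single @media rule to the last stylesheet we are allowed to modify. The bullets above are exactly the reasons this falls apart on real documents.

function addRedH1Rule(doc, minPx, maxPx) {
  // In this sketch, Infinity stands for "no max-width constraint".
  var query = "(min-width: " + minPx + "px)";
  if (maxPx !== Infinity)
    query += " and (max-width: " + maxPx + "px)";

  // Pick the last sheet whose rules we can read; cross-origin sheets throw,
  // and remote sheets we couldn't reserialize must be left alone anyway.
  var target = null;
  for (var i = 0; i < doc.styleSheets.length; i++) {
    var sheet = doc.styleSheets[i];
    try { void sheet.cssRules; } catch (e) { continue; }
    if (!sheet.disabled)
      target = sheet;
  }

  // No editable sheet at all: create one, so it comes last in source order.
  if (!target) {
    var styleEl = doc.createElement("style");
    doc.head.appendChild(styleEl);
    target = styleEl.sheet;
  }

  // insertRule is index-based, so append at the end and rely on source order
  // winning; badly ordered or intersecting media queries in the existing
  // sheets are exactly where this naive approach starts to lose.
  var rule = "@media " + query + " { h1 { color: red; } }";
  target.insertRule(rule, target.cssRules.length);
}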

My implementation in BlueGriffon is almost ready. Have fun...

Planet Mozilla: CFPs Made Easier

Check out this post by Lucy Bain about how to come up with an idea for what to talk about at a conference. I blogged last year about how I turn abstracts into talks, as well. Now that the SeaGL CFP is open, it’s time to look in a bit more detail at the process of going from a talk idea to a compelling abstract.

In this post, I’ll walk you through some exercises to clarify your understanding of your talk idea and find its audience, then help you use that information to outline the 7 essential parts of a complete abstract.

Getting ready to write your abstract

Your abstract is a promise about what your talk will deliver. Have you ever gotten your hopes up for a talk based on its abstract, then attended only to hear something totally unrelated? You can save your audience from that disappointment by making sure that you present what your abstract says you will.

For both you and your audience to get the most out of your talk, the following questions can help you refine your talk idea before you even start to write its abstract.

Why do you love this idea?

Start working on your abstract by taking some quick notes on why you’re excited about speaking on this topic. There are no wrong answers! Your reasons might include:

  • Document a topic you care about in a format that works well for those who learn by listening and watching
  • Impress a potential employer with your knowledge and skills
  • Meet others in the community who’ve solved similar problems before, to advise you
  • Recruit contributors to a project
  • Force yourself to finish a project or learn more detail about a tool
  • Save novices from a pitfall that you encountered
  • Travel to a conference location that you’ve always wanted to visit
  • Build your resume
  • Or something else entirely!

Starting out by identifying what you personally hope to gain from giving the talk will help ensure that you make the right promises in your abstract, and get the right people into the room.

What’s your idea’s scope?

Make 2 quick little lists:

  • Topics I really want this presentation to cover
  • Topics I do not want this presentation to cover

Once you think that you have your abstract all sorted out, come back to these lists and make sure that you included enough topics from the first list, and excluded those from the second.

Who’s the conference’s target audience?

Keynotes and single-track conferences are special, but generally your talk does not have to appeal to every single person at the conference.

Write down all the major facts you know about the people who attend the conference to which you’re applying. How young or old might they be? How technically expert or inexperienced? What are their interests? Why are they there?

For example, here are some statements that I can make about the audience at SeaGL:

  • Expertise varies from university students and random community members to long-time contributors who’ve run multiple FOSS projects.
  • Age varies from a few school-aged kids (usually brought by speakers and attendees) to retirees.
  • The audience will contain some long-term FOSS contributors who don’t program, and some relatively expert programmers who might have minimal involvement in their FOSS communities
  • Most attendees will be from the vicinity of Seattle. It will be some attendees’ first tech conference. A handful of speakers are from other parts of the US and Canada; international attendees are a tiny minority.
  • The audience comes from a mix of socioeconomic backgrounds, and many attendees have day jobs in fields other than tech.
  • Attendees typically come to SeaGL because they’re interested in FOSS community and software.

Where’s your niche?

Now that you’ve taken some guesses about who will be reading your abstract, think about which subset of the conference’s attendees would get the most benefit out of the topic that you’re planning to talk about.

Write down which parts of the audience will get the most from your talk – novices to open source? Community leaders who’ve found themselves in charge of an IRC channel but aren’t sure how to administer it? Intermediate Bash users looking to learn some new tricks?

If your talk will appeal to multiple segments of the community (developers interested in moving into DevOps, and managers wondering what their operations people do all day?), write one question that your talk will answer for each segment.

You’ll use this list to customize your abstract and help get the right people into the room for your talk.

Still need an idea?

Conferences with as broad an audience as SeaGL often offer an introductory track to help enthusiastic newcomers get up to speed. If you have intermediate skills in a technology like Bash, Git, LaTeX, or IRC, offer an introductory talk to help newbies get started with it! Can you teach a topic that you learned recently in a way that’s useful to newbies?

If you’re an expert in a field that’s foreign to most attendees (psychology? beekeeping? Cray Supercomputer assembly language?), consider an intersection talk: “What you can learn from X about Y”. Can you combine your hobby, background, or day job with a theme from the conference to come up with something unique?

The Anatomy of an Abstract

There are many ways to structure a good abstract. Here’s how I like to structure them:

  • Set the scene with an introductory sentence that reminds your target audience of your topic’s relevance to them. Some of mine have included:

    • “Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.”
    • “Git is the most popular source code management and version control system in the open source community.”
    • “When you’re new to programming, or self-taught with an emphasis on those topics that are directly relevant to your current project, it’s easy to skip learning about analyzing the complexity of algorithms.”
  • Ask some questions, which the talk promises to answer. These questions should be asked from the perspective of your target audience, which you identified earlier. This is the least essential piece of an abstract, and can be skipped if you make sure your exposition clearly shows that you understand your target audience in some other way. Here are a couple of questions I’ve used in abstracts that were accepted to conferences:

    • “Do you know how to control what information people can discover about you on an IRC network?”
    • “Is the project of your dreams ignoring your pull requests?”
  • Drop some hints about the format that the talk will take. This shows the selection committee that you’ve planned ahead, and helps audience members select sessions that’re a good fit for their learning styles. Useful words here include:

    • “Overview of”
    • “Case study”
    • “Demonstrations of”
    • “Deep dive into”
    • “Outline X principles for”
    • “Live coding”
  • Identify what knowledge the audience will need to get the talk’s benefit, if applicable. Being specific about this helps welcome audience members who’re undecided about whether the talk is applicable to them. Useful phrases include:

    • “This talk will assume no background knowledge of...”
    • “If you’ve used ____ to ____, ...”
    • “If you’ve completed the ____ tutorial...”
  • State a specific benefit that audience members will get from having attended the talk. Benefits can include:

    • “Halve your Django website’s page load times”
    • “Get help on IRC”
    • “Learn from ____‘s mistakes”
    • “Ask the right questions about ____”
  • Reinforce and quantify your credibility. If you’re presenting a case study into how your company deployed a specific tool, be sure to mention your role on the team! For instance, you might say:

    • “Presented by [the original author | a developer | a maintainer | a long-term user] of [the project], this talk will...”
  • End with a recap of the talk’s basic promise, and welcome audience members to attend.

These pieces of information don’t have to each be in their own sentence – for instance, the credibility reinforcement and talk format hint often fit together nicely.

Once you’ve got all of the essential pieces of an abstract, munge them around until it sounds like concise, fluent English. Get some feedback on helpmeabstract.com if you’d like assistance!

Give it a title

Naming things is hard. Here are some assorted tips:

  • Keep it under about 50 characters, or it might not fit on the program
  • Be polite. Rude puns or metaphors might be eye-catching, but probably violate your conference or community’s code of conduct, and will definitely alienate part of your prospective audience.
  • For general talks, it’s hard to go wrong with “Intro to ___” or “___ for ___ users”.
  • The form “[topic]: A [history|overview|melodrama|case study|love story]” is generally reliable. Well, I’m kidding about “melodrama” and “love story”... Mostly.
  • Clickbait is underhanded, but it works. “___ things I wish I’d known about ___”, anyone?

Good luck, and happy conferencing!

Planet Mozilla: Friend of Add-ons: Yuki Hiroshi

Please meet our newest Friend of Add-ons: Yuki “Piro” Hiroshi. A longtime add-on developer with 37 extensions and counting (he’s most proud of Tree Style Tab and Second Search), Hiroshi also recently filed more than two dozen high-impact WebExtensions bugs.

Hiroshi recently recounted his experience porting one of his XUL add-ons to WebExtensions in the hopes that he could help support fellow add-on developers through the transition. He likens XUL to an “experimental laboratory” that over the past decade allowed us to explore the possibilities of a customized web browser. But now, Hiroshi says, we need to “go for better security and stability” and embrace forward-thinking APIs that will cater to building richer user experiences.

While add-ons technology is evolving, Hiroshi’s motivation to create remains the same. “It’s an emotional reason,” he says, which took root when he first discovered the power of a Gecko engine that allowed him to transform himself from being a mere hobbyist to a true developer. “Mozilla is a symbol of liberty for me,” Hiroshi explains. “It’s one of the legends of the early days of the web.”

When he’s not authoring add-ons, Hiroshi enjoys reading science fiction and manga. A recent favorite is The Hyakumanjo Labyrinth, a “bizarre adventure story” that takes place on an infinity field beyond space and time within an old Japanese apartment building.

Do you contribute to AMO in some fashion? If so, don’t forget to add your contributions to our Recognition page!

Planet Mozilla: PlayCanvas Is Impressive

I've been experimenting on my children with different ways to introduce them to programming. We've tried Stencyl, Scratch, JS/HTML, Python, and CodeAcademy with varying degrees of success. It's difficult because, unlike when I learned to program 30 years ago, it's hard to quickly get results that compare favourably with a vast universe of apps and content they've already been exposed to. Frameworks and engines face a tradeoff between power, flexibility and ease-of-use; if it's too simple then it's impossible to do what you want to do and you may not learn "real programming", but if it's too complex then it may just be too hard to do what you want to do or you won't get results quickly.

Recently I discovered PlayCanvas and so far it looks like the best approach I've seen. It's a Web-based 3D engine containing the ammo.js (Bullet) physics engine, a WebGL renderer, a WYSIWYG editor, and a lot more. It does a lot of things right:

  • Building in a physics engine, renderer and visual editor gives a very high level of abstraction that lets people get impressive results quickly while still being easy to understand (unlike, say, providing an API with 5000 interfaces, one of which does what you want). Stencyl does this, but the other environments I mentioned don't. But Stencyl is only 2D; supporting 3D adds significant power without, apparently, increasing the user burden all that much.
  • Being Web-based is great. There's no installation step, it works on all platforms (I guess), and the docs, tutorials, assets, forkable projects, editor and deployed content are all together on the Web. I suspect having the development platform be the same as the deployment platform helps. (The Stencyl editor is Java but its deployed games are not, so WYS is not always WYG.)
  • Performance is good. The development environment works well on a mid-range Chromebook. Deployed games work on a new-ish Android phone.
  • So far the implementation seems robust. This is really important; system quirks and bugs make learning a lot harder, because novices can't distinguish their own bugs from system bugs.
  • The edit-compile-run cycle is reasonably quick, at least for small projects. Slow edit-compile-run cycles are especially bad for novices who'll be making a lot of mistakes.
  • PlayCanvas is programmable via a JS component model. You write JS components that get imported into the editor and are then attached to scene-graph entities (see the sketch after this list). Components can have typed parameters that appear in the editor, so it's pretty easy to create components reusable by non-programmers. However, for many behaviors (e.g. autonomously-moving objects) you probably need to write code --- which is a good thing. It's a bit harder than Scratch/Stencyl but since you're using JS you have more power and develop more reusable skills, and cargo-culting and tweaking scripts works well. You actually have access to the DOM if you want although mostly you'd stick to the PlayCanvas APIs. It looks like you could ultimately do almost anything you want, e.g. add multiplayer support and voice chat via WebRTC.
  • PlayCanvas has WebVR support though I haven't tried it.
  • It's developed on github and MIT licensed so if something's broken or missing, someone can step in and fix it.

So far I'm very impressed and my child is getting into it.

Planet Mozilla: Add-ons Update – Week of 2016/06/22

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues

In the past 3 weeks, 1432 listed add-ons were reviewed:

  • 1354 (95%) were reviewed in fewer than 5 days.
  • 45 (3%) were reviewed between 5 and 10 days.
  • 33 (2%) were reviewed after more than 10 days.

There are 61 listed add-ons awaiting review.

You can read about the recent improvements in the review queues here.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.

Compatibility Communications

Most of you should have received an email from us about the future compatibility of your add-ons. You can use the compatibility tool to enter your add-on ID and get some info on what we think is the best path forward for your add-on. This tool only works for listed add-ons.

To ensure long-term compatibility, we suggest you start looking into WebExtensions, or use the Add-ons SDK and try to stick to the high-level APIs. There are many XUL add-ons that require APIs that aren’t available in either of these options, which is why we ran a survey so we know which APIs we should look into adding to WebExtensions. You can read about the survey results here.

We’re holding regular office hours for Multiprocess Firefox compatibility, to help you work on your add-ons, so please drop in on Tuesdays and chat with us!

Firefox 48 Compatibility

The compatibility blog post for Firefox 48 is up, and the bulk validation will be run shortly.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Extension Signing

The wiki page on Extension Signing has information about the timeline, as well as responses to some frequently asked questions. The current plan is to remove the signing override preference in Firefox 48.

Planet WebKit: Release Notes for Safari Technology Preview Release 7

Safari Technology Preview Release 7 is now available for download. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. Release 7 of Safari Technology Preview covers WebKit revisions 201541–202085.

JavaScript

  • Implemented options argument to addEventListener (r201735, r201757)
  • Updated JSON.stringify to correctly transform numeric array indices (r201674)
  • Improved the performance of Encode operations (r201756)
  • Addressed issues with Date setters for years outside of 1900-2100 (r201586)
  • Fixed an issue where reusing a function name as a parameter name threw a syntax error (r201892)
  • Added the error argument for window.onerror event handlers (r202023)
  • Improved performance for accessing dictionary properties (r201562)
  • Updated Proxy.ownKeys to match recent changes to the spec (r201672)
  • Prevented RegExp unicode parsing from reading an extra character before failing (r201714)
  • Updated SVGs to report their memory cost to the JavaScript garbage collector (r201561)
  • Improved the sampling profiler to protect itself against certain forms of sampling bias that arise due to the sampling interval being in sync with some other system process (r202021)
  • Fixed global lexical environment variables scope for functions created using the Function constructor (r201628)
  • Fixed parsing super when the default parameter is an arrow function (r202074)
  • Added support for trailing commas in function parameters and arguments (r201725)

CSS

  • Added the unprefixed version of the pseudo element ::placeholder (r202066)
  • Fixed a crash when computing the style of a grid with only absolute-positioned children (r201919)
  • Fixed computing a grid container’s height by accounting for the horizontal scrollbar (r201709)
  • Fixed placing positioned items on the implicit grid (r201545)
  • Fixed rendering for the text-decoration-style values: dashed and dotted (r201777)
  • Fixed support for using border-radius and backdrop-filter properties together (r201785)
  • Fixed clipping for border-radius with different width and height (r201868)
  • Fixed CSS reflections for elements with WebGL (r201639)
  • Fixed CSS reflections for elements with a backdrop-filter property (r201648)
  • Improved the Document’s font selection lifetime in preparation for the CSS Font Loading API (r201799)
  • Improved memory management for CSS value parsing (r201608)
  • Improved font face rule handling for style change calculations (r201971, r202085)
  • Fixed multiple selector rule behavior for keyframe animations (r201818)
  • Fixed applying CSS variables correctly for writing-mode properties (r201875)
  • Added experimental support for spring() based CSS animations (r201759)
  • Changed the initial value of background-color to transparent per specs (r201666)

Web APIs

Web Inspector

  • Added ⌘T keyboard shortcut to open the New Tab tab (r201692, r201762)
  • Added the ability to show and hide columns in data grid tables (r202009, r202081)
  • Fixed an error when trying to delete nodes with children (r201843)
  • Added a Top Functions view for Call Trees in the JavaScript & Events timeline (r202010, r202055)
  • Added gaps to the overview and category graphs in the Memory timeline where discontinuities exist in the recording (r201686)
  • Improved the performance of DOM tree views (r201840, r201833)
  • Fixed filtering to apply to new records added to the data grid (r202011)
  • Improved snapshot comparisons to always compare the later snapshot to the earlier snapshot no matter what order they were selected (r201949)
  • Improved performance when processing many DOM.attributeModified messages (r201778)
  • Fixed the 60fps guideline for the Rendering Frames timeline when switching timeline modes (r201937)
  • Included the exception stack when showing internal errors in Web Inspector (r202025)
  • Added ⌘P keyboard shortcut for quick open (r201891)
  • Removed Text → Content subsection from the Visual Styles Sidebar when not necessary (r202073)
  • Show <template> content that should not be hidden as Shadow Content (r201965)
  • Fixed elements in the Elements tab losing focus when selected by the up or down key (r201890)
  • Enabled combining diacritic marks in input fields in Web Inspector (r201592)

Media

  • Prevented double-painting the outline of a replaced video element (r201752)
  • Properly prevented video.play() for video.src="file" with audio user gesture restrictions in place (r201841)
  • Prevented showing the caption menu if the video has no selectable text or audio tracks (r201883)
  • Improved performance of HTMLMediaElement.prototype.canPlayType that was accounting for 250–750ms first loading theverge.com (r201831)
  • Fixed inline media controls to show PiP and fullscreen buttons (r202075)

Rendering

  • Fixed a repaint issue with vertical text in an out-of-flow container (r201635)
  • Show text in a placeholder font while downloading the specified font (r201676)
  • Fixed rendering an SVG in the correct vertical position when no vertical padding is applied, and in the correct horizontal position when no horizontal padding is applied (r201604)
  • Fixed blending of inline SVG elements with transparency layers (r202022)
  • Fixed display of hairline borders on 3x displays (r201907)
  • Prevented flickering and rendering artifacts when resizing the web view (r202037)
  • Fixed logic to trigger new layout after changing canvas height immediately after page load (r201889)

Bug Fixes

  • Fixed an issue where Find on Page would show too many matches (r201701)
  • Exposed static text if form label text only contains static text (r202063)
  • Added Origin header for CORS requests on preloaded cross-origin resources (r201930)
  • Added support for the upgrade-insecure-requests (UIR) directive of Content Security Policy (r201679, r201753)
  • Added proper element focus and caret destination for keyboard users activating a fragment URL (r201832)
  • Increased disk cache capacity when there is lots of free space (r201857)
  • Prevented hangs during synchronous XHR requests if a network session doesn’t exist (r201593)
  • Fixed the response for a POST request on a blob resource to return a “network error” instead of HTTP 500 response (r201557)
  • Restricted HTTP/0.9 responses to default ports and cancelled HTTP/0.9 resource loads if the document was loaded with another HTTP protocol (r201895)
  • Fixed parsing URLs containing tabs or newlines (r201740)
  • Fixed cookie validation in private browsing (r201967)
  • Provided memory cache support for the Vary header (r201800, r201805)

Planet Mozilla: Mozilla Awards $385,000 to Open Source Projects as part of MOSS “Mission Partners” Program

For many years people with visual impairments and the legally blind have paid a steep price to access the Web on Windows-based computers. The market-leading software for screen readers costs well over $1,000. The high price is a considerable obstacle to keeping the Web open and accessible to all. The NVDA Project has developed an open source screen reader that is free to download and to use, and which works well with Firefox. NVDA aligns with one of the Mozilla Manifesto’s principles: “The Internet is a global public resource that must remain open and accessible.”

That’s why, at Mozilla, we have elected to give the project $15,000 in the inaugural round of our Mozilla Open Source Support (MOSS) “Mission Partners” awards. The award will help NVDA stay compatible with the Firefox browser and support a long-term relationship between our two organizations. NVDA is just one of eight grantees in a wide range of key disciplines and technology areas that we have chosen to support as part of the MOSS Mission Partners track. This track financially supports open source software projects doing work that meaningfully advances Mozilla’s mission and priorities.

Giving Money for Open Source Accessibility, Privacy, Security and More

Aside from accessibility, security and privacy are common themes in this set of awards. We are supporting several secure communications tools, a web server which only works in secure mode, and a distributed, client-side, privacy-respecting search engine. The set is rounded out with awards to support the growing Rust ecosystem and promote open source options for the building of compelling games on the Web. (Yes, games. We consider games to be a key art-form in this modern era, which is why we are investing in the future of Web games with WebAssembly and Open Web Games.)

MOSS is a continuing program. The Mission Partners track has a budget for 2016 of around US$1.25 million. The first set of awards listed below total US$385,000 and we look forward to supporting more projects in the coming months. Applications remain open both for Mission Partners and for the Foundational Technology track (for projects creating software that Mozilla already uses or deploys) on an ongoing basis.

We are greatly helped in evaluating applications and making awards by the MOSS Committee. Many thanks again to them.

And The Winners Are….

The first eight awardees are:

Tor: $152,500. Tor is a system for using a distributed network to communicate anonymously and without being tracked. This award will be used to significantly enhance the Tor network’s metrics infrastructure so that the performance and stability of the network can be monitored and improvements made as appropriate.

Tails: $77,000. Tails is a secure-by-default live operating system that aims at preserving the user’s privacy and anonymity. This award will be used to implement reproducible builds, making it possible for third parties to independently verify that a Tails ISO image was built from the corresponding Tails source code.


Caddy: $50,000. Caddy is an HTTP/2 web server that uses HTTPS automatically and by default via Let’s Encrypt. This award will be used to add a REST API, web UI, and new documentation, all of which make it easier to deploy more services with TLS.

Mio: $30,000. Mio is an asynchronous I/O library written in Rust. This award will be used to make ergonomic improvements to the API and thereby make it easier to build high performance applications with Mio in Rust.


DNSSEC/DANE Chain Stapling: $25,000. This project is standardizing and implementing a new TLS extension for transport of a serialized DNSSEC record set, to reduce the latency associated with DANE and DNSSEC validation. This award will be used to complete the standard in the IETF and build both a client-side and a server-side implementation.


Godot Engine: $20,000. Godot is a high-performance multi-platform game engine which can deploy to HTML5. This award will be used to add support for Web Sockets, WebAssembly and WebGL 2.0.


PeARS: $15,500. PeARS (Peer-to-peer Agent for Reciprocated Search) is a lightweight, distributed web search engine which runs in an individual’s browser and indexes the pages they visit in a privacy-respecting way. This award will permit face-to-face collaboration among the remote team and bring the software to beta status.


NVDA: $15,000. NonVisual Desktop Access (NVDA) is a free, open source screen reader for Microsoft Windows. This award will be used to make sure NVDA and Firefox continue to work well together as Firefox moves to a multi-process architecture.

This is only the beginning. Stay tuned for more award announcements as we allocate funds. Open Source is a movement that is only growing, both in numbers and in importance. Operating in the open makes for better security, better accessibility, better policy, better code and, ultimately, a better world. So if you know any projects whose work furthers the Mozilla Mission, send them our way and encourage them to apply.

Planet MozillaHappy BMO Push Day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1281189] Modal View Date field is Glitched after Setting Needinfo

discuss these changes on mozilla.tools.bmo.


Planet MozillaWhat We Learned from the Government Digital Service Design Team

Toward the end of Mozilla’s All-Hands Meeting in London last week, about a dozen members of the Firefox UX team paid a visit to the main…

Planet MozillaTriage translations by author in Pontoon

A few months ago we rolled out bulk actions in Pontoon, allowing you to perform various operations on multiple strings at the same time. Today we’re introducing a new string filter, taking mass operations a step further.

From now on you can filter translations by author, which simplifies tasks like triaging suggestions from a particular translator. The new filter is especially useful in combination with bulk actions.

For example, you can delete all suggestions submitted by Prince of Nigeria, because they are spam. Or approve all suggestions from Mia Müller, who was just granted Translator permission and was previously unable to submit approved translations.

See how to filter by translation author in the video.

P.S.: Gašper, don’t freak out. I didn’t actually remove your translations.

Planet Mozillasyscast discussion on curl and life

I sat down and talked curl, HTTP, HTTP/2, IETF, the web, Firefox and various internet subjects with Mattias Geniar on his podcast the syscast the other day.

Planet MozillaMozilla H1 2016

Mozilla H1 2016 What did Mozilla accomplish in the first half of 2016? Here's the mind-boggling list in rapid-fire review.

Planet MozillaMartes mozilleros, 21 Jun 2016

Martes mozilleros Bi-weekly meeting to talk (in Spanish) about the state of Mozilla, the community and its projects.

Planet MozillaHappy BMO Push Day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1277600] Custom field “Crash Signature” link is deprecated
  • [1274757] hourly whine being run every 15 minutes
  • [1278592] “status” label disappears when changing the resolution in view mode
  • [1209219] Add an option to the disable the “X months ago” format for dates

discuss these changes on mozilla.tools.bmo.


Planet MozillaJoining the U.S. Digital Service

I’ve never worked in government before – or even considered it until I read Dan Portillo’s blog post when he joined the U.S. Digital Service. Mixing the technical skills and business tactics honed in Silicon Valley with the domain-specific skills of career government employees is a brilliant way to solve long-standing complex problems in the internal mechanics of government infrastructure. Since their initial work on healthcare.gov, they’ve helped out at the Veterans Administration, Dept of Education and IRS, to mention just a few public examples. Each of these solutions has a material impact on real humans, every single day.

Building Release Engineering infrastructure at scale, in all sorts of different environments, has always been interesting to me. The more unique the situation, the more interesting. The possibility of doing this work, at scale, while also making a difference to the lives of many real people made me stop, ask a bunch of questions and then apply.

The interviews were the most thorough and detailed of my career so far, and the consequence of this became clear once I started working with other USDS folks – they are all super smart, great at their specific field, unflappable when suddenly faced with unimaginable projects and downright nice, friendly people. These are not just “nice to have” attributes – they’re essential for the role and you can instantly see why once you start.

The range of skills needed is staggering. In the few weeks since I started, projects I’ve been involved with have involved some combination of: Ansible, AWS, Cobol, GitHub, NewRelic, Oracle PL/SQL, nginx, node.js, PowerBuilder, Python, Ruby, REST and SAML. All while setting up fault tolerant and secure hybrid physical-colo-to-AWS production environments. All while meeting with various domain experts to understand the technical and legal constraints behind why things were done in a certain way and also to figure out some practical ideas of how to help in an immediate and sustainable way. All on short timelines – measured in days/weeks instead of years. In any one day, it is not unusual to jump from VPN configurations to legal policy to branch merging to debugging intermittent production alerts to personnel discussions.

Being able to communicate effectively up-and-down the technical stack and also the human stack is tricky, complicated and also very very important to succeed in this role. When you see just how much the new systems improve people’s lives, the rewards are self-evident, invigorating and humbling – kinda like the view walking home from the office – and I find myself jumping back in to fix something else. This is very real “make a difference” stuff and is well worth the intense long days.

Over the coming months, please be patient with me if I contact you looking for help/advice – I may very well be fixing something crucial for you, or someone you know!

If you are curious to find out more about USDS, feel free to ask me. There is a lot of work to do (before starting, I was advised to get sleep!) and yes, we are hiring (for details, see here!). I suspect you’ll find it is the hardest, most rewarding job you’ve ever had!

John.

Planet Mozillamicroformats.org at 11

[Meme birthday card: excited girl holding a large microformats logo, captioned “ERMERGERD!!! HERPER BERTHDER!!!”] Thanks to Julie Anne Noying for the meme birthday card.


In this day and age of web frameworks that rise and fall like seasonal fashion displays, bait-and-break APIs, and sudden site-deaths, it’s nothing short of incredible that we’ve been able to continue evolving, improving, and growing microformats use for 11 years. All that incremental work has produced quite a lot since a year ago. Most recently, we’ve made great progress with iterating, publishing, and deploying microformats2 support.

I believe microformats.org has survived as a community, and microformats as a technology, by continuing to focus on solving smaller, simpler problems first, and then iterating only as needed with real world use-cases. It’s an approach I think works as a good starting point for nearly any project. -Tantek


10,000s of microformats2 sites and now 10 microformats2 parsers

The past year saw a huge leap in the number of sites publishing microformats2, from 1000s to now 10s of thousands of sites, primarily by adoption in the IndieWebCamp community, and especially the excellent Known publishing system and continually improving WordPress plugins & themes.

New modern microformats2 parsers continue to be developed in various languages, and this past year, four new parsing libraries (in three different languages) were added, almost doubling our previous set of six (in five different languages) and bringing our year 11 total to 10 microformats2 parsing libraries available in 8 different programming languages.

microformats2 parsing spec updates

The microformats2 parsing specification has made significant progress in the past year, all of it incremental iteration based on real world publishing and parsing experience, each improvement discussed openly, and tested with real world implementations. The microformats2 parsing spec is the core of what has enabled even simpler publishing and processing of microformats.

The specification has reached a level of stability and interoperability where fewer issues are being filed, and those that are being filed are in general more and more minor, although once in a while we find some more interesting opportunities for improvement.

We reached a milestone two weeks ago of resolving all outstanding microformats2 parsing issues thanks to Will Norris leading the charge with a developer spec hacking session at the recent IndieWeb Summit where he gathered parser implementers and myself (as editor) and walked us through issue by issue discussions and consensus resolutions. Some of those still require minor edits to the specification, which we expect to complete in the next few days.

One of the meta-lessons we learned in that process is that the wiki really is less suitable for collaborative issue filing and resolving, and as of today are switching to using a GitHub repo for filing any new microformats2 parsing issues.

more microformats2 parsers

The number of microformats2 parsers in different languages continues to grow, most of them with deployed live-input textareas so you can try them on the web without touching a line of parsing code or a command line! All of these are open source (repos linked from their sections), unless otherwise noted. These are the new ones:

The Java parsers are a particularly interesting development as one is part of the upgrade to Apache Any23 to support microformats2 (thanks to Lewis John McGibbney). Any23 is a library used for analysis of various web crawl samples to measure representative use of various forms of semantic markup.

The other Java parser is mf2j, an early-stage Java microformats2 parser, created by Kyle Mahan.

The Elixir, Haskell, and Java parsers add to our existing in-development parser libraries in Go and Ruby. The Go parser in particular has recently seen a resurgence in interest and improvement thanks to Will Norris.

These in-development parsers add to existing production parsers, that is, those being used live on websites to parse and consume microformats for various purposes:

As with any open source projects, tests, feedback, and contributions are very much welcome! Try building the production parsers into your projects and sites and see how they work for you.
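
If you want to see what the canonical parsed shape looks like, here is a minimal sketch using the Python mf2py parser (picked here purely as an example of my own; any microformats2 parser returns the same JSON-style structure):

# Minimal sketch (assumes the Python mf2py package: pip install mf2py).
import mf2py

html = """
<article class="h-entry">
  <h1 class="p-name">Eleven years of microformats</h1>
  <a class="u-url" href="https://example.com/11-years">permalink</a>
  <time class="dt-published" datetime="2016-06-20">20 June 2016</time>
</article>
"""

parsed = mf2py.parse(doc=html)          # canonical microformats2 JSON structure
for item in parsed["items"]:
    if "h-entry" in item["type"]:
        props = item["properties"]
        print(props["name"][0], props["url"][0], props["published"][0])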

Still simpler, easier, and smaller after all these years

Usually technologies (especially standards) get increasingly complex and more difficult to use over time. With microformats we have been able to maintain (and in some cases improve) their simplicity and ease of use, and continue to this day to get testimonials saying as much, especially in comparison to other efforts:

…hmm, looks like I should use a separate meta element: https://schema.org/startDate .
Man, Schema is verbose. @microformats FTW!

On the broader problem of schema.org verbosity (no matter the syntax), Kevin Marks wrote a very thorough blog post early in the past year:

More testimonials:

I still prefer @microformats over microdata
* * *
@microformats are easier to write, easier to maintain and the code is so much smaller than microdata.
* * *
I am not a big fan of RDF, semanticweb, or predefined ontologies. We need something lightweight and emergent like the microformats

This last testimonial really gets at the heart of one of the deliberate improvements we have made to iterating on microformats vocabularies in particular.

evolving h-entry

We have had an implementation-driven and implementation-tested practice for the microformats2 parsing specification for quite some time.

More and more we are adopting a similar approach to growing and evolving microformats vocabularies like h-entry.

We have learned to start vocabularies as minimal as possible, rather than start with everything you might want to do. That “start with everything you might want” is a common theory-first approach taken by a-priori vocabularies or entire "predefined ontologies" like schema.org's 150+ objects at launch, very few of which (single digits?) Google or anyone bothers to do anything with, a classic example of premature overdesign, of ignoring YAGNI.

With h-entry in particular, we started with an implementation filtered subset of hAtom, and since then have started documenting new properties through a few deliberate phases (which helps communicate to implementers which are more experimental or more stable):

  1. Proposed Additions – when someone proposes a property, gets some sort of consensus among their community peers, and perhaps gets one more person beyond themselves implementing it in the wild (e.g. as the IndieWebCamp community does), it's worth capturing it as a proposed property to communicate that this work is happening between multiple people, and that feedback, experimentation, and iteration are desired.
  2. Draft Properties - when implementations begin to consume proposed properties and do something explicit with them, a positive reinforcement feedback loop has started and it makes sense to indicate that such a phase change has occurred by moving those properties to "draft". There is growing activity around those properties, and thus this should be considered a last call of sorts for any non-trivial changes, which get harder to make with each new implementation.
  3. Core Properties - these properties have gained so much publishing and consuming support that they are for all intents and purposes stable. Another phase change has occurred: it would be much harder to change them (too many implementations to coordinate) than to keep them the same, and thus their stability has been determined by real world market adoption.

The three levels here, proposed, draft, and core, are merely "working" names; if you have a better idea of what to call these three phases, by all means propose it.

In h-entry in particular, it's likely that some of the draft properties are now mature (implemented) enough to move them to core, and some of the proposed properties have gained enough support to move to draft. The key to making this happen is finding and citing documentation of such implementation and support. Anyone can speak up in the IRC channel etc. and point out such properties that they think are ready for advancement.

How we improve moving forward

We have made a lot of progress and have much better processes than we did even a year ago, however I think there’s still room for improvement in how we evolve both microformats technical specifications like the microformats2 parsing spec, and in how we create and improve vocabularies.

It’s pretty clear that to enable innovation we have to find ways of encouraging constructive experimentation, and yet we also need a way of indicating what is stable vs in-progress. For both of those we have found that real world implementations provide both a good focusing mechanism and a good way to test experiments.

In the coming year I expect we will find even better ways to explain these methods, in the hopes that others can use them in their efforts, whether related to microformats or in completely different standards efforts. For now, let’s appreciate the progress we’ve made in the past year from publishing sites, to parsing implementations, from process improvements, to continuously improving living specifications. Here's to year 12.

Also published on: microformats.org.

Planet Mozillacurl user survey results 2016

The annual curl user poll was up 11 days from May 16 to and including May 27th, and it has taken me a while to summarize and put together everything into a single 21 page document with all the numbers and plenty of graphs.

Full 2016 survey analysis document

The conclusion I’ve drawn from it: “We’re not done yet”.

Here’s a bonus graph from the report, showing what TLS backends people are using with curl in 2016 and 2015:

[Graph: TLS backends used with curl in 2016 and 2015]

Dev.OperaMaking progressive web apps even better: ambient badging and “pop into browser”

[Figure: Add to home screen badge in the URL bar]

We’re excited to release a Labs build of Opera for Android with support for two experimental features that enhance the discoverability and use of progressive web apps.

Ambient badging

Earlier this month, Alex Russell wrote:

Wouldn’t it be great if there were a button in the URL bar that appeared whenever you landed on a PWA that you could always tap to save it to your homescreen?

We have been exploring this idea for a while, and are previewing an early version of this idea in this Labs build.

If you load a site that passes the criteria to qualify as a progressive web app, a small phone icon is shown to the left of the URL bar, labeling it as such. Note that this is different from the “add to home screen” install banner, which requires a user engagement check.

Give it a spin on one of the sites listed on pwa.rocks and let us know how it goes.

Pop into browser

[Video: the “pop into browser” gesture in action]

A few weeks ago, there were some discussions around the lack of visibility of URLs in progressive web apps. What to do for instance if you want to find out the URL of a page inside a progressive web app?

Jeremy Keith tweeted:

I want people to be able to copy URLs. I want people to be able to hack URLs. I’m not ashamed of my URLs …I’m downright proud.

Also here, we’ve been doing some explorations: initially, we thought we could surface the URL by requiring the user to long-press anywhere in the web app and show it in a context menu. However, after further investigations, this turned out to be harder than expected — e.g. what to do with form fields, where context menus serve a different purpose? Then we had the idea to somehow connect it to the “pull-to-refresh” spinner, as a secondary gesture to the left or right. You can see in the video how this works. Quite useful, and it’s discoverable, without getting in the way.

These are just early proposals, and we may or may not include them in a final Opera for Android build. In the meanwhile, we’re happy to hear your feedback: give it a spin and let us know what you think.

Planet MozillaMy closing keynote at Awwwards NYC 2016: A New Hope – the web strikes back

Last week I was lucky enough to give the closing keynote at the Awwwards Conference in New York.

serviceworker beats appcache

Following my current fascination, I wanted to cover the topic of Progressive Web Apps for an audience that is not too technical, and also very focused on delivering high-fidelity, exciting and bleeding edge experiences on the web.

Getting slightly too excited about my Star Wars-based title, I went a bit overboard with bastardising Star Wars quotes in the slides, but I managed to cover a lot of the why of progressive web apps and how it is a great opportunity right now.

I covered:

  • The web as an idea and its inception: independent, distributed and based on open protocols
  • The power of links
  • The horrible environment that was the first browser wars
  • The rise of standards as a means to build predictable, future-proof products
  • How we became too dogmatic about standards
  • How this led to rebelling developers using JavaScript to build everything
  • Why this is a brittle environment and a massive bet on things working flawlessly on our users’ computers
  • How we never experience this as our environments are high-end and we’re well connected
  • How we defined best practices for JavaScript, like Unobtrusive JavaScript and defensive coding
  • How libraries and frameworks promise to fix all our issues and we’ve become dependent on them
  • How a whole new generation of developers learned development by copying and pasting library-dependent code on Stackoverflow
  • How this, among other factors, led to a terribly bloated web full of multi-megabyte web sites littered with third party JavaScript and library code
  • How the rise of mobile and its limitations makes for a terrible environment for those to run in
  • How native apps were heralded as the solution to that
  • How we retaliated by constantly repeating that the web will win out in the end
  • How we failed to retaliate by building web-standard based apps that played by the rules of native – an environment where the deck was stacked against browsers
  • How right now our predictions partly came true – the native environments and closed marketplaces are failing to deliver right now. Users on mobile use 5 apps and download on average not a single new one per month
  • How users are sick of having to jump through hoops to try out some new app and having to lock themselves in a certain environment
  • How the current state of supporting mobile hardware access in browsers is a great opportunity to build immersive experiences with web technology
  • How ServiceWorker is a great opportunity to offer offline capable solutions and have notifications to re-engage users and allow solutions to hibernate
  • How Progressive Web Apps are a massive opportunity to show native how software distribution should happen in 2016

Yes, I got all that in. See for yourself :).

The slides are available on SlideShare

You can watch the screencast on YouTube.

Planet MozillaWebRTC and communications projects in GSoC 2016

This year a significant number of students are working on RTC-related projects as part of Google Summer of Code, under the umbrella of the Debian Project. You may have already encountered some of them blogging on Planet or participating in mailing lists and IRC.

WebRTC plugins for popular CMS and web frameworks

There is already a range of pseudo-WebRTC plugins available for CMS and blogging platforms like WordPress. Unfortunately, many of them either don't release all of their source code, lock users into their own servers, or require users to download potentially untrustworthy browser plugins (also without any source code) to use them.

Mesut is making plugins for genuinely free WebRTC with open standards like SIP. He has recently created the WPCall plugin for WordPress, based on the highly successful DruCall plugin for WebRTC in Drupal.

Keerthana has started creating a similar plugin for MediaWiki.

What is great about these plugins is that they don't require any browser plugins and they work with any server-side SIP infrastructure that you choose. Whether you are routing calls into a call center or simply using them on a personal blog, they are quick and convenient to install. Hopefully they will be made available as packages, like the DruCall packages for Debian and Ubuntu, enabling even faster installation with all dependencies.

Would you like to try running these plugins yourself and provide feedback to the students? Would you like to help deploy them for online communities using Drupal, WordPress or MediaWiki to power their web sites? Please come and discuss them with us in the Free-RTC mailing list.

You can read more about how to run your own SIP proxy for WebRTC in the RTC Quick Start Guide.

Finding all the phone numbers and ham radio callsigns in old emails

Do you have phone numbers and other contact details such as ham radio callsigns in old emails? Would you like a quick way to data-mine your inbox to find them and help migrate them to your address book?

Jaminy is working on Python scripts to do just that. Her project takes some inspiration from the Telify plugin for Firefox, which detects phone numbers in web pages and converts them to hyperlinks for click-to-dial. The popular libphonenumber from Google, used to format numbers on Android phones, is being used to help normalize any numbers found. If you would like to test the code against your own mailbox and address book, please make contact in the #debian-data channel on IRC.
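
To give a rough idea of the approach (a sketch of my own, not Jaminy's actual code), the Python port of libphonenumber can already scan free text for numbers and normalize them:

# Sketch only, not the GSoC project's code (assumes: pip install phonenumbers).
import phonenumbers

text = "Ring me on +44 20 7946 0958, or at work on +1 650 253 0000."

# PhoneNumberMatcher scans free text; the region hint helps with local formats.
for match in phonenumbers.PhoneNumberMatcher(text, "GB"):
    print(phonenumbers.format_number(match.number,
                                      phonenumbers.PhoneNumberFormat.E164))
# prints +442079460958 and +16502530000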

A truly peer-to-peer alternative to SIP, XMPP and WebRTC

The team at Savoir Faire Linux has been busy building the Ring softphone, a truly peer-to-peer solution based on the OpenDHT distributed hash table technology.

Several students (Simon, Olivier, Nicolas and Alok) are actively collaborating on this project, some of them have been fortunate enough to participate at SFL's offices in Montreal, Canada. These GSoC projects have also provided a great opportunity to raise Debian's profile in Montreal ahead of DebConf17 next year.

Linux Desktop Telepathy framework and reSIProcate

Another group of students, Mateus, Udit and Balram have been busy working on C++ projects involving the Telepathy framework and the reSIProcate SIP stack. Telepathy is the framework behind popular softphones such as GNOME Empathy that are installed by default on the GNU/Linux desktop.

I previously wrote about starting a new SIP-based connection manager for Telepathy based on reSIProcate. Using reSIProcate means more comprehensive support for all the features of SIP, better NAT traversal, IPv6 support, NAPTR support and TLS support. The combined impact of all these features is much greater connectivity and much greater convenience.

The students are extending that work, completing the buddy list functionality, improving error handling and looking at interaction with XMPP.

Streamlining provisioning of SIP accounts

Currently there is some manual effort for each user to take the SIP account settings from their Internet Telephony Service Provider (ITSP) and transpose these into the account settings required by their softphone.

Pranav has been working to close that gap, creating a JAR that can be embedded in Java softphones such as Jitsi, Lumicall and CSipSimple to automate as much of the provisioning process as possible. ITSPs are encouraged to test this client against their services and will be able to add details specific to their service through Github pull requests.

The project also hopes to provide streamlined provisioning mechanisms for privately operated SIP PBXes, such as the Asterisk and FreeSWITCH servers used in small businesses.

Improving SIP support in Apache Camel and the Jitsi softphone

Apache Camel's SIP component and the widely known Jitsi softphone both use the JAIN SIP library for Java.

Nik has been looking at issues faced by SIP users in both projects, adding support for the MESSAGE method in camel-sip and looking at why users sometimes see multiple password prompts for SIP accounts in Jitsi.

If you are trying either of these projects, you are very welcome to come and discuss them on the mailing lists, Camel users and Jitsi users.

GSoC students at DebConf16 and DebConf17 and other events

Many of us have been lucky to meet GSoC students attending DebConf, FOSDEM and other events in the past. From this year, Google now expects the students to complete GSoC before they become eligible for any travel assistance. Some of the students will still be at DebConf16 next month, assisted by the regular travel budget and the diversity funding initiative. Nik and Mesut were already able to travel to Vienna for the recent MiniDebConf / LinuxWochen.at.

As mentioned earlier, several of the students and the mentors at Savoir Faire Linux are based in Montreal, Canada, the destination for DebConf17 next year and it is great to see the momentum already building for an event that promises to be very big.

Explore the world of Free Real-Time Communications (RTC)

If you are interested in knowing more about the Free RTC topic, you may find the following resources helpful:

RTC mentoring team 2016

We have been very fortunate to build a large team of mentors around the RTC-themed projects for 2016. Many of them are first time GSoC mentors and/or new to the Debian community. Some have successfully completed GSoC as students in the past. Each of them brings unique experience and leadership in their domain.

Helping GSoC projects in 2016 and beyond

Not everybody wants to commit to being a dedicated mentor for a GSoC student. In fact, there are many ways to help without being a mentor and many benefits of doing so.

Simply looking out for potential applicants for future rounds of GSoC and referring them to the debian-outreach mailing list or an existing mentor helps ensure we can identify talented students early and design projects around their capabilities and interests.

Testing the projects on an ad-hoc basis, greeting the students at DebConf and reading over the student wikis to find out where they are and introduce them to other developers in their area are all possible ways to help the projects succeed and foster long term engagement.

Google gives Debian a USD $500 grant for each student who completes a project successfully this year. If all 2016 students pass, that is over $10,000 to support Debian's mission.

Planet MozillaShow Firefox Bookmark Toolbar in Fullscreen Mode

By default, the bookmark toolbar is hidden when Firefox goes into fullscreen mode. It’s quite annoying because I use the bookmark toolbar a lot. And since I use the i3 window manager, I also use the fullscreen mode very often to avoid resizing the window. After some googling I found this quick solution on SUMO (the Firefox community is awesome!).


The idea is that the Firefox chrome (not to be confused with the Google Chrome browser) is defined using XUL. You can adjust its styling using CSS. The user defined chrome CSS is located in your Firefox profile. Here is how you do it:

  • Open your Firefox profile folder, which is ~/.mozilla/firefox/<hash>.<profile_name> on Linux. If you can’t find it, you can open about:support in Firefox and click the “Open Directory” button in the “Profile Directory” field.


  • Create a folder named chrome if it doesn’t exist yet.
  • Create a file called userChrome.css in the chrome folder, copy the following content into it and save.

@namespace url("http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"); /* only needed once */

/* full screen toolbars */
#navigator-toolbox[inFullscreen] toolbar:not([collapsed="true"]) {
   visibility:visible!important;
}
  • Restart your Firefox and Voila!


Planet MozillaEU Internet Users Can Stand Up For Net Neutrality

Over the past 18 months, the debate around the free and open Internet has taken hold in countries around the world, and we’ve been encouraged to see governments take steps to secure net neutrality. A key component of these movements has been strong public support from users upholding the internet as a global public resource. From the U.S. to India, public opinion has helped to positively influence internet regulators and shape internet policy.

Now, it’s time for internet users in the EU to speak out and stand up for net neutrality.

The Body of European Regulators of Electronic Communications (BEREC) is currently finalising implementation guidelines for the net neutrality legislation passed by EU Parliament last year. This is an important moment — how the legislation is interpreted will have a major impact on the health of the internet in the EU. A clear, strong interpretation can uphold the internet as a free and open platform. But a different interpretation can grant big telecom companies considerable influence and the ability to implement fast lanes, slow lanes, and zero-rating. It would make the internet more closed and more exclusive.

At Mozilla, we believe the internet is at its best as a free, open, and decentralised platform. This is the kind of internet that enables creativity and collaboration; that grants everyone equal opportunity; and that benefits competition and innovation online.

Everyday internet users in the EU have the opportunity to stand up for this type of internet. From now through July 18, BEREC is accepting public comments on the draft guidelines. It’s a small window — and BEREC is simultaneously experiencing pressure from telecom companies and other net neutrality foes to undermine the guidelines. That’s why it’s so important to sound off. When more and more citizens stand up for net neutrality, we’re empowering BEREC to stand their ground and interpret net neutrality legislation in a positive way.

Mozilla is proud to support savetheinternet.eu, an initiative by several NGOs — like European Digital Rights (EDRi) and Access Now — to uphold strong net neutrality in the EU. savetheinternet.eu makes it simple to submit a public comment to BEREC and stand up for an open internet in the EU. BEREC’s draft guidelines already address many of the ambiguities in the Regulation; your input and support can bring needed clarity and strength to the rules. We hope you’ll join us: visit savetheinternet.eu and write BEREC before the July 18 deadline.

Planet MozillaThis Week in Rust 135

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42 and llogiq.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Crate of the Week

This week's Crate of the Week is error-chain which feels like the missing piece in Rust's Result-based error-handling puzzle. Thanks to KodrAus for the suggestion.

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

73 pull requests were merged in the last two weeks.

New Contributors

  • Esteban Küber
  • marudor

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

  • 6/22. Rust Community Team Meeting at #rust-community on irc.mozilla.org.
  • 6/23. Rust release triage at #rust-triage on irc.mozilla.org.
  • 6/29. Rust Community Team Meeting at #rust-community on irc.mozilla.org.
  • 6/30. Zurich, Switzerland - Introduction to Rust.

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

The Rust standard libs aren't quite batteries included, but they come with a pile of adaptor cables and an optional chemistry lab.

Gankro on Twitter

Thanks to llogiq for the suggestion.

Submit your quotes for next week!

Planet MozillaThis Week In Servo 69

In the last week, we landed 84 PRs in the Servo organization’s repositories.

One of our contributors, Florian Duraffourg, landed a patch recently that switched our public domain list matching source and algorithm, resulting in a huge speedup in page load times (~25%!). Shing Lyu tracked down and measured this unexpectedly-large gain through a new automated page load testing performance infrastructure that he has been testing. It compares Servo daily builds against Firefox for page load on a subset of the Alexa Top 1000 sites. Check it out!

We’re excited to announce that Connor Brewster (cbrewster on IRC and ConnorGBrewster on github) is the newest addition to the list of Servo reviewers!

This upcoming week is the final week of June! So, expect the first tech preview of Servo and browser.html to be announced on this blog shortly…

Planning and Status

Our overall roadmap and quarterly goals are available online.

This week’s status updates are here.

Notable Additions

  • nox upgraded our Rust version
  • eddyb landed some great work to migrate us off of use of return_address, which has been removed from Rust
  • nox continued the migration off of syntax plugins to get WebRender building against stable Rust
  • jack fixed linking problems with our CEF build
  • vvuk landed fixes to get more of our Windows dependencies building with cmake and outside of msys
  • SimonSapin has fixed the style crate to nearly build with a stable version of the Rust compiler
  • avadacatavra prevented potential IRCxit by those members of our community with UK keyboards when typing issue numbers in IRC chat
  • jdm fixed a source of undefined behaviour by preventing unwinding from Rust code through C stack frames
  • Ivan Yang added a reload keyboard shortcut
  • carols10cents documented the 0.9 -> 1.0 migration for rust-url users
  • Ms2ger removed the layout->script dependency, enabling better CPU usage while building libscript
  • emilio implemented type-based validation of numerous WebGL values
  • jdm added a signal handler that reports a backtrace if a segfault occurs
  • aneeshusa ensured that the packaging-related code doesn’t regress in the future

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

No screenshots this week.

Planet MozillaThis Week In Servo 68

In the last week, we landed 55 PRs in the Servo organization’s repositories.

The entire Servo team and several of our contributors spent last week in London for the Mozilla All Hands meeting. While this meeting resulted in fewer PRs than normal, there were many great meetings that resulted in both figuring out some hard problems and introducing more people to Servo’s systems. These meetings included:

Planning and Status

Our overall roadmap and quarterly goals are available online.

This week’s status updates are here.

Notable Additions

  • connorgbrewster added tests for the history interface
  • ms2ger moved ServoLayoutNode to script, as part of the effort to untangle our build dependencies
  • nox is working to reduce our shameless reliance on Nightly Rust across our dependencies
  • aneeshusa added better support for installing the Android build tools for cross-compilation on our builders
  • jdm avoided a poor developer experience when debugging on OS X
  • darinm223 fixed the layout of images with percentage dimensions
  • izgzhen implemented several APIs related to Blob URLs
  • srm912 and jdm added support for private mozbrowser iframes (ie. incognito mode)
  • nox improved the performance of several 2d canvas APIs
  • jmr0 implemented throttling for mozbrowser iframes that are explicitly marked as invisible
  • notriddle fixed the positioning of the cursor in empty input fields

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

No screenshots this week.

Planet MozillaMozilla London All Hands 2016

Last week, all of Mozilla met in London for a whirlwind tour from TARDIS to TaskCluster, from BBC1 to e10s, from Regent Park to the release train, from Paddington to Positron. As an Outreachy intern, I felt incredibly lucky to be part of this event, which gave me a chance to get to know Mozilla, my team, and the other interns much better. It was a jam-packed work week of talks, meetings, team events, pubs, and parties, and it would be impossible to blog about all of the fun, fascinating, and foxy things I learned and did. But I can at least give you some of the highlights! Or, should I say, Who-lights? (Be warned, that is not the last pun you will encounter here today.)

Role models

While watching the plenary session that kicked off the week, it felt great to realize that of the 4 executives emerging from the TARDIS in the corner to take the stage (3 Mozillians and 1 guest star), a full 50% were women. As I had shared with my mentor (also a woman) before arriving in London, one of my goals for the week was to get inspired by finding some new role moz-els (ha!): Mozillians who I could aspire to be like one day, especially those of the female variety.

Why a female role model, specifically? What does gender have to do with it?

Well, to be a good role model for you, a person needs to not only have a life/career/lego-dragon you aspire to have one day, but also be someone who you can already identify with, and see yourself in, today. A role model serves as a bridge between the two. As I am a woman, and that is a fundamental part of my experience, a role model who shares that experience is that much easier for me to relate to. I wouldn’t turn down a half-Irish-half-Indian American living in Germany, either.

At any rate, in London I found no shortage of talented, experienced, and - perhaps most importantly - valued women at Mozilla. I don’t want to single anyone out here, but I can tell you that I met women at all levels of the organization, from intern to executive, who have done and are doing really exciting things to advance both the technology and culture of Mozilla and the web. Knowing that those people exist, and that what they do is possible, might be the most valuable thing I took home with me from London.

Aw, what about the Whomsycorn?

No offense, Whomyscorn, you’re cool too.

Electrolysis (e10s)

Electrolysis, or “e10s” for those who prefer integers to morphemes, is a massive and long-running initiative to separate the work Firefox does into multiple processes.

At the moment, the Firefox desktop program that the average user downloads and uses to explore the web runs in a single process. That means that one process has to do all the work of loading & displaying web pages (the browser “content”), as well as the work of displaying the user interface and its various tabs, search bars, sidebars, etc. (the browser “chrome”). So if something goes wrong with, say, the execution of a poorly-written script on a particular page, instead of only that page refusing to load, or its tab perhaps crashing, the entire browser itself may hang or crash.

That’s not cool. Especially if you often have lots of tabs open. Not that I ever do.

Of course not. Anyway, even less cool is the possibility that some jerk (not that there are any of those on the internet, though, right?) could make a page with a script that hijacks the entire browser process, and does super uncool stuff.

It would be much cooler if, instead of a single massive process, Firefox could use separate processes for content and chrome. Then, if a page crashes, at least the UI still works. And if we assign the content process(es) reduced permissions, we can keep possibly-jerkish content in a nice, safe sandbox so that it can’t do uncool things with our browser or computer.

Better performance, more security? Super cool.

Indeed. Separation is cool. Electrolysis is separation. Ergo, Electrolysis is cool.

It’s not perfect yet - for example, compatibility with right-to-left languages, accessibility (or “a11y”, if “e10s” needs a buddy), and add-ons is still an issue - but it’s getting there, and it’s rolling out real soon! Given that the project has been underway since 2008, that’s pretty exciting.
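
If it helps to picture the principle in code, here is a toy sketch (in Python, purely conceptual; it has nothing to do with Firefox’s actual C++/IPC machinery) of why a separate content process protects the chrome process:

# Conceptual illustration only: a "chrome" parent survives a crashing "content" child.
from multiprocessing import Process

def render_page(url):
    # Pretend this is the content process running untrusted page scripts.
    if "poorly-written-script" in url:
        raise RuntimeError("content process crashed")  # only this process dies
    print("rendered", url)

if __name__ == "__main__":
    for url in ["https://example.com", "https://poorly-written-script.example"]:
        child = Process(target=render_page, args=(url,))
        child.start()
        child.join()
        print("chrome still alive; child exit code:", child.exitcode)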

Rust, Servo, & Oxidation

I first heard about the increasingly popular language Rust when I was at the Recurse Center last fall, and all I knew about it was that it was being used at Mozilla to develop a new browser engine called Servo.

More recently, I heard talks from Mozillians like E. Dunham that revealed a bit more about why people are so excited about Rust: it’s a new language for low-level programming, and compared with the current mainstay C, it guarantees memory safety. As in, “No more segfaults, no more NULLs, no more dangling pointers’ dirty looks”. It’s also been designed with concurrency and thread safety in mind, so that programs can take better advantage of e.g. multi-core processors. (Do not ask me to get into details on this; the lowest level I have programmed at is probably sitting in a beanbag chair. But I believe them when they say that Rust does those things, and that those things are good.)

Finally, it has the advantage of having a tight-knit, active, and dedicated community “populated entirely by human beings”, which is due in no small part to the folks at Mozilla and beyond who’ve been working to keep the community that way.

OK OK OK, so Rust is a super cool new language. What can you do with it?

Well, lots of stuff. For example, you could write a totally new browser engine, and call it Servo.

Wait, what’s a browser engine?

A browser engine (aka layout or rendering engine) is basically the part of a browser that allows it to show you the web pages you navigate to. That is, it takes the raw HTML and CSS content of the page, figures out what it means, and turns it into a pretty picture for you to look at.

Uh, I’m pretty sure I can see web pages in Firefox right now. Doesn’t it already have an engine?

Indeed it does. It’s called Gecko, and it’s written in C++. It lets Firefox make the web beautiful every day.

So why Servo, then? Is it going to replace Gecko?

No. Servo is an experimental engine developed by Mozilla Research; it’s just intended to serve(-o!) as a playground for new ideas that could improve a browser’s performance and security.

The beauty of having a research project like Servo and a real-world project like Gecko under the same roof at Mozilla is that when the Servo team’s research unveils some new and clever way of doing something faster or more awesomely than Gecko does, everybody wins! That’s thanks to the Oxidation project, which aims to integrate clever Rust components cooked up in the Servo lab into Gecko. Apparently, Firefox 45 already got (somewhat unexpectedly) an MP4 metadata parser in Rust, which has been running just fine so far. It’s just the tip of the iceberg, but the potential for cool ideas from Servo to make their way into Gecko via Oxidation is pretty exciting.

The Janitor

Another really exciting thing I heard about during the week is The Janitor, a tool that lets you contribute to FOSS projects like Firefox straight from your browser.

For me, one of the biggest hurdles to contributing to a new open-source project is getting the development environment all set up.

Ugh I hate that. I just want to change one line of code, do I really need to spend two days grappling with installation and configuration?!?

Exactly. But the Janitor to the rescue!


Powered by the very cool Cloud9 IDE, the Janitor gives you one-click access to a ready-to-go, cloud-based development environment for a given project. At the moment there are a handful of projects supported (including Firefox, Servo, and Google Chrome), and new ones can be added by simply writing a Dockerfile. I’m not sure that an easier point of entry for new FOSS contributors is physically possible. The ease of start-up is perfect for short-term contribution efforts like hackathons or workshops, and thanks to the collaborative features of Cloud9 it’s also perfect for remote pairing.

Awesome, I’m sold. How do I use it?

Unfortunately, the Janitor is still in alpha and invite-only, but you can go to janitor.technology and sign up to get on the waitlist. I’m still waiting to get my invite, but if it’s half as fantastic as it seems, it will be a huge step forward in making it easier for new contributors to get involved with FOSS projects. If it starts supporting offline work (apparently the Cloud9 editor is somewhat functional offline already, once you’ve loaded the page initially, but the terminal and VNC always need a connection to function), I think it’ll be unstoppable.

L20n

The last cool thing I heard about (literally, it was the last session on Friday) at this work week was L20n.

Wait, I thought “localization” was abbreviated “L10n”?

Yeah, um, that’s the whole pun. Way to be sharp, exhausted-from-a-week-of-talks-Anjana.

See, L20n is a next-generation framework for web and browser localization (l10n) and internationalization (i18n). It’s apparently a long-running project too, born out of the frustrations of the l10n status quo.

According to the L20n team, at the moment the localization system for Firefox is spread over multiple files with multiple syntaxes, which is no fun for localizers, and multiple APIs, which is no fun for developers. What we end up with is program logic intermingling with l10n/i18n decisions (say, determining the correct format for a date) such that developers, who probably aren’t also localizers, end up making decisions about language that should really be in the hands of the localizers. And if a localizer makes a syntax error when editing a certain localization file, the entire browser refuses to run. Not cool.

Pop quiz: what’s cool?

Um…

C’mon, we just went over this. Go on and scroll up.

Electrolyis?

Yeah, that’s cool, but thinking more generally…

Separation?

That’s right! Separation is super cool! And that’s what L20n does: separate l10n code from program source code. This way, developers aren’t pretending to be localizers, and localizers aren’t crashing browsers. Instead, developers are merely getting localized strings by calling a single L20n API, and localizers are providing localized strings in a single file format & syntax.

Wait but, isn’t unifying everything into a single API/file format the opposite of separation? Does that mean it’s not cool?

Shhh. Meaningful separation of concerns is cool. Arbitrary separation of a single concern (l10n) is not cool. L20n knows the difference.
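
As a toy illustration of that separation (this is emphatically not L20n’s actual syntax or API, just the shape of the idea sketched in Python): the localizer owns the strings and the language rules, and the developer only ever asks for a string by ID.

# Not L20n; just a sketch of keeping language decisions out of program logic.
LOCALIZATION = {                      # owned and edited by localizers
    "en": {"unread": lambda n: f"{n} unread message" + ("" if n == 1 else "s")},
    "de": {"unread": lambda n: f"{n} ungelesene Nachricht" + ("" if n == 1 else "en")},
}

def get_string(locale, string_id, **args):  # the only l10n call developers make
    return LOCALIZATION[locale][string_id](**args)

print(get_string("en", "unread", n=1))  # 1 unread message
print(get_string("de", "unread", n=3))  # 3 ungelesene Nachrichten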

OK, fine. But first “e10s” and “a11y”, now “l10n”/”l20n” and “i18n”… why does everything need a numbreviation?

You’ve got me there.

Planet MozillaSetting up mutt with gmail on Ubuntu

I was looking to set up the mutt email client on my Ubuntu box to go through my gmail account. Since it took me a couple of hours to figure out, and I’ll probably forget by the time I need to know again, I figured I’d post my steps here.

I’m on Ubuntu 16.04 LTS (lsb_release -a)

Install mutt:

$ sudo apt-get install mutt

In gmail, allow other apps to access gmail:

Allow less secure apps to access your account, via Google’s “Less secure apps” setting.

Create the mail spool file:

$ sudo touch $MAIL
$ sudo chmod 660 $MAIL
$ sudo chown `whoami`:mail $MAIL

where $MAIL for me was /var/mail/nick.

Create the ~/.muttrc file

set realname = "<first and last name>"
set from = "<gmail username>@gmail.com"
set use_from = yes
set envelope_from = yes

set smtp_url = "smtps://<gmail username>@gmail.com@smtp.gmail.com:465/"
set smtp_pass = "<gmail password>"
set imap_user = "<gmail username>@gmail.com"
set imap_pass = "<gmail password>"
set folder = "imaps://imap.gmail.com:993"
set spoolfile = "+INBOX"
set ssl_force_tls = yes

# G to get mail
bind index G imap-fetch-mail
set editor = "vim"
unset record
set move = no
set charset = "utf-8"

I’m sure there are better/more config options. Feel free to go wild; this is by no means a comprehensive setup.

Run mutt:

$ mutt

We should see it connect and download our messages. Press m to start a new message, and G to fetch new messages.

Planet MozillaWhy He Should Not Have Stopped Using CSS

This article is a response to this one. I found it via a tweet from Korben, of course. It ruffled my feathers a bit, of course, so I need to give its author an answer via this blog.

Selectors

The article's author levels the following three complaints at CSS selectors:

  1. The style associated with a selector can be redefined elsewhere
  2. If several styles are associated with a selector, the last ones defined in the CSS always take priority
  3. Someone can break a component's styles simply because they don't know that a selector is used elsewhere

The least I can say is that I couldn't believe what I was reading. A few lines earlier, the author was drawing a comparison with JavaScript. So let's go through his three grievances...

For number 1, he complains because if (foo) { a = 1; } ... if (foo) { a = 2; } is simply possible. Meh.

In case number 2, he complains because in if (foo) { a = 1; a = 2; ... } the ... will see the variable a holding the value 2.

In case number 3, yes, sure, but I don't know of any language in which someone who makes changes without knowing the context is guaranteed not to do something stupid...

Specificity

The "sheer madness" of !important (and I'm the first to admit it didn't make implementing BlueGriffon any easier) is nevertheless what has allowed untold numbers of bookmarklets and scripts injected into completely arbitrary pages to get a guaranteed visual result. The author isn't only complaining about specificity and how it is calculated, but also about the ability to apply a rule based on context. He is partly right, and a long time ago I myself proposed a CSS Editing Profile restricting the selectors allowed in such a profile, to make manipulation easier. But where he is wrong is that zillions of professional sites built from components absolutely need complex selectors and that specificity calculation...
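
For readers unfamiliar with how that calculation works, here is a rough sketch of my own (simplified; it ignores many cases of the real algorithm) of counting specificity for simple selectors:

# Simplified sketch of CSS specificity counting:
# (#ids, .classes/attributes/pseudo-classes, element types).
import re

def specificity(selector):
    ids = len(re.findall(r"#[\w-]+", selector))
    classes = len(re.findall(r"\.[\w-]+|\[[^\]]+\]|:(?!:)[\w-]+", selector))
    stripped = re.sub(r"#[\w-]+|\.[\w-]+|\[[^\]]+\]|::?[\w-]+", "", selector)
    types = len(re.findall(r"[A-Za-z][\w-]*", stripped))
    return (ids, classes, types)

print(specificity("nav .menu a"))      # (0, 1, 2)
print(specificity("#header a:hover"))  # (1, 1, 1) beats (0, 1, 2) regardless of source order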

Regressions

Reading this section, I let out a loud "now he really is exaggerating"... Yes, in any interpreted or compiled language, changing something somewhere without taking the rest into account can have negative side effects. His example is exactly the same as deriving a class: an addition to the base class ends up in the derived class. Oh no, how terrible... Let's be serious for a second, please.

How style priority is chosen

Here the author clearly has not understood something fundamental about web browsers, the DOM and how CSS is applied. There are only two possible choices: either you use document tree order, or you use the cascade rules of the stylesheets. From the DOM's point of view, class="red blue" and class="blue red" are strictly equivalent, and there is no guarantee (I repeat, none) that browsers preserve that order in their DOMTokenList.

The future of CSS

Let's come back to the author's JS comparison. Roughly, if line 1 is var a=1 and line 2 is alert(a), the author complains that if you insert var a = 2 between the two lines, the value 2 is displayed instead of 1... As an argument, that is simply not admissible (in the sense of not acceptable).

The BEM methodology

A band-aid on a wooden leg... Nothing really changes, but you throw in indentation that increases the size of the file, gets in the way of editing and manipulating it, and is in no way understandable by a machine, since none of it is preserved by the CSS Object Model.

His alternative proposal

I coughed hard enough to eject my lungs from my body while reading this section. It is an unmaintainable, verbose, error-prone horror.

In conclusion...

Yes, CSS has birth defects. I freely admit it. It even has adult defects, when I see some of the nasty things that Shadow DOM wants us to put into CSS Selectors. But this proposal is a steamroller to squash a fly, an over-engineered contraption of a magnitude rarely equalled.

Overall I do not agree at all with bloodyowl, who too easily forgets the immense benefits we have been able to draw from everything he decries in his article. Hundreds of things would be impossible without all of that. So yes, fine, the Cascade is a bit contrived. But you don't catch flies with vinegar, and if the whole world has adopted CSS (including the publishing world, which nevertheless comes from solutions radically different from the Web), it is because it is good and because it works.

In short, no, CSS is not "a horribly dangerous language". But yes, if you let just anyone make additions any which way to an existing body of code, it can end in disaster. The same goes for a programming language, a book, a thesis, mechanical engineering, anywhere. So there you have it.

Planet MozillaPost #mozlondon

Writing this from the comfort of my flat in London, just as many people are tweeting about their upcoming flight from “#mozlondon”—such a blissful post-All Hands travel experience for once!

Note: #mozlondon was a Mozilla all hands which was held in London last week. And since everything is quite social networked nowadays, the “#mozlondon” tag was chosen. Previous incarnations: mozlando (for Orlando), mozwww (for Vancouver’s “Whistler Work Week” which made for a very nice mountainous jagged tag), and mozlandia (because it was held in Portland, and well, Portlandia. Obviously!)

I always left previous all hands feeling very tired and unwell to varying degrees. There’s so much going on, in different places, and there’s almost no time to let things sink into your brain (let alone into your stomach as you briskly move from location to location). The structure of previous editions also didn’t really lend itself very well to collaboration between teams—too many, too-long plenaries, and very little time to grab other people’s already exhausted attention.

This time, the plenaries were shortened and reduced in number. No long and windy “inspirational” keynotes, and way more room for arranging our own meetings with other teams, and for arranging open sessions to talk about your work to anyone interested. More BarCamp style than big and flashy, plus optional elective training sessions in which we could learn new skills, related or not to our area of expertise.

I’m glad to say that this new format has worked much better for me. I was actually quite surprised that it was going really well halfway through the week, and, being the cynic that I sometimes am, I was expecting a terrible blow to be delivered before the end of the event. But… no.

We have got better at meetings. Our team meeting wasn’t a bunch of people interrupting each other. That was a marvel! I loved that we got things done and agreements and disagreements settled in a civilised manner. The recipe for this successful meeting: have an agenda, a set time, and a moderator, and demand one or more “conclusions” or “action items” after the meeting (otherwise why did you meet?), and make everyone aware that time is precious and running out, to avoid derailments.

We also met with the Servo team. With almost literally all of them. This was quite funny: we had set up a meeting with two or three of them, and other members of the team saw it in somebody else’s calendar and figured a meeting to discuss Servo+DevRel sounded interesting, so they all came, out of their own volition! It was quite unexpected, but welcome, and that way we could meet everyone and put faces to IRC nicknames in just one hour. Needless to say, it’s a great caring team and I’m really pleased that we’re going to work together during the upcoming months.

I also enjoyed the elective training sessions.

I went to two training sessions on Rust; it reminded me how much fun “systems programming” can be, and made me excited about the idea of safe parallelism (among other cool stuff). I also re-realised how hard programming and teaching programming can be as I confronted my total inexperience in Rust and increasing frustration at the amount of new concepts thrown at me in such a short interval—every expert on any field should regularly try learning something new every now and then to bring some ‘humility’ back and replenish the empathy stores.

The people sessions were quite long and exhausting, packing a ton of content into 3 hours each, and after them I was just an empty, hungry shell. But a shell that had learned good stuff!

One was about having difficult conversations, navigating conflict, etc. I quickly saw how many of my ways had been wrong in the past (e.g. replying to a hurt person with self-defense instead of trying to find why they were hurt). Hopefully I can avoid falling in the same traps in the future! This is essential for so many aspects in life, not only open source or software development; I don’t know why this is not taught to everyone by default.

The second session was about doing good interviews. In this respect, I was a bit relieved to see that my way of interviewing was quite close to the recommendations, but it was good to learn additional techniques, like the STAR interview technique. Which surfaces an irony: even “non-technical” skills have a technique to them.

A note to self (that I’m also sharing with you): always make an effort to find good adjectives that aren’t a negation, but a description. E.g. in this context “people sessions” or “interpersonal skills sessions” work so much better and are more descriptive and specific than “non-technical” while also not disrespecting those skills because they’re “just not technical”.

A thing I really liked from these two sessions is that I had the chance to meet people from areas I would not have ever met otherwise, as they work on something totally different from what I work on.

The session on becoming a more senior engineer was full of good inspiration and advice. Some of the ideas I liked the most:

  • as soon as you get into a new position, start thinking of who should replace you so you can move on to something else in the future (so you set more people in a path of success). You either find that person or make it possible for others to become that person…
  • helping people be successful as a better indicator of your progress to seniority than being very good at coding
  • being a good generalist is as good as being a good specialist—different people work differently and add different sets of skills to an organisation
  • but being a good specialist is “only good” if your special skill is something the organisation needs
  • changing projects and working on different areas as an antidote to burn out
  • don’t be afraid to occasionally jump into something even if you’re not sure you can do it; it will probably grow you!
  • canned projects are not your personal failure, it’s simply a signal to move on and make something new and great again, using what you learned. Most of the people on the panel had had projects canned, and survived, and got better
  • if a project gets cancelled there’s a very high chance that you are not going to be “fired”, as there are always tons of problems to be fixed. Maybe you were trying to fix the wrong problem. Maybe it wasn’t even a problem!
  • as you get more senior you speak less to machines and more to people: you develop less, and help more people develop
  • you also get less strict about things that used to worry you a lot and turn out to be… not so important! you also delegate more and freak out less. Tolerance.
  • I was also happy to hear a very clear “NO” to programming during every single moment of your spare time to prove you’re a good developer, as that only leads to burn out and being a mediocre engineer.

Deliberate strategies

I designed this week with the full intent of making the most of it while still keeping healthy. These are my strategies for future reference:

  • A week before: I spent time going through the schedule and choosing the sessions I wanted to attend.
  • I left plenty of space between meetings in order to have some “buffer” time to process information and walk between venues (the time pedestrians spend waiting at traffic lights is significantly higher than you would expect). Even then, I had to rush between venues more than once!
  • I would not go to events outside of my timetable – no last-minute stressing over going to an unexpected session!
  • If a day was going to be super busy in the afternoon, I took it easier in the morning
  • Drank lots of water. I kept track of how much; I never met my target, but I felt much better on the days I drank more water.
  • Avoided the terrible coffee at the venues, and also caffeine as much as possible. Also avoided the very-nice-looking desserts, and snacks in general, and didn’t eat a lot, because why would I, when we were essentially sitting down all day?
  • Allowed myself one good coffee a day, at one of the nice coffee places I had compiled, which made for a nice walk
  • Brought layers of clothes (for the venues were either scorching hot and humid or plainly freezing) and comfy running trainers (to walk 8 km a day between venues and rooms without developing sore feet)
  • Saying no to big dinners. Actively seeking out smaller gatherings of 2-4 people so we all hear each other and also have more personal conversations.
  • Saying no to dinner with people when I wasn’t feeling great.

The last points were super essential to being socially functional: by having enough time to ‘recharge’, I felt energised to talk to random people I encountered in the “Hallway track”, and had a number of fruitful conversations over lunch, drinks or dinner which would otherwise not have happened because I would have felt aloof.

I’m now tired anyway, because there is no way to not get tired after so many interactions and information absorbing, but I am not feeling sick and depressed! Instead I’m just thinking about what I learnt during the last week, so I will call this all hands a success! 🎉


Planet MozillaBuild and Test against Docker Images in Travis

The road towards the ultimate CI/CD pipeline goes through building Docker images and deploying them to production. The code included in the images gets unit tested, both locally during development and, using Travis, after merging into the master branch.

But Travis builds its own environment to run the tests in, which can differ from the environment of the Docker image. For example, Travis may run tests in a Debian-based VM with libjpeg version X, while our to-be-deployed Docker image runs code on top of Alpine with libjpeg version Y.

To ensure that the image to be deployed to production is OK, we need to run the tests inside that Docker image. That’s still possible with Travis, with only a few changes to .travis.yml:

Sudo is required

sudo: required

Start by requesting to run tests in a VM instead of in a container.

Request Docker service:

services:
  - docker

The VM must run the Docker daemon.

Add TRAVIS_COMMIT to Dockerfile (Optional)

before_install:
  - docker --version
  - echo "ENV GIT_SHA ${TRAVIS_COMMIT}" >> Dockerfile

It's very useful to export the git SHA of HEAD as a Docker environment variable. This way you can always identify the code included, even if you keep the .git directory in .dockerignore to reduce the size of the image.

The resulting Docker image also gets tagged with the same SHA for easier identification.
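As an aside, here is a minimal sketch (my own, not from the original post) of how the application inside the image could use that GIT_SHA variable, for example to expose a version endpoint in the Django project being tested. The view name and wiring are hypothetical:

# version.py - hypothetical Django view; GIT_SHA was baked into the image via
# the ENV instruction appended to the Dockerfile above.
import os

from django.http import JsonResponse


def version(request):
    # Report exactly which commit this running container was built from.
    return JsonResponse({'git_sha': os.environ.get('GIT_SHA', 'unknown')})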

Build the image

install:
  - docker pull ${DOCKER_REPOSITORY}:last_successful_build || true
  - docker pull ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} || true
  - docker build -t ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} --pull=true .

Instead of pip installing packages, override the install step to build the Docker image.

Start by pulling previously built images from the Docker Hub. Remember that Travis runs each job in an isolated VM, so there is no Docker cache to begin with. By pulling previously built images, the cache gets seeded.

Travis' built-in cache functionality can also be used, but I find it more convenient to push to the Hub. Production will later pull from there and if debug is needed I can pull the same image locally too.

Each docker pull is followed by a || true which translates to "If the Docker Hub doesn't have this repository or tag it's OK, don't stop the build".

Finally, trigger a docker build. The --pull=true flag forces downloading the latest versions of the base images, the ones named in the FROM instructions. For example, if an image is based on Debian, this flag will force Docker to download the latest version of the Debian image. Since the Docker cache has already been seeded above, this flag is not superfluous: if it were skipped, the new build could reuse an outdated base image from the cache, which could have security vulnerabilities.

Run the tests

script:
  - docker run -d --name mariadb -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -e MYSQL_DATABASE=foo mariadb:10.0
  - docker run ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} flake8 foo
  - docker run --link mariadb:db -e CHECK_PORT=3306 -e CHECK_HOST=db giorgos/takis
  - docker run --env-file .env --link mariadb:db ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} coverage run ./manage.py test

First start mariadb, which is needed for the Django tests to run. It is forked to the background with the -d flag. The --name flag makes linking with other containers easier.

Then run the flake8 linter. It is run after mariadb - although it doesn't depend on it - to allow some time for the database to download and initialize before it gets hit with tests.

Travis needs about 12 seconds to get MariaDB ready which is usually more than the time the linter runs. To wait for the database to become ready before running the tests, run Takis. Takis waits for the container named mariadb to open port 3306. I blogged in detail about Takis before.
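For the curious, the waiting logic itself is simple; a rough Python stand-in for what Takis does (purely illustrative, not the actual tool) could look like this:

# wait_for_db.py - illustrative only; Takis itself is a separate project.
import socket
import sys
import time

def wait_for_port(host, port, timeout=60):
    # Poll until the TCP port accepts connections, then return so the
    # next build step can run.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            sock = socket.create_connection((host, port), timeout=2)
            sock.close()
            return
        except socket.error:
            time.sleep(1)
    sys.exit("%s:%d never became reachable" % (host, port))

if __name__ == "__main__":
    wait_for_port("db", 3306)  # "db" is the alias created by --link mariadb:db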

Finally run the tests making sure that the database is linked using --link and that environment variables needed for the application to initialize are set using --env-file.

Upload built images to the Hub

deploy:
  - provider: script
    script: bin/dockerhub.sh
    on:
      branch: master
      repo: foo/bar

And dockerhub.sh

docker login -e "$DOCKER_EMAIL" -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"
docker push ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT}
docker tag -f ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} ${DOCKER_REPOSITORY}:last_successful_build
docker push ${DOCKER_REPOSITORY}:last_successful_build

The deploy step is used to run a script to tag images, login to Docker Hub and finally push those tags. This step is run only on branch: master and not on pull requests.

Pull requests will not be able to push to Docker Hub anyway, because Travis does not expose encrypted environment variables to pull requests, and therefore there will be no $DOCKER_PASSWORD. At the end of the day this is not a problem, because you don't want pull requests with arbitrary code to end up in your Docker image repository.

Set the environment variables

Set the environment variables needed to build, test and deploy in env section:

env:
  global:
    # Docker
    - DOCKER_REPOSITORY=example/foo
    - DOCKER_EMAIL="foo@example.com"
    - DOCKER_USERNAME="example"
    # Django
    - DEBUG=False
    - DISABLE_SSL=True
    - ALLOWED_HOSTS=*
    - SECRET_KEY=foo
    - DATABASE_URL=mysql://root@db/foo
    - SITE_URL=http://localhost:8000
    - CACHE_URL=dummy://

and save them to a .env file for docker run to pass into the containers with --env-file:

before_script:
  - env > .env

Variables with private data like DOCKER_PASSWORD can be added through Travis' web interface.

That's all!

Pull requests and merges to master are both tested against Docker images, and successful builds of master are pushed to the Hub and can be used directly in production.

You can find a real life example of .travis.yml in the Snippets project.

Planet MozillaMulti-process Firefox and AMO

In Firefox 48, which reaches the release channel on August 1, 2016, multi-process support (code name “Electrolysis”, or “e10s”) will begin rolling out to Firefox users without any add-ons installed.

In preparation for the wider roll-out to users with add-ons installed, we have implemented compatibility checks on all add-ons uploaded to addons.mozilla.org (AMO).

There are currently three possible states:

  1. The add-on is a WebExtension and hence compatible.
  2. The add-on has marked itself in the install.rdf as multi-process compatible.
  3. The add-on has not marked itself compatible, so the state is currently unknown.

If a new add-on or a new version of an old add-on is not multi-process compatible, a warning will be shown in the validation step. Here is an example:

image01

In future releases, this warning might become more severe as the feature nears full deployment.

For add-ons that fall into the third category, we might implement a more detailed check in a future release, to provide developers with more insight into the “unknown” state.

After an add-on is uploaded, the state is shown in the Developer Hub. Here is an example:

image00

Once you verify that your add-on is compatible, be sure to mark it as such and upload a new version to AMO. There is documentation on MDN on how to test and mark your add-on.
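For legacy (non-WebExtension) add-ons, the flag in question lives in install.rdf; to the best of my recollection of the MDN documentation (please double-check there before relying on it), it looks like this inside the add-on's Description element:

<!-- inside the <Description> of install.rdf -->
<em:multiprocessCompatible>true</em:multiprocessCompatible>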

If your add-on is not compatible, please head to our resource center where you will find information on how to update it and where to get help. We’re here to support you!

Planet MozillaRelEng Conf 2016: Call for papers

(Suddenly, it's June! How did that happen? Where did the year go already?!? Despite my recent public silence, there's been a lot of work going on behind the scenes. Let me catch up on some overdue blog posts – starting with RelEngConf 2016!)

We’ve got a venue and a date for this conference sorted out, so now it’s time to start gathering presentations and speakers and figuring out all the other “little details” that go into making a great, memorable conference. This means two things:

1) RelEngCon 2016 is now accepting proposals for talks/sessions. If you have a good industry-related or academic-focused topic in the area of Release Engineering, please have a look at the Release Engineering conference guidelines, and submit your proposal before the deadline of 01-jul-2016.

2) Like all previous RelEng Conferences, the mixture of attendees and speakers, from academia and battle-hardened industry, makes for some riveting topics and side discussions. Come talk with others of your tribe, swap tips-and-gotchas with others who do understand what you are talking about and enjoy brainstorming with people with very different perspectives.

For further details about the conference, or submitting proposals, see http://releng.polymtl.ca/RELENG2015/html/index.html. If you build software delivery pipelines for your company, or if you work in a software company that has software delivery needs, I recommend you follow @relengcon, block off November 18th, 2016 on your calendar and book your travel to Seattle now. It will be well worth your time.

I’ll be there – and look forward to seeing you there!
John.

Planet MozillaEnhancing Articles Through Hyperlinks

When reading an article, I often run into a problem: there are links I want to open but now is not a good time to open them. Why is now not a good time?

  • If I open them and read them now, I’ll lose the context of the article I’m currently reading.
  • If I open them in the background now and come back to them later, I won’t remember the context that this page was opened from and may not remember why it was relevant to the original article.

I prototyped a solution: at the end of an article, I attach all of the links that appear in the article, along with some additional context. For example, from my Android thread annotations post:

links with context at the end of an article

To remember why I wanted to open the link, I provide the sentence the link appeared in.

To see if the page is worth reading, I access the page the link points to and include some of its data: the title, the host name, and a snippet.

There is more information we can add here as well, e.g. a “trending” rating (a fake implementation is pictured), a favicon, or a descriptive photo.

And vice versa

You can also provide the original article’s context on a new page after you click a link:

context from where this page was opened

This context can be added for more than just articles.

Shout-out to Chenxia & Stefan for independently discovering this idea and a context graph brainstorming group for further fleshing this out.

Note: this is just a mock-up – I don’t have a prototype for this.

Themes

The web is a graph. In a graph, we can access new nodes, and their content, by traversing backwards or forwards. Can we take advantage of this relationship?

Alan Kay once said, people largely use computers “as a convenient way of getting at old media…” This is prevalent on the web today – many web pages fill fullscreen browser windows that allow you to read the news, create a calendar, or take notes, much like we can with paper media. How can we better take advantage of the dynamic nature of computers? Of the web?

Can we mix and match live content from different pages? Can we find clever ways to traverse & access the web graph?

This blog & prototype provide two simple examples of traversing the graph and being (ever so slightly) more dynamic: 1) showing the context of where the user is going to go and 2) showing the context of where they came from. Wikipedia (with a certain login-needed feature enabled) has another interesting example when mousing over a link:

wikipedia link mouse-over shows next page pop-up

They provide a summary and an image of the page the hyperlink will open. Can we use this technique, and others, to provide more context for hyperlinks on every page on the web?

To summarize, perhaps we can solve user problems by considering:

  • The web as a graph – accessing content backwards & forwards from the current page
  • Computers & the web as a truly dynamic medium, with different capabilities than their print predecessors

For the source and a chance to run it for yourself, check out the repository on github.

Planet MozillaRepsNext – Introduction Video

At the 2015 Reps Leadership Meeting in Paris it became clear that the program was ready for “a version 2”. As the Reps Council had recently become a formal part of Mozilla Leadership, it was time to bring the program to the next level. Literally building on that idea, the RepsNext initiative was born.

Since then several working groups were formed to condense reflections on the past and visions for the future into new program proposals.

At our last Council meetup from 14-17 April 2016 in Berlin we recorded interviews with Council and Peers explaining RepsNext and summarizing our current status.

You can find a full transcript at the end of this blog post. Thanks to Yofie for editing the video!

Please share this video broadly, creating awareness for the exciting future of the Reps program.

 

Getting involved

At the London All Hands, from June 12th to June 17th, we will focus on the open questions around the working groups. We will share our outcomes and open up for discussion after that. For now, there are several discussions to jump into and shape the future of the Reps program:

Additionally, you can help out and track our Council efforts on the Reps GitHub repository.

 

Moving beyond RepsNext

It took us a little more than a year to come up with this “new release” of the Reps program. For the future we plan to take smaller steps improving the program beyond RepsNext. So expect experiments and tweaks arriving in smaller bits and with a higher clockspeed (think Firefox Rapid Release Model).

 

Video transcript

Question: What is RepsNext?

[Arturo] I think we have reached a point of maturity in the program that we need to reinvent ourselves to be adaptors of Mozilla’s will and to the modern times.

Question: How will the Reps program change?

[Pierros] What we’re really interested in and picking up as a highlight are the changes on the governance level. There are a couple of things that are coming. The Council has done really fantastic work on bringing up and framing really interesting conversations around what RepsNext is, and PeersNext as a subset of that, and how do we change and adapt the leadership structure of Mozilla Reps to be more representative of the program that we would like to see.

[Brian] The program will still remain a grassroots program, run by volunteers for volunteers.

[Henrik] We’ve been working heavily on it in various working groups over the last year, developed a very clear understanding of the areas that need work and actually got a lot of stuff done.

[Konstantina] I think that the program has a great future ahead of it. We’re moving to a leadership body where our role is gonna be to empower the rest of the volunteer community and we’re gonna try to minimize the bureaucracy that we already have. So the Reps are gonna have the same resources that they had but they are gonna have tracks where they can evolve their leadership skills and with that empower the volunteer communities. Reps is gonna be the leadership body for the volunteer community and I think that’s great. We’re not only about events but we’re something more and we’re something the rest of Mozilla is gonna rely on when we’re talking about volunteers.

Question: What’s important about this change?

[Michael] We will have the Participation team’s support to have meetings together, to figure out the strategy together.

[Konstantina] We are bringing the tracks where we specialize the Reps based on their interest.

Question: Why do we need changes?

[Christos] There is the need of that. There is the need to reconsider the mentoring process, reconsidering budgets, interest groups inside of Reps. There is a need to evolve Reps and be more impactful in our regions.

Question: Is this important for Mozilla?

[Arturo] We’re going to have mentors and Reps specialized in their different contribution areas.

Question: How is RepsNext helping local communities?

[Guillermo] Our idea, what we’re planning with the changes on RepsNext is to bring more people to the program. More people is more diversity, so we’re trying to find new people, more people with new interests.

Question: What excites you about RepsNext?

[Faisal] We have resources for different types of community, for example if somebody needs hardware or somebody training material, a variety of things not just what we used to have. So it will open up more ways on how we can support Reps for more impactful events and making events more productive.

Planet MozillaFirefox 48 Beta 3 Testday, June 24th

Hello Mozillians,

We are happy to announce that next Friday, June 24th, we are organizing Firefox 48 Beta 3 Testday. We’ll be focusing our testing on the New Awesomebar feature, bug verifications and bug triage. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! See you on Friday!

Planet MozillaContextual Identities on the Web

The Containers Feature in Firefox Nightly enables users to login to multiple accounts on the same site simultaneously and gives users the ability to segregate site data for improved privacy and security.

We all portray different characteristics of ourselves in different situations. The way I speak with my son is much different than the way I communicate with my coworkers. The things I tell my friends are different than what I tell my parents. I’m much more guarded when withdrawing money from the bank than I am when shopping at the grocery store. I have the ability to use multiple identities in multiple contexts. But when I use the web, I can’t do that very well. There is no easy way to segregate my identities such that my browsing behavior while shopping for toddler clothes doesn’t cross over to my browsing behavior while working. The Containers feature I’m about to describe attempts to solve this problem: empowering Firefox to help segregate my online identities in the same way I can segregate my real life identities.

With Containers, users can open tabs in multiple different contexts – Personal, Work, Banking, and Shopping. Each context has a fully segregated cookie jar, meaning that the cookies, indexedDB, localStorage, and cache that sites have access to in the Work Container are completely different than they are in the Personal Container. That means that the user can log in to their work twitter account on twitter.com in their Work Container and also log in to their personal twitter account on twitter.com in their Personal Container. The user can use both twitter accounts in side-by-side tabs simultaneously. The user won’t need to use multiple browsers, an account switcher[1], or constantly log in and out to switch between accounts on the same domain.

User logged into work twitter account in Work Container and personal twitter account in Personal Container, simultaneously in side-by-side tabs

Simultaneously logged into Personal Twitter and Work Twitter accounts.

Note that the inability to efficiently use “Contextual Identities” on the web has been discussed for many years[2]. The hard part about this problem is figuring out the right User Experience and answering questions like:

  • How will users know what context they are operating in?
  • What if the user makes a mistake and uses the wrong context; can the user recover?
  • Can the browser assist by automatically assigning websites to Containers so that users don’t have to manage their identities by themselves?
  • What heuristics would the browser use for such assignments?

We don’t have the answers to all of these questions yet, but hope to start uncovering some of them with user research and feedback. The Containers implementation in Nightly Firefox is a basic implementation that allows the user to manage identities with a minimal user interface.

We hope to gather feedback on this basic experience to see how we can iterate on the design to make it more convenient, elegant, and usable for our users. Try it out and share your feedback by filling out this quick form or writing to containers@mozilla.com.

FAQ

How do I use Containers?

You can start using Containers in Nightly Firefox 50 by opening a New Container Tab. Go to the File Menu and select the “New Container Tab” option. (Note that on Windows you need to hit the alt key to access the File Menu.) Choose between Personal, Work, Shopping, and Banking.

Use the File Menu to access New Container Tab, then choose between Personal, Work, Banking, and Shopping.

Notice that the tab is decorated to help you remember which context you are browsing in. The right side of the url bar specifies the name of the Container you are in along with an icon. The very top of the tab has a slight border that uses the same color as the icon and Container name. The border lets you know what container a tab is open in, even when it is not the active tab.

User interface for the 4 different types of Container tabs

You can open multiple tabs in a specific container at the same time. You can also open multiple tabs in different containers at the same time:

User Interface when multiple container tabs are open side-by-side

2 Work Containers tabs, 2 Shopping Container tabs, 1 Banking Container tab

Your regular browsing context (your “default container”) will not have any tab decoration and will be in a normal tab. See the next section to learn more about the “default container”.

Containers are also accessible via the hamburger menu. Customize your hamburger menu by adding in the File Cabinet icon. From there you can select a container tab to open. We are working on adding more access points for container tabs; particularly on long-press of the plus button.

User Interface for Containers Option in Hamburger Menu

How does this change affect normal tabs and the site data already stored in my browser?

The containers feature doesn’t change the normal browsing experience you get when using New Tab or New Window. The normal tab will continue to access the site data the browser has already stored in the past. The normal tab’s user interface will not change. When browsing in the normal context, any site data read or written will be put in what we call the “default container”.

If you use the containers feature, the different container tabs will not have access to site data in the default container. And when using a normal tab, the tab won’t have access to site data that was stored for a different container tab. You can use normal tabs alongside other containers:

User Interface when 2 normal tabs are open, next to 2 Work Container tabs and 1 Banking Container tab

2 normal tabs (“Default Container tabs”), 2 Work Container tabs, 1 Banking Container tab

What browser data is segregated by containers?

In principle, any data that a site has read or write access to should be segregated.

Assume a user logs into example.com in their Personal Container, and then loads example.com in their Work Container. Since these loads are in different containers, there should be no way for the example.com server to tie these two loads together. Hence, each container has its own separate cookies, indexedDB, localStorage, and cache.

Assume the user then opens a Shopping Container and opens the History menu option to look for a recently visited site. example.com will still appear in the user’s history, even though they did not visit example.com in the Shopping Container. This is because the site doesn’t have access to the user’s locally stored History. We only segregate data that a site has access to, not data that the user has access to. The Containers feature was designed for a single user who has the need to portray themselves to the web in different ways depending on the context in which they are operating.

By separating the data that a site has access to, rather than the data that a user has access to, Containers is able to offer a better experience than some of the alternatives users may be currently using to manage their identities.

Is this feature going to be in Firefox Release?

This is an experimental feature in Nightly only. We would like to collect feedback and iterate on the design before the containers concept goes beyond Nightly. Moreover, we would like to get this in the hands of Nightly users so they can help validate the OriginAttribute architecture we have implemented for this feature and other features. We have also planned a Test Pilot study for the Fall.

To be clear, this means that when Nightly 50 moves to Aurora/DevEdition 50, containers will not be enabled.

How do users manage different identities on the web today?

What do users do if they have two twitter accounts and want to log in to them at the same time? Currently, users may log in to one twitter account using their main browser, and another using a secondary browser. This is not ideal, since the user is then running two browsers in order to accomplish their tasks.

Alternatively, users may open a Private Browsing Window to log in to the second twitter account. The problem with this is that all data associated with Private Browsing Windows is deleted when they are closed. The next time the user wants to use their secondary twitter account, they have to log in again. Moreover, if the account requires two-factor authentication, the user will always be asked for the second factor token, since the browser shouldn’t remember that they had logged in before when using Private Browsing.

Users may also use a second browser if they are worried about tracking. They may use a secondary browser for Shopping, so that the trackers that are set while Shopping can’t be associated with the tasks on their primary browser.

Can I disable containers on Nightly?

Yes, by following these steps:

  1. Open a new window or tab in Firefox.
  2. Type about:config and press enter.
  3. You will get to a page that asks you to promise to be careful. Promise you will be.
  4. Set the privacy.userContext.enabled preference to false.

Can I enable containers on a version of Firefox that is not Nightly?

Although the privacy.userContext.enabled preference described above may be present in other versions of Firefox, the feature may be incomplete, outdated, or buggy. We currently only recommend enabling the feature in Nightly, where you’ll have access to the newest and most complete version.

How is Firefox able to Compartmentalize Containers?

An origin is defined as a combination of a scheme, host, and port. Browsers make numerous security decisions based on the origin of a resource using the same-origin-policy. Various features require additional keys to be added to the origin combination. Examples include the Tor Browser’s work on First Party Isolation, Private Browsing Mode, the SubOrigin Proposal, and Containers.

Hence, Gecko has added additional attributes to the origin called OriginAttributes. When trying to determine if two origins are same-origin, Gecko will not only check if they have matching schemes, hosts, and ports, but now also check if all their OriginAttributes match.

Containers adds an OriginAttribute called userContextId. Each container has a unique userContextId. Stored site data (i.e. cookies) is now stored with a scheme, host, port, and userContextId. If a user has https://example.com cookies with the userContextId for the Shopping Container, those cookies will not be accessible by https://example.com in the Banking Container.
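To make that concrete, here is a conceptual sketch in Python (purely illustrative; Gecko's real implementation is in C++ and far more involved, and none of these names are real Gecko APIs):

from collections import namedtuple

# An origin key that includes an OriginAttribute (userContextId).
Origin = namedtuple("Origin", ["scheme", "host", "port", "user_context_id"])

cookie_jar = {}  # maps an Origin, including its userContextId, to its cookies

def set_cookie(origin, name, value):
    cookie_jar.setdefault(origin, {})[name] = value

def get_cookies(origin):
    # Two loads are "same-origin" only if scheme, host, port AND all
    # OriginAttributes match, so differing userContextIds never collide.
    return cookie_jar.get(origin, {})

shopping = Origin("https", "example.com", 443, user_context_id=1)
banking = Origin("https", "example.com", 443, user_context_id=2)

set_cookie(shopping, "session", "abc123")
assert get_cookies(banking) == {}  # the Banking Container sees no Shopping cookies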

Note that one of the motivations in enabling this feature in Nightly is to help ensure that we iron out any bugs that may exist in our OriginAttribute implementation before features that depend on it are rolled out to users.

How does Containers improve user privacy and security?

The Containers feature offers users some control over the techniques websites can use to track them. Tracking cookies set while shopping in the Shopping Container won’t be accessible to sites in the Personal Container. So although a tracker can easily track a user within their Shopping Container, they would have to use device fingerprinting techniques to link that tracking information with tracking information from the user’s Personal Container.

Containers also offers the user a way to compartmentalize sensitive information. For example, users could be careful to only use their Banking Container to log into banking sites, protecting themselves from potential XSS and CSRF attacks on these sites. Assume a user visits attacker.com in a non-banking container. The malicious site may try to use a vulnerability in a banking site to obtain the user’s financial data, but wouldn’t be able to, since the user’s bank’s authentication cookies are shielded off in a separate container that the malicious site can’t touch.

Is there any chance that a tracker will be able to track me across containers?

There are some caveats to data separation with Containers.

The first is that all requests by your browser still have the same IP address, user agent, OS, etc. Hence, fingerprinting is still a concern. Containers are meant to help you separate your identities and reduce naive tracking by things like cookies. But more sophisticated trackers can still use your fingerprint to identify your device. The Containers feature is not meant to replace the Tor Browser, which tries to minimize your fingerprint as much as possible, sometimes at the expense of site functionality. With Containers, we attempt to improve privacy while still minimizing breakage.

There are also some bugs still open related to OriginAttribute separation. Namely, the following areas are not fully separated in Containers yet:

  • Some favicon requests use the default container cookies even when you are in a different container – Bug 1277803
  • The about:newtab page makes network requests to recently visited sites using the default container’s cookies even when you are in a different container – Bug 1279568
  • Awesome Bar search requests use the default container cookies even when you are in a different container – Bug 1244340
  • The Forget About Site button doesn’t forget about site data from Container tabs – Bug 1238183
  • The image cache is shared across all containers – Bug 1270680

We are working on fixing these last remaining bugs and hope to do so during this Nightly 50 cycle.

How can I provide feedback?

I encourage you to try out the feature and provide your feedback via the quick form or the containers@mozilla.com address mentioned above.

Thank you

Thanks to everyone who has worked to make this feature a reality! Special call outs to the containers team:

Andrea Marchesini
Kamil Jozwiak
David Huseby
Bram Pitoyo
Yoshi Huang
Tim Huang
Jonathan Hao
Jonathan Kingston
Steven Englehardt
Ethan Tseng
Paul Theriault

Footnotes

[1] Some websites provide account switchers in their products. For websites that don’t support switching, users may install addons to help them switch between accounts.
[2] http://www.ieee-security.org/TC/W2SP/2013/papers/s1p2.pdf, https://blog.mozilla.org/ladamski/2010/07/contextual-identity/
[3] Containers Slide Deck

Planet MozillaThe final major player is set to ship WebDriver

It was nearly a year ago that Microsoft shipped their first implementation of WebDriver. I remember being so excited as I wrote a blog post about it.

This week, Apple have said that they are going to be shipping a version of WebDriver that will allow people to drive Safari 10 in macOS. The release notes mention that they have created a Safari driver that will ship with the OS.

If you have ever wondered why this is important, have a read of my last blog post. In Firefox 47, Selenium caused Firefox to crash on startup. The Mozilla implementation of WebDriver, called Marionette and GeckoDriver, would never have hit this problem, because test failures and crashes like this would lead to patches being reverted and never shipped to end users.

Many congratulations to the Apple team for making this happen!

Planet MozillaGet to know the featured add-ons for June

As we do every month, we bring you, first hand, the add-ons we recommend for Firefox users.

Read the post on the Add-ons blog »

Planet MozillaI want to mock with you

This post brought to you from Mozilla’s London All Hands meeting - cheers!

When writing Python unit tests, sometimes you want to just test one specific aspect of a piece of code that does multiple things.

For example, maybe you’re wondering:

  • Does object X get created here?
  • Does method X get called here?
  • Assuming method X returns Y, does the right thing happen after that?

Finding the answers to such questions is super simple if you use mock: a library which “allows you to replace parts of your system under test with mock objects and make assertions about how they have been used.” Since Python 3.3 it’s available simply as unittest.mock, but if you’re using an earlier Python you can get it from PyPI with pip install mock.

So, what are mocks? How do you use them?

Well, in short I could tell you that a Mock is a sort of magical object that’s intended to be a doppelgänger for some object in your code that you want to test. Mocks have special attributes and methods you can use to find out how your test is using the object you’re mocking. For example, you can use Mock.called and .call_count to find out if and how many times a method has been called. You can also manipulate Mocks to simulate functionality that you’re not directly testing, but is necessary for the code you’re testing. For example, you can set Mock.return_value to pretend that an function gave you some particular output, and make sure that the right thing happens in your program.
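For instance, here is a quick toy example (my own, not taken from the mock documentation) of those attributes in action:

from mock import Mock  # on Python 3.3+: from unittest.mock import Mock

fetch = Mock(return_value={"status": "ok"})  # stand-in for some real function

result = fetch("https://example.com")

assert fetch.called                 # True: the mock was invoked
assert fetch.call_count == 1        # and exactly once
assert result == {"status": "ok"}   # return_value is handed back to the caller
fetch.assert_called_with("https://example.com")  # inspect the arguments used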

But honestly, I don’t think I could give a better or more succinct overview of mocks than the Quick Guide, so for a real intro you should go read that. While you’re doing that, I’m going to watch this fantastic Michael Jackson video:

Oh you’re back? Hi! So, now that you have a basic idea of what makes Mocks super cool, let me share with you some of the tips/tips/trials/tribulations I discovered when starting to use them.

Patches and namespaces

tl;dr: Learn where to patch if you don’t want to be sad!

When you import a helper module into a module you’re testing, the tested module gets its own namespace for the helper module. So if you want to mock a class from the helper module, you need to mock it within the tested module’s namespace.

For example, let’s say I have a Super Useful helper module, which defines a class HelperClass that is So Very Helpful:

# helper.py

class HelperClass():
    def __init__(self):
        self.name = "helper"
    def help(self):
        helpful = True
        return helpful

And in the module I want to test, tested, I instantiate the Incredibly Helpful HelperClass, which I imported from helper.py:

# tested.py

from helper import HelperClass

def fn():
    h = HelperClass() # using tested.HelperClass
    return h.help()

Now, let’s say that it is Incredibly Important that I make sure that a HelperClass object is actually getting created in tested, i.e. that HelperClass() is being called. I can write a test module that patches HelperClass, and check the resulting Mock object’s called property. But I have to be careful that I patch the right HelperClass! Consider test_tested.py:

# test_tested.py

import tested

from mock import patch

# This is not what you want:
@patch('helper.HelperClass')
def test_helper_wrong(mock_HelperClass):
    tested.fn()
    assert mock_HelperClass.called # Fails! I mocked the wrong class, am sad :(

# This is what you want:
@patch('tested.HelperClass')
def test_helper_right(mock_HelperClass):
    tested.fn()
    assert mock_HelperClass.called # Passes! I am not sad :)

OK great! If I patch tested.HelperClass, I get what I want.

But what if the module I want to test uses import helper and helper.HelperClass(), instead of from helper import HelperClass and HelperClass()? As in tested2.py:

# tested2.py

import helper

def fn():
    h = helper.HelperClass()
    return h.help()

In this case, in my test for tested2 I need to patch the class with patch('helper.HelperClass') instead of patch('tested.HelperClass'). Consider test_tested2.py:

# test_tested2.py

import tested2
from mock import patch

# This time, this IS what I want:
@patch('helper.HelperClass')
def test_helper_2_right(mock_HelperClass):
    tested2.fn()
    assert mock_HelperClass.called # Passes! I am not sad :)

# And this is NOT what I want!
# Mock will complain: "module 'tested2' does not have the attribute 'HelperClass'"
@patch('tested2.HelperClass')
def test_helper_2_wrong(mock_HelperClass):
    tested2.fn()
    assert mock_HelperClass.called

Wonderful!

In short: be careful of which namespace you’re patching in. If you patch whatever object you’re testing in the wrong namespace, the object that’s created will be the real object, not the mocked version. And that will make you confused and sad.

I was confused and sad when I was trying to mock the TestManifest.active_tests() function to test BaseMarionetteTestRunner.add_test, and I was trying to mock it in the place it was defined, i.e. patch('manifestparser.manifestparser.TestManifest.active_tests').

Instead, I had to patch TestManifest within the runner.base module, i.e. the place where it was actually being called by the add_test function, i.e. patch('marionette.runner.base.TestManifest.active_tests').

So don’t be confused or sad, mock the thing where it is used, not where it was defined!

Pretending to read files with mock_open

One thing I find particularly annoying is writing tests for modules that have to interact with files. Well, I guess I could, like, write code in my tests that creates dummy files and then deletes them, or (even worse) just put some dummy files next to my test module for it to use. But wouldn’t it be better if I could just skip all that and pretend the files exist, and have whatever content I need them to have?

It sure would! And that’s exactly the type of thing mock is really helpful with. In fact, there’s even a helper called mock_open that makes it super simple to pretend to read a file. All you have to do is patch the builtin open function, and pass in mock_open(read_data="my data") to the patch to make the open in the code you’re testing only pretend to open a file with that content, instead of actually doing it.

To see it in action, you can take a look at a (not necessarily great) little test I wrote that pretends to open a file and read some data from it:

import pytest
from mock import patch, mock_open

def test_nonJSON_file_throws_error(runner):
    with patch('os.path.exists') as exists:
        exists.return_value = True
        with patch('__builtin__.open', mock_open(read_data='[not {valid JSON]')):
            with pytest.raises(Exception) as json_exc:
                runner._load_testvars() # This is the code I want to test, specifically to be sure it throws an exception
    assert 'not properly formatted' in json_exc.value.message

Gotchya: Mocking and debugging at the same time

See that patch('os.path.exists') in the test I just mentioned? Yeah, that’s probably not a great idea. At least, I found it problematic.

I was having some difficulty with a similar test, in which I was also patching os.path.exists to fake a file (though that wasn’t the part I was having problems with), so I decided to set a breakpoint with pytest.set_trace() to drop into the Python debugger and try to understand the problem. The debugger I use is pdb++, which just adds some helpful little features to the default pdb, like colors and sticky mode.

So there I am, merrily debugging away at my (Pdb++) prompt. But as soon as I entered the patch('os.path.exists') context, I started getting weird behavior in the debugger console: complaints about some ~/.fancycompleterrc.py file and certain commands not working properly.

It turns out that at least one module pdb++ was using (e.g. fancycompleter) was getting confused about file(s) it needs to function, because of checks for os.path.exists that were now all messed up thanks to my ill-advised patch. This had me scratching my head for longer than I’d like to admit.

What I still don’t understand (explanations welcome!) is why I still got this weird behavior when I tried to change the test to patch 'mymodule.os.path.exists' (where mymodule.py contains import os) instead of just 'os.path.exists'. Based on what we saw about namespaces, I figured this would restrict the mock to only mymodule, so that pdb++ and related modules would be safe - but it didn’t seem to have any effect whatsoever. But I’ll have to save that mystery for another day (and another post).

Still, lesson learned: if you’re patching a commonly used function, like, say, os.path.exists, don’t forget that once you’re inside that mocked context, you no longer have access to the real function at all! So keep an eye out, and mock responsibly!

Mock the night away

Those are just a few of the things I’ve learned in my first few weeks of mocking. If you need some bedtime reading, check out these resources that I found helpful:

I’m sure mock has all kinds of secrets, magic, and superpowers I’ve yet to discover, but that gives me something to look forward to! If you have mock-foo tips to share, just give me a shout on Twitter!

Dev.OperaProgressive Web Apps in Nigeria and Kenya: a Double Interview

While keeping an eye on Twitter mentions of progressive web apps, I came across a number of conversations between web developers in Nigeria and Kenya. Intrigued, I got in touch with them to hear their thoughts on progressive web apps, resulting in this double interview.

Hi! Please tell us a little about yourselves.

Constance Okoghenun

Constance Okoghenun, @okoghenun: I’m from Lagos, Nigeria. I work as a UI Specialist for Konga, Nigeria’s largest online marketplace helping tens of thousands of merchants selling over 200,000 products on our platform. At Konga I help to build products that provide great shopping experiences for our customers. I also help organise and speak from time to time at Usable (formerly UXLagos), where designers, developers and enthusiasts here in Lagos and, recently, Abuja Nigeria, have monthly conversations about how to improve the experiences we deliver to our customers.

I love the web and have been working on the web for about 5 years now. It has been a delight to see it mature and become better as a platform, enabling me and other developers and designers to offer increasingly better experiences to our users.

Eugene Mutai

Eugene Mutai, @kn9ts: I’m from Nairobi, Kenya. I’m currently working for/with Andela. Andela is a new startup with a revolutionary model, founded on the principle that brilliance is evenly distributed around the world, but opportunity is not. Andela is transforming the brightest minds in Africa into world-class developers, who are then fused into startups or Fortune 500 companies all over the world. I am currently working as a full-time full-stack developer for RBI, a Canadian multinational company (and one of the largest fast food brands in the world) whose brands include Burger King and Tim Hortons.

I have been a web enthusiast and developer for at least four years now. I am also a Google Developer Expert on Web Technologies, a recognition attributed to community contributions such as being the lead manager of the Nairobi JavaScript Community and growing it to over 450 members; I also regularly speak at a variety of programming and tech meetups.

We met online because you were talking about Progressive Web Apps (PWAs) on Twitter. What is it about them that excites you?

Eugene: I haven’t talked about progressive web apps on Twitter only; I have evangelized about them wherever I got the chance to. My last talk on progressive web apps was at Nairobi Tech Week, the largest tech event in Sub-Saharan Africa this year. I’ve also written a blog post about it — Progressive Webs Apps: Africa may be the biggest gainer — which is a robust answer to the above question.

In summary, I am excited that finally web technology is earning its place on mobile. PWAs are changing the way web apps have been viewed: a secondary, less-intuitive and less-performant alternative to using a mobile app’s service. While settling in on mobile, leveraging its greatest advantage over native mobile apps — very low friction in usage — it may solve problems that make the daily usage of native mobile applications in Africa a challenge, e.g. the size of apps and requirement of new downloads every time they are updated, among many others.

On a side note, as a passionate web developer, it’s been tons of losing development battles with Android developers, and I’m finally unquantifiably happy we are becoming a force to be reckoned with.

Constance: There are lots of exciting things about progressive web apps and the new range of possibilities they bring to the web. If I were to pick just one feature of progressive web apps that I’m super-excited about, it’s the ability to detect and handle offline / unreliable network conditions with service workers.

This is vitally important: web apps finally have a place on users’ home screens, so being slow to load or breaking because of an unreliable network is not acceptable. And in my part of the world, where getting good mobile connectivity is pretty difficult, service workers give developers an opportunity to build apps that aren’t just fast but also resilient.

An example of this is Konga’s PWA: when we detect that the user is offline and has items in their shopping cart, we provide them the option of going through an offline checkout experience, thereby preventing internet connectivity from becoming a bottleneck when they shop with us.
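For readers who want to see what such offline handling looks like in practice, here is a minimal, illustrative service worker sketch. It is not Konga’s actual code: the cache name and URLs are invented, and it assumes the file is compiled as a service-worker script with TypeScript’s WebWorker lib.

```ts
/// <reference lib="webworker" />
// Illustrative offline-fallback service worker (not Konga's actual code).
// Cache name and URLs are invented for the example.
const sw = self as unknown as ServiceWorkerGlobalScope;

const CACHE = 'app-shell-v1';
const PRECACHE_URLS = ['/', '/offline-checkout.html', '/app.css', '/app.js'];

// Pre-cache the app shell and an offline checkout page at install time.
sw.addEventListener('install', (event: ExtendableEvent) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(PRECACHE_URLS))
  );
});

// Answer requests from the cache when possible; if the network is needed
// and unreachable (the user is offline), fall back to the offline page.
sw.addEventListener('fetch', (event: FetchEvent) => {
  event.respondWith(
    (async () => {
      const cached = await caches.match(event.request);
      if (cached) return cached;
      try {
        return await fetch(event.request);
      } catch {
        const offline = await caches.match('/offline-checkout.html');
        if (offline) return offline;
        return new Response('You appear to be offline.', { status: 503 });
      }
    })()
  );
});
```

A page registers such a worker with navigator.serviceWorker.register('/sw.js'); once installed, the offline behaviour applies to later visits.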

Constance, what are the characteristics of the Nigerian market that make PWAs an appealing development route?

Constance: Data is pretty expensive in Nigeria. Over the last 12 months data prices have come down, but not enough to make the majority of the Nigerian internet market insensitive to data costs. On average, it costs about NGN 1000 (about $5) to get 1GB of data. In a country where the minimum monthly wage is about NGN 18,000 (about $90), that is already 5.6% of a month’s pay. So, as you can imagine, Nigerians are extremely data sensitive.

A huge number of smartphone users do not download apps from app stores and never update the apps they have. Instead, people side-load apps and other content from third parties who have the apps downloaded to a PC and, for a small fee, will install them from their computers onto users’ phones. Among younger, more tech-savvy Nigerians the app Xender is really popular, as it lets them skip app downloads and just send apps from one device to another.

As you can imagine, these workarounds for downloading apps from the app store create all sorts of problems for developers — from app distribution, to getting the most recent, bug-free versions of apps in the hands of users.

With PWAs giving web apps a place on the user’s home screen without the download overhead of native apps, things get more exciting: developers in Nigeria can now always deliver a great, up-to-date experience to their users. They can also re-engage users via push notifications and improve conversions.

Eugene, is it similar in Kenya?

Eugene: The average pay in Kenya is approximately $300-400 a month. In a good month, with tough budgeting choices, one can barely spare $30 after covering monthly expenses. It costs $1 for 135MB of data and $5 for 1GB. On average a user has 15+ apps installed (extrapolated from global mobile app install stats), and at least half of those apps demand to be updated every day or two (the best case is weekly).

If each app is on average 15MB, that means roughly 150MB of data used up on update downloads alone, before considering the user’s own personal data usage; at $1 for 135MB, keeping up with those weekly updates can easily consume $5 of data a month. I’d say that is very challenging for a user who wants to stay up to date. Some updates are also just bug fixes; I kid you not, no one (myself included) would update an app for that unless the bug is serious.

Constance, what were the challenges of making the Konga PWA?

Constance: The biggest challenge for us at Konga was time: we had only about six weeks to build a stable version of our app, and we delivered it in time. Thinking offline-first was also something we had to get used to. Apart from those, building our PWA wasn’t very difficult.

And how was user feedback for your PWA, Constance?

Constance: Though we haven’t officially launched the app yet, feedback has been overwhelmingly positive. It requires 84% less data to complete the first transaction than the previous mobile web experience, and 63% less data for the initial load, and we are not even done building and optimizing the app. It will certainly be a delight for our users when it gets into their hands.

To quote our CEO Shola Adekoya:

We estimate that with our new light, super-fast, UX-rich browsing capability, customers’ data consumption will fall dramatically.

Back to Eugene now. How was the feedback on your evangelism, Eugene?

Eugene: I’ve been thoroughly happy with the feedback I have got from my blog post and from all my talks on PWAs. At my last talk, a member of the audience who was an Android developer (among many others in the room) became anxious and disturbed that PWAs would render native Android developers jobless. The best answer I could give was that we web developers are only moving into the mobile space, not kicking anyone out (we’re in a sharing economy now), and that the hype is meant to create awareness among web developers, not to cause panic among Android developers.

Do you think those Android developers’ concerns are justified? Do you see PWAs becoming dominant in the kind of markets that you work in?

Eugene: No, I don’t think they’re justified. First of all, awareness of PWAs is something that needs to be urgently addressed. Every time I give a talk on PWAs, the audience’s reaction is “Why didn’t I know about this? This is revolutionary!”, and the question that always follows is “Where can I find more information and learn about PWAs?”

Furthermore, I don’t envision PWAs becoming dominant in the mobile apps space, but I do see their presence increasing vastly once knowledge about them is in the hands of every web developer. PWAs solve a number of challenges for entrepreneurs: they’re a cross-platform solution, there’s very low friction compared to native apps, there’s no APK size limitation, and so on.

Constance: I also don’t think that fear of PWAs is justified. I think both PWAs and native SDKs are just tools in a good developer’s tool box. There are use cases where a PWA would be perfect, and there are others for which a native app is best. Each developer just needs to evaluate when to use what. Moreover, there is nothing stopping native mobile developers from working on the web; Konga’s PWA was built by both our current mobile team (Android and iOS developers) and our front-end team.

What else would you like to see browsers do to make PWAs more widely available and accepted?

Constance: Lots of developers use frameworks; it would be great to see vendors reach out to the maintainers of those frameworks and work with them on improving their tooling and processes, so that it’s super easy to build PWAs with those frameworks and tools. More importantly, I’d like to see a more consistent implementation of the web specs that PWAs are based on.

Eugene: Opera and Google Chrome have done a wonderful job so far. When talking about PWAs being installable, the “Add to home screen” option isn’t that convincing. I’d like to see something like “Install Web App” or “Add to Your Apps”.

Browsers should also offer APIs so that PWAs can compete on an equal footing with native mobile apps. I am aware that the missing APIs, such as Bluetooth, NFC and USB access, are currently being worked on, so this is just a matter of time.
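For context, Chromium-based browsers already let a page intercept the install prompt through the non-standard beforeinstallprompt event and present its own wording, for example an “Install Web App” button. Below is a minimal sketch; the button id is an assumption, and the event interface is declared by hand because it is not in the standard TypeScript DOM typings.

```ts
// Sketch: replace "Add to home screen" with a custom "Install Web App" button
// using the non-standard, Chromium-only beforeinstallprompt event.
// The #install-button element is an assumption for this example.
interface BeforeInstallPromptEvent extends Event {
  prompt(): Promise<void>;
  userChoice: Promise<{ outcome: 'accepted' | 'dismissed' }>;
}

let deferredPrompt: BeforeInstallPromptEvent | null = null;
const installButton = document.getElementById('install-button') as HTMLButtonElement;

window.addEventListener('beforeinstallprompt', (event) => {
  event.preventDefault();                        // suppress the default mini-infobar
  deferredPrompt = event as BeforeInstallPromptEvent;
  installButton.hidden = false;                  // reveal our own install button
});

installButton.addEventListener('click', async () => {
  if (!deferredPrompt) return;
  deferredPrompt.prompt();                       // show the native install dialog
  const choice = await deferredPrompt.userChoice;
  console.log('Install prompt outcome:', choice.outcome);
  deferredPrompt = null;
  installButton.hidden = true;
});
```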

Thank you both!

Planet MozillaWho uses voice control?

Voice control (Siri, Google Now, Amazon Echo, etc.) is not a very useful feature to me, and I wonder if I’m in the minority.

Why it is not useful:

  • I live with other people.
    • Sometimes one of the people I live with, or I myself, may be sleeping. If someone speaks out loud to the TV or a phone, that might wake the others up.
    • Even when everyone is awake, that doesn’t mean we are together. It annoys me when someone talks to the TV while watching basketball. I don’t want to find out how annoying it would be to listen to someone in another room tell the TV or their phone what to do.
  • I work with other people.
    • If I’m having lunch, and a co-worker wants to look something up on his/her phone, I don’t want to hear them speak their queries out loud. I actually have coworkers who use their phones as boomboxes to listen to music while eating lunch, as if no one else can hear it, or everyone has the same taste in music, or everyone wants to listen to music at all during lunch.

The only times I use Siri are:

  • When I am in the car.
  • When I am speaking with others in a social setting, like a pub, and we want to look something up pertaining to the conversation.
  • When I’m alone.

When I saw Apple introduce tvOS, the dependence on Siri turned me off from upgrading my Apple TV.

Am I in the minority here?

I get the feeling I’m not. I cannot recall anyone I know using Siri for anything other than entertainment with friends. Controlling devices with your voice in public must be Larry David’s worst nightmare.
