Planet Mozilla: Rep of the Month – July 2015

Please join us in congratulating Mohamed Hafez for being selected as Mozilla Rep of the Month for July 2015.

Mohamed Hafez

Mohamed Hafez is an incredible Rep with great capabilities from Egypt. He has been part of the big Mozilla tour in Egypt, spreading the Mozilla love and mission all over his region.
His latest big event was Mozilla Egypt Iftar (Arabic: إفطار), which featured talks about Firefox OS and ways to get involved with Mozilla, as well as a formal Mozilla Egypt meeting. Additionally, he is involved in the coordination of the local launch campaign for Firefox OS in Egypt.

Don’t forget to congratulate him on Discourse!

Planet Mozilla: WebDriver now a living standard

The WebDriver specification is now officially a living standard. Practically this means that all changes are automatically published to http://www.w3.org/TR/webdriver/.

This brings an end to the era of forever-outdated (two years in our case!) technical reports. It also helps bridge the disconnect many readers experienced when they looked for information on our specification.

This is made possible with the Echidna tool that has recently been developed at the W3C. It integrates with GitHub and Travis, and lets you trigger the publishing steps when changes land on a specific branch in your source repository.

A possible future enhancement is abandoning the now-superfluous master branch in favour of making the autopublishing gh-pages branch the default. The two-step landing process seems more tuned towards a levelled Editor’s Draft-to-Working Draft model.

Thanks to tripu and Michael[tm] Smith for doing the legwork.

Planet Mozilla: curl me if you can

I got this neat t-shirt in the mail yesterday. 3scale is running a sort of marketing campaign right now, giving away this shirt to those who participate, and they were kind enough to send one to me!

Curl me if you can

Planet Mozilla: happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1189075] keyboard shortcut for going into edit mode conflicts with Firefox’s tab groups feature
  • [1188339] Increase length of all tokens value for greater security
  • [1185856] Tabbing out of the keyword field should not select the first available keyword
  • [1189172] remove link to ‘release notes’ from index page, and point ‘help’ to bmo.readthedocs.org
  • [1188561] Pre-populate form.fxos.feature fields with GET parameters
  • [1189362] Fix memory leak in Bugzilla::Bug->comments
  • [1190255] modal UI is used immediately after bug creation when using a non-standard form, even if preferenced off

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Planet Mozilla: Erase and Rewind – a talk about open web enthusiasm at Open Web Camp

I just flew from San Francisco to Seattle, still suffering from the aftermath of the after-party of Open Web Camp 7, a gathering of web enthusiasts that ran for seven years and showed that you can teach, inspire and meet without having to pay a lot. The ticket price was $10, and even that was mostly to keep people from grabbing tickets and not showing up. All the money left over was then donated to a great cause. Thank you to everyone involved, especially John Foliot, for seven years of following a dream and succeeding. And also for moving on whilst you are still happy with what you do.

My presentation at the event, “Erase and rewind – a tale of innovation and impatience”, discussed the problems I have encountered over the years advocating for the open web: the gaps I see in our storytelling, and the loss of focus we suffered when smartphones became a new form factor that seemed great for the web but very soon became its biggest problem.

There’s a screencast of the presentation on YouTube

The slides are available on Slideshare

I went on a bit of a rant, but I think there is a big problem: the people who advocate the great ideas of the web clash with those who want to innovate on it. There are a lot of events going on right now that want to achieve the same goal, but keep violating the best practices of others. We need to rally to keep the web relevant and alive, not declare that what we do is the one true way.

Planet Mozilla: Friend of AMO: Amir Faryar Zahedi

Our newest Friend of AMO is Amir Faryar Zahedi! Amir began contributing as an add-on reviewer over a year ago, and has since reviewed nearly 10,000 add-ons. 60-80% of add-ons on addons.mozilla.org (AMO) are reviewed by volunteer contributors, so it’s Mozillians like Amir who play a key part in keeping the community running. He says,

“In a world driven by profit, it is good to be part of a community that aspires to providing free software for everyone.”

Big thanks to Amir!

We wrapped up a productive July, and now the August contribution wiki is ready.

There is a new section that tracks how many “goodfirstbugs” were fixed in the last month. In July, volunteer contributors fixed 9 bugs!

Thanks to everyone for your continued support.

 

Dev.Opera: Opera 31 released

Opera 31 (based on Chromium 44) for Mac, Windows, and Linux is out! To find out what’s new for users, see our Desktop blog. Here’s what it means for web developers.

CSS multi-column layout

Chromium now includes a brand new implementation of multi-column layout by Opera engineer Morten Stenshorne, solving historic issues with incorrect column balancing and interaction with compositing (hardware-accelerated layers). Even if those issues could have been fixed in the old implementation, it had another big problem code-wise: it needed a lot of hooks and special code in central layout code. The new implementation gives better support, and cleaner code that’s easier to maintain.

Note that for now, the -webkit- prefix is still required when using column layout-related styles.

A multi-column demo is available.

document.scrollingElement

Browsers disagree on whether html or body reflects the viewport for scrollTop et al. Some sites are using UA sniffing to decide which to use. Browsers want to align and use html (in standards mode), so sites that UA sniff need to be changed. To assist with this transition, Chromium now implements the document.scrollingElement API. Developers can now use that, falling back to a polyfill or whatever solution they were using previously.
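A minimal sketch of that transition pattern (the helper name and fallback logic here are illustrative, not part of the API):

```javascript
// Prefer the standard document.scrollingElement; otherwise fall
// back to guessing based on the rendering mode, roughly as sites
// did before this API existed.
function getScrollingElement(doc) {
  if (doc.scrollingElement) {
    return doc.scrollingElement;
  }
  // Standards mode scrolls on <html>, quirks mode on <body>.
  return doc.compatMode === 'CSS1Compat' ? doc.documentElement : doc.body;
}

// Usage: getScrollingElement(document).scrollTop = 0;
```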

See Simon’s write-up on fixing the scrollTop bug for more background.

ES6 computed property names

ES6 computed property names enable specifying properties in object literals whose names are the result of expressions rather than just hardcoded identifiers or strings.

const prefix = 'foo';

const object = {
	[prefix + 'bar']: 'Hello',
	[`${prefix}baz`]: 'world'
};

`${object.foobar} ${object.foobaz}!`;
// → 'Hello world!'

ES6 Unicode code point escapes

Unicode code point escapes such as \u{1F389} are now supported in string and template literals. This way, any symbol can be escaped using its Unicode code point.

Before ES6, JavaScript supported the so-called “Unicode escape sequences” of the form \uXXXX, which only allowed up to 4 hexadecimal digits. Code points requiring more than four hexadecimal digits (such as the immensely important party popper emoji) had to be represented using UCS-2/UTF-16 surrogate pairs.

// Let’s represent the U+1F389 PARTY POPPER Unicode symbol
// in a JavaScript string using escape sequences.

// In ES5, we’d have to calculate the surrogates manually,
// and use two separate escape sequences:
console.log('\uD83C\uDF89');
// '🎉'

// In ES6, we don’t have to calculate anything — we can
// simply use Unicode code point escapes:
console.log('\u{1F389}');
// '🎉'
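Relatedly, ES6 also added String.fromCodePoint, which builds the same string at runtime from a numeric code point:

```javascript
// String.fromCodePoint accepts full code points, unlike the
// older String.fromCharCode, which is limited to 16-bit units.
console.log(String.fromCodePoint(0x1F389));
// '🎉'
```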

Cache.prototype.add(request)

The add method on Cache instances is now supported. add takes a RequestInfo object, fetches it, and then adds the resulting response object to the cache.
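In effect, cache.add(request) replaces a separate fetch-then-put sequence. A tiny helper sketch (the helper name, cache name, and URLs below are our own, for illustration):

```javascript
// cache.add(request) is roughly fetch(request) followed by
// cache.put(request, response), in a single call. This helper
// precaches several URLs at once.
function precache(cache, urls) {
  return Promise.all(urls.map(url => cache.add(url)));
}

// In a service worker's install handler this might be used as:
// self.addEventListener('install', event => {
//   event.waitUntil(
//     caches.open('static-v1').then(cache =>
//       precache(cache, ['/', '/styles/main.css'])));
// });
```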

Request.prototype.context

The read-only context property of the Request interface contains the context of the request (e.g. 'audio', 'image', 'iframe'). This indicates what sort of resource is being fetched.

The context of a request is only relevant in the ServiceWorker API. A service worker can make decisions based on whether the URL is for an image, or an embeddable object such as a <video>, <iframe>, etc.
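As a sketch of the kind of decision a service worker could make on this property (the function name and fallback paths are invented for illustration):

```javascript
// Pick an offline fallback resource based on what kind of
// resource the request is for.
function offlineFallbackFor(context) {
  switch (context) {
    case 'image':
      return '/offline/placeholder.png';
    case 'audio':
    case 'video':
      return '/offline/media-unavailable.html';
    default:
      return null; // no special fallback
  }
}

// In a service worker's fetch handler:
// self.addEventListener('fetch', event => {
//   const fallback = offlineFallbackFor(event.request.context);
//   if (fallback) {
//     event.respondWith(
//       fetch(event.request).catch(() => caches.match(fallback)));
//   }
// });
```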

Web Audio API updates

Matching a change in the Web Audio API specification, the buffer property of an AudioBufferSourceNode can no longer be set more than once. This protects developers from the lack of control over when the new source starts. Previously (as of Chromium 42 and Opera 29), assigning the buffer property more than once merely logged a deprecation message to the DevTools console. Now, doing so throws an error.
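The pattern this enforces is to create a fresh AudioBufferSourceNode per playback and set its buffer exactly once (a sketch, assuming a browser AudioContext):

```javascript
// AudioBufferSourceNodes are cheap, one-shot objects: create a
// new one for each playback instead of reassigning .buffer on
// an existing node.
function playBuffer(audioCtx, buffer) {
  const source = audioCtx.createBufferSource();
  source.buffer = buffer; // set once, never reassigned
  source.connect(audioCtx.destination);
  source.start();
  return source;
}
```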

The channel order of ChannelMergerNode is now static after instantiation, matching the latest spec. This change addresses various issues with the previous implementation.

What’s next?

If you’re interested in experimenting with features that are in the pipeline for future versions of Opera, we recommend following our Opera Developer stream.

Planet Mozilla: Mozilla Weekly Project Meeting

The Monday Project Meeting

Internet Explorer blog: Recapping the July 2015 TC39 Committee meeting in Redmond

For most of Microsoft, the end of July means the excitement of our //OneWeek celebration and welcoming tens of millions of new users to Windows 10 and Microsoft Edge! Some who follow JavaScript closely are excited for another reason – the July TC39 meeting in Redmond. ECMA’s TC39 is the committee responsible for developing the JavaScript language – officially called ECMAScript – and since times immemorial (c. 2008) the committee descends on Redmond during the warm summer months to hash out contentious language design issues (and enjoy some high-quality campus cafeteria food).

July TC39 meeting in Redmond

A few proposals were advanced to later stages of the new agile TC39 process, signaling a great amount of progress by the committee and feature champions. Notably, Array.prototype.includes and the exponentiation operator were advanced to “Candidate” status (signaling the proposal is ready for implementations to gather further feedback), and Async Functions advanced to “Draft” status (formalizing the proposal in a precise specification). Additionally, we discussed and resolved a number of technical issues in both ES6 (adding a few more errata to the list) and the upcoming ES2016.

Another outcome of this meeting is that Allen Wirfs-Brock, the long-time editor of the ECMAScript specification, has stepped down as editor. We would like to extend our thanks to Allen for his innumerable contributions over the last several years. I’m honored to have been elected as the new editor, and look forward to working closely with TC39 and the rest of the community on making the next versions of ECMAScript even more awesome!

Stay cool out there, and don’t forget your semicolons!

Brian Terlson, Senior Program Manager, Chakra Team

Planet WebKit: Web Inspector Interface Changes

Quickly switching tasks is a common action when developing a web site. You might be debugging JavaScript one minute — the next minute you might be poking around in an XHR to validate server data. A designer might only care about the DOM tree and CSS editing. A backend developer might only need network information and a console. Making sure these task areas are quick to access is key for a tool containing such broad functionality.

Web Inspector Tab Icons

Catering to these disparate tasks, each of Web Inspector’s core functions has been divided out into its own tab. Like the tabs in Safari, they can be rearranged to fit your workflow and closed if you don’t need them. Quickly switch among tasks for Network, Elements, Timelines, Console, Debugger, Resources, Storage, and Search Results with this flexible tab-based interface.

Web Inspector Tab Bar

The everyday tasks of manipulating a page’s DOM tree and styles are now contained in the dedicated Elements tab. New improvements to our style editing experience also help you to stay productive.

Web Inspector Elements Tab

You can stay on top of resource requests in the Network tab — allowing for continuous network request monitoring, without the overhead of a full timeline recording. It includes filterable access to all resource requests and XHRs, along with a details sidebar for quick-hit information about individual requests and the server responses.

Web Inspector Network Tab

These enhancements are available to use in WebKit Nightly Builds and the Safari 9 beta. Help us make Web Inspector even better by sending us quick feedback on Twitter (@xeenon, @jonathandavis, @JosephPecoraro), filing a bug report, or even contributing your own enhancements to Web Inspector.

For more information about other Web Inspector changes, please watch Using Safari to Deliver and Debug a Responsive Web Design from WWDC 2015.

Planet Mozilla: Reps Weekly Call – July 30th 2015

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

ParticipationSmall

Summary

  • Participation team Q3 goals.
  • Featured events.
  • Help me with my project.
  • What’s up with Council this week.

AirMozilla video

Detailed notes

Shoutouts to Flaki, Michael, all Webmaker party organizers, Ankit, Dian Ina, and Reps Portal devs.

Participation Team: Q3 goals

Rosana and Ruben talked about the Participation Team and the goals for this quarter.

Some highlights:

  • George is here to stay and will be leading the team in the long term.
  • The team will work with partners in the functional areas to develop relevant initiatives volunteers can chime in on. WilliamQ and Brian will focus on this.
  • Global-local work will focus on supporting Reps and regional communities working together, supporting and coaching volunteers to be able to work in top initiatives. Rosana, Rubén, Guillermo, Konstantina and Francisco will focus on this.
  • Special focus on developing leaders to have more impact, running workshops around this. Emma will be leading these efforts.
  • To support more technical opportunities the Participation Tech Platform will work on this front, with Pierros leading and with the help of Tasos, Nemo, Nikos and Emma.
  • In order to be more effective there will be more focus on 10 specific countries to enable more impact.
  • The main initiatives will be: Firefox OS Ignite, Brand building in Germany, Firefox for Android in India, Midterm plan for 3 communities, Leadership program and Technology group.
  • The team will keep working using Heartbeats.

Check the presentation and the team’s github.

Featured events

  • Maker Parties/Webmaker events
    • Delhi: From July 31st to August 15th.
    • Bacolod, Philippines: July 31st.
  • WebAssembly for humans. PereiraJS. Pereira, Colombia. July 30th.
  • Firefox OS introduction in Maseno University. Kisumu, Kenya. July 31st.
  • Firefox OS Africa Workshop. Paris, France. July 31st, August 2nd.
  • Weekly MozTW Lab, Taipei. Community space Taipei. July 31st.
  • Stumbling in a box, Kerala. Kerala, India. August 2nd to 22nd.

Help me with my project!

Reps portal

Currently there is a lot going on with the Participation team tooling and the Reps portal devs need help to fix bugs and add new features.

They can devote some time to mentor new people to get involved with the portal development.

Check the repository and current bugs.

QA

The new bug triage tool is now live. The idea is that users sign up to get a list of untriaged bugs to move them to the proper category and make them actionable.

Give it a try and let them know what you think.

What’s up with Council this week

  • Mentor selection criteria: The feedback has been discussed and a final draft will be shared soon.
  • Update of the Budget SOP to make it clearer and easier to read.
  • Blog post for ROM of July.
  • Working together with Francisco to stay updated about future important events and get more background from him.
  • Helping out with the Participation Team Goals for Q3.

Raw etherpad notes.

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Planet Mozilla: Firefox 41.0 Aurora Testday Results

Hello Mozillians!

As you may already know, last Friday – July 31st – we held a new Testday event, for Firefox 41.0 Aurora.

We’d like to take this opportunity to thank Bolaram Paul, Moin Shaikh, gaby2300, kenkon and the Bangladesh QA Community: Nandita Roy, Nazir Ahmed Sabbir, MD. Owes Quruny Shubho, Sadia Islam, Ashickur Rahman, Mohammad Maruf Islam, Saheda Reza Antora, Muktasib Un Nur, Rezaul Huque Nayeem, Md. Ehsanul Hassan, Eyakub, Md. Jahid Hasan Fahim, Md. Mahmudul Huq and Nahida Akter for getting involved in this event and making Firefox the best it can be.

Also a big thank you goes to all our active moderators.

Keep an eye on QMO for upcoming events! :)

Planet Mozilla: Tying ecosystems through browsers

One of the principles behind HTML5, and the community building it, is that the specifications that say how the Web works should have enough detail that somebody reading them can implement the specification. This makes it easier for new Web browsers to enter the market, which in turn helps users through competitive pressure on existing and new browsers.

I worry that the Web standards community is in danger of losing this principle, quite quickly, and at a cost to competition on the Web.

Some of the recent threats to the ability to implement competitive browsers are non-technical:

  • Many leading video and audio codecs are subject to non-free patent licenses, due at least in part to the patent policies and practices of the standards bodies building such codecs.
  • Implementing EME in a way that is usable in practice requires having a proprietary DRM component and then convincing the sites that use EME to support that component. This can be done by building such a component or forming a business relationship with somebody else who already has. But this threat to browser competition is at least partly related to the nature of DRM, whose threat model treats the end user as the attacker.

Many parts of the technology industry today are dominated by a small group of large companies (effectively an oligopoly) that have an ecosystem of separate products that work better together than with their competitors' products. Apple has Mac OS (software and hardware), iOS (again, software and hardware), Apple TV, Apple Pay, etc. Google has its search engine and other Web products, Android (software only), Chrome OS, Chromecast and Google Cast, Android Pay, etc. Microsoft has Windows, Bing, Windows Phone, etc. These products don't line up precisely, but they cover many of the same areas while varying based on the companies' strengths and business models. Many of these products are tied together in ways that both help users and, since these ties aren't standardized and interoperable, strongly encourage users to use other products from the same company.

There are some Web technologies in development that deal with connections between parts of these ecosystems. For example:

  • The Presentation API defines a way for a Web page to show content on something like a Chromecast or an Apple TV. But it only specifies the API between the Web page and the browser; the API between the browser and the TV is completely unspecified. (Mozilla participants in the group tried to change that early in the group's history, but gave up.)
  • The future Web Payments Working Group (which I wrote about last week) is intended to build technology in which the browser connects a user making a payment to a Web site. This has the risk that instead of specifying how browsers talk to payment networks or banks, a browser is expected to make business deals with them, or make business deals with somebody who already has such deals.

In both cases, specifying the system fully is more work. But it's work that needs to happen to keep the Web open and competitive. That's why we've had the principle of complete specification, and it still applies here.

I'm worried that the ties that connect the parts of these ecosystems together will start running through unspecified parts of Web technologies. This would, through the loss of the principle of specification for competition, make it harder for new browsers (or existing browsers made by smaller companies) to compete, and would make the Web as a whole a less competitive place.

Planet Mozilla: This Week in Rust 90

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

From the Blogosphere

New Releases & Project Updates

Slides and talks from RustCamp!

RustCamp was on Saturday, August 1st. It was a lovely event populated by lovely people. If you couldn't make it, here are the slides from some of the talks. Hopefully the remainder of the slides will become available this week. Video recordings will be available at an indeterminate future date.

What's cooking on nightly?

130 pull requests were merged in the last week.

New Contributors

  • Agoston Szepessy
  • Andrew
  • Andrew Kuchev
  • Blake Loring
  • Daniel Albert
  • diaphore
  • Jeehoon Kang
  • Kieran Hunt
  • krumelmonster
  • Mark Buer
  • Nicolette Verlinden
  • Ralf Jung
  • Taliesin Beynon

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Internals discussions

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

There are some jobs writing Rust! This week's listings:

  • Assistant Researcher in Karlsruhe, Germany for embedded development on ARM stm32. Contact Oliver Schneider

Quote of the Week

It should be noted that the authentic Rust learning experience involves writing code, having the compiler scream at you, and trying to figure out what the heck that means. I will be carefully ensuring that this occurs as frequently as possible.

From @Gankro's Learning Rust With Entirely Too Many Linked Lists.

Thanks to @carols10cents for the tip. Submit your quotes for next week!

Planet Mozilla: Alaska Cruise Log Day 1

Crown Princess view, another boat in the distance, and Mt Rainier in the far distance

<aside>

Foreword

Yesterday at Seatac I boarded a bus for Seattle Harbor and, lacking any phone or mifi service, promptly left the grid. The following is a rough chronology of the events of day one of seven days aboard the Crown Princess cruiseliner with my family for my mom’s 70th birthday. I’m writing these summaries (typically the day after) based on items recorded in my personal log and other offline tools and devices, not knowing when they’ll make it to my site.

</aside>

We boarded the Crown Princess not long after checking in at the port, after having them verify our passports, take our self-assessed medical questionnaires, and give us our boarding cards. My parents had kindly arranged early boarding for us.

Just before we stepped onto the gangplank they checked our boarding cards and took our photos (without glasses). Thoughts of a complimentary cruise yearbook briefly went through my head.

Finding our rooms easily, we unloaded our backpacks and put away our small hand luggage. We had checked in our rollaways at the Princess Cruises counter at Seatac for them to bring directly to our state rooms later that afternoon. Apparently hotel rooms on ships are called “state rooms”.

<aside>

All Nighter

I was up all night the night before, alternating between laundry/cleaning and doing online tasks in some rough order of priority. Folded sports clothing, picking out just one t-shirt and pair of shorts to pack. Responded to a few things on IRC, editing wiki pages accordingly. Folded numerous clean towels, having hosted four the past weekend.

Wrote a letter of recommendation and sent it. Cleaned house some more. From lots of travel I’ve learned that returning to a cleaner home makes you feel taken care of. Even if it was just your past self caring for your future self.

I checked the Twitter home page / social stream one last time (I only check it every other week or so anyway). A few friends & colleagues had tweeted about Foo Camp starting that night and how impressed they were with each other. Decided to post a friendly nudge of a note to their #foocamp hashtag as my last post to my site before leaving home.

It had been a while since I pulled an all-nighter, through sheer willpower (no caffeine), not at an all night underground somewhere, and having been up since before 6am. Suffice it to say I slept well on the flight from SFO to SEA, but awoke starving upon landing, and had to grab a quick snack before meeting my family at baggage claim, the others having flown in from other airports.

After an airport Starbucks run for my usual (quad espresso over ice, ice first, in a venti cup, with cold soy on top), I felt awake again, and took care of a few last things. Downloaded the maps.me offline mapping application along with maps for Washington, Alaska, British Columbia.

I reserved our Mozilla Community Space for August's first Homebrew Website Club meetup in SF, and asked my co-organizer Kyle Mahan to post an indie event and POSSE it to Facebook accordingly, as well as pick a venue for the second meeting in August.

Took a photo of the cruise ship icon at Seatac and posted it to Instagram as a public clue to my off-grid plans, cross-posted as a Foursquare check-in. My last online act of creation, ironically a double-silo post. Perhaps a public self-reminder that I too have much work to do.

</aside>

First Lunch

When you sleep little or not at all the night before, you’re a lot hungrier the next day. After checking out our rooms and their balconies, which happened to be connected, we quickly went to get lunch. There was no line at the Horizon Lounge, one of many buffet restaurants on the ship, open continuously from 06:00 to 23:00. I had pan-cooked Alaskan cod with lemon, mashed potatoes, a big salad, and a small piece of onion focaccia.

As we left the line went out the door. Curious about our workout options, we went to explore the gym.

To get to the gym you have to walk through the spa, with all manner of marketing posters and upsells of various rejuvenation, pampering, and beautification options. (Writing this makes me want to go back and photograph the posters, realizing just how strange they might seem out of context.)

In addition to a wide spread of cardio and weight machines, most with a beautiful view out the front of the ship, they offered various classes on the hardwood floor behind the machines. Yoga, pilates, TRX, etc. As part of the cruise package we’re on, we each have $50 credit on our boarding cards (which double as room keys, triple as onboard credit cards), and a $12 yoga class seemed like a reasonable way to spend some of that credit. The classes only had about 15 spots each, with 3 lines for a waitlist. That seemed small for a population of nearly 3500.

We explored the upper decks, took in the views, and watched our departure from Seattle Harbor. We went back to our rooms, and I decided to read a bit from the three books I brought and take a nap to keep catching up on my sleep.

Safety Drill

I awoke a couple of hours later (apparently I needed that nap) to the sounds of my roommate (nephew1) scurrying about and noise from next door as well. Everyone was getting ready for the imminent safety drill.

Each cabin has lifevests for precisely the number of occupants. We were instructed that the drill was imminent and to carry (not wear) our life jackets to the Muster Station for our section. Everyone in my family was already on their way well before the official drill time. I grabbed my life jacket and made my way as well, following the well-placed signs and arrows to the Muster Station.

Found my parents, sister, a nephew, and my niece in the Muster Station which turned out to be an auditorium. Crew members scanned our boarding cards at the entrance. The crew member on stage quickly quieted everyone down and then started welcoming people by country and making other comical banter. Once in a while an official sounding (or perhaps just British accented) voice would come on the loud speakers reminding us what was going to happen in mere minutes.

Minutes before drill time, the lead crew member became increasingly serious, teaching us what would happen in case we had to abandon ship, and finally instructing everyone how to put on their lifejackets, then having us all do so.

16:00: the general alarm sounded. 7 short bleats and one long bleat. Everyone looked around, and the various crew members did a once over the crowd. Having passed the drill, we were told to take off our lifevests and return to our rooms. We did so, putting the lifevests back where we found them.

A Little Tour, A Few Mental Exercises

After the drill we wandered around the boat a bit, walking from pool to pool (pretty sure I counted four, not counting the hot tubs). We stopped to check out the Calypso pool in particular, above which was placed a massive display and adjacent speakers for daily movie showings.

My parents arranged for reserved dining for our dinners. 17:30 and 20:00 were the only two options so naturally we chose 17:30, better suiting the kids and us early risers. After our little mini-tour, we changed to look a bit nicer for dinner. For me that meant zipping up my fitted Betabrand jacket rather than being loose and casual with a black v-neck t-shirt underneath.

Dinner was sit down and order style, and I suddenly realized just how hungry I was (again). Feeling even more impetuous and impatient than my nephews & niece, I decided to first help nephew2 work on the puzzles on his kid’s placemat, and then we did mental exercises while waiting for our food.

First I asked him to tell me the ISO date, which he nailed without hesitating. Two thousand fifteen DASH zero eight DASH zero one. Then a big grin knowing he’d nailed it. So of course I hit him up with a bigger challenge, the ISO ordinal date. He protested, claiming he hadn’t practiced it.

No chance I was letting him off that easy. I told him, no problem, let’s figure it out from what we know. How many days in January? 31. How many days in February? 28. What does that total? 59. How many days in March? 31. Add that? 90. Ok let’s put that aside, 90 days in the first quarter. How many days in April? 30. May? 31. June? 30. Total? 91. Let’s put that aside for the second quarter. 181. Yes that many days in the first half of the year.

How many days in July? 31. What day is it in August? 1. Add those. 32. What was the number we had before? 181. Now add those. 213. So what’s the ISO Ordinal date? Two thousand fifteen DASH two hundred thirteen. Nailed it. In his head, no paper needed.

The bigger goal here is of course to teach him two general purpose problem solving tools by practicing them: deconstruction and clustering/chunking. Every problem can be deconstructed into smaller, often trivial pieces. By clustering and chunking these solved pieces into larger pieces, you can keep the whole solution in your head as you build it back up.
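For the curious, the same deconstruction translates directly into code (a sketch of our table math, not from the original post; month is 1-based and only non-leap years like 2015 are handled):

```javascript
// Day-of-year for a non-leap year: sum the full months before
// the current one, then add the day of the month — the same
// cluster-and-add approach we used at the dinner table.
function ordinalDay(month, day) {
  const daysInMonth = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31];
  let total = day;
  for (let m = 0; m < month - 1; m++) {
    total += daysInMonth[m];
  }
  return total;
}

console.log(ordinalDay(8, 1));
// 213
```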

Having mastered dates (all an 8-year old needs to know about dates anyway), we moved onto other units. I grilled him on metric lengths. Millimeters, centimeters, decimeters, meters, kilometers. With a little help, he got all those too. What about weight / mass? 1 kilogram is 2.2 pounds. He told me his weight / mass in both.

What about the periodic table of elements? Apparently he hadn't studied these yet. So I let him "phone a friend" and ask his older brother for help. We got thru nearly the first two rows. Finally we ended with naming airport codes and our food arrived.

Dinner And A Sunset

Our food arrived, one course at a time. For my appetizer I had Alaskan salmon gravlox. Then a simple small Caesar salad. Finally the baked Alaskan salmon special. Everyone else ordered dessert. I merely helped with some of the chocolate bits.

The temperature outside had swiftly dropped from high 80s down to windchilled 50s. We returned to our rooms and put on a layer or two. I grabbed a book to read as well. My younger sister and I went out to the upper sports deck level to walk around and watch the sunset.

<aside>

First sunset on a boat in just over two years. Last time was nearly half a world away, when I’d let myself believe, rather, was convinced by another, that we were on a particular trajectory. Memories fade with time, and thankfully so do projections, expectations, even hopes. That particular alternate timeline continues to diverge, dimming as it recedes beyond my current time cone, leaving behind nothing but a couple of abstract notions.

</aside>

The sun lit the cloudy horizon on fire, glinting off the tips of the waves. Approaching the bow of the ship, we had to push against an ever stronger headwind. I held my camera firmly, took a few more shots near the bow, then put it away, leaned into the wind, and just enjoyed the view.

A little windchill can’t scare off a San Franciscan. I walked back to the semi-protected Calypso Pool area where a movie was playing. Didn’t matter which one, it was just background. I picked out a pool chair, reclined comfortably in my layers, and only then noticed that everyone else on the chairs was bundled-up under identical red green patterned wool blankets. It wasn’t the first, nor would it be the last time on the ship that I would suddenly feel different from everyone else around.

I started reading More Awesome Than Money (MATM), and while doing so gave in to the nearby unlimited pizza and softserve bar. A couple of slices of margherita and a small chocolate softserve later, I’d read thru about 10% of MATM and decided to return to my room.

Introduction to TRON: A Bedtime Movie

My roommate nephew1 was already getting ready for bed, and I found myself sleepy as well. Earlier I’d disclosed that I “brought” a few movies with me (they just happened to be images on my laptop from a few DVDs at home), and he’d done his due diligence, asking his parents which he could watch. Of the dozen or so I had that he was permitted to see, he picked TRON.

It was already late, so we collectively decided to watch the first half hour, and then go to bed. It’s amazing what observations an 11-year old will make, and what questions they ask. Especially when seeing a movie with some of the earliest computer generated special effects.

We watched up through the scene where Kevin Flynn is introduced, and talks his friends and former colleagues into helping him break into ENCOM. As they were sneaking past towers of computers, we paused the movie and went to sleep.

Dev.OperaOn PPK’s moratorium on new browser features

Famed developer and commentator Peter Paul Koch (PPK) recently called for “a moratorium on new browser features for about a year or so”. If you haven’t read his article Stop pushing the web forward, give it a look; he raises interesting points, as he always does.

(Let us say now: we’re all big fans of PPK; he’s undergone personal attacks for his article — we’re going to disagree with his central thesis, while continuing to love him deeply, and thanking him for starting this discussion.)

In many ways, we Opera devrel folks feel his pain. Each of us is familiar with the feeling of coming back from a vacation and not understanding the Twitter conversations about specs that were launched in the fortnight we were lounging next to a pool / winning hearts with our bachata / touring ancient ruins / having it large in Magaluf / doing emoji poos and wishing we hadn’t eaten so many U+1F364s last night. (Delete as appropriate depending if you’re Bruce, Shwetank, Mathias, Vadim, or Andreas.)

There’s a lot to learn, and the web platform is becoming more complex. Even Ian Hickson, the editor of HTML5, said:

The platform has been too complex for any one person to fully understand for a long time already. Heck, there are parts of the Web platform that I haven’t even tried to grok — for example, WebGL or IndexedDB — and parts that I continually find to be incredibly complicated despite my efforts at understanding them

…and that was two and a half years ago!

But it’s not necessary to remember the minutiae of every specification. It’s necessary to know what’s possible, and have access to a search engine to find the spec or tutorials to find details. How many of us memorise the syntax for CSS gradients, or remember every piece of the Web Audio API syntax? But that doesn’t stop us using them when we need to.

But PPK’s complaint isn’t primarily about complexity:

We should focus on the web’s strengths: simplicity, URLs and reach. The innovation machine is running at full speed in the wrong direction.

PPK cites an example:

To me, Navigation Transitions exemplifies what’s wrong with new browser features today. Its purpose is to allow for a smooth transition from one web page to another, to the point of synchronising the animations on the source and destination pages … We’ve done without for years. More importantly, end users have done without for years, and are quite used to a slight delay when they load another page … But why do web developers want navigation transitions? In order to emulate native apps, of course. To me, that’s not good enough.

But the point here is that users do want such things, because they’ve now become used to experiences available in native apps. And we know that consumers love the app experience; in April 2014, the mobile analytics firm Flurry reported

Apps continued to cement their lead, and commanded 86% of the average US mobile consumer’s time, or 2 hrs and 19 minutes per day. Time spent on the mobile web continued to decline and averaged just 14% of the US mobile consumer’s time, or 22 minutes per day.

Many of the new “features” coming to the web, like Service Worker or Installable Web Apps, are designed to enhance the web experience for end users — experiences they’ve become accustomed to from native apps but weren’t achievable previously on the web. That’s a win.

We also respectfully disagree with PPK that there’s a dichotomy between adding native-like user experiences and protecting the web’s core strengths of simplicity, URLs and reach.

Let’s address those core strengths in reverse order:

Reach

We’re reaching the point when some organisations are willing to forfeit the reach of the web because they want the features of native. For instance, Myntra does this — and its parent company Flipkart is planning to do so soon as well; they have disabled their site on mobile and urge people to use the app (although you can still use the site on desktop).

Uber’s service is only available through the app. Arguably, that’s understandable because sometimes the Geolocation API is not very accurate on the web, and really accurate location info is critical to a taxi-hailing service. But that’s an argument for making the Geolocation API better, rather than stopping development.

If we slow development of the web, we risk losing more services to native, thereby diminishing the web’s reach.

URLs

If you get a great big saucepan, and boil the web in it all weekend, periodically skimming off the scum of YouTube comments, porn and photographs of horrible kittens (tautology?), when you look under the lid on Sunday night you’ll find you’re left with URLs. (Or URIs. Or URNs. Who cares what the difference is?)

It’s called “the web” because it’s a network that joins resources together, and those resources are individually addressable.

Modern standards are designed to preserve URLs. Take Service Worker, for example; the explainer document for Navigation Controller (the previous name for what’s become Service Worker) says

It forces you to have URLs! Some modern apps platforms have forsaken this core principle of the web and suffer for it. The web should never make the same mistake.

Similarly, the Web Manifest spec defines a web app’s start and scope in terms of good old-fashioned vitally-important URLs. The proposed Upgrade Insecure Requests spec tries to ensure that no links break if a developer upgrades their server to HTTPS in order to provide a better (more secure) user experience.

There’s a lot the web can learn from native (without slavishly emulating it), but linkability is something that native needs to emulate from the web; see the Rube Goldberg machine-like App Links proposal to see how Facebook is trying to bring “deep linking to content in your mobile app”. We have URLs; PPK is right that we need to jealously preserve them, so modern standards attempt to do just that.

Simplicity

There’s a deeper complexity to the modern web platform than the sheer volume of features. It’s to do with the way the specs were written, the timescale over which they’ve been written and the fact that some features that we rely on have never been specified at all.

One example is HTML5 Parsing. For years, developers had to deal with the different DOMs that browsers constructed from invalid markup (which, as we know, is the vast majority of the web). This was allowed because HTML 4 never specified what to do with bad markup, so browsers were free to do as they saw fit.

HTML5 changed that, and now all browsers worth shaking an angle bracket at produce the same DOM regardless of the validity of the markup. This has produced a huge boost in interoperability, benefitting consumers and saving developers megatons of heartache.

A more current example is XMLHttpRequest, which was not formalised and standardised until years after Microsoft implemented it and everyone else reverse-engineered it and copied it.

XHR is hardly a beautiful API, and will be replaced by the Fetch Standard which aims to simplify and unify network requests. Its preface says

At a high level, fetching a resource is a fairly simple operation. A request goes in, a response comes out. The details of that operation are however quite involved and used to not be written down carefully and differ from one API to the next.

Numerous APIs provide the ability to fetch a resource, e.g. HTML’s img and script element, CSS’ cursor and list-style-image, the navigator.sendBeacon() and self.importScripts() JavaScript APIs. The Fetch Standard provides a unified architecture for these features so they are all consistent when it comes to various aspects of fetching, such as redirects and the CORS protocol.
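That “unified architecture” idea can be caricatured in a few lines. This is a conceptual sketch only, not the Fetch Standard’s actual algorithm, and the redirect limit is illustrative: if every feature funnels its request through one shared function, policies such as redirect handling are applied consistently instead of being re-implemented per API.

```python
# Conceptual sketch only: one shared fetch path, so redirect handling
# is identical no matter which feature initiated the request.
MAX_REDIRECTS = 20  # an illustrative limit, not a number from the spec

def fetch(url, resolve):
    """Follow redirects until a final response; `resolve(url)` returns
    the redirect target, or None when the response is final."""
    for _ in range(MAX_REDIRECTS):
        target = resolve(url)
        if target is None:
            return url  # final location of the resource
        url = target
    raise RuntimeError("too many redirects")
```

Every caller, whether it stands in for an `img` load or a script fetch, goes through the same loop, which is the consistency the spec is after.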

Modern standards are all about explaining the platform to simplify development, and ensuring a solid, understandable foundation upon which to build.

This is built on a design philosophy called the Extensible Web Manifesto. It’s too much to explore here, but the Chair of the W3C Extensible Web Community Group, Brian Kardell, wrote us an article about it called Sex, Houdini and the Extensible Web.

Immediacy

A central pillar of the web that PPK doesn’t mention is what I call “immediacy”. When you make a change to a web site, the next visitor gets the updated version immediately. With native apps you have to publish to an App Store, your user is alerted that there’s an updated version and, when they have wifi, they’ll update it. Maybe.

Installable Web Apps give us an app-like experience — an icon on the homescreen, potentially working even while offline — but retain the immediacy of the web because the app is hosted on a server. In fact, the app is actually — wait for it — a web site with a URL pointed to by the homescreen icon. We combine the strengths of the web with the user experience of native.

Conclusion

At Opera, a lot of our developers work on bringing the web to people who otherwise wouldn’t get it, either with Opera Mini, or by reducing Chromium’s memory consumption so that it works on the lower-specification devices that most of the world uses.

We know that the fastest growing mobile phone markets don’t use apps, so by artificially slowing the pace of evolution on the web, we’re deciding that these people should get a second-class online experience.

It’s imperative, we believe, for the web to continue to add new features — like Service Workers and Installable Web Apps, just as we added native video, the Audio API, the <picture> element, Storage APIs — that extend what the web can do so that it continues to grow and provide the reach that PPK wants, and that we want.

Opera welcomes new developments that make the web better for users. We’re the only browser manufacturer that isn’t also trying to sell an Operating System or locked-down device. So for us, it’s vital that the web continues to thrive — and we believe it’s vital for everybody.

This was written by Bruce Lawson, with input from the rest of Opera’s Developer Relations team. Disagree? Please, write a commentary post and tweet us the link!

Added 4 August: Simultaneously with our publishing, Google Chrome evangelist Jake Archibald published If we stand still, we go backwards on the same subject.

Planet MozillaTenFourFox 38 beta 2 available

The next TenFourFox beta is now available (downloads, release notes, hashes). Officially, the version number is 38.1.1b2 due to the revbump on that ESR branch.

The most important bug this fixes is, of course, our new IonPower JavaScript JIT compiler backend wreaking havoc with Faceblech and Farceboink-derived sites such as Instacrap. Near as I am able to determine, as a conscientious objector to the Zuckerbrat Amalgamated Evil Empire, this fixes all outstanding issues with these sites. Oddly, the edge case responsible for this was not detected by Mozilla's JIT tests, which is probably why most sites were unaffected; the actual problem was diagnosed only by a couple of weird failures while running the strict ECMA conformance suite. Also, as mentioned, the engine has been tuned a bit more for improved overall throughput, and is approximately 4-5% faster than beta 1.

Some of you complained of a quit bug where memory usage would skyrocket while exiting the browser, causing it to crash after exhausting its address space instead. I cannot confirm this specific manifestation on any of my test systems. However, I did find another bug in the webfont cache that may possibly be related: if you close a window with webfonts loaded in it that are slated for cleanup, the browser can get stuck in an infinite call loop while freeing those resources, which will exhaust the processor stack. This issue is specific to our Apple Type Services font code. On TenFourFox the stack allocation is an entire gigabyte in size because of ABI stack frame requirements, so completely using it up may well freak out systems with less memory (all of my test machines have at least 1.25GB of RAM). In any case, that bug is fixed. Hopefully that's actually what you're seeing, because I still can't reproduce any problems with exiting.

A Leopard-only crash observed on the New York Times is now fixed by implementing a formal webfont URL blocklist; Tiger users don't crash, but get various weird and annoying font errors. This is caused by yet another underlying ATS bug. Safari on 10.5 is subject to this also, but it (and Leopard WebKit) get around the problem by turning the ATSFontRef into a CGFontRef and seeing if it validates that way (see issue 261). This is clearly a much better general solution, but while these functions exist as undocumented entry points on 10.4 they all call into ATS, so 10.4 users still get the weird messages. The only way to solve this fully on both operating systems is to just block the font entirely. Previously we did this by blocking on the PostScript name, but the New York Times, because it is old and senile, uses webfonts with the supremely unhelpful PostScript name of - and blocking that blocked everything. Fortunately, various reorganizations of the Gecko font system make it easy to wedge in a URL blocker that looks at the source URL and, if it is a known bad font, tells Gecko the font is not supported. The font never loads, Gecko selects the fallback, and the problem is solved. This problem also affects 31.8, but the solution is much different for that older codebase and there won't be another 31 anyway.
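The blocking logic described above amounts to a simple lookup before the font ever loads. Here is a conceptual sketch; the function name and sample URL are invented for illustration and are not taken from the TenFourFox source:

```python
# Hypothetical blocklist entries; real entries would name actual
# problem fonts such as the New York Times webfont mentioned above.
BLOCKED_FONT_URLS = {
    "fonts.example.com/known-bad.woff",
}

def font_url_supported(url):
    """Report a known-bad webfont as unsupported so the engine picks
    the fallback font instead of handing the file to ATS."""
    normalized = url.split("://", 1)[-1]  # ignore the scheme
    return normalized not in BLOCKED_FONT_URLS
```

Because the check keys on the source URL rather than the PostScript name, a font whose PostScript name is just `-` no longer drags every other font down with it.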

In the non-showstopper category, the issues with saved passwords not appearing in preferences and checkmarks not appearing on context or pull-down menus are both corrected. In addition, I reduced window and tab undo limits to hold onto less memory (back to what they were in 31), forced tenured objects to be finalized on the foreground thread to get around various SMP problems (again, as in 31 and in 17 and previous), tweaked media buffering a bit more, fixed a nasty assertion in private browsing mode with saved logins, and turned on colour management for all the things as politely requested by Dan DeVoto. The blank saved passwords list is actually due to the fact we can't yet compile ICU internationalization support into XUL because of issue 266, which also appears to be why Zotero started failing, since it also depends on it. For 38.2 final, both of these issues are worked around using trivial stubs and some minor code changes. Hopefully we can get it to make a shared dylib for a future release of 38.x and remove these hacks.

There are two changes left for final: put the update interval back to every 24 hours, and possibly remove the Marketplace icon from the Start page since we don't really support the webapp runtime. (The Apps option was already removed from the drop-down menus.) No one has complained about the faster/lower quality downscaler, so that will remain as is; about the only place it annoys me personally is favicons. Full MP3 support is being deferred to a feature beta after 38.2.

Builders will want the new versions of strip7 and gdb7. In fact, you'll need the new strip7 to build 38.1.1, because it fixes a crash with setting section flags. Although the gdb7 update to patchlevel 3 is optional, it is much faster than patchlevel 2, and will cause you to go less crazy single-stepping through code. Now that all the known build problems are dealt with, I am hopeful our builder in the land of the Rising Sun can make the jump to Tenfourbird 38 along with us.

Finally, many thanks to our localizers; the current list is English (natch), Spanish, French, Italian, Russian, German, Finnish and now Swedish. We still might need some help with Polish, and I cannot find an old copy of the Japanese language pack, so it is possible that localization will have to be dropped. Please help us with Polish, Japanese, or your own language if you can! Localizations must be complete by midnight August 5 so that I have enough time to upload and write the new page copy ahead of the formal general release of 38.2 on the evening of August 10. See issue 42 if you'd like to assist.

Once 38 launches, we will transition from Google Code to Github (leaving downloads on SourceForge). All of our project activity on Google Code will be frozen on August 5 after the last localization is uploaded. More about that shortly.

Planet MozillaAugust 2015 Featured Add-ons

Pick of the Month: Forecastfox (fix version)

by Oleksandr
Get international weather forecasts from AccuWeather.com and display them in any toolbar or statusbar with this highly customizable and unobtrusive extension.

“After trying the other weather add-ons when this one went obsolete, I give them up, they were simply inadequate. This was the only add-on that we need in a browser. THANK-YOU for bringing it back it is the only one to have PERIOD…..”

Featured: Tamper Data

by Adam Judson
Use tamperdata to view and modify HTTP/HTTPS headers and post parameters.

Featured: Add-ons Manager Context Menu

by Zulkarnain K.
Add more items to Add-ons Manager context menu.

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months, so there’s always an opportunity to participate. Stay tuned to this blog for the next call for applications.

If you’d like to nominate an add-on for featuring, please send it to amo-featured@mozilla.org for the board’s consideration. We welcome you to submit your own add-on!

Planet MozillaRelEng & RelOps Weekly highlights - July 31, 2015

Welcome back to the weekly releng Friday update! Here’s what we’ve been up to this week.

Modernize infrastructure: Rob checked in code to integrate Windows with our AWS cloud-tools software so that we now have userdata for deploying spot instances (https://bugzil.la/1166448) as well as creating the golden AMI for those instances.

Mark checked in code to update our puppet-managed Windows machines with a newer version of Mercurial, working around some installation oddities (https://bugzil.la/1170588).

Now that Windows 10 has been officially released, Q can more easily tackle the GPOs that configure our test machines, verifying which don’t need changes, and which will need an overhaul (https://bugzil.la/1185844). Callek is working to get buildbot set up on Windows 10 so we can start figuring out which suites are failing and engage developers for help.

Improve CI pipeline: With the last security blockers resolved and a few weeks of testing under his belt, Rail is planning to enable Funsize on mozilla-central next Tuesday (https://bugzil.la/1173452).

Release: Uplift starts next week, followed by the official go-to-build for Firefox 40. Beta 9 is out now.

Operational: Buildduty contractors started this week! Alin (aselagea) and Vlad (vladC) from Softvision are helping releng with buildduty tasks. Kim and Coop are trying to get them up-to-speed as quickly as possible. They’re finding lots of undocumented assumptions built into our existing release engineering documentation.

Dustin has migrated our celery backend for relengapi to mysql since we were seeing reliability issues on the rabbit cluster we had been using (https://bugzil.la/1185507).

Our intern, Anthony Miyaguchi, added database upgrade/downgrade ability to relengapi via alembic, making future schema changes painless. (https://github.com/mozilla/build-relengapi/pull/300)

Amy has finished replacing the two DeployStudio servers with newer hardware, OS, and deployment software, and we are now performing local Time Machine backups of their data (https://bugzil.la/1186197). Offsite backups will follow once Bacula releases a new version of their software that correctly supports TLS 1.2.

The new Windows machines we setup last week are now in production, increasing capacity by 10 machines each in the Windows XP, Windows 7, and Windows 8 test pools (https://bugzil.la/1151591).

See you next week!

Planet MozillaPayments on the Web

Lately I've been involved in discussions in the W3C's Web Payments Interest Group about chartering a new working group to work on payment APIs for the Web. I certainly don't have the resources to implement this work in Firefox by myself, but I'm hoping to at least help the standardization activity get started in an effective way, and, if it does, to help others from Mozilla get involved.

From a high-level perspective, I'd like to see the working group produce a technology that allows payments in the browser, involving some trusted UI in the browser (like for in-app payments on mobile operating systems) that says what payment is going to happen, and involving tokenization in the browser or on a server or application with which the browser communicates, with only the tokens being sent from the browser to the website.

I think this has two big benefits. First, it improves security by avoiding sending the user's credit card details to every site that the user wants to pay. It sends tokens that contain the information needed to make a single payment of a particular amount, instead of information that can be reused to make additional payments in the future. This makes payments on the Web more secure.
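As a conceptual illustration (not any proposed W3C API; every name here is invented), the difference between raw card details and a token is that a token authorizes exactly one payment of a fixed amount:

```python
import secrets

class TokenVault:
    """Toy single-use payment tokens, for illustration only."""

    def __init__(self):
        self._pending = {}

    def issue(self, amount, currency):
        # The merchant receives only this opaque token, never the
        # underlying card details.
        token = secrets.token_hex(16)
        self._pending[token] = (amount, currency)
        return token

    def redeem(self, token):
        # Single use: redeeming the same token twice yields nothing,
        # so a leaked or stored token cannot fund future payments.
        return self._pending.pop(token, None)
```

A merchant who stores the token gains nothing reusable: after the first redemption, the token is dead.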

Second, if we can design the user interface in a way that users understand these improvements in security, we can hopefully make users more comfortable making small payments on the Web, in some cases to parties that they don't know very well. This could make business models other than advertising more realistic for some providers of Web content or applications.

There are certainly risks here. One is that the effort might fail, as other efforts to do payments have failed in the past. There are also others, some of which I want to discuss in a future blog post.

Planet MozillaWhat’s up with SUMO – 31st July

Hey there, planet SUMO! Are you ready for another round of updates and reminders from the world of SUMO? You’d better be, cause here they come!

Hearken, there are new names in SUMO town

If you joined us recently, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

 We salute you!

Last Monday’s SUMO Community meeting

Reminder: the next SUMO Community meeting…

  • …is going to take place on Monday, 3rd of August. Join us!
  • If you want to add a discussion topic to the upcoming live meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).

Help needed – thank you!

Developers

Community

Support Forum

  • SUMO Forum Supporters, remember that Madalina is currently away from her keyboard and will be back around the 10th of August. If you need help with something, let Michał know.
  • Reminder: the One and Done SUMO Contributor Support Training is live. Start here!

Knowledge Base

L10n

  • Calling all l10ns! Please localize this Windows 10 article, as it’s quite crucial to users around the world.
  • Sprint planning for six locales is in progress… and so is the video recording for the KB l10n tutorial.

Firefox

Thunderbird

  • One final reminder: as of July 1st, Thunderbird is 100% Community powered and owned! You can still +needinfo Roland in Bugzilla or email him if you need his help as a consultant. He will also hang around in #tb-support-crew.

Are you following us on Twitter? Not yet? Get to it! ;-) See you on Monday… Until then… take it easy!

Planet MozillaFirst Month of Mozilla Internship

It has been a month since I started my Mozilla internship in San Francisco, but it feels like I had just started yesterday. I have been interning with the Automation and Tools team this summer and it has been a wonderful experience. As an intern at Mozilla, one gets goodies, a new laptop, free food, and there are various intern events in which one gets to take part. My mentor @chmanchester also gave me the freedom to decide what I wanted to work on, which is quite unlike some other companies.

I chose to work on tools to improve developer productivity and on making libraries that I have worked on in the past more relevant. In the past month, I have been working on getting various triggering options inside Treeherder, which is a reporting dashboard for checkins to Mozilla projects. This involved writing AngularJS code for the front-end UI and Python code for the backend. The process involved publishing the “triggering actions” to Pulse, listening for those actions, and then using the mozci library on the backend to trigger jobs. Currently, if developers, particularly the sheriffs, want to trigger a lot of jobs, they have to do it via the command line, which involves context switching plus typing out the syntax. To improve that, this week we have deployed three new options in Treeherder that will hopefully save time and make the process easier. They are:

* Trigger_Missing_Jobs: This button is to ensure that there is one job for every build in a push. The problem is that on integration branches we coalesce jobs to minimize load during peak hours, but many times there is a regression and sheriffs need to trigger all jobs for the push. That is when this button will come in handy, and one will be able to trigger jobs easily.

* Trigger_All_Talos_Jobs: As the name suggests, this button is to trigger all the talos jobs on a particular push. Being a perf sheriff, I need to trigger all talos jobs a certain number of times for a regressing push to get more data, and this button will aid me and others in doing that.

Fig: shows “Trigger Missing Jobs” and “Trigger All Talos Jobs” buttons in the Treeherder UI

* Backfill_Job: This button is to trigger a particular type of job on every push back to the last known push on which that job was run. Due to coalescing, certain jobs are not run on a series of pushes, and when an intermittent failure or bustage is detected, sheriffs need to find the root cause and thus manually trigger those jobs for the pushes. This button should aid them in narrowing down the root regressing revision for the job they are investigating.

Fig: shows “Backfill this job” button in the Treeherder UI
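The backfill selection boils down to walking back through the push history until the job’s last known run. A rough sketch follows; the data shapes and names are invented for illustration and are not the actual Treeherder or mozci APIs:

```python
def pushes_to_backfill(older_pushes, job_name):
    """Given the pushes below the one being investigated, ordered
    newest-first, return those needing `job_name` triggered: every
    push until (but not including) the job's last known run."""
    missing = []
    for push in older_pushes:
        if job_name in push["jobs"]:
            break  # reached the last push where the job already ran
        missing.append(push)
    return missing
```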

All of the above features currently only work for jobs that are on buildapi, but once mozci gains the ability to trigger tasks on taskcluster, they will work for those jobs too. Also, right now these buttons only trigger jobs which have a completed build; in the future I plan to make them also trigger jobs which don’t have an existing build. These features are in beta and have been released for sheriffs; I would love to hear your feedback! A big shout out to all the people who have reviewed, tested and given feedback on these features.


Planet MozillaThe “we are all remoties” book!?!

I’ve been working in distributed teams, as well as talking, presenting, coaching and blogging about “remoties”, in one form or another for 8?9? years now. So, I’m excited to announce that I recently signed a contract with O’Reilly to write a book about how to successfully work in, and manage in, a geo-distributed world. Yes, I’m writing a “we are all remoties” book. If you’ve been in one of my ever-evolving “we are all remoties” sessions, you have an idea of what will be included.

If you’ve ever talked with me about the pros (and cons!) of working as a remote employee or of working in a distributed team, you already know how passionate I am about this topic. I care deeply about people being able to work well together, and having meaningful careers, while being physically or somehow otherwise remote from each other. Done incorrectly, this situation can be frustrating and risky to your career, as well as risky to employers. Done correctly, however, this could be a global change for good, raising the financial, technical and economic standards across all sorts of far flung places around the globe. Heady game-changing stuff indeed.

There are many “advocacy books” out there, explaining why working remote is a good / reasonable thing to do – typically written from the perspective of the solo person who is already remote. There are also many different tools becoming available to help people working in distributed teams – exciting to see. However, I found very few books, or blogposts, talking about the practical mechanics of *how* to use a combination of these tools and some human habits to allow humans to work together effectively in distributed teams, especially at any scale or over a sustained amount of time. Hence, my presentations, and now, this upcoming book.

Meanwhile,

  • if you are physically geo-distributed from the people you work with, I’d like to hear what does or doesn’t work for you. If you know someone who is in this situation, please share this post with them.
  • If you have experience working in distributed teams, is there something that you wish was already explained in a book? Something that you had to learn the hard way, but which you wish was clearly signposted to make it easier for others following to start working in distributed teams? Do you have any ideas that did / didn’t work for you?
  • If you have published something on the internet about remoties, please be tolerant of any questions I might ask. If you saw any of my “we are all remoties” presentations, is there anything that you would like to see covered in more/less detail? Anything that you wish was written up in a book to help make the “remote” path easier for those following behind?

…you can reach me on twitter (“@joduinn”) or on email (john at my-domain-name – and be sure to include “remoties” in the subject, to get past spam filters.)

Now, time to brew some coffee and get back to typing.

John.
=====
(updated 31jul2015 to add twitter + email address.)

Planet Mozilla: Mercurial 3.5 Released

Mercurial 3.5 was released today (following Mercurial's time-based schedule of releasing a new version every 3 months).

There were roughly 1000 commits between 3.4 and 3.5, making this a busy version. That said, 1000 commits per release has become the new norm, as development on Mercurial has accelerated in the past few years.

In my mind, the major highlight of Mercurial 3.5 is that the new bundle2 wire protocol for transferring data during hg push and hg pull is now enabled by default on the client. Previously, it was enabled by default only on the server. hg.mozilla.org is running Mercurial 3.4, so clients that upgrade to 3.5 today will be speaking to it using the new wire protocol.

The bundle2 wire protocol succeeds the existing protocol (which has been in place for years) and corrects many of its deficiencies. Before bundle2, pull and push operations were not atomic because Mercurial was performing a separate API call for each piece of data. It would start by transferring changeset data and then have subsequent transfers of metadata like bookmarks and phases. As you can imagine, there were race conditions and scenarios where pushes could be incomplete (not atomic). bundle2 transfers all this data in one large chunk, so there are much stronger guarantees for data consistency and for atomic operations.

Another benefit of bundle2 is it is a fully extensible data exchange format. Peers can add additional parts to the payload. For extensions that wish to transfer additional metadata (like Mozilla's pushlog data), they can simply add this directly into the data stream without requiring additional requests over the wire protocol. This translates to fewer network round trips and faster push and pull operations.
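Since bundle2 is now on by default on the client, a client that runs into trouble exchanging data with a particular server can switch back to the old protocol with a client-side knob. The option name below reflects the experimental cycle this feature shipped in and should be treated as an assumption; check `hg help config` on your version:

```ini
# ~/.hgrc — sketch: disable the bundle2 wire protocol on the client
# if push/pull against a specific server misbehaves (experimental
# option name as of the 3.4/3.5 cycle; verify before relying on it)
[experimental]
bundle2-exp = False
```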

The progress extension has been merged into Mercurial's core and is enabled by default. It is now safe to remove the extensions.progress config option from your hgrc.
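Concretely, a stanza like the following in your hgrc is now redundant and can simply be deleted; the progress bar works without it in 3.5:

```ini
# No longer needed in Mercurial 3.5: progress is built into core.
[extensions]
progress =
```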

Mercurial 3.5 also (finally) drops support for Python 2.4 and 2.5. Hopefully nobody reading this is still running these ancient and unsupported versions of Python. This is a win for Mercurial developers, as we were constantly having to work around deficiencies with these old Python releases. There were dozens of commits removing hacks and workarounds for Python 2.4 and 2.5. Dropping 2.4 and 2.5 also means Python 3 porting can begin in earnest. However, this isn't a high priority for anyone, so don't hold your breath.

There were a number of performance improvements in 3.5:

  • operations involving obsolescence markers are faster (for users of changeset evolution)
  • various revsets were optimized
  • parts of phases calculation are now performed in C. The not public() revset should be much faster.
  • hg status and things walking the filesystem are faster (Mozillians should be using hgwatchman to make hg status insanely fast)

A ui.allowemptycommit config option was introduced to control whether empty commits are allowed. Mozillians manually creating trychooser commits may run into problems creating empty commits without this option (a better solution is to use mach push-to-try).
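As a sketch (hgrc placement assumed; verify with `hg help config`), enabling empty commits looks like this:

```ini
# ~/.hgrc — allow commits that change no files, e.g. for manually
# crafted trychooser pushes
[ui]
allowemptycommit = true
```

The same effect can be had per-invocation by passing `--config ui.allowemptycommit=true` to `hg commit`, avoiding a permanent config change.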

Work is progressing on per-directory manifests. Currently, Mercurial stores the mapping of files to content in a giant list called the manifest. For repositories with tens or hundreds of thousands of files, decoding and reading large manifests is very CPU intensive. Work is being done to enable Mercurial to split manifests by directory. So instead of a single manifest, there are several. This is a prerequisite to narrow clone, which is the ability to clone history for a subset of files (like how Subversion works). This work will eventually enable repositories with millions of files to exist without significant performance loss. It will also allow monolithic repositories to exist without the common critique that they are too unwieldy to use because they are so large.

hgignore files now have an include: and subinclude: syntax that can be used to include other files containing ignore rules. This feature is useful for a number of reasons. First, it makes sense for ignore rules to live in the directory hierarchy next to paths they impact. Second, for people working with monolithic repositories, it means you can export a sub-directory of your monorepo (to e.g. a Git repository) and its ignore rules - being defined in local directories - can still work. (I'm pretty sure Facebook is using this approach to make its syncing of directories/projects from its Mercurial monorepo to GitHub easier to manage.)
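The file names and paths below are hypothetical, but they sketch how the two new directives compose ignore files:

```ini
# .hgignore at the repository root (paths are illustrative)
syntax: glob
*.pyc

# include: pulls in another rule file, with its patterns interpreted
# relative to the repository root
include:tools/extra.hgignore

# subinclude: interprets the referenced file's patterns relative to
# the directory containing that file, so the rules travel with their
# subtree when it is exported elsewhere
subinclude:third_party/.hgignore
```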

Significant work has been done on the template parser. If you have written custom templates, you may find that Mercurial 3.5 is more strict about parsing certain syntax.

Revsets with chained or no longer result in stack exhaustion. Before, programmatically generated revsets like 1 or 2 or 3 or 4 or 5 or 6... would likely fail.

Interactions with servers over SSH should now display server output in real time. Before, server output was buffered and only displayed at the end of the operation. (You may not see this on hg.mozilla.org until the server is upgraded to 3.5, which is planned for early September.)

There are now static analysis checks in place to ensure that Mercurial config options have corresponding documentation in hg help config. As a result, a lot of formerly undocumented options are now documented.

I contributed various improvements. These include:

  • auto sharing repository data during clone
  • clone and pull performance improvements
  • hg help scripting

There were tons of other changes, of course. See the official release notes and the upgrade notes for more.

The Mercurial for Mozillians Installing Mercurial article provides a Mozilla tailored yet generally applicable guide for installing or upgrading Mercurial to 3.5. As always, conservative software users may want to wait until September 1 for the 3.5.1 point release to fix any issues or regressions from 3.5.

Planet Mozilla: My Contributions to Mercurial 3.5

Mercurial 3.5 was released today. I contributed some small improvements to this version that I thought I'd share with the world.

The feature I'm most proud of adding to Mercurial 3.5 is what I'm referring to as auto share. The existing hg share extension/command enables multiple checkouts of a repository to share the same backing repository store. Essentially the .hg/store directory is a symlink to a shared directory. This feature has existed in Mercurial for years and is essentially identical to the git worktree feature just recently added in Git 2.5.

My addition to the share extension is the ability for Mercurial to automatically perform an hg clone + hg share in the same operation. If the share.pool config option is defined, hg clone will automatically clone or pull the repository data somewhere inside the directory pointed to by share.pool, then create a new working copy from that shared location. But here's the magic: Mercurial can automatically deduce that different remotes are the same logical repository (by looking at the root changeset) and automatically have them share storage. So if you first hg clone the canonical repository then later do a hg clone of a fork, Mercurial will pull down the changesets unique to the fork into the previously created shared directory and perform a checkout from that. Contrast this with performing a full clone of the fork. If you are cloning multiple repositories that are logically derived from the same original one, this can result in a significant reduction of disk space and network usage. I wrote this feature with automated consumers in mind, particularly continuous integration systems. However, there is also a mode more suitable for humans where repositories are pooled not by their root changeset but by their URL. For more info, see hg help -e share.
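A minimal hgrc sketch of this setup (the pool path is an arbitrary example, and the `poolnaming` values shown are my recollection of the 3.5 options; verify both against `hg help -e share`):

```ini
[extensions]
share =

[share]
# clones whose root changesets match will share this backing store
pool = /home/user/.cache/hg-pool
# "identity" pools by root changeset (the automation-friendly default);
# "remote" pools by URL, the mode described above as more suitable
# for humans
poolnaming = identity
```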

For Mercurial 3.4, I contributed changes that refactored how Mercurial's tags cache works. This cache was a source of performance problems at Mozilla's scale for many years. Since upgrading to Mercurial 3.4, Mozilla has not encountered any significant performance problems with the cache on either client or server as far as I know.

Building on this work, Mercurial 3.5 supports transferring tags cache entries from server to client when clients clone/pull. Before, clients would have to recompute tags cache entries for pulled changesets. On repositories that are very large in terms of number of files (over 50,000) or heads (hundreds or more), this could take several dozen seconds or even minutes. This would manifest as a delay either during or after initial clone. In Mercurial 3.5 - assuming both client and server support the new bundle2 wire protocol - the cache entries are transferred from server to client and no extra computation needs to occur. The client does pay a very small price for transferring this additional data over the wire, but the payout is almost always worth it. For large repositories, this feature means clones are usable sooner.

A few weeks ago, a coworker told me that connections to a Mercurial server were timing out mid clone. We investigated and discovered a potential for a long CPU-intensive pause during clones where Mercurial would not touch the network. On this person's under-powered EC2 instance, the pause was so long that the server's inactivity timeout was triggered and it dropped the client's TCP connection. I refactored Mercurial's cloning code so there is no longer a pause. There should be no overall change in clone time, but there is no longer a perceivable delay between applying changesets and manifests where the network could remain idle. This investigation also revealed some potential follow-up work for Mercurial to be a bit smarter about how it interacts with networks.

Finally, I contributed hg help scripting to Mercurial's help database. This help topic covers how to use Mercurial from scripting and other automated environments. It reflects knowledge I've learned from seeing Mercurial used in automation at Mozilla.

Of course, there are plenty of other changes in Mercurial 3.5. Stay tuned for another blog post.

Planet Mozilla: Firefox 41 Aurora Testday, July 31st

Hi there, I want to let you know that this Friday, July 31st, we’ll be hosting the Firefox 41.0 Aurora Testday. The main focus of this event is going to be on NPAPI Flash and Hello Chat. Detailed participation instructions are available in this etherpad.

No previous testing experience is required so feel free to join us on the #qa IRC channel and our moderators will make sure you’ve got everything you need to get started.

Hope to see you all on Friday! Let’s make Firefox better together! 😀

Planet Mozilla: Shutting down the legacy Sync service

In response to strong user uptake of Mozilla’s new Sync service powered by Firefox Accounts, earlier this year we announced a plan to transition users off of our legacy Sync infrastructure and onto the new product.  With this migration now well under way, it is time to settle the details of a graceful end-of-life for the old service.

We will shut down the legacy Sync service on September 30th 2015.

We encourage all users of the old service to upgrade to a Firefox Account, which offers a simplified setup process, improved availability and reliability, and the possibility of recovering your data even if you lose all of your devices.

Users on Firefox 37 or later are currently being offered a guided migration process to make the experience as seamless as possible.  Users on older versions of Firefox will see a warning notice and will be able to upgrade manually.  Users running their own Sync server, or using a Sync service hosted by someone other than Mozilla, will not be affected by this change.

We are committed to making this transition as smooth as possible for Firefox users.  If you have any questions, comments or concerns, don’t hesitate to reach out to us on sync-dev@mozilla.org or in #sync on Mozilla IRC.

 

FAQ

 

  • What will happen on September 30th 2015?

After September 30th, we will decommission the hardware hosting the legacy Sync service and discard all data stored therein.  The corresponding DNS names will be redirected to static error pages, to ensure that appropriate messaging is provided for users who have yet to upgrade to the new service.

  • What’s the hurry? Can’t you just leave it running in maintenance mode?

Unfortunately not.  While we want to ensure as little disruption as possible for our users, the legacy Sync service is hosted on aging hardware in a physical data-center and incurs significant operational costs.  Maintaining the service beyond September 30th would be prohibitively expensive for Mozilla.

  • What about Extended Support Release (ESR)?

Users on the ESR channel have support for Firefox Accounts and the new Sync service as of Firefox 38.  Previous ESR versions reach end-of-life in early August and we encourage all users to upgrade to the latest version.

  • Will my data be automatically migrated to the new servers?

No, the strong encryption used by both Sync systems means that we cannot automatically migrate your data on the server.  Once you complete your account upgrade, Firefox will re-upload your data to the new system (so if you have a lot of bookmarks, you may want to ensure you’re on a reliable network connection).

  • Are there security considerations when upgrading to the new system?

Both the new and old Sync systems provide industry-leading security for your data: client-side end-to-end encryption of all synced data, using a key known only to you.

In legacy Sync this was achieved by using a complicated pairing flow to transfer the encryption key between devices.  With Firefox Accounts we have replaced this with a key securely derived from your account password.  Pick a strong password and you can remain confident that your synced data will only be seen by you.
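To illustrate the general idea of password-derived keys (this is NOT the actual Firefox Accounts “onepw” protocol, which is more elaborate and never reveals the key to the server; the function name, salt choice, and parameters here are all invented for the sketch), a generic key-stretching derivation looks like this:

```python
import hashlib

def derive_sync_key(password: str, email: str) -> bytes:
    """Illustrative only: stretch an account password into a stable
    32-byte encryption key with PBKDF2-HMAC-SHA256. Parameters are
    arbitrary for this sketch, not the real FxA scheme."""
    return hashlib.pbkdf2_hmac(
        "sha256",
        password.encode("utf-8"),
        email.encode("utf-8"),   # salt tied to the account identity
        100_000,                 # high iteration count slows brute force
        dklen=32,
    )

k1 = derive_sync_key("correct horse battery staple", "user@example.com")
k2 = derive_sync_key("correct horse battery staple", "user@example.com")
assert k1 == k2      # same password and account: same key on every device
assert len(k1) == 32
```

The practical upshot matches the advice above: the key is only as strong as the password it is derived from, so a strong password matters.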

  • Does Mozilla use my synced data to market to me, or sell this data to third parties?

No.  Our handling of your data is governed by Mozilla’s privacy policy which does not allow such use.  In addition, the strong encryption provided by Sync means that we cannot use your synced data for such purposes, even if we wanted to.

  • Is the new Sync system compatible with Firefox’s master password feature?

Yes.  There was a limitation in previous versions of Firefox that prevented Sync from working when a master password was enabled, but this has since been resolved.  Sync is fully compatible with the master password feature in the latest version of Firefox.

  • What if I am running a custom or self-hosted Sync server?

This transition affects only the default Mozilla-hosted servers.  If you are using a custom or self-hosted server setup, Sync should continue to work uninterrupted and you will not be prompted to upgrade.

However, the legacy Sync protocol code inside Firefox is no longer maintained, and we plan to begin its removal in 2016.  You should consider migrating your server infrastructure to use the new protocols; see below.

  • Can I self-host the new system?

Yes, either by hosting just the storage servers or by running a full Firefox Accounts stack.  We welcome feedback and contributions on making this process easier.

  • What if I’m using a different browser (e.g. SeaMonkey, Pale Moon, …)?

Your browser vendor may already provide alternate hosting.  If not, you should consider hosting your own server to ensure uninterrupted functionality.

Planet Mozilla: How many Rust channels are there?

How many Rust channels are there?

I’m using search.mibbit.com to count these. All have at least one user in them as of 4pm PST 2015-07-31.

There are 53 Rust-related channels on irc.mozilla.org.

List below the fold.

Read more...

Planet Mozilla: Unnecessary Finger Pointing

I just wanted to pen quickly that I found Chris Beard’s open letter to Satya Nadella (CEO of Microsoft) to be a bit hypocritical. In the letter he said:

“I am writing to you about a very disturbing aspect of Windows 10. Specifically, that the update experience appears to have been designed to throw away the choice your customers have made about the Internet experience they want, and replace it with the Internet experience Microsoft wants them to have.”

Right, but what about the experiences that Mozilla chooses to default for users, like switching the default search engine to Yahoo upon upgrade without respecting their previous settings? What about baking Pocket and Tiles into the experience? Did users want these features? All I have seen is opposition to them.

“When we first saw the Windows 10 upgrade experience that strips users of their choice by effectively overriding existing user preferences for the Web browser and other apps, we reached out to your team to discuss this issue. Unfortunately, it didn’t result in any meaningful progress, hence this letter.”

Again see above and think about the past year or two where Mozilla has overridden existing user preferences in Firefox. The big difference here is Mozilla calls it acting on behalf of the user as its agent, but when Microsoft does the same it is taking away choice?

<figure class="wp-caption alignright" id="attachment_3051" style="width: 300px;">Set Firefox as Windows 10 Default. Clearly not that difficult.</figure>

Anyways, I could go on, but the gist is that the letter is hypocritical and unnecessarily finger-pointing. Let’s focus on making great products for our users, and technical changes like this to Windows won’t be a barrier to users picking Firefox. Sorry that I cannot be a Mozillian who will blindly retweet you and support a misguided social media campaign to point fingers at Microsoft.

Read the entire letter here:

https://blog.mozilla.org/blog/2015/07/30/an-open-letter-to-microsofts-ceo-dont-roll-back-the-clock-on-choice-and-control/

Planet Mozilla: The last HTTP Workshop day

This workshop has consisted of really intense days so far, and this last and fourth workshop day did not turn out differently. We started out the morning with the presentation Caching, Intermediation and the Modern Web by Martin Thomson (Mozilla), describing his idea of a “blind cache” and how it could help to offer caching in an HTTPS world. It of course brought a lot of discussion and further brainstorming on the ideas and how various people in the room thought the idea could be improved or changed.

Immediately following that, Martin continued with a second presentation describing for us a suggested new encryption format for HTTP based on the JWE format and how it could possibly be used.

The room then debated connection coalescing (with HTTP/2) for a while and some shared their experiences and thoughts on the topic. It is an area where over-sharing based on the wrong assumptions certainly can lead to tears and unhappiness but it seems the few in the room who actually have implemented this seemed to have considered most of the problems people could foresee.

Support of Trailers in HTTP was brought up and we discussed its virtues for a while vs the possible problems with supporting it and what possible caveats could be, and we also explored the idea of using HTTP/2 push instead of trailers to allow servers to send meta-data that way, and that then also doesn’t necessarily have to follow after the transfer but can in fact be sent during transfer!

Resumed uploads is a topic that comes back every now and then and that has some interest. (It is probably one of the most frequently requested protocol features I get asked about.) It was brought up as something we should probably discuss further, and especially when discussing the next generation HTTP.

At some point in the future we will start talking about HTTP/3. We had a long discussion with the whole team here on what HTTP/3 could entail and we also explored general future HTTP and HTTP/2 extensions and more. A massive list of possible future work was created. The list ended up with something like 70 different things to discuss or work on, but of course most of those things will never actually become reality.

With so much possible or potential work ahead, we need to involve more people who want to, and can, write specs. To show how easy it apparently can be, Martin demoed how to write a first I-D draft using the fancy Internet Draft Template Repository. Go check it out!

Poul-Henning Kamp brought up the topic of “CO2 usage of the Internet” and argued that current and future protocol work needs to consider the environmental impact and how “green” protocols are. Ilya Grigorik (Google) showed off numbers from the HTTP Archive’s data and demoed how easy it is to use its BigQuery feature to extract useful information and statistics out of the vast amount of data they’ve gathered there. Brad Fitzpatrick (Google) showed off his awesome tool h2i and how we can use it to poke at and test HTTP/2 server implementations in a really convenient, almost telnet-style command-line way.

Finally, Mark Nottingham (Akamai) showed off his redbot.org service, which runs HTTP requests against a site, checks the responses, reports in detail exactly what the site responds with and why, and provides a bunch of analysis and information based on that.

Such an eventful day really had to be rounded off with a bunch of beers and so we did. The HTTP Workshop of the summer 2015 ended. The event was great. The attendees were great. The facilities and the food were perfect. I couldn’t ask for more. Thanks for arranging such a great happening!

I’ll round off showing off my laptop lid after the two new stickers of the week were applied. (The HTTP Workshop one and an Apache one I got from Roy):

laptop-stickers

… I’ll get up early tomorrow morning and fly back home.

Planet Mozilla: Kyle Zentner: CSS Containment - Leave my divs alone!

Kyle Zentner: CSS Containment - Leave my divs alone! Mozilla Intern Kyle Zentner describes his project - CSS Containment: Leave my divs alone! How to make pages (and frameworks) less janky and more predictable.

Planet Mozilla: Meet an MDN Contributor: Heather Bloomer

Headshot photo of Heather Bloomer

Heather Bloomer started contributing to Mozilla in November 2014, initially on SUMO. There, she saw a link to MDN, and realized she could contribute there as well. So, she is a “crossover” who contributes to helping both end-users and developers. She has been heavily involved in the Learning Area project, writing and editing Glossary entries and tutorials. She describes her contributions as “a continuing journey of enlightenment and an overall awesome experience.”

Here’s more from Heather:

I feel what I do on MDN has personally enhanced my writing skills and expanded my technical knowledge. I also feel I am making a positive impact in the MDN community and for developers who refer to MDN, from beginners to advanced. That is an amazing feeling, to be part of something bigger than yourself and grow and nurture not only oneself, but others as well.

My advice for new contributors is to just reach out and connect with the MDN community. Join the team and just dig in. If you need help on getting started, we are more than happy to point you in the right direction. We are friendly, supportive, encouraging and a team driven bunch of folks!

Thanks, Heather!

Planet Mozilla: Spenser Bauman: Making Polymorphism Fast

Spenser Bauman: Making Polymorphism Fast Mozilla Intern Spenser Bauman describes his project SpiderMonkey: Making polymorphism fast. Tweaking the JIT for faster container operations.

Planet Mozilla: Peter Elmers: DXR: The new_one

Peter Elmers: DXR: The new_one Mozilla intern Peter Elmers describes his project - DXR: the new_one: what's there, what's new, and what's next in the land of DXR.

Planet Mozilla: Nihanth Subramanya: Making ContentSearch Great

Nihanth Subramanya: Making ContentSearch Great Mozilla intern Nihanth Subramanya presents: Making ContentSearch Great Bringing the new "Flare" design to in-content search, consistent with the main searchbox.

Planet Mozilla: Miles Crabill: (Kinda Fear) The Reaper

Miles Crabill: (Kinda Fear) The Reaper Mozilla intern Miles Crabill presents (Kinda Fear) The Reaper. The Reaper is a Go application that queries AWS for resources, filters them, notifies their owners,...

Planet Mozilla: Jimmy Wang: One Process At A Time, e10s

Jimmy Wang: One Process At A Time, e10s Mozilla intern Jimmy Wang presents: One Process At A Time, e10s. From converting page info to e10s to remove unsafe CPOWs, making light weight web...

Planet Mozilla: Intern Presentations

Intern Presentations 6 interns will be presenting what they worked on over the summer: Spenser Bauman, SpiderMonkey: Making polymorphism fast. Tweaking the JIT for faster container operations....

Planet Mozilla: Francesco Polizzi: Marrying Growth, Data, and Privacy on the Web

Francesco Polizzi: Marrying Growth, Data, and Privacy on the Web Mozilla intern Francesco Polizzi describes his project: Marrying Growth, Data, and Privacy on the Web. Is the internet in danger of data driven disaster? Maybe....

Planet Mozilla: Ursula Sarracini: Three Easy Steps to a Happy e10s

Ursula Sarracini: Three Easy Steps to a Happy e10s Ursula Sarracini - Three Easy Steps To a Happy e10s. I'll show you how to make a project multi-process friendly by showing you how I...

Planet Mozilla: Safeguarding Choice and Control Online

We are calling on Microsoft to “undo” its aggressive move to override user choice on Windows 10

Mozilla exists to bring choice, control and opportunity to everyone on the Web. We build Firefox and our other products for this reason. We build Mozilla as a non-profit organization for this reason. And we work to make the Internet experience beyond our products represent these values as much as we can.

Sometimes we see great progress, where consumer products respect individuals and their choices. However, with the launch of Windows 10 we are deeply disappointed to see Microsoft take such a dramatic step backwards. It is bewildering to see, after almost 15 years of progress bolstered by significant government intervention, that with Windows 10 user choice has now been all but removed. The upgrade process now appears to be purposefully designed to throw away the choices its customers have made about the Internet experience they want, and replace it with the Internet experience Microsoft wants them to have.

tweet-button

On the user choice benchmark, Microsoft’s Windows 10 falls woefully short, even when compared to its own past versions. While it is technically possible for people to preserve their previous settings and defaults, the design of the new Windows 10 upgrade experience and user interface makes this neither obvious nor easy. We are deeply passionate about our mission to ensure people are front, center and squarely in the driver’s seat of their online experience, so when we first encountered development builds of Windows 10 that appeared poised to override millions of individual decisions people have made about their experience, we were compelled to reach out to Microsoft immediately to address this. And so we did. Unfortunately this didn’t result in any meaningful change.

Today we are sending an open letter to Microsoft’s CEO to again insist that Windows 10 make it easy, obvious and intuitive for people to maintain the choices they have already made — and make it easier for people to assert new choices and preferences.

In the meantime, we’re rolling out support materials and a tutorial video to help guide everyone through the process of preserving their choices on Windows 10.

Blog Post: Firefox for Windows 10: How to Restore or Choose Firefox as Your Default Browser

An Open Letter to Microsoft’s CEO: Don’t Roll Back the Clock on Choice and Control

Planet Mozilla: An Open Letter to Microsoft’s CEO: Don’t Roll Back the Clock on Choice and Control

Satya,

I am writing to you about a very disturbing aspect of Windows 10. Specifically, that the update experience appears to have been designed to throw away the choice your customers have made about the Internet experience they want, and replace it with the Internet experience Microsoft wants them to have.

When we first saw the Windows 10 upgrade experience that strips users of their choice by effectively overriding existing user preferences for the Web browser and other apps, we reached out to your team to discuss this issue. Unfortunately, it didn’t result in any meaningful progress, hence this letter.

We appreciate that it’s still technically possible to preserve people’s previous settings and defaults, but the design of the whole upgrade experience and the default settings APIs have been changed to make this less obvious and more difficult. It now takes more than twice the number of mouse clicks, scrolling through content and some technical sophistication for people to reassert the choices they had previously made in earlier versions of Windows. It’s confusing, hard to navigate and easy to get lost.

Mozilla exists to bring choice, control and opportunity to everyone. We build Firefox and our other products for this reason. We build Mozilla as a non-profit organization for this reason. And we work to make the Internet experience beyond our products represent these values as much as we can.

Sometimes we see great progress, where consumer products respect individuals and their choices. However, with the launch of Windows 10 we are deeply disappointed to see Microsoft take such a dramatic step backwards.

These changes aren’t unsettling to us because we’re the organization that makes Firefox. They are unsettling because there are millions of users who love Windows and who are having their choices ignored, and because of the increased complexity put into everyone’s way if and when they choose to make a choice different than what Microsoft prefers.

We strongly urge you to reconsider your business tactic here and again respect people’s right to choice and control of their online experience by making it easier, more obvious and intuitive for people to maintain the choices they have already made through the upgrade experience. It also should be easier for people to assert new choices and preferences, not just for other Microsoft products, through the default settings APIs and user interfaces.

Please give your users the choice and control they deserve in Windows 10.

Sincerely,

Chris Beard
CEO, Mozilla

Blog Post: Firefox for Windows 10: How to Restore or Choose Firefox as Your Default Browser

Blog Post: Safeguarding Choice and Control Online

Planet Mozilla: German speaking community bi-weekly meeting

German speaking community bi-weekly meeting https://wiki.mozilla.org/De/Meetings

Planet MozillaA single platform for localization

Let’s get straight to the biscuits. From now on, you only need one tool to localize Mozilla stuff. That’s it. Single user interface, single translation memory, single permission management, single user account. Would you like to give it a try? Keep on reading!

A little bit of background.
Mozilla software and websites are localized by hundreds of volunteers, who give away their free time to put exciting technology into the hands of people across the globe. Keep in mind that 2 out of 3 Firefox installations are non-English and we haven’t shipped a single Firefox OS phone in English yet.

Considering the impact they have and the work they contribute, I have huge respect for our localizers and the feedback we get from them. One of the most common complaints I’ve been hearing is that we have too many localization tools. And I couldn’t agree more. At one of our recent l10n hackathons I was even introduced to a tool I had never heard of, despite 13 years of involvement with Mozilla localization!

So I thought, “Let’s do something about it!”

9 in 1.
I started by looking at the tools we use in Slovenian team and counted 9(!) different tools:

Eating my own dog food, I had already integrated all 3 terminology services into Pontoon, so that suggestions from these sources are presented to users while they translate. Furthermore, Pontoon syncs with repositories, sometimes even more often than the dashboards, practically eliminating the need to look at them.

So all I had to do was migrate projects from the rest of the editors into Pontoon. Not a single line of code needed to be written for the Verbatim migration. Pootle and the text editor were slightly more complicated. They were used to localize Firefox, Firefox for Android, Thunderbird and Lightning, which all use the huge mozilla-central repository as their source repository and share locale repositories.

Nevertheless, a few weeks after the team agreed to move to Pontoon, Slovenian now uses Pontoon as the only tool to localize all (31) of our active projects!

Who wants to join the party?
Slovenian isn’t the only team using Pontoon. In fact, there are two dozen locales with at least 5 projects enabled in Pontoon. Recently, Ukrainian (uk) and Brazilian Portuguese (pt-BR) have been especially active, not only in terms of localization but also in terms of feedback. A big shout out to Artem, Marco and Marko!

There are obvious benefits of using just one tool, namely keeping all translations, attributions, contributor stats, etc. in one place. To give Pontoon a try, simply select a project and request your locale to be enabled. Migrating projects from other tools will of course preserve all the translations. Starting today, that includes attributions and submission dates (who translated what, and when it was translated) if you’re moving projects from Verbatim.

And, as you already know, Pontoon is developed by Mozilla, so we invite you to report problems and request new features. We also accept patches. ;) We have many exciting things coming up by the end of the summer, so keep an eye out for updates!

Planet MozillaWeb QA Weekly Meeting

Web QA Weekly Meeting This is our weekly gathering of Mozilla'a Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

Planet MozillaReps weekly

Reps weekly Weekly Mozilla Reps call

Planet MozillaTab audio indicators and muting in Firefox Nightly

Sometimes when you have several tabs open, and one of them starts to make some noise, you may wonder where the noise is coming from.  Other times, you may want to quickly mute a tab without figuring out if the web page provides its own UI for muting the audio.  On Wednesday, I landed the user-facing bits of a feature to add an audio indicator to the tabs that are playing audio, and to enable muting them.  You can see a screenshot of what this will look like in action below.

Tab audio indicators in action

Tab audio indicators in action

As you can see in the screenshot, my SoundCloud tab is playing audio, and so is my YouTube tab, but the YouTube tab has been muted.  Muting and unmuting a tab is easy by clicking on the tab audio indicator icon.  You can test this out yourself on Firefox Nightly starting tomorrow!

This feature should work with all APIs that let you play audio, such as HTML5 <audio> and <video>, and Web Audio.  Also, it works with the latest Flash beta.  Note that you actually need to install the latest Flash beta, that is, version 19.0.0.124 which was released yesterday.  Earlier versions of Flash won’t work with this feature.

We’re interested in your feedback about this feature, and especially about any bugs that you may encounter.  We hope to iron out the rough edges and then let this feature ride the trains.  If you are curious about this progress, please follow along on the tracking bug.

Last but not least, this is the results of the effort of many of my colleagues, most notably Andrea Marchesini, Benoit Girard, and Stephen Horlander.  Thanks to those and everyone else who helped with the code, reviews, and other things!

Planet MozillaAnother Marionette release! Now with Windows Support!

If you have been wanting to use Marionette but couldn't because you only work on Windows, now is your chance to do so! All the latest downloads are available from our development GitHub repository releases page.

There is also a new page on MDN that walks you through the process of setting up Marionette and using it. I have only updated the Python bindings so far, so I can get a feel for how people are using it.

Since you are awesome early adopters it would be great if we could raise bugs.

I am not expecting everything to work, but below is a quick list of things that I know don't work.

  • No support for self-signed certificates
  • No support for actions
  • No support for the logging endpoint
  • getPageSource is not available. It will be added at a later stage; it was a slightly contentious part of the specification.
  • I am sure there are other things we don't remember

Switching frames needs to be done with either a WebElement or an index. Windows can only be switched by window handles. This is currently how it has been discussed in the specification.

If in doubt, raise bugs!

Thanks for being an early adopter and thanks for raising bugs as you find them!

Planet MozillaCSS Vendor Prefixes - Some Historical Context

A very good (must) read by Daniel Glazman about CSS vendor prefixes and their challenges. He reminds us of what I touched on yesterday about the issues with regard to Web Compatibility:

Flagged properties have another issue: they don't solve the problem of proprietary extensions to CSS that become mainstream. If a given vendor implements for its own usage a proprietary feature that is so important to them, internally, they have to "unflag" it, you can be sure some users will start using it if they can. The spread of such a feature remains a problem, because it changes the delicate balance of a World Wide Web that should be readable and usable from anywhere, with any platform, with any browser.

I think the solution is in the hands of browser vendors: they have to consider that experimental features are experimental whatever their spread in the wild. They don't have to care about the web sites they will break if they change, update or even ditch an experimental or proprietary feature. We have heard too many times the message « sorry, can't remove it, it spread too much ». It's a bad signal because it clearly tells CSS Authors experimental features are reliable because they will stay forever as they are. They also have to work faster and avoid letting an experimental feature live for more than two years.

Emphasis is mine on this last part. Yes it's a very bad signal. And check what was said yesterday.

@AlfonsoML And we will always support them (unlike some vendors that remove things at will). So what is the issue?

This is the issue in terms of Web Compatibility. It's precisely what I was saying: implementers do not understand the impact it has.

Planet MozillaDecoding Hashed known_hosts Files

Decoding Hashed known_hosts Files

tl;dr: You might find this gist handy if you enable HashKnownHosts

Modern ssh comes with the option to obfuscate the hosts it can connect to, by enabling the HashKnownHosts option. Modern server installs have that as a default. This is a good thing.

The obfuscation occurs by hashing the first field of the known_hosts file - this field contains the hostname, port, and IP address used to connect to a host. Presumably, there is a private ssh key on the host used to make the connection, so this process makes it harder for an attacker to utilize those private keys if the server is ever compromised.

Super! Nifty! Now how do I audit those files? Some services have multiple IP addresses that serve a host, so some updates and changes are legitimate. But which ones? It’s a one-way hash, so you can’t decode it.

Well, if you had an unhashed copy of the file, you could match host keys and determine the host name & IP. [1] You might just have such a file on your laptop (at least I don’t hash keys locally). [2] (Or build a special file by connecting to the hosts you expect with the options “-o HashKnownHosts=no -o UserKnownHostsFile=/path/to/new_master”.)
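
For the curious, the hashed first field is just an HMAC-SHA1 of the hostname, keyed with a random salt, with both parts base64-encoded behind a `|1|` marker. Here's a minimal Python sketch of the matching step, assuming the standard `|1|salt|digest` layout:

```python
import base64
import hashlib
import hmac

def hash_host(hostname, salt):
    """Build a hashed known_hosts first field: |1|b64(salt)|b64(HMAC-SHA1(salt, hostname))."""
    digest = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
    return "|1|%s|%s" % (base64.b64encode(salt).decode(),
                         base64.b64encode(digest).decode())

def matches(hashed_field, candidate):
    """True if the candidate hostname/IP hashes to the same field under its salt."""
    parts = hashed_field.split("|")
    if len(parts) != 4 or parts[1] != "1":
        return False
    salt = base64.b64decode(parts[2])
    return hash_host(candidate, salt) == hashed_field
```

Matching a hashed file against an unhashed one then amounts to re-hashing every plaintext host with each entry's salt and comparing.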

I threw together a quick Python script to do the matching, and it’s at this gist. I hope it’s useful - as I find bugs, I’ll keep it updated.

Bonus Tip: https://github.com/defunkt/gist is a very nice way to manage gists from the command line.

Footnotes

[1]A lie - you’ll only get the host names and IPs that you have connected to while building your reference known_hosts file.
[2]I use other measures to keep my local private keys unusable.

Planet MozillaCSS Vendor Prefixes

I have read everything and its contrary about CSS vendor prefixes in the last 48 hours. Twitter, blogs, Facebook are full of messages or articles about what are or are supposed to be CSS vendor prefixes. These opinions are often given by people who were not members of the CSS Working Group when we decided to launch vendor prefixes. These opinions are too often partly or even entirely wrong so let me give you my own perspective (and history) about them. This article is with my CSS Co-chairman's hat off, I'm only an old CSS WG member in the following lines...

  • CSS Vendor Prefixes as we know them were proposed by Mike Wexler from Adobe in September 1998 to allow browser vendors to ship proprietary extensions to CSS.

    In order to allow vendors to add private properties using the CSS syntax and avoid collisions with future CSS versions, we need to define a convention for private properties. Here is my proposal (slightly different than was talked about at the meeting). Any vendors that defines a property that is not specified in this spec must put a prefix on it. That prefix must start with a '-', followed by a vendor specific abbreviation, and another '-'. All property names that DO NOT start with a '-' are RESERVED for using by the CSS working group.

  • One of the largest shippers of prefixed properties at that time was Microsoft, which introduced literally dozens of such properties in Microsoft Office.
  • The CSS Working Group slowly evolved from that to « vendor prefixes indicate proprietary features OR experimental features under discussion in the CSS Working Group ». In the latter case, the vendor prefixes were supposed to be removed when the spec stabilized enough to allow it, i.e. reaching an official Call for Implementation.
  • Unfortunately, some prefixed « experimental features » were so immensely useful to CSS authors that they spread at a fast pace on the Web, even if the CSS authors were instructed not to use them. CSS Gradients (a feature we originally rejected: « Gradients are an example. We don't want to have to do this in CSS. It's only a matter of time before someone wants three colors, or a radial gradient, etc. ») are the perfect example of that. At some point in the past, my own editor BlueGriffon had to output several different versions of CSS gradients to accommodate the various implementation states available in the wild (WebKit, I'm looking at you...).
  • Unfortunately, some of those prefixed properties took a lot, really a lot, of time to reach a stable state in a Standard and everyone started relying on prefixed properties in production web sites...
  • Unfortunately again, some vendors did not apply the rules they decided themselves: since the prefixed version of some properties was so widely used, they maintained them with their early implementation and syntax in parallel to a "more modern" implementation matching, or not, what was in the Working Draft at that time.
  • We ended up just a few years ago in a situation where prefixed properties were so widely used they started being harmful to the Web. The incredible growth of first WebKit and then Chrome triggered a massive adoption of prefixed properties by CSS authors, up to the point other vendors seriously considered implementing the -webkit- prefix themselves or at least simulating it.
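
As an aside, the naming convention from Wexler's 1998 proposal is mechanical enough to check with a short pattern; a minimal Python sketch of the idea (the property names below are only illustrative):

```python
import re

# Per the proposal: a vendor-private property starts with '-',
# followed by a vendor abbreviation, and another '-'.
VENDOR_PREFIX = re.compile(r"^-[a-z]+-")

def is_vendor_prefixed(prop):
    """True if a CSS property name follows the vendor-prefix convention."""
    return bool(VENDOR_PREFIX.match(prop))
```

Everything not starting with `-` stays reserved for the CSS Working Group itself.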

Vendor prefixes were not a complete failure. They allowed innovative products to be released to the masses, and drove the deep adoption of HTML and CSS in products that were not originally made for Web Standards (like Microsoft Office). They allowed vendors to ship experimental features and gather priceless feedback from our users, CSS Authors. But they failed for two main reasons:

  1. The CSS Working Group - and the Group is really made only of its Members, the vendors - took faaaar too much time to standardize critical features that saw immediate massive adoption.
  2. Some vendors did not update nor "retire" experimental features when they had to do it, ditching themselves the rules they originally agreed on.

From that perspective, putting experimental features behind a flag that is by default "off" in browsers is a much better option. It's not perfect though. I'm still under the impression the standardization process becomes considerably harder when such a flag is "turned on" in a major browser before the spec becomes a Proposed Recommendation. A Standardization process is not a straight line, and even at the latest stages of standardization of a given specification, issues can arise and trigger more work and then a delay or even important technical changes. Even at PR stage, a spec can be formally objected to or face an IPR issue delaying it. As CSS matures, we increasingly deal with more and more complex features and issues, and it's hard to predict when a feature will be ready for shipping. But we still need to gather feedback, we still need to "turn flags on" at some point to get real-life feedback from CSS Authors. Unfortunately, you can't easily remove things from the Web. Breaking millions of web sites to "retire" an experimental feature is still a difficult choice...

Flagged properties have another issue: they don't solve the problem of proprietary extensions to CSS that become mainstream. If a given vendor implements for its own usage a proprietary feature that is so important to them, internally, they have to "unflag" it, you can be sure some users will start using it if they can. The spread of such a feature remains a problem, because it changes the delicate balance of a World Wide Web that should be readable and usable from anywhere, with any platform, with any browser.

I think the solution is in the hands of browser vendors: they have to consider that experimental features are experimental whatever their spread in the wild. They don't have to care about the web sites they will break if they change, update or even ditch an experimental or proprietary feature. We have heard too many times the message « sorry, can't remove it, it spread too much ». It's a bad signal because it clearly tells CSS Authors experimental features are reliable because they will stay forever as they are. They also have to work faster and avoid letting an experimental feature live for more than two years. That requires taking the following hard decisions:

  • if a feature does not stabilize in two years' time, that's probably because it's not ready or too hard to implement, or not strategic at that moment, or the production of a Test Suite is too large an effort, or whatever. It then has to be dropped or postponed.
  • Tests are painful and time-consuming. But testing is one of the mandatory steps of our Standardization process. We should "postpone" specs that can't get a Test Suite to move along the REC track in a reasonable time. That implies removing the experimental feature from browsers, or at least turning the flag they live behind off again. It's a hard and painful decision, but it's a reasonable one given all I said above and the danger of letting an experimental feature spread.

Planet MozillaNóirín Plunkett: Remembering Them

<figure class="wp-caption alignleft" id="attachment_3047" style="width: 225px;">Nóirín Plunkett & Benjamin KerensaNóirín and I</figure>

Today I learned of some of the worst kind of news: my friend and a valuable contributor to the great open source community, Nóirín Plunkett, passed away. They (this is their preferred pronoun, per their Twitter profile) were well regarded in the open source community for their contributions.

I had known them for about four years now, having met them at OSCON and seen them regularly at other events. They were always great to have a discussion with and learn from and they always had a smile on their face.

It is very sad to lose them as they demonstrated an unmatchable passion and dedication to open source and community and surely many of us will spend many days, weeks and months reflecting on the sadness of this loss.

Other posts about them:

https://adainitiative.org/2015/07/remembering-noirin-plunkett/
http://www.apache.org/memorials/noirin.html
http://www.harihareswara.net/sumana/2015/07/29/0

Planet MozillaA-Team Update, July 29, 2015

Highlights

Treeherder: We’ve added to mozlog the ability to create error summaries which will be used as the basis for automatic starring.  The Treeherder team is working on implementing database changes which will make it easier to add support for that.  On the front end, there’s now a “What’s Deployed” link in the footer of the help page, to make it easier to see what commits have been applied to staging and production.  Job details are now shown in the Logviewer, and a mockup has been created of additional Logviewer enhancements; see bug 1183872.

MozReview and Autoland: Work continues to allow autoland to work on inbound; MozReview has been changed to carry forward r+ on revised commits.

Bugzilla: The ability to search attachments by content has been turned off; BMO documentation has been started at https://bmo.readthedocs.org.

Perfherder/Performance Testing: We’re working towards landing Talos in-tree.  A new Talos test measuring tab-switching performance has been created (TPS, or Talos Page Switch); e10s Talos has been enabled on all platforms for PGO builds on mozilla-central.  Some usability improvements have been made to Perfherder – https://treeherder.mozilla.org/perf.html#/graphs.

TaskCluster: Successful OSX cross-compilation has been achieved; working on the ability to trigger these on Try and sorting out details related to packaging and symbols.  Work on porting Linux tests to TaskCluster is blocked due to problems with the builds.

Marionette: The Marionette-WebDriver proxy now works on Windows.  Documentation on using this has been added at https://developer.mozilla.org/en-US/docs/Mozilla/QA/Marionette/WebDriver.

Developer Workflow: A kill_and_get_minidump method has been added to mozcrash, which allows us to get stack traces out of Windows mochitests in more situations, particularly plugin hangs.  Linux xpcshell debug tests have been split into two chunks in buildbot in order to reduce E2E times, and chunks of mochitest-browser-chrome and mochitest-devtools-chrome have been re-normalized by runtime across all platforms.  Now that mozharness lives in the tree, we’re planning on removing the “in-tree configs”, and consolidating them with the previously out-of-tree mozharness configs (bug 1181261).

Tools: We’re testing an auto-backfill tool which will automatically retrigger coalesced jobs in Treeherder that precede a failing job.  The goal is to reduce the turnaround time required for this currently manual process, which should in turn reduce tree closure times related to test failures.

The Details

bugzilla.mozilla.org

Treeherder/Automatic Starring

  • We’re generating error summaries now that will serve as the basis for automatic starring work.

Treeherder/Front End

  • New “What’s Deployed” feature in Help footer to view stage/prod deployment status
  • Logviewer now contains the full ‘Job Info’ aka. tinderbox printlines (bug 1092209)
  • Created a mock of logviewer UI changes (bug 1183872)

Perfherder/Performance Testing

  • Working towards moving Talos code in-tree (bug 787200)
  • New Talos test TPS (Talos Page Switch) (bug 1166132)
  • Fixed a few data ingestion/duplication cases.
  • Adjusting calculation of suite summaries to match graph server, not finished yet (tracking: bug 1184968)
  • e10s on all platforms, only runs on mozilla-central for pgo builds, broken tests, big regressions are tracked in bug 1144120
  • Perfherder is easier to use: some polish on test selection and the compare view, and most importantly we have found a few odd bugs that caused duplicate data to show up; check it out: https://treeherder.mozilla.org/perf.html#/graphs
  • Starting the work of moving Android Talos to Autophone (bug 1170685)

MozReview/Autoland

  • bug 1184079 – Fix for autopublishing when authenticating to MozReview via BMO cookies
  • bug 1178025 – Commits table looks nicer
  • bug 1175166 – r+ is now carried forward on commits from level 3 authors

TaskCluster Support

Mobile Automation

  • Continued work on porting android talos tests to autophone; remaining work is to figure out posting results and ensuring it runs regularly and reliably.
  • Support for the Android stock browser and Dolphin has been added to mozbench (bug 1103134)

Dev Workflow

  • Created patch that replaces mach’s logger with mozlog. Still several rough edges and perf issues to iron out

Media Automation

  • The new MSE rewrite is now enabled by default on Nightly and we’re replacing a few tests in response: bug 1186943 – detection of video stalls has to respond to new internal strings from the new MSE implementation by :jya.
  • firefox-media-tests mozharness log is now parsed into steps for Treeherder’s Log Viewer
  • Fixed a problem with automation scripts for WebRTC tests for Windows 64.

General Automation

  • Moved mozlog.structured to top-level mozlog, and released mozlog 3.0
  • Added a kill_and_get_minidump method to mozcrash (bug 890026). As a result we’re getting minidumps out of Windows mochitests under more circumstances (in particular, plugin hangs in certain intermittently failing tests).
  • The MozillaPulse consumer now supports listening to multiple exchanges simultaneously (bug 1180897).
  • Bug 1186420 – Autophone – update requirements and deploy thclient 1.6
  • Bughunter moved to SCL3 without interruption
  • Bug 1185498 – Sisyphus – Bughunter – consume urls directly from Socorro
  • linux debug xpcshell was split into two chunks to reduce E2E times (bug 1185499)
  • runtimes for mochitest-browser-chrome and mochitest-devtools have been renormalized across all platforms
  • Allow Firefox UI tests to determine where to get Firefox crash symbols for releases and improve reproducibility
  • Testing auto-backfill in production (bug 1180732)
  • Now that mozharness lives in the tree, we’re going to remove the “in-tree configs”, which will consolidate mozharness options and make maintenance simpler (bug 1181261)

ActiveData

  • ActiveData requires monitoring on all nodes before it can be left alone for more than a day without it failing:
    • Made a fork of Supervisor to run simple cron jobs – the biggest task was finding and installing (and compiling!) the C libraries used
    • Added Supervisor to spot instances to monitor ES; not just the process, but query response time.  Also monitoring the indexing jobs.
  • Replicated OrangeFactor to ActiveData so a masters student (and the public) can query it, or extract it.

Marionette

  • Landed Proxy support via capabilities
  • Updating cookie support to return httpOnly flag
  • Added a –version arg to Marionette (bug 1183157)
  • Landing support for W3C Compatible Drivers in Selenium Tree and released 2.46.1 so users can use it.
  • Wrote a small guide to use it https://developer.mozilla.org/en-US/docs/Mozilla/QA/Marionette/WebDriver
  • Marionette<->WebDriver Proxy now works on Windows, Linux and OSX as of 0.3.0

Planet MozillaLost in data – episode 2 – bisecting and comparing

This week on Lost in Data, we tackle yet another pile of alerts.  This time we have a set of changes which landed together, and we push to try for bisection.  In addition we have an e10s-only failure which happened when we broke Talos uploading to Perfherder.  See how I get one step closer to figuring out the root cause of the regressions.
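
At its core, bisection is just a binary search over an ordered range of changesets; here's a minimal sketch of the general technique (not Mozilla tooling), where the `is_bad` callback stands in for a try push plus a test run:

```python
def first_bad(changesets, is_bad):
    """Return the first changeset for which is_bad() is True,
    assuming all good changesets precede all bad ones."""
    lo, hi = 0, len(changesets) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(changesets[mid]):
            hi = mid          # regression is at mid or earlier
        else:
            lo = mid + 1      # regression is after mid
    return changesets[lo]
```

With n changesets in the range, this needs only about log2(n) try pushes to pin down the culprit.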


Planet MozillaA third day of HTTP Workshopping

I’ve met a bunch of new faces and friends here at the HTTP Workshop in Münster. Several who I’ve only seen or chatted with online before and some that I never interacted with until now. Pretty awesome really.

Out of the almost forty HTTP fanatics present at this workshop, five persons are from Google, four from Mozilla (including myself) and Akamai has three employees here. Those are the top-3 companies. There are a few others with 2 representatives but most people here are the only guys from their company. Yes they are all guys. We are all guys. The male dominance at this event is really extreme and we’ve discussed this sad circumstance during breaks and it hasn’t gone unnoticed.

This particular day started out grand with Eric Rescorla (of Mozilla) talking about HTTP Security in his marvelous high-speed style. Lots of talk about where HTTPS usage stands right now on the web, HTTPS trends, TLS 1.3 details and when it is coming, and we got into a lot of talk about HTTP deprecation and what can and cannot be done, etc.

Next up was a presentation about  HTTP Privacy and Anonymity by Mike Perry (from the Tor project) about lots of aspects of what the Tor guys consider regarding fingerprinting, correlation, network side-channels and similar things that can be used to attempt to track user or usage over the Tor network. We got into details about what recent protocols like HTTP/2 and QUIC “leak” or open up for fingerprinting and what (if anything) can or could be done to mitigate the effects.

Evolving HTTP Header Fields by Julian Reschke (of Green Bytes) then followed, discussing all the variations of header syntax that we have in HTTP and how it really is not possible to write a generic parser that can handle them, with a suggestion on how to unify this and introduce a common format for future new headers. Julian’s suggestion to use JSON for this ignited a discussion about header formats in general and what should or could be done for HTTP/3 and if keeping support for the old formats is necessary or not going forward. No real consensus was reached.

Willy Tarreau (from HAProxy) then took us into the world of HTTP Infrastructure scaling and Load balancing, and showed us on the microsecond level how fast a load balancer can be, how much extra work adding HTTPS can mean, and then ended with a couple of suggestions of what he thinks could’ve helped his scenario. That then turned into a general discussion and network architecture brainstorm on what can be done, how it could be improved and what TLS and other protocols could possibly do to aid. Cramming every possible gigabit out of load balancers certainly is a challenge.

Talking about cramming bits, Kazuho Oku got to show the final slides of the day, demonstrating how he’s managed to get his picohttpparser to parse HTTP/1 headers at a speed that is only slightly slower than strlen() – including a raw dump of the x86 assembler the code is turned into by a compiler. What could possibly be a better way to end a day full of protocol geekery?

Google graciously sponsored the team dinner in the evening at a Peruvian place in the town! Yet another fully packed day has ended.

I’ll top off today’s summary with a picture of the gift Mark Nottingham (who’s herding us through these days) was handing out today to make us stay keen and alert (Mark pointed out to me that this was a gift from one of our Japanese friends here):

kitkat

Planet MozillaProduct Coordination Meeting

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Planet MozillaBeer and Tell – July 2015

Once a month, web developers from across the Mozilla Project get together to develop an encryption scheme that is resistant to bad actors yet able to be broken by legitimate government entities. While we toil away, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Osmose: Moseamp

Osmose (that’s me!) was up first, and shared Moseamp, an audio player. It’s built using HTML, CSS, and JavaScript, but acts as a native app thanks to the Electron framework. Moseamp can play standard audio formats, and also can load plugins to add support for extra file formats, such as Moseamp-Audio-Overload for playing PSF files and Moseamp-GME for playing NSF and SPC files. The plugins rely on libraries written in C that are compiled via Emscripten.

Peterbe: Activity

Next was Peterbe with Activity, a small webapp that gathers the events relevant to a project, such as pull requests, PR comments, bug comments, and more, and displays them in a nice timeline along with the person related to each action. It currently pulls data from Bugzilla and Github.

The project was born from the need to help track a single individual’s activities related to a project, even if they have different usernames on different services. Activity can help a project maintainer see what contributors are doing and determine if there’s anything they can do to help the contributor.

New One: MXR to DXR

New One was up next with a Firefox add-on called MXR to DXR. The add-on rewrites all links to MXR viewed in Firefox to point to the equivalent page on DXR, the successor to MXR. The add-on also provides a hotkey for switching between MXR and DXR while browsing the sites.
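
The rewrite the add-on performs is presumably little more than a host swap; a minimal Python sketch of the idea (that DXR mirrors MXR's `/<tree>/source/<path>` layout is an assumption here, and the real add-on is of course Firefox JavaScript):

```python
from urllib.parse import urlsplit, urlunsplit

def mxr_to_dxr(url):
    """Rewrite an MXR URL to the equivalent DXR one by swapping the host.
    Assumes DXR serves the same tree/path layout as MXR."""
    parts = urlsplit(url)
    if parts.netloc != "mxr.mozilla.org":
        return url  # leave non-MXR links alone
    return urlunsplit(parts._replace(netloc="dxr.mozilla.org"))
```

An add-on doing this would simply run such a rewrite over every link as pages load.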

bwalker: Liturgiclock

Last was bwalker, who shared liturgiclock, a webpage showing a year-long view of the religious texts that Lutherans are supposed to read throughout the year, based on the date. The site uses a Node.js library that provides the data on which text belongs to which date, and the visualization itself is powered by SVG and D3.js.


We don’t actually know how to go about designing an encryption scheme, but we’re hoping to run a Kickstarter to pay for the Udacity cryptography course. We’re confident that after being certified as cryptologists we can make real progress towards our dream of swimming in pools filled with government cash.

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Planet MozillaThe Joy of Coding (mconley livehacks on Firefox) - Episode 23

The Joy of Coding (mconley livehacks on Firefox) - Episode 23 Watch mconley livehack on Firefox Desktop bugs!
