Planet Mozilla: Random Thoughts on Management

I have ended up managing people at the last three places I’ve worked, over the last 18 years. I can honestly say that only in the last few years have I really started to embrace the job of managing. Here’s a collection of thoughts and observations:

Growth: Ideas and Opinions and Failures

Expose your team to new ideas and help them create their own voice. When people get bored or feel they aren’t growing, they’ll look elsewhere. Give people time to explore new concepts, while trying to keep results and outcomes relevant to the project.

Opinions are not bad. A team without opinions is bad. Encourage people to develop opinions about everything. Encourage them to evolve their opinions as they gain new experiences.

“Good judgement comes from experience, and experience comes from bad judgement” – Frederick P. Brooks

Create an environment where differing viewpoints are welcomed, so people can learn multiple ways to approach a problem.

Failures are not bad. Failing means trying, and you want people who try to accomplish work that might be a little beyond their current reach. It’s how they grow. Your job is keeping the failures small, so they can learn from the failure, but not jeopardize the project.

Creating Paths: Technical versus Management

It’s important to have an opinion about the ways a management track is different than a technical track. Create a path for managers. Create a different path for technical leaders.

Management tracks have highly visible promotion paths. Organization structure changes, company-wide emails, and being included in more meetings and decision making. Technical track promotions are harder to notice if you don’t also increase the person’s responsibilities and decision making role.

Moving up either track means more responsibility and more accountability. Find ways to delegate decision making to leaders on the team. Make those leaders accountable for outcomes.

Train your engineers to be successful managers. There is a tradition in software development to use the most senior engineer to fill openings in management. This is wrong. Look for people that have a proclivity for working with people. Give those people management-like challenges and opportunities. Once they (and you) are confident in taking on management, promote them.

Snowflakes: Engineers are Different

Engineers, even great ones, have strengths and weaknesses. As a manager, you need to learn these for each person on your team. People can be very strong at starting new projects, building something from nothing. Others can be great at finishing, making sure the work is ready to release. Some excel at user-facing code, others love writing back-end services. Leverage your team’s strengths to efficiently ship products.

The better you know your team, the less likely you are to create bored, passionless drones. Don’t treat engineers as fungible, swappable resources. Set them, and the team, up for success. Keep people engaged and passionate about the work.

Further Reading

The Role of a Senior Developer
On Being A Senior Engineer
Want to Know Difference Between a CTO and a VP of Engineering?
Thoughts on the Technical Track
Bored People Quit
Strong Opinions, Weakly Held

Planet Mozilla: Web Components, Stories Of Scars

Chris Heilmann has written about Web Components.

If you want to see the mess that is the standardisation effort around web components right now in all its ugliness, Wilson Page wrote a great post on that on Mozilla Hacks. Make sure to also read the comments – lots of good stuff there.

Indeed a very good blog post to read. Then Chris went on to say:

Web Components are a great idea. Modules are a great idea. Together, they bring us hours and hours of fun debating where what should be done to create a well-performing, easy to maintain and all around extensible complex app for the web.

This has been twitching in the back of my mind for the last couple of weeks. And I kind of remember a wicked pattern from 10 years ago. Enter Compound Document Formats (CDF) with its WICD (read wicked) specifications. If you think I'm silly, check the CDF FAQ:

When combining content from arbitrary sources, a number of problems present themselves, including how rendering is handled when crossing from one markup language to another, or how events propagate across the same boundaries, or how to interpret the meaning of a piece of content within an unanticipated context.

and

Simply put, a compound document is a mixture of content in any number of formats. Compound documents range from static (say, XHTML that includes a simple SVG illustration) to very dynamic (a full-fledged Web Application). A compound document may include its parts directly (such as when you include an SVG image in an XHTML file) or by reference (such as when you embed a separate SVG document in XHTML using an <object> element). There are benefits to both, and the application should determine which one you use. For instance, inclusion by reference facilitates reuse and eases maintenance of a large number of resources. Direct inclusion can improve portability or offline use. W3C will support both modes, called CDR ("compound documents by reference") and CDI ("compound documents by inclusion").

At that time, the Web and W3C were full throttle on XML and namespaces. Now, the cool kids on the block are full throttle on HTML, JSON, polymers and JS frameworks. But if you look carefully and set aside the syntax and architecture parts, the narrative is the same. And with the narratives of the battle and its scars, Web Components sound very similar to the Compound Document Format.

Still by Chris

When it comes to componentising the web, the rabbit hole is deep and also a maze.

Note that not everything was lost from WICD. It helped develop a couple of things, and reimagine the platform. Stay tuned, I think we will have surprises in this story. It's not over yet.

Modularity already has a couple of scars when it comes to large-scale distribution. Remember OpenDoc and OLE? I still remember using Cyberdog. Fun times.

Otsukare!

Planet Mozilla: Add-ons Update – Week of 2015/07/01

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

Add-ons Forum

As we announced before, there’s a new add-ons community forum for all topics related to AMO or add-ons in general. The Add-ons category is already the most active one on the community forum, so thank you all for your contributions! The old forum is still available in read-only mode.

The Review Queues

  • Most nominations for full review are taking less than 10 weeks to review.
  • 272 nominations in the queue awaiting review.
  • Most updates are being reviewed within 8 weeks.
  • 159 updates in the queue awaiting review.
  • Most preliminary reviews are being reviewed within 10 weeks.
  • 295 preliminary review submissions in the queue awaiting review.

A number of factors have led to the current state of the queues: increased submissions, decreased volunteer reviewer participation, and a Mozilla-wide event that took most of our attention last week. We’re back and our main focus is the review queues. We have a new reviewer on our team, who will hopefully make a difference in the state of the queues.

If you’re an add-on developer and would like to see add-ons reviewed faster, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear for their work. Visit our wiki page for more information.

Firefox 40 Compatibility

The Firefox 40 compatibility blog post is up. The automatic compatibility validation will be run in a few weeks.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition (formerly known as Aurora) to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Extension Signing

We announced that we will require extensions to be signed in order for them to continue to work in release and beta versions of Firefox. The wiki page on Extension Signing has information about the timeline, as well as responses to some frequently asked questions.

There’s a small change to the timeline: Firefox 40 will only warn about unsigned extensions (for all channels), Firefox 41 will disable unsigned extensions  by default unless a preference is toggled (on Beta and Release), and Firefox 42 will not have the preference. This means that we’ll have an extra release cycle before signatures are enforced by default.

Electrolysis

Electrolysis, also known as e10s, is the next major compatibility change coming to Firefox. In a nutshell, Firefox will now run in multiple processes, with content code running in a different process than browser code. This should improve responsiveness and overall stability, but it also means many add-ons will need to be updated to support this.

We will be talking more about these changes in this blog in the future. For now we recommend you start looking at the available documentation.

Planet Mozilla: Over the Edge: Web Components are an endangered species

Last week I ran the panel and the web components/modules breakout session of the excellent Edge Conference in London, England and I think I did quite a terrible job. The reason was that the topic is too large and too fragmented and broken to be taken on as a bundle.

If you want to see the mess that is the standardisation effort around web components right now in all its ugliness, Wilson Page wrote a great post on that on Mozilla Hacks. Make sure to also read the comments – lots of good stuff there.

Web Components are a great idea. Modules are a great idea. Together, they bring us hours and hours of fun debating where what should be done to create a well-performing, easy to maintain and all around extensible complex app for the web. Along the way we can throw around lots of tools and ideas like NPM and ES6 imports or – as Alex Russell said it on the panel: “tooling will save you”.

It does. But that was always the case. When browsers didn’t support CSS, we had Dreamweaver to create horribly nested tables that achieved the same effect. There is always a way to make browsers do what we want them to do. In the past, we did a lot of convoluted things client-side with libraries. With the advent of node and others we now have even more environments to innovate and release “not for production ready” impressive and clever solutions.

When it comes to componentising the web, the rabbit hole is deep and also a maze. Many developers don’t have time to even start digging and use libraries like Polymer or React instead, call it a day and call that the “de facto standard” (a term that makes my toenails crawl up – layout tables were a “de facto standard”, so was Flash video).

React did a genius thing: by virtualising the DOM, it avoided a lot of the problems with browsers. But it also means that you forfeit all the good things the DOM gives you in terms of accessibility and semantics/declarative code. It simply is easier to write a <super-button> than to create a fragment for it or write it in JavaScript.

Of course, either is easy for us clever and amazing developers, but the fact is that the web is not for developers. It is a publishing platform, and we are moving away from that concept at a ridiculous pace.

And whilst React gives us all the goodness of Web Components now, it is also a library by a commercial company. That it is open source doesn’t make much of a difference. YUI showed that a truckload of innovation can go into “maintenance mode” very quickly when a company’s direction changes. I have high hopes for React, but I am also worried about dependencies on a single company.

Let’s rewind and talk about Web Components

Let’s do away with modules and imports for now, as I think this is a totally different discussion.

I always loved the idea of Web Components – allowing me to write widgets in the browser that work with it rather than against it is an incredible idea. Years of widget frameworks trying to get the correct performance out of a browser whilst empowering maintainers would come to a fruitful climax. Yes, please, give me a way to write my own controls, inherit from existing ones and share my independent components with other developers.

However, in four years, we haven’t got much to show. When we asked the very captive and elite audience of EdgeConf about Web Components, nobody raised their hand to say they are using them in real products. People either used React or Polymer as there is still no way to use Web Components in production otherwise. When we tried to find examples in the wild, the meager harvest was GitHub’s time element. I do hope that this was not all we wrote and many a company is ready to go with Web Components. But most discussions I had ended up the same way: people are interested, tried them out once and had to bail out because of lack of browser support.

Web Components are a chicken and egg problem where we are currently trying to define the chicken and have many a different idea what an egg could be. Meanwhile, people go to chicken-meat based fast food places to get quick results. And others increasingly mention that we should hide the chicken and just give people the eggs leaving the chicken farming to those who also know how to build a hen-house. OK, I might have taken that metaphor a bit far.

We all agreed that XHTML2 sucked, was overly complicated, and defined without the input of web developers. I get the weird feeling that Web Components and modules are going in the same direction.

In 2012 I wrote a longer post as an immediate response to Google’s big announcement of the foundation of the web platform following Alex Russell’s presentation at Fronteers 11 showing off what Web Components could do. In it I kind of lamented the lack of clean web code and the focus on developer convenience over clarity. Last year, I listed a few dangers of web components. Today, I am not too proud to admit that I lost sight of what is going on. And I am not alone. As Wilson’s post on Mozilla Hacks shows, the current state is messy to say the least.

We need to enable web developers to use “vanilla” web components

What we need is a base to start from: in the browser, in a browser that users already have, and without asking them to turn on a flag. Without that, Web Components are doomed to become a “too complex” standard that nobody implements but instead relies on libraries.

During the breakout session, one of the interesting proposals was to turn Bootstrap components into web components and start with that. Tread the cowpath of what people use and make it available to see how it performs.

Of course, this is a big gamble and it means consensus across browser makers. But we had that with HTML5. Maybe there is a chance for harmony amongst competitors for the sake of an extensible and modularised web that is not dependent on ES6 availability across browsers. We’re probably better off with implementing one sci-fi idea at a time.

I wish I could be more excited or positive about this. But it left me with a sour taste in my mouth to see that EdgeConf, that hot-house of web innovation and think-tank of many very intelligent people, was as confused as I was.

I’d love to see a “let’s turn it on and see what happens” instead of “but, wait, this could happen”. Of course, it isn’t that simple – and the Mozilla Hacks post explains this well – but a boy can dream, right? Remember when using HTML5 video was just a dream?

Planet Mozilla: Who wants to be an alpha tester for Marionette?

Are you an early adopter type? Are you an avid user of WebDriver and want to use the latest and greatest technology? Then you are most definitely in luck.

Marionette, the Mozilla implementation of FirefoxDriver, is ready for a very limited outing. There are a lot of things that have not been implemented or, since we are implementing things against the WebDriver specification, that might not have enough prose to implement yet (this has been a great way to iron out spec bugs).

Getting Started

At the moment, since things are still being developed and we are trying to do things with new technologies (like writing part of this project using Rust), we are starting out by supporting Linux and OS X first. Windows support will be coming in the future!

Getting the driver

We have binaries that you can download, for Linux and for OS X. The only bindings currently updated to work are the Python bindings, which are available in a branch on my fork of the Selenium project. Do the following to get it into a virtualenv:
  1. Create a virtualenv
  2. activate your virtualenv
  3. cd to where you have cloned my repository
  4. In a terminal type the following: ./go py_install

Running tests

Running tests against Marionette requires the following changes (which will hopefully remain small):

Update the desired capabilities to have marionette:true and add binary:/path/to/Firefox/DeveloperEdition/or/Nightly. We are only supporting those two versions at the moment because we have fixed a couple of incompatibility issues in them, which makes speaking to Marionette in the browser quite difficult in the Beta or Release versions.
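
For example, with the Python bindings a session might be set up roughly like this. This is only a sketch: the capability names follow the description above, the Firefox path is a placeholder, and since the bindings live in a branch the exact invocation may differ.

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# Hypothetical path to a Nightly or Developer Edition binary.
firefox_binary = "/Applications/FirefoxNightly.app/Contents/MacOS/firefox"

caps = DesiredCapabilities.FIREFOX.copy()
caps["marionette"] = True        # route the session through Marionette
caps["binary"] = firefox_binary  # Nightly or Developer Edition only for now

driver = webdriver.Firefox(capabilities=caps)
driver.get("https://www.mozilla.org/")
driver.quit()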

Since you are awesome early adopters, it would be great if you could raise bugs. I am not expecting everything to work, but below is a quick list of things that I know don't work.

  • No support for self-signed certificates
  • No support for actions
  • No support for Proxy (but will be there soon)
  • No support for the logging endpoint
  • I am sure there are other things we don't remember

Thanks for being an early adopter and thanks for raising bugs as you find them!

Planet Mozilla: Ditching ElasticUtils on Input for elasticsearch-dsl-py

What was it?

ElasticUtils was a Python library for building and executing Elasticsearch searches. I picked up maintenance after the original authors moved on with their lives, did a lot of work, released many versions and eventually ended the project in January 2015.

Why end it? A bunch of reasons.

It started at PyCon 2014, when I had a long talk with Rob, Jannis, and Erik about ElasticUtils and the new library Honza was working on, which became elasticsearch-dsl-py.

At the time, I knew that ElasticUtils had a bunch of architectural decisions that turned out to make life really hard. Doing some things was hard. It was built for a pre-1.0 Elasticsearch and it would have been a monumental effort to upgrade it past the Elasticsearch 1.0 hump. The code wasn't structured particularly well. I was tired of working on it.

Honza's library had a lot of promise and did the sorts of things that ElasticUtils should have done and did them better--even at that ridiculously early stage of development.
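
To give a flavour of what that looks like (this example is not from the post; it is a minimal sketch with a hypothetical cluster address, index and field names):

from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search

client = Elasticsearch(["localhost:9200"])  # hypothetical local cluster

# Build the query with the chainable DSL instead of hand-assembling JSON bodies.
s = Search(using=client, index="posts").query("match", title="elasticsearch")

response = s.execute()
for hit in response:
    print(hit.title)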

By the end of PyCon 2014, I was pretty sure elasticsearch-dsl-py was the future. The only question was whether to gut ElasticUtils and make it a small shim on top of elasticsearch-dsl-py or end it.

In January 2015, I decided to just end it because I didn't see a compelling reason to keep it around or rewrite it into something on top of elasticsearch-dsl-py. Thus I ended it.

Now to migrate to something different.


Planet Mozilla: Whistler Wrap-up

What an amazing week!

Last week members of the Mozilla community met in beautiful Whistler, BC to celebrate, reflect, brainstorm, and plan (and eat snacks). While much of the week was spent in functional teams (that is, designers with designers and engineers with engineers), the Mozilla Learning Network website (known informally as “Teach”) team was able to convene for two meetings—one focused on our process, and the other on our roadmap and plans for the future.

Breakthroughs

From my perspective, the week inspired a few significant breakthroughs:

  1. The Mozilla Learning Network is one, unified team with several offerings. Those offerings can be summarized in one image (MLN Programs): Networks, Groups, and Convenings. The breakthrough was realizing that it’s urgent that the site reflects the full spectrum of offerings as soon as possible. We’ve adjusted our roadmap accordingly. First up: incorporate the Hive content in a way that makes sense to our audience, and provides clear pathways for engagement.
  2. Our Clubs pipeline is a bit off-balance. We have more interested Club Captains than our current (amazing) Regional Coordinators can support. This inspired an important conversation about changes to our strategy to better test out our model. We’ll be talking about how these changes are reflected on the site soon.
  3. The most important content to localize is our curriculum content. To be fair, we knew this before the work week, but it was definitely crystallized in Whistler. This gives useful shape to our localization plan.
  4. We also identified a few areas where we can begin the process of telling the full “Mozilla Learning” story. By that I mean the work that goes beyond what we call the Mozilla Learning Network—for example, we can highlight our Fellowship programs, curriculum from other teams (starting with Mozilla Science Lab!), and additional peer learning opportunities.
  5. Finally, we identified a few useful, targeted performance indicators that will help us gauge our success: 1) the # of curriculum hits, and 2) the % of site visitors who take the pledge to teach.

Site Updates

I also want to share a few site updates that have happened since I wrote last:

    • The flow for Clubs has been adjusted to reflect the “apply, connect, approve” model described in an earlier post.
    • We’ve added a Protect Your Data curriculum module with six great activities.
    • We added the “Pledge to Teach” action on the homepage. Visitors to the site can choose to take the pledge, and are then notified about an optional survey they can take. We’ll follow up with tailored offerings based on their survey responses.

Questions? Ideas? Share ’em in the comments!


Planet Mozilla: Top 50 DOS Problems Solved: Squashing Files

Q: I post files containing DTP pages and graphics on floppy disks to a bureau for printing. Recently I produced a file that was too big to fit on the disk and I know that I will be producing more in the future. What’s the best way round the problem?

A. There are a number of solutions, most of them expensive. For example, both you and the bureau could buy modems. A modem is a device that allows computers to be connected via a phone line. You would need software, known as a comms program, to go with the modems. This will allow direct PC-to-PC transfer of files without the need for floppy disks. Since your files are so large, you would need a fast 9600 baud modem [Ed: approx 1 kilobyte per second] with MNP5 compression/error correction to make this a viable option.

In this case, however, I would get hold of a utility program called LHA which is widely available from the shareware/PD libraries that advertise in PC Answers. In earlier incarnations it was known as LHarc. LHA enables you to squash files into less space than they occupied before.

The degree of compression depends on the nature of the file. Graphics and text work best, so for you this is a likely solution. The bureau will need a copy of LHA to un-squash the files before it can use them, or you can use LHA in a special way that makes the compressed files self-unpacking.

LHA has a great advantage over rival utilities in that the author allows you to use it for free. There is no registration fee, as with the similar shareware program PKZip, for example.

Every time they brought out a new, larger hard disk, they used to predict the end of the need for compression…

Planet Mozilla: C++ Concepts TS could be voted for publication on July 20

In my report on the C++ standards meeting this May, I described the status of the Concepts TS as of the end of the meeting:

  • The committee’s Core Working Group (CWG) was working on addressing comments received from national standards bodies.
  • CWG planned to hold a post-meeting teleconference to complete the final wording including the resolutions of the comments.
  • The committee would then have the option of holding a committee-wide teleconference to vote the final wording for publication, or else delay this vote until the next face-to-face meeting in October.

I’m excited to report that the CWG telecon has taken place, final wording has been produced, and the committee-wide telecon to approve the final wording has been scheduled for July 20.

If this vote goes through, the final wording of the Concepts TS will be sent to ISO for publication, and the TS will be officially published within a few months!


Planet Mozilla: Firefox 41 will use less memory when running AdBlock Plus

Last year I wrote about AdBlock Plus’s effect on Firefox’s memory usage. The most important part was the following.

First, there’s a constant overhead just from enabling ABP of something like 60–70 MiB. […] This appears to be mostly due to additional JavaScript memory usage, though there’s also some due to extra layout memory.

Second, there’s an overhead of about 4 MiB per iframe, which is mostly due to ABP injecting a giant stylesheet into every iframe. Many pages have multiple iframes, so this can add up quickly. For example, if I load TechCrunch and roll over the social buttons on every story […], without ABP, Firefox uses about 194 MiB of physical memory. With ABP, that number more than doubles, to 417 MiB.

An even more extreme example is this page, which contains over 400 iframes. Without ABP, Firefox uses about 370 MiB. With ABP, that number jumps to 1960 MiB.

(This description was imprecise; the overhead is actually per document, which includes both top-level documents in a tab and documents in iframes.)

Last week Mozilla developer Cameron McCormack landed patches to fix bug 77999, which was filed more than 14 years ago. These patches enable sharing of CSS-related data — more specifically, they add data structures that share the results of cascading user agent style sheets — and in doing so they entirely fix the second issue, which is the more important of the two.

For example, on the above-mentioned “extreme example” (a.k.a. the Vim Color Scheme Test) memory usage dropped by 3.62 MiB per document. There are 429 documents on that page, which is a total reduction of about 1,550 MiB, reducing memory usage for that page down to about 450 MiB, which is not that much more than when AdBlock Plus is absent. (All these measurements are on a 64-bit build.)

I also did measurements on various other sites and confirmed the consistent saving of ~3.6 MiB per document when AdBlock Plus is enabled. The number of documents varies widely from page to page, so the exact effect depends greatly on workload. (I wanted to test TechCrunch again, but its front page has been significantly changed so it no longer triggers such high memory usage.) For example, for one of my measurements I tried opening the front page and four articles from each of nytimes.com, cnn.com and bbc.co.uk, for a total of 15 tabs. With Cameron’s patches applied Firefox with AdBlock Plus used about 90 MiB less physical memory, which is a reduction of over 10%.

Even when AdBlock Plus is not enabled this change has a moderate benefit. For example, in the Vim Color Scheme Test the memory usage for each document dropped by 0.09 MiB, reducing memory usage by about 40 MiB.

If you want to test this change out yourself, you’ll need a Nightly build of Firefox and a development build of AdBlock Plus. (Older versions of AdBlock Plus don’t work with Nightly due to a recent regression related to JavaScript parsing). In Firefox’s about:memory page you’ll see the reduction in the “style-sets” measurements. You’ll also see a new entry under “layout/rule-processor-cache”, which is the measurement of the newly shared data; it’s usually just a few MiB.

This improvement is on track to make it into Firefox 41, which is scheduled for release on September 22, 2015. For users on other release channels, Firefox 41 Beta is scheduled for release on August 11, and Firefox 41 Developer Edition is scheduled to be released in the next day or two.

Planet Mozilla: Firefox Hello Desktop: Behind the Scenes – UI Showcase

This is the third of some posts I’m writing about how we implement and work on the desktop and standalone parts of Firefox Hello. You can find the previous posts here.

The Showcase

One of the most useful parts of development for Firefox Hello is the User Interface (UI) showcase. Since all of the user interface for Hello is written in html and JavaScript, and is displayed in the content scope, we are able to display them within a “normal” web page with very little adjustment.

So what we do is to put almost all our views onto a single html page at representative sizes. The screen-shot below shows just one view from the page, but those buttons at the top give easy access, and in reality there’s lots of them (about 55 at the time of writing).

UI Showcase showing a standalone (link-clicker) view

Faster Development

The showcase has various advantages that help us develop faster:

  • Since it is a web page, we have all the developer tools available to us – inspector, css layout etc.
  • We don’t have to restart Firefox to pick up changes to the layout, nor do we have to recompile – a simple reload of the page is enough to pick up changes.
  • We also don’t have to go through the flow each time, e.g. if we’re changing some of the views which show the media (like the one above), we avoid needing to go through the conversation setup routines for each code/css change until we’re pretty sure it’s going to work the way we expect.
  • Almost all the views are shown – if the css is broken for one view it’s much easier to detect than having to go through the user flow to get to the view you want.
  • We’ve recently added an RTL mode so that we can easily see what the views look like in RTL languages. Hence no awkward forcing of Firefox into RTL mode to check the views.

There’s one other “feature” of the showcase as we’ve got it today – we don’t pick up the translated strings, but rather the raw string label. This tends to give us longer strings than are used normally for English, which it turns out is an excellent way of being able to detect some of the potential issues for locales which need longer strings.

Structure of the showcase

The showcase is a series of iframes. We load individual React components into each iframe, sometimes loading the same component multiple times with different parameters or stores to get the different views. The rest of the page is basically just structure around display of the views.

The iframes do have some downsides – you can’t live edit css in the inspector and have it applied across all the views, but that’s minor compared to the advantages we get from this one page.

Future improvements

We’re always looking for ways we can improve how we work on Hello. We’ve recently improved the UI showcase quite a bit, so I don’t think we have too much on our outstanding list at the moment.

The only thing I’ve just remembered is that we’ve commented it would be nice to have some sort of screen-shot comparison, so that we can make changes and automatically check for side-effects on other views.

We’d also certainly be interested in hearing about similar tools which could do a similar job – sharing and re-using code is definitely a win for everyone involved.

Interested in learning more?

If you’re interested in learning more about the UI-showcase, then you can find the code here, try it out for yourself, or come and ask us questions in #loop on irc.

If you want to help out with Hello development, then take a look at our wiki pages, our mentored bugs or just come and talk to us.

Dev.Opera: JavaScript Open Day and Edge Conf

Last week, Bruce and I were in London for two high-profile web standards events.

On Friday, Microsoft invited me to speak about installable web apps at their first JavaScript Open Day: I talked about various exciting recent additions to the web stack, such as Service Workers and device APIs, and of course also “add to Home screen”, which we’ve previewed in a labs build last week. You can find a video of my talk below.

[Video: https://www.youtube.com/embed/uDn-C6IGdVE]

On Saturday, it was time for Edge Conference, which is a “day of group discussion and debate on advanced web technologies for developers and browser vendors.” I moderated the “installing web apps” breakout session, which turned out to be an excellent one hour discussion with various browser representatives and mobile web developers, covering “add to Home screen” functionality, its UX, permissions, OS integration, and much more. It got the best rating of all Edge sessions, so you can tell we had a blast :)

For those interested, Bruce’s raw notes of the breakout are available on Google Drive and here’s an image of the session as well.


As a final note, we’d be very interested in talking to any developers who are implementing add to Home screen, Service Workers or Push Notifications on real, commercial websites, with a view to publishing an article on Dev.Opera (we even pay!). Tweet us at @ODevRel.

Internet Explorer blog: Inside Interop: How Medium Now Works on Microsoft Edge

Microsoft Edge was built from the ground-up with the modern web in mind. When we decided to make a “break from the past” for Windows 10, we didn’t just spend all of our time combing through complicated standards documents and implementing esoteric algorithms (though we did a fair amount of that!); in keeping with our ongoing focus on interoperability with the modern Web, we spent a great deal of our time dissecting modern website patterns, understanding the intent behind their code, and building a browser that intimately understands the expectations of the modern Web. As a step in that process, today we are excited to announce that Microsoft Edge users are getting an improved publishing experience due to recent interop efforts.

During the development process we began investigating customer reports of issues on Medium — one of the Web’s most popular personal publishing platforms. This led us to an article, written by the Medium staff, which enumerated a handful of issues their editing platform had with Internet Explorer 10 and 11. These issues prevented IE from having full access to Medium’s publishing platform.

We started the Microsoft Edge journey with the goal that “the web just works,” so we promptly reached out to Medium for a better understanding of the issues, and potentially identify workarounds. Medium determined that working around the issue would require a complex architectural change for them, so we instead determined that a fix would be needed in Microsoft Edge.

Our investigation revealed that the problem Medium was encountering was due to a non-standard legacy feature involving object-selection, which shipped in Internet Explorer 5. The feature was designed to provide a consistent modern forms-editor surface for partners (PowerPoint, Visual InterDev) who were building HTML-based forms back in the IE6 days. It gave the user the option to re-size any element which had layout on first click, and on second click would allow them to edit, if the element were editable.

Demonstration of Object Select feature in Internet Explorer 11.

In Medium’s case, there was a floating div element inside a contentEditable region which we allowed to be object-selectable and re-sizeable, resulting in gripper UI over the editable section. We had similar feedback bugs reported to us in the past, but due to the complexity of removing this behavior and the risks of a long bug trail, IE11 ultimately shipped with the functionality.

For Microsoft Edge, we were able to revisit this decision with the understanding that the web continues to evolve, and serves as the foundation for numerous rich editors. We are proudly moving towards a set of interoperable APIs that help web developers advance web-based editors without having to work around legacy features. With this we decided to remove nearly 1500 lines of C++ code from the browser’s engine, as well as any dependencies. The end-state is greater interoperability with other modern browsers, and reduced complexity in our suite of editing features.

This is a good example of how we continue to prioritize interoperability over legacy compatibility. We would like to thank the team at Medium for this collaboration, and look forward to hearing your feedback on Twitter at @MSEdgeDev.

Nirankush Panchbhai, Jonathan Sampson, Greg Whitworth
Proud members of the Microsoft Edge Team

Planet Mozilla: Spark best practices

We have been running Spark for a while now at Mozilla and this post is a summary of things we have learned about tuning and debugging Spark jobs.

Spark execution model

Spark’s simplicity makes it all too easy to ignore its execution model and still manage to write jobs that eventually complete. With larger datasets, an understanding of what happens under the hood becomes critical to reducing run-time and avoiding out-of-memory errors.

Let’s start by taking our good old word-count friend as a starting example:

rdd = sc.textFile("input.txt")\
        .flatMap(lambda line: line.split())\
        .map(lambda word: (word, 1))\
        .reduceByKey(lambda x, y: x + y, 3)\
        .collect()

RDD operations are compiled into a Directed Acyclic Graph (DAG) of RDD objects, where each RDD points to the parent it depends on:

Figure 1: DAG

At shuffle boundaries, the DAG is partitioned into so-called stages that are going to be executed in order, as shown in figure 2. The shuffle is Spark’s mechanism for re-distributing data so that it’s grouped differently across partitions. This typically involves copying data across executors and machines, making the shuffle a complex and costly operation.

Figure 2: stages

To organize data for the shuffle, Spark generates sets of tasks – map tasks to organize the data and reduce tasks to aggregate it. This nomenclature comes from MapReduce and does not directly relate to Spark’s map and reduce operations. Operations within a stage are pipelined into tasks that can run in parallel, as shown in figure 3.

Figure 3: tasks

Stages, tasks and shuffle writes and reads are concrete concepts that can be monitored from the Spark shell. The shell can be accessed from the driver node on port 4040, as shown in figure 4.

Figure 4: shell

Best practices

Spark Shell

Running Spark jobs without the Spark Shell is like flying blind. The shell allows you to monitor and inspect the execution of jobs. To access it remotely, a SOCKS proxy is needed as the shell also connects to the worker nodes.

Using a proxy management tool like FoxyProxy allows you to automatically filter URLs based on text patterns and to limit the proxy settings to domains that match a set of rules. The browser add-on automatically handles turning the proxy on and off when you switch between viewing websites hosted on the master node, and those on the Internet.

Assuming that you launched your Spark cluster with the EMR service on AWS, type the following command to create a proxy:

ssh -i ~/mykeypair.pem -N -D 8157 hadoop@ec2-...-compute-1.amazonaws.com

Finally, import the following configuration into FoxyProxy:

<?xml version="1.0" encoding="UTF-8"?>
<foxyproxy>
  <proxies>
    <proxy name="emr-socks-proxy" notes="" fromSubscription="false" enabled="true" mode="manual" selectedTabIndex="2" lastresort="false" animatedIcons="true" includeInCycle="true" color="#0055E5" proxyDNS="true" noInternalIPs="false" autoconfMode="pac" clearCacheBeforeUse="false" disableCache="false" clearCookiesBeforeUse="false" rejectCookies="false">
      <matches>
        <match enabled="true" name="*ec2*.amazonaws.com*" pattern="*ec2*.amazonaws.com*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
        <match enabled="true" name="*ec2*.compute*" pattern="*ec2*.compute*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
        <match enabled="true" name="10.*" pattern="http://10.*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
        <match enabled="true" name="*10*.amazonaws.com*" pattern="*10*.amazonaws.com*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
        <match enabled="true" name="*10*.compute*" pattern="*10*.compute*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
        <match enabled="true" name="*localhost*" pattern="*localhost*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
      </matches>
      <manualconf host="localhost" port="8157" socksversion="5" isSocks="true" username="" password="" domain="" />
    </proxy>
  </proxies>
</foxyproxy>

Once the proxy is enabled you can open the Spark Shell by visiting localhost:4040.

Use the right level of parallelism

Clusters will not be fully utilized unless the level of parallelism for each operation is high enough. Spark automatically sets the number of partitions of an input file according to its size and for distributed shuffles, such as groupByKey and reduceByKey, it uses the largest parent RDD’s number of partitions. You can pass the level of parallelism as a second argument to an operation. In general, 2-3 tasks per CPU core in your cluster are recommended. That said, having tasks that are too small is also not advisable as there is some overhead paid to schedule and run a task.

As a rule of thumb tasks should take at least 100 ms to execute; you can ensure that this is the case by monitoring the task execution latency from the Spark Shell. If your tasks take considerably longer than that keep increasing the level of parallelism, by say 1.5, until performance stops improving.
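
As an illustration, here is the earlier word-count job with explicit partition counts. The numbers are made up; they are the kind of values you would tune against the task latencies reported in the shell.

# Ask for more partitions when reading the input...
rdd = sc.textFile("input.txt", minPartitions=48)

counts = rdd.flatMap(lambda line: line.split()) \
            .map(lambda word: (word, 1)) \
            .reduceByKey(lambda x, y: x + y, numPartitions=72)  # ...and roughly 1.5x more reduce tasks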

Reduce working set size

Sometimes, you will get terrible performance or out of memory errors, because the working set of one of your tasks, such as one of the reduce tasks in groupByKey, was too large. Spark’s shuffle operations (sortByKey, groupByKey, reduceByKey, join, etc) build a hash table within each task to perform the grouping, which can often be large.

Even though those tables spill to disk, getting to the point where the tables need to be spilled increases the memory pressure on the executor incurring the additional overhead of disk I/O and increased garbage collection. If you are using pyspark, the memory pressure will also increase the chance of Python running out of memory.

The simplest fix here is to increase the level of parallelism, so that each task’s input set is smaller.

Avoid groupByKey for associative operations

Both reduceByKey and groupByKey can be used for the same purposes but reduceByKey works much better on a large dataset. That’s because Spark knows it can combine output with a common key on each partition before shuffling the data.

In reduce tasks, key-value pairs are kept in a hash table that can spill to disk, as mentioned in “Reduce working set size“. However, the hash table flushes out the data to disk one key at a time. If a single key has more values than can fit in memory, an out of memory exception occurs. Pre-combining the keys on the mappers before the shuffle operation can drastically reduce the memory pressure and the amount of data shuffled over the network.
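
For instance, the two word-count variants below compute the same result, but the groupByKey version shuffles every single (word, 1) pair while the reduceByKey version pre-combines counts on each partition first (a sketch reusing the earlier example):

pairs = sc.textFile("input.txt") \
          .flatMap(lambda line: line.split()) \
          .map(lambda word: (word, 1))

# Shuffles every pair, then sums on the reducers.
counts_grouped = pairs.groupByKey().mapValues(sum)

# Pre-combines per partition, shuffling only partial sums.
counts_reduced = pairs.reduceByKey(lambda x, y: x + y)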

Avoid reduceByKey when the input and output value types are different

Consider the job of creating a set of strings for each key:

rdd.map(lambda p: (p[0], {p[1]}))\
    .reduceByKey(lambda x, y: x | y)\
    .collect()

Note how the input values are strings and the output values are sets. The map operation creates lots of temporary small objects. A better way to handle this scenario is to use aggregateByKey:

def seq_op(xs, x):
    xs.add(x)
    return xs

def comb_op(xs, ys):
    return xs | ys

rdd.aggregateByKey(set(), seq_op, comb_op).collect()

Avoid the flatMap-join-groupBy pattern

When two datasets are already grouped by key and you want to join them and keep them grouped, you can just use cogroup. That avoids all the overhead associated with unpacking and repacking the groups.
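
A rough sketch with made-up data keyed by user id:

visits = sc.parallelize([("alice", 3), ("bob", 1), ("alice", 5)])
purchases = sc.parallelize([("alice", 20.0), ("carol", 7.5)])

# cogroup joins by key while keeping each side grouped, with no flatMap/join/groupBy dance.
grouped = visits.cogroup(purchases) \
                .mapValues(lambda pair: (list(pair[0]), list(pair[1])))

print(grouped.collect())  # e.g. [('alice', ([3, 5], [20.0])), ...]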

Python memory overhead

The spark.executor.memory option, which determines the amount of memory to use per executor process, is JVM specific. If you are using pyspark you can’t set that option to be equal to the total amount of memory available to an executor node as the JVM might eventually use all the available memory leaving nothing behind for Python.
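
For example, on a hypothetical node with 8 GiB available per executor you might deliberately leave headroom for the Python workers (the value below is illustrative, not a recommendation):

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("my-job")
        .set("spark.executor.memory", "5g"))  # deliberately less than the node's full 8 GiB

sc = SparkContext(conf=conf)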

Use broadcast variables

Using the broadcast functionality available in SparkContext can greatly reduce the size of each serialized task, and the cost of launching a job over a cluster. If your tasks use any large object from the driver program, like a static lookup table, consider turning it into a broadcast variable.
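
A small sketch, assuming a hypothetical RDD of (country_code, value) pairs called records and a static lookup table defined on the driver:

# Static lookup table living on the driver.
country_names = {"US": "United States", "DE": "Germany", "FR": "France"}
bc_countries = sc.broadcast(country_names)

# Each executor receives the table once, instead of once per serialized task.
labelled = records.map(lambda rec: (bc_countries.value.get(rec[0], "unknown"), rec[1]))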

Cache judiciously

Just because you can cache a RDD in memory doesn’t mean you should blindly do so. Depending on how many times the dataset is accessed and the amount of work involved in doing so, recomputation can be faster than the price paid by the increased memory pressure.

It should go without saying that if you only read a dataset once there is no point in caching it; it will actually make your job slower. The size of cached datasets can be seen from the Spark Shell.
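
When a dataset really is reused, persist it explicitly and release it when done. A sketch, with raw_rdd standing in for whatever RDD your job starts from:

from pyspark import StorageLevel

# Only worth persisting because it is scanned more than once below.
cleaned = raw_rdd.filter(lambda rec: rec is not None) \
                 .persist(StorageLevel.MEMORY_AND_DISK)

total = cleaned.count()
sample = cleaned.take(10)

cleaned.unpersist()  # free the cached blocks once they are no longer needed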

Don’t collect large RDDs

When a collect operation is issued on a RDD, the dataset is copied to the driver, i.e. the master node. A memory exception will be thrown if the dataset is too large to fit in memory; take or takeSample can be used to retrieve only a capped number of elements instead.
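
For example, for some large RDD rdd:

# Instead of rdd.collect() on a huge dataset, pull back only what you need:
first_rows = rdd.take(100)                       # a capped number of elements
sample = rdd.takeSample(False, 1000, seed=42)    # random sample, without replacement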

Minimize amount of data shuffled

A shuffle is an expensive operation since it involves disk I/O, data serialization, and network I/O. As illustrated in figure 3, each reducer in the second stage has to pull data across the network from all the mappers.

As of Spark 1.3, these files are not cleaned up from Spark’s temporary storage until Spark is stopped, which means that long-running Spark jobs may consume all available disk space. This is done to avoid re-computing shuffles.

Know the standard library

Avoid re-implementing existing functionality as it’s guaranteed to be slower.

Use dataframes

A DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a pandas DataFrame in Python.

props = get_pings_properties(pings,
                             ["environment/system/os/name",
                              "payload/simpleMeasurements/firstPaint",
                              "payload/histograms/GC_MS"],
                             only_median=True)

frame = sqlContext.createDataFrame(props.map(lambda x: Row(**x)))
frame.groupBy("environment/system/os/name").count().show()

yields:

environment/system/os/name count 
Darwin                     2368  
Linux                      2237  
Windows_NT                 105223

Before any computation on a DataFrame starts, the Catalyst optimizer compiles the operations that were used to build the DataFrame into a physical plan for execution. Since the optimizer generates JVM bytecode for execution, pyspark users will experience the same high performance as Scala users.


Planet Mozilla: The End of Firefox 31 ESR

With the release of Firefox 39 today also comes the final release of the Firefox 31 ESR (barring any security updates in the next six weeks).
That means you have six weeks to manage your switch over to the Firefox 38 ESR.

If you've been wondering if you should use the ESR instead of keeping up with current Firefox releases, now might be a good time to switch. That's because there are a couple features coming in the Firefox mainline that might affect you. These include the removal of the distribution/bundles directory as well as the requirement for all add-ons to be signed by Mozilla.

It's much easier going from Firefox 38 to the Firefox 38 ESR than going from Firefox 39 to the Firefox 38 ESR.

If you want to continue on the Firefox mainline, you can use the CCK2 to bring back some of the distribution/bundles functionality, but I won't be able to do anything about the signing requirement.

Planet Mozilla: Firefox Hello Desktop: Behind the Scenes – Architecture

This is the second of some posts I’m writing about how we implement and work on the desktop and standalone parts of Firefox Hello. The first post was about our use of Flux and React, this second post is about the architecture.

In this post, I will give an overview of the Firefox browser software architecture for Hello, which includes the standalone UI.

User-visible parts of Hello

Although there are many small parts to Hello, most of it is shaped by what is user visible:

Firefox Hello Desktop UI (aka Link-Generator)

Hello Standalone UI (aka Link-clicker)

Firefox Browser Architecture for Hello

The in-browser part of Hello is split into three main areas:

  • The panel which has the conversation and contact lists. This is a special about: page that is run in the content process with access to additional privileged APIs.
  • The conversation window where conversations are held. Within this window, similar to the panel, is another about: page that is also run in the content process.
  • The backend runs in the privileged space within gecko. This ties together the communication between the panels and conversation windows, and provides access to other gecko services and parts of the browser to which Hello integrates.

Outline of Hello’s Desktop Architecture

MozLoopAPI is our way of exposing small bits of the privileged gecko code to the about: pages running in content. We inject a navigator.mozLoop object into the content pages when they are loaded. This allows various functions and facilities to be exposed, e.g. access to a backend cache of the rooms list (which avoids multiple caches per window), and a similar backend store of contacts.

Standalone Architecture

The Standalone UI is simply a web page that’s shown in any browser when a user clicks a conversation link.

The conversation flow in the standalone UI is very similar to that of the conversation window, so most of the stores and supporting files are shared. Most of the views for the Standalone UI are currently different from those on desktop – the layout has been different, so we need different structures.

Outline of Hello’s Standalone UI Architecture

File Architecture as applied to the code

The authoritative location for the code is mozilla-central; it lives in the browser/components/loop directory. Within that we have:

  • content/ – This is all the code relating to the panel and conversation window that is shipped in the Firefox browser
  • content/shared/ – This code is used in the browser as well as in the standalone UI
  • modules/ – This is the backend code that runs in the browser
  • standalone/ – Files specific to the standalone UI

Future Work

There’s a couple of likely parts of the architecture that we’re going to rework soon.

Firstly, with the current push to electrolysis, we’re replacing the current exposed MozLoopAPI with a message-based RPC mechanism. This will then let us run the panel and conversation window in the separated content process.

Secondly, we’re currently reworking some of the UX and it is moving to be much more similar between desktop and standalone. As a result, we’re likely to be sharing more of the view code between the two.

Interested in learning more?

If you’re interested in learning more about Hello’s architecture, then feel free to dig into the code, or come and ask us questions in #loop on irc.

If you want to help out with Hello development, then take a look at our wiki pages, our mentored bugs or just come and talk to us.

Planet Mozilla: Firefox 39 new contributors

With the release of Firefox 39, we are pleased to welcome the 64 developers who contributed their first code change to Firefox in this release, 55 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

  • agrigas: 1135270
  • bmax1337: 967319
  • leo: 1134993
  • schtroumps31: 1130045
  • zmiller12: 1138873
  • George Duan: 1135293
  • Abhinav Koppula: 732688, 951695, 1127337
  • Alex Verstak: 1113431, 1144816
  • Alexandre Ratchov: 1144087
  • Andrew Overholt: 1127552
  • Anish: 1135091, 1135383
  • Anush: 418517, 1113761
  • Bhargav Chippada: 1112605, 1130372
  • Boris Kudryavtsev: 1135364, 1144613
  • Cesar Guirao: 1139132
  • Chirag Bhatia: 1133211
  • Danilo Cesar Lemes de Paula: 1146020
  • Daosheng Mu: 1133391
  • Deepak: 1039540
  • Felix Janda: 1130164, 1130175
  • Gareth Aye: 1145310
  • Geoffroy Planquart: 942475, 1042859
  • Gerald Squelart: 1121774, 1135541, 1137583
  • Greg Arndt: 1142779
  • Henry Addo: 1084663
  • Jason Gersztyn: 1132673
  • Jeff Griffiths: 1138545
  • Jeff Lu: 1098415, 1106779, 1140044
  • Johan K. Jensen: 1096294
  • Johannes Vogel: 1139594
  • John Giannakos: 1134568
  • John Kang: 1144782, 1146252
  • Jorg K: 756984
  • Kyle Thomas: 1137004
  • Léon McGregor: 1115925, 1130741, 1136708
  • Manraj Singh [:manrajsingh|Away till 21st June]: 1120408
  • Mantaroh Yoshinaga: 910634, 1106905, 1130614
  • Markus Jaritz: 1139174
  • Massimo Gervasini: 1137756, 1144695
  • Matt Hammerly: 1124271
  • Matt Spraggs: 1036454
  • Michael Weisz : 736572, 782623, 935434
  • Mitchell Field: 987902
  • Mohamed Waleed: 1106938
  • NiLuJe: 1143411
  • Perry Wagle: 1122941
  • Ponç Bover: 1126978
  • Quentin Pradet: 1092544
  • Ravi Shankar: 1109608
  • Rishi Baldawa: 1143196
  • Stéphane SCHMIDELY: 935259, 1144619
  • Sushrut Girdhari (sg345): 1137248
  • Thomas Baquet: 1132078
  • Titi_Alone : 1133063
  • Tyler St. Onge: 1134927
  • Vaibhav Bhosale: 1135009
  • Vidit23: 1121317
  • Wickie Lee: 1136253
  • Zimon Dai: 983469, 1135435
  • atlanto: 1137615
  • farhaan: 1073234
  • pinjiz: 1124943, 1142260, 1142268
  • qasim: 1123431
  • ronak khandelwal: 1122767
  • uelis: 1047529
Planet Mozilla: Cycling

In January I finally decided to do something I'd wanted to do for a long time, and that's the Gran Fondo from Vancouver to Whistler.

I've been cycling to and from work for a while, but it was time to get more serious and hopefully lose some weight in the process. So I signed up, and in September I'll be racing up to Whistler with a few thousand other people.

Last Saturday I was in Whistler for the Mozilla work week. I got to do some riding, including a quick trip to Pemberton:

I had the opportunity to ride down from Whistler as a practice. I've enjoyed my cycling, but I looked at this ride with a mixture of anticipation and dread.

Getting back is a 135.8km ride that looks like this:

It also meant hanging out at a party with Mozilla on top of the mountain without drinking very much. Something that, if you know me, I tend to find a little difficult.

As it turns out the ride was great fun. Riding in the sun with views of the Elaho and Howe Sound. The main annoyance was when I had to stop for a traffic light in Squamish after so long of continuous cycling.

After Horseshoe Bay I got a flat tyre. When I cycle to and from work I don't carry spares - if anything happens I get on a bus. Repeating that here was an error, but fortunately multiple people kindly stopped and offered to help. About 15 minutes later I was up and cycling again.

Later on, as I was crossing North Vancouver with some other people, a truck pulled up beside us and shouted "You're doing 39km/h!". Somehow after 5 hours on the road I was still having fun and cycling fast.

I've gone from riding a few days a week to over 300km a week on the bike, and I'm still loving it. I've gone from just being happy to get to Whistler alive to thinking about setting a time for my race. We'll see how that goes.

    Planet MozillaWatch A Computing Engineer Live Coding

    Mike (Taylor) has been telling me about Mike Conley's live coding sessions for a little while. So yesterday I decided to give it a try with the first episode of The Joy of Coding (mconley livehacks on Firefox). As mentioned in the description:

    Unscripted, unplanned, uncensored, and true to life, watch what a Firefox Desktop engineer does to close bugs and get the job done.

    And it does exactly what it says on the tin. Sincerely, this is good.

    Why Should You Watch Live Coding?

    I would recommend watching this for:

    • Beginner devs: To understand that more experienced developers struggle and make mistakes too. To also learn a couple of good practices when coding.
    • Experienced devs: To see how someone else codes and to pick up a couple of new tricks and habits. To encourage them to do the same kind of thing Mike is doing.
    • Managers: If you are a manager in a Web agency or a project manager, watch this. You will not understand most of it, but focus on what Mike is doing in terms of thought process and work organization. Even without any knowledge of programming, you discover the struggles, the trials and errors, and the successes.

    There's a full series of these, currently 19 episodes.

    My Own (Raw) Notes When Watching The Video

    • Watching the first video of Live Coding
    • He started with 3 video scenes and switched to his main full screen
    • He's introducing himself
    • mconley: "Nothing is prepared"
    • He's introducing the bug and explained it in demonstrating it
    • mconley: "I like to take notes when working on bugs" (taken in evernote)
    • mconley: "dxr is better than mxr."
    • He doesn't necessarily remember everything, so he goes through other parts of the code to understand what others did.
    • Sometimes he just doesn't know, doesn't understand and he says it.
    • mconley: "What other people do?"
    • He's taking notes including some TODOs for the future to explore, understand, do.
    • He's showing his fails in compiling, in coding, etc.
    • (personal thoughts) It's hard to draw on a computer. Paper provides some interesting features for quickly drawing something. Computer loses, paper wins.
    • When recording, thinking with a loud voice gives context on what is happening.
    • Write comments in the code for memory even if you remove them later.
    • In your notes, cut and paste the code from the source. Paper loses, computer wins.
    • (personal thoughts): C++ code is ugly to read.
    • (personal thoughts): Good feeling for your own job after watching this. It shows you are not the only one struggling when doing stuff.

    Some Additional Thoughts

    We met Mike Conley in Whistler, Canada last week. He explained he uses Open Broadcaster Software (OBS) for recording his sessions. I'm tempted to do something similar for Web Compatibility work. I'm hesitating between French and English. Maybe if Mike was doing something in English, I might do it in French, so people in the French community could benefit from it.

    So thanks Mike for telling me about this in the last couple of weeks.

    Otsukare!

    Planet WebKitManuel Rego: CSS Grid Layout is just around the corner (CSSConf US 2015)

    Coming back to real life after a wonderful week in New York City is not that easy, but here we’re on the other side of the pond writing about CSS Grid Layout again.

    First, kudos to Bocoup for organizing CSSConf US 2015, especially to Adam Sontag and the rest of the conference staff. You were really supportive during the whole week. And the videos with live transcripts were available just a few days after the conference, awesome job! The only issue was the internet connection, which was really flaky.

    So, yeah I attended CSSConf this year, but not only that, I was also speaking about CSS Grid Layout and the video of my talk is already online together with the slides.

    During the talk I described the basic concepts, syntax and features of CSS Grid with different live coding examples. Then I tried to explain the main tasks that the browser has to do in order to render a grid and gave some tips about grid performance. Finally, we reviewed the browsers adoption and the status of Chromium/Blink and Safari/WebKit implementations that Igalia is doing.

    CSS Grid Layout is just around the corner talk sketchnotes by Susan

    The feedback about my talk was incredibly positive and everybody seemed really excited about what CSS Grid Layout can bring to the web platform. Big thanks to you all!

    Of course, there were other great talks at CSSConf, as you can check in the videos. Off the top of my head, I loved the one by Lea Verou, an impressive talk as usual where she even released a polyfill for conic gradients on the stage. SVG and animations got two nice talks by Chris Coyier and Sarah Drasner. PostCSS and inline styles were also hot topics. Responsive (and responsible!) images, Fun.css and CSS? WTF! were also great (and I'm probably forgetting some others).

    Last, on Thursday night we attended BrooklynJS, which had a great panel discussing CSS. The inline styles vs stylesheets topic became hot, as projects like React are moving people away from stylesheets. Chris Coyier (one of the panelists and also a speaker at CSSConf) wrote a nice post last week giving a good overview of this topic. Also, The Four Fives were amazing!

    On top of that, as part of the collaboration between Igalia and Bloomberg, I was visiting their fancy office in Manhattan. I spent a great time there talking about grids with several people from the team. They really believe that CSS Grid Layout will change the future of the web benefiting lots of people in different use cases, and hopefully helping to alleviate performance issues in complex scenarios.

    Igalia and Bloomberg working together to build a better web

    Looking forward to the next opportunity to talk about CSS Grid Layout. We'll keep up the hard work to make it a reality as soon as possible!

    Planet MozillaDXR 2.0 (Part 2: Discussion)

    A discussion of the roadmap for DXR after 2.0.

    Planet MozillaDXR 2.0 (Part 1: Dog & Pony Show)

    Demo of features new in the upcoming 2.0 release of DXR, Mozilla's search and analysis tool for large codebases.

    Planet MozillaCSS Working Group's future

    Hello everyone.

    Back in March 2008, I was extremely happy to announce my appointment as Co-chairman of the CSS Working Group. Seven and a half years later, it's time to move on. There are three main reasons for that change, which my co-chair Peter and I triggered ourselves with W3C Management's agreement:

    1. We never expected to stay in that role 7.5 years. Chris Lilley chaired the CSS Working Group 1712 days from January 1997 (IIRC) to 2001-oct-10 and that was at that time the longest continuous chairing in W3C's history. Bert Bos chaired it 2337 days from 2001-oct-11 to 2008-mar-05. Peter and I started co-chairing it on 2008-mar-06 and it will end at TPAC 2015. That's 2790 days so 7 years 7 months and 20 days! I'm not even sure those 2790 days hold a record, Steven Pemberton probably chaired longer. But it remains that our original mission to make the WG survive and flourish is accomplished, and we now need fresher blood. Stability is good, but smart evolution and innovation are better.
    2. Co-chairing a large, highly visible Working Group like the CSS Working Group is not a burden, far from it. But it's not a light task either. We are starting to feel the need for a break.
    3. There were good candidates for the role, unanimously respected in the Working Group.

    So the time has come. The new co-chairs, Rossen Atanassov from Microsoft and Alan Stearns from Adobe, will take over during the Plenary Meeting of the W3C held in Sapporo, Japan, at the end of October, and that is A Good Thing™. You'll find below a copy of my message to W3C.

    To all the people I've been in touch with while holding my co-chair's hat: thank you, sincerely and deeply. You, the community around CSS, made everything possible.

    Yours truly.

    Dear Tim, fellow ACs, fellow Chairs, W3C Staff, CSS WG Members,

    After seven years and a half, it's time for me to pass the torch of the CSS Working Group's co-chairmanship. 7.5 years is a lot and fresh blood will bring fresh perspectives and new chairing habits. At a time the W3C revamps its activities and WGs, the CSS Working Group cannot stay entirely outside of that change even if its structure, scope and culture are retained. Peter and I decided it was time to move on and, with W3M's agreement, look for new co-chairs.

    I am really happy to leave the Group in Alan's and Rossen's smart and talented hands, I'm sure they will be great co-chairs and I would like to congratulate and thank them for accepting to take over. I will of course help the new co-chairs on request for a smooth and easy transition, and I will stay in the CSS WG as a regular Member.

    I'd like to deeply thank Tim for appointing me back in 2008, still one of the largest surprises of my career!

    I also wish to warmly thank my good friends Chris Lilley, Bert Bos and Philippe Le Hégaret from W3C Staff for their crucial daily support during all these years. Thank you Ralph for the countless transition calls! I hope the CSS WG still holds the record for the shortest positive transition call!

    And of course nothing would have been possible without all the members of the CSS Working Group, who tolerated me for so long and accepted the changes we implemented in 2008, and all our partners in the W3C (in particular the SVG WG) or even outside of it, so thank you all. The Membership of the CSS WG is a powerful engine and, as I often say, us co-chairs have only been a drop of lubricant allowing that engine to run a little bit better, smoother and without too much abrasion.

    Last but not least, deep thanks to my co-chair and old friend Peter Linss for these great years; I accepted that co-chair's role to partner with Peter and enjoyed every minute of it. A long ride but such a good one!

    I am confident the CSS Working Group is and will remain a strong and productive Group, with an important local culture. The CSS Working Group has both style and class (pun intended), and it has been an honour to co-chair it.

    Thank you.

    </Daniel>

    Planet MozillaMozilla Weekly Project Meeting

    The Monday Project Meeting.

    Planet MozillaBeer and Tell – June 2015

    Once a month, web developers from across the Mozilla Project get together to try and transmute a fresh Stanford graduate into a 10x engineer. Meanwhile, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

    There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

    Osmose: SpeedKills and Advanced Open File

    First up was Osmose (that’s me!) presenting two packages for Atom, a text editor. SpeedKills is a playful package that plays a guitar solo and sets text on fire when the user types fast enough. Advanced Open File is a more useful package that adds a convenient dialog for browsing the file system and opening files by path rather than using the fuzzy finder. Both are available for install through the Atom package repository.

    new_one: Tab Origin

    Next was new_one, who shared Tab Origin, a Firefox add-on that lets you return to the webpage that launched the current tab, even if the parent tab has since been closed. It’s activated via a keyboard shortcut that can be customized.

    Potch: WONTFIX and Presentation Mode

    Continuing a fine tradition of batching projects, Potch stopped by to show off two Firefox add-ons. The first was WONTFIX, which adds a large red WONTFIX stamp to any Bugzilla bug that has been marked as WONTFIX. The second was Presentation Mode, which allows you to full-screen any content in a web page while hiding the browser chrome. This is especially useful when giving web-based presentations.

    Peterbe: premailer.io

    Peterbe shared premailer.io, which is a service wrapping premailer. Premailer takes a block of HTML with a style tag and applies the styles within as style attributes on each matching tag. This is mainly useful for HTML emails, which generally don’t support style tags that apply to the entire email.
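
    As a quick illustration of what premailer does (a minimal sketch assuming the premailer package from PyPI; premailer.io itself is a web service wrapped around it):

    ```python
    from premailer import transform

    # An HTML email with a <style> block, as you would normally author it.
    html = """
    <html>
      <head>
        <style>
          p { color: #336699; font-size: 14px; }
        </style>
      </head>
      <body>
        <p>Hello from premailer!</p>
      </body>
    </html>
    """

    # transform() returns the same document with the matching CSS rules copied
    # into style="" attributes on each tag, which is what most email clients honor.
    print(transform(html))
    ```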

    ErikRose: Spam-fighting Tips

    ErikRose learned a lot about the current state of spam-fighting while redoing his mail server:

    • Telling Postfix to be picky about RFCs is a good first pass. It eliminates some spam without having to do much computation.
    • spamassassin beats out dspam, which hasn’t seen an update since 2012.
    • Shared-digest detectors like Razor help a bit but aren’t sufficient on their own without also greylisting to give the DBs a chance to catch up.
    • DNS blocklists are a great aid: they reject 3 out of 4 spams without taking much CPU.
    • Bayes is still the most reliable (though the most CPU-intense) filtration method. Bayes poisoning is infeasible, because poisoners don’t know what your ham looks like, so don’t worry about hand-picking spam to train on. Train on an equal number of spams and hams: 400 of each works well. Once your bayes is performing well, crank up your BAYES_nn settings so spamassassin believes it.
    • Crank up spamc's --max-size to 1024000, because spammers are now attaching images > 512K to mails to bypass spamc's stock 512K threshold. This will cost extra CPU.

    With this, he gets perhaps a spam a week, with over 400 attempts per day.


    We were only able to get a 3x engineer this month, but at least they were able to get a decent job working on enterprise software.

    If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

    See you next month!

    Planet MozillaDharma



    As soon as a developer at Mozilla starts integrating a new WebAPI feature, the Mozilla Security team begins working to help secure that API. Subtle programming mistakes in new code can introduce annoying crashes and even serious security vulnerabilities that can be triggered by malformed input, leading to headaches for the user and security exposure.

    WebAPIs start life as a specification in the form of an Interface Description Language, or IDL. Since this is essentially a grammar, a grammar-based fuzzer becomes a valuable tool in finding security issues in new WebAPIs because it ensures that expected semantics are followed most of the time, while still exploring enough undefined behavior to produce interesting results.

    We came across a grammar fuzzer Ben Hawkes released in 2011 called “Dharma.” Sadly, only one version was ever made public. We liked Ben’s approach, but Dharma was missing some features which were important for us and its wider use for API fuzzing. We decided to sit down with our fuzzing mates at BlackBerry and rebuild Dharma, giving the results back to the public, open source and licensed as MPL v2.

    We redesigned how Dharma parses grammars and optimized the speed of parsing and the generating of fuzzed output, added new grammar features to the grammar specification, added support for serving testcases over a WebSocket server, and made it Python 3 ready. It comes with no dependencies and runs out of the box.

    In theory Dharma can be used with any data that can be represented as a grammar. At Mozilla we typically use it for APIs like WebRTC, WebAudio, or WebCrypto.
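
    To illustrate the general idea of grammar-based generation, here is a toy sketch in plain Python. It is not Dharma's actual grammar format or API; the rules, the api.playTone() name, and the %symbol% syntax are invented purely to show how a fuzzer expands symbols until only terminals remain:

    ```python
    import random

    # A toy grammar for fuzzing a hypothetical API call.
    GRAMMAR = {
        "call":   ["api.playTone(%type%, %freq%)"],
        "%type%": ['"sine"', '"square"', '"sawtooth"', "%junk%"],
        "%freq%": ["440", "0", "-1", "1e9", "%junk%"],
        "%junk%": ["null", "undefined", "{}", '"\\u0000"'],
    }

    def expand(symbol):
        """Recursively expand a grammar symbol into a concrete test string."""
        production = random.choice(GRAMMAR[symbol])
        for nonterminal in GRAMMAR:
            if nonterminal != symbol and nonterminal in production:
                production = production.replace(nonterminal, expand(nonterminal))
        return production

    # Mostly well-formed calls, with the occasional junk value in a parameter slot.
    for _ in range(5):
        print(expand("call"))
    ```

    Because the output follows the grammar most of the time, the generated calls get deep into the API implementation instead of being rejected at the parsing stage, which is exactly what makes this approach effective for WebAPI fuzzing.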



    Dharma has no integrated harness. Feel free to check out the Quokka project which provides an easy way for launching a target with Dharma, monitoring the process and bucketing any faults.

    Dharma is actively in use and maintained at Mozilla and more features are planned for the future. Ideas for improvements are always greatly welcomed.

    Dharma is available via GitHub (preferred and always up-to-date) or via PyPI by running "pip install dharma".

    References
    https://github.com/mozillasecurity/dharma
    https://github.com/mozillasecurity/quokka
    https://code.google.com/p/dharma/

    Planet MozillaNot Working in Whistler

    A brief retrospective
    Whistler Rock - Shot with my Moto X 2nd Gen

    I just returned from Whistler, BC, along with 1300 other Mozillians.

    Because we’re a global organization, it’s important to get everyone together every now and then. We matched faces with IRC channels, talked shop over finger-food, and went hiking with random colleagues. Important people gave speeches. Teams aligned. There was a massive party at the top of a mountain.

    +++++

    Typically, I learn the most from experiences like these only after they have ended. Now that I’ve had 48 hours to process (read: recover), some themes have emerged…

    1. Organizing and mobilizing a bunch of smart, talented, opinionated people is hard work.
    2. It’s easy to say “you’re doing it wrong.” It takes courage to ask “how could we do it better?”
    3. Anyone can find short-term gains. The best leaders define a long-term vision, then enable individuals to define their own contribution.
    4. Success is always relative, and only for a moment in time.
    5. Accelerated change requires rapid adaptation. Being small is our advantage.
    6. The market is what you make it. So make it matter.
    7. It’s been said that when technology works, it’s magic. I disagree. Magic is watching passionate people – from all over the worldcreate technology together.

    Also, it has now been confirmed by repeatable experiment: I’m not a people person – especially when lots of them are gathered in small spaces. I’m fucking exhausted.

    Officially signing back on,

    -DCROBOT


    Planet MozillaRecap of Participation at Whistler

    View the story “Participation at Whistler” on Storify: http://storify.com/lucyeharris/participation-at-whistler

    Planet MozillaVoice-controlled UI plus a TTS engine.

    A few days ago I attended the launch of SohoXI – Hook and Loop’s latest product that aims to change the look and feel of enterprise applications forever. I am close to believing that, and not only because I work for the same company as they do. They have delivered an extremely revolutionary UI in … Continue reading Voice-controlled UI plus a TTS engine.

    Planet MozillaWhistler Work Week 2015

    Last week was Mozilla’s first work week of 2015 in Whistler, BC. It was my first visit to Whistler having joined Mozilla shortly after their last summit there in 2010, and it was everything I needed it to be. Despite currently feeling jetlagged, I have been recharged and I have renewed enthusiasm for the mission and I’m even more than a little excited about Firefox OS again! I’d like to share a few of my highlights from last week…

    • S’mores – The dinner on Wednesday evening was followed by a street party, where I had my first S’more. It was awesome!
    • Firefox OS – Refreshing honesty over past mistakes and a coherent vision for the future has actually made me enthusiastic about this project again. I’m no longer working directly on Firefox OS, but I’ve signed up for the dogfoxfooding program and I’m excited about making a difference again.
    • LEGO – I got to build a LEGO duck, and we heard from David Robertson about the lessons LEGO learned from near bankruptcy.
    • Milkshake – A Firefox Q&A was made infinitely better by taking a spontaneous walk to Cow’s for milkshakes and ice cream with my new team!
    • Running – I got to run with #running friends old and new on Tuesday morning around Lost Lake. Then on Thursday morning I headed back and took on the trails with Matt. These were my first runs since my marathon, and running through the beautiful scenery was exactly what I needed to get me back into it.
    • Istanbul – After dinner on Tuesday night, Stephen and I sat down with Bob to play the board game Istanbul.
    • Hacking – It’s always hard to get actual code written during these team events, but I’m pleased to say we thought through some challenging problems, and actually even managed to land some code.
    • Hike – On Friday morning I joined Justin and Matt on a short hike up Whistler mountain. We didn’t have long before breakfast, but it was great to spend more time with these guys.
    • Whistler Mountain – The final party was at the top of Whistler Mountain, which was just breathtaking. I can’t possibly do the experience justice – so I’m not even going to try.

    Thank you Whistler for putting up with a thousand Mozillians, and thank you Mozilla for organising such a perfect week. We’re going to keep rocking the free web!

    Planet MozillacuentaFox 3.1.2 available with long-awaited features

    Faster than we expected, a new version of cuentaFox is here, with interesting features that users have been asking for for some time.

    Install cuentaFox 3.1.2

    With this release we no longer have to worry about the certificate not being added and several tabs opening because of it: the UCICA.pem certificate is now added automatically with its required trust levels. This is undoubtedly a big step forward, as it takes a big weight off our shoulders.

    Over time some people forget to update the extension and keep using old versions that contain bugs. For this reason, we have decided to alert the user when new versions are available, showing a notification and opening the update URL in a new tab.

    v3.1.2-alerta-de-actualizacion

    Our goal is for this process to happen transparently to the user, but we cannot do that at the moment.

    Rounding out the list of new features:

    • When an error or usage alert is shown, the user that generated it is displayed (#21).
    • Fixed: the actual user behind an error was not shown when fetching data for several users (#18).
    • If several users are stored, the toolbar button shows the usage of the user who is logged in (#19).
    • If an error occurs while fetching a user's data, we always try to delete their data from the Firefox password manager (#26).
    • Updated jQuery to v2.1.4.

    Last but not least: in the add-on's configuration options (not the interface's) you can decide whether to show one or more users at the same time and choose to hide the used quotas.

    v3.1.2-opciones-de-configuracion

    Let this article also serve to thank everyone who has come by the project page on GitLab and left us their ideas or the bugs they found. We hope many more will join in.

    If you would like to help develop the add-on, you can go to GitLab (UCI) and clone the project or leave a suggestion. Maybe you'll find one that motivates you.

    Install cuentaFox 3.1.2

    Planet Mozilla“We are ALL Remoties” (Jun2015 edition)

    Since my last post on “remoties”, I’ve done several more presentations, some more consulting work for private companies, and even started writing this down more explicitly (exciting news coming here soon!). While I am always refining these slides, this latest version is the first major “refactor” of this presentation in a long time. I think this restructuring makes the slides even easier to follow – there’s a lot of material to cover here, so this is always high on my mind.

    Without further ado – you can get the latest version of these slides, in handout PDF format, by clicking on the thumbnail image.

    Certainly, the great responses and enthusiastic discussions every time I go through this encourages me to keep working on this. As always, if you have any questions, suggestions or good/bad stories about working remotely or as part of a geo-distributed teams, please let me know (either by email or in the comments below) – I’d love to hear them.

    Thanks
    John.

    Planet MozillaHello, 38 beta, it's nice to meet you

    And at long last the 38 beta is very happy to meet you too (release notes, downloads, hashes). Over the next few weeks I hope you and the new TenFourFox 38 will become very fond of each other, and if you don't, then phbbbbt.

    There are many internal improvements to 38. The biggest one specific to us, of course, is the new IonPower JavaScript JIT backend. I've invested literally months in making TenFourFox's JavaScript the fastest available for any PowerPC-based computer on any platform, not just because every day websites lard up on more and more crap we have to swim through (viva Gopherspace) but also because a substantial part of the browser is written in JavaScript: the chrome, much of the mid-level plumbing and just about all those addons you love to download and stuff on in there. You speed up JavaScript, you speed up all those things. So now we've sped up many browser operations by about 11 times over 31.x -- obviously the speed of JavaScript is not the only determinant of browser speed, but it's a big part of it, and I think you'll agree that responsiveness is much improved.

    JavaScript also benefits in 38 from a compacting, generational garbage collector (generational garbage collection was supposed to make 31 but was turned off at the last minute). This means recently spawned objects will typically be helplessly slaughtered in their tender youth in a spasm of murderous efficiency based on the empiric observation that many objects are created for brief usage and then never used again, reducing the work that the next-stage incremental garbage collector (which we spent a substantial amount of time tuning in 31 as you'll recall, including backing out background finalization and tweaking the timeslice for our slower systems) has to do for objects that survive this pediatric genocide. The garbage collector in 38 goes one step further and compacts the heap as well, which is to say, it moves surviving objects together contiguously in memory instead of leaving gaps that cannot be effectively filled. This makes both object cleanup and creation much quicker in JavaScript, which relies heavily on the garbage collector (the rest of the browser uses more simplistic reference counting to determine object lifetime), to say nothing of a substantial savings in memory usage: on my Quad G5 I'm seeing about 200MB less overhead with 48 tabs open.

    I also spent some time working on font enumeration performance because of an early showstopper where sites that loaded WOFF fonts spun and spun and spun. After several days of tearing my hair out in clumps the problem turned out to be a glitch in reference counting caused by the unusual way we load platform fonts: since Firefox went 10.6+ it uses CoreText exclusively, but we use almost completely different font code based on the old Apple Type Services which is the only workable choice on 10.4 and the most stable choice on 10.5. ATS is not very fast at instantiating lots of fonts, to say the least, so I made the user font cache stickier (please don't read that as "leaky" -- it's sticky because things do get cleaned up, but less aggressively to improve cache hit percentage) and also made a global font cache where the font's attribute tag directory is cached browser-wide to speed up loading font tables from local fonts on your hard disk. Previously this directory was cached per font entry, meaning if the font entry was purged for re-enumeration it had to be loaded all over again, which usually happened when the browser was hunting for a font with a particular character. This process used to take about fifteen to twenty seconds for the 700+ font faces on my G5. With the global font cache it now takes less than two.
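
    Conceptually (a Python-flavored sketch only; the real code is C++ inside the TenFourFox font layer, and the names below are invented), the change amounts to keying the cached table directory on the font itself, browser-wide, so that purging a font entry for re-enumeration no longer throws the directory away:

    ```python
    # Invented names; this only illustrates per-entry vs. browser-wide caching.
    GLOBAL_FONT_DIRECTORY_CACHE = {}

    def get_table_directory(font_name, load_directory_via_ats):
        """Return the table/attribute directory for a font, hitting the slow
        ATS loading path only the first time the font is seen in this session."""
        directory = GLOBAL_FONT_DIRECTORY_CACHE.get(font_name)
        if directory is None:
            directory = load_directory_via_ats(font_name)  # expensive on 10.4/10.5
            GLOBAL_FONT_DIRECTORY_CACHE[font_name] = directory
        return directory
    ```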

    Speaking of showstoppers, here's an interesting one which I'll note here for posterity. nsChildView, the underlying system view which connects Cocoa/Carbon to Gecko, implements the NSTextInput protocol which allows it to accept Unicode input without (as much) mucking about with the Carbon Text Services Manager (Firefox also implements NSTextInputClient, which is the new superset protocol, but this doesn't exist in 10.4). To accept Unicode input, under the hood the operating system actually manipulates a special undocumented TSM input context called, surprisingly, NSTSMInputContext (both this and the undocumented NSInputContext became the documented NSTextInputContext in 10.6), and it gets this object from a previously undocumented method on NSView called (surprise again) inputContext. Well, turns out if you override this method you can potentially cause all sorts of problems, and Mozilla had done just that to handle complex text input for plugins. Under the 10.4 SDK, however, their code ended up returning a null input context and Unicode input didn't work, so since we don't support plugins anyhow the solution was just to remove it completely ... which took several days more to figure out. The moral of the story is, if you have an NSView that is not responding to setMarkedText or other text input protocol methods, make sure you haven't overridden inputContext or screwed it up somehow.

    I also did some trivial tuning to the libffi glue library to improve the speed of its calls and force it to obey our compiler settings (there was a moment of panic when the 7450 build did not start on the test machines because dyld said XUL was a 970 binary -- libffi had seen it was being built on a G5 and "helpfully" compiled it for that target), backed out some portions of browser chrome that were converted to CoreUI (not supported on 10.4), and patched out the new tab tile page entirely; all new tabs are now blank, like they used to be in previous versions of Firefox and as intended by God Himself. There are also the usual cross-platform HTML5 and CSS improvements you get when we leap from ESR to ESR like this, and graphics are now composited off-main-thread to improve display performance on multiprocessor systems.

    That concludes most of the back office stuff. What about user facing improvements? Well, besides the new blank tabs "feature," we have built-in PDF viewing as promised (I think you'll find this more useful to preview documents and load them into a quicker viewer to actually read them, but it's still very convenient) and Reader View as the biggest changes. Reader View, when the browser believes it can attempt it, appears in the address bar as a little book icon. Click on it and the page will render in a simplified view like you would get from a tool such as Readability, cutting out much of the extraneous formatting. This is a real godsend on slower computers, lemme tell ya! Click the icon again to go back. Certain pages don't work with this, but many will. I have also dragged forward my MP3 decoder support, but see below first, and we have prospectively landed Mozilla bug 1151345 to fix an issue with the application menu (modified for the 10.4 SDK).

    You will also note the new, in-content preferences (i.e., preferences appears in a browser tab now instead of a window, a la, natch, Chrome), and that the default search engine is now Yahoo!. I have not made this default to anything else since we can still do our part this way to support MoCo (but you can change it from the preferences, of course).

    I am not aware of any remaining showstopper bugs, so therefore I'm going ahead with the beta. However, there are some known issues ("bugs" or "features" mayhaps?) which are not critical. None of these will hold up final release currently, but for your information, here they are:

    • If you turn on the title bar, private browsing windows have the traffic light buttons in the wrong position. They work; they just look weird. This is somewhat different than issue 247 and probably has a separate, though possibly related, underlying cause. Since this is purely cosmetic and does not occur in the browser's default configuration, we can ship with this bug present but I'll still try to fix it since it's fugly (plus, I personally usually have the title bar on).

    • MP3 support is still not enabled by default because seeking within a track (except to the beginning) does not work yet. This is the last thing to do to get this support off the ground. If you want to play with it in its current state, however, set tenfourfox.mp3.enabled to true (you will need to create this pref). If I don't get this done by 38.0.2, the pref will stay off until I do, but the rest of it is working already and I have a good idea how to get this last piece functional.

    • I'm not sure whether to call this a bug or a feature, but scaling now uses a quick and dirty algorithm for many images and some non-.ico favicons, apparently because we don't have Skia support. It's definitely lower quality, but it has a lot less latency. Images displayed by themselves still use the high-quality built-in scaler, which is not really amenable to the other uses as far as I can tell. Your call on which is better, though I'm not sure I know how to go back to the old method or if it's even possible anymore.

    • To reduce memory pressure, 31 had closed tab and window undos substantially reduced. I have not done that yet for 38 -- near as I can determine, the more efficient memory management means it is no longer worth it, so we're back to the default 10 and 3. See what you think.

    Builders: take note that you will need to install a modified strip ("strip7") if you intend to make release binaries due to what is apparently a code generation bug in gcc 4.6. If you want to use a different (later) compiler, you should remove the single changeset with the gcc 4.6 compatibility shims -- in the current changeset pack it's numbered 260681, but this number increments in later versions. See our new HowToBuildRightNow38 for the gory details and where to get strip7.

    Localizers: strings are frozen, so start your language pack engines one more time in issue 42. We'd like to get the same language set for 38 that we had for 31, and your help makes it possible. Thank you!

    As I mentioned before, it's probably 70-30 against there being a source parity version after 38ESR because of the looming threat of Electrolysis, which will not work as-is on 10.4 and is not likely to perform well or even correctly on our older systems. (If Firefox 45, the next scheduled ESR, still allows single process operation then there's a better chance. We still need to get a new toolchain up and a few other things, though, so it won't be a trivial undertaking.) But I'm pleased with 38 so far and if we must go it means we go out on a high note, and nothing says we can't keep improving the browser ourselves separate from Mozilla after we split apart (feature parity). Remember, that's exactly what Classilla does, except that we're much more advanced than Classilla will ever be, and in fact Pale Moon recently announced they're doing the same thing. So if 38 turns out to be our swan song as a full-blooded Mozilla tier 3 port, that doesn't mean it's the end of TenFourFox as a browser. I promise! Meanwhile, let's celebrate another year of updates! PowerPC forever!

    Finally, looking around the Power Mac enthusiast world, it appears that SeaMonkeyPPC has breathed its last -- there have been no updates in over a year. We will pour one out for them. On the other hand, Leopard Webkit continues with regular updates from Tobias, and our friendly builder in the land of the Rising Sun has been keeping up with Tenfourbird. We have the utmost confidence that there will be a Tenfourbird 38 in your hands soon as well.

    Some new toys to play with are next up in a couple days.

    Planet MozillaThis Week in Rust 85

    Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

    This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

    From the Blogosphere

    Tips & Tricks

    In the News

    New Releases & Project Updates

    • rust-timsort. Rust implementation of the modified MergeSort used in Python and Java.
    • trust. Rust automated test runner.
    • mongo-rust-driver. Mongo Rust driver built on top of the Mongo C driver.
    • rust-ffi-omnibus. A collection of examples of using code written in Rust from other languages.
    • hyper is now at v0.6. An HTTP/S library for Rust.
    • rust-throw. A new experimental rust error handling library, meant to assist and build on existing error handling systems.
    • burrito. A monadic IO interface in Rust.
    • mimty. Fast, safe, self-contained MIME Type Identification for C and Rust.

    What's cooking on master?

    95 pull requests were merged in the last week.

    Breaking Changes

    Now you can follow breaking changes as they happen!

    Other Changes

    New Contributors

    • Andy Grover
    • Brody Holden
    • Christian Persson
    • Cruz Julian Bishop
    • Dirkjan Ochtman
    • Gulshan Singh
    • Jake Hickey
    • Makoto Kato
    • Yongqian Li

    Final Comment Period

    Every week the teams announce a 'final comment period' for RFCs which are reaching a decision. Express your opinions now. This week's RFCs entering FCP are:

    New RFCs

    Upcoming Events

    If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

    Planet MozillaPillow 2-9-0 Is Almost Out

    Pillow 2.9.0 will be released on July 1, 2015.

    Pre-release

    Please help the Pillow Fighters prepare for the Pillow 2.9.0 release by downloading and testing this pre-release:

    Report issues

    As you might expect, we'd like to avoid the creation of a 2.9.1 release within 24-48 hours of 2.9.0 due to any unforeseen circumstances. If you suspect such an issue to exist in 2.9.0.dev2, please let us know:

    Thank you!

    Planet MozillaMozilla’s Release Engineering now on Dr Dobbs!

    Long time readers of this blog will remember when The Architecture of Open Source Applications (vol2) was published, containing a chapter describing the tools and mindsets used when re-building Mozilla’s Release Engineering infrastructure. (More details about the book, about the kindle and nook versions, and about the Russian version(!).)

    Dr Dobbs recently posted an article here which is an edited version of the Mozilla Release Engineering chapter. As a long time fan of Dr Dobbs, seeing this was quite an honor, even with the sad news here.

    Obviously, Mozilla’s release automation continues to evolve, as new product requirements arise, or new tools help further streamline things. There is still lots of interesting work being done here – for me, top of mind is Task Cluster, and ScriptHarness (v0.1.0 and v0.2.0). Release Engineering at scale is both complex, and yet very interesting – so you should keep watching these sites for more details, and consider if they would also help in your current environment. As they are all open source, you can of course join in and help!

    For today, I just re-read the Dr. Dobbs article with a fresh cup of coffee, and remembered the various different struggles we went through as we scaled Mozilla’s infrastructure up so we could quickly grow the company, and the community. And then in the middle of it all, found time with armenzg, catlee and lsblakk to write about it all. While some of the technical tools have changed since the chapter was written, and some will doubtless change again in the future, the needs of the business, the company and the community still resonate.

    For anyone doing Release Engineering at scale, the article is well worth a quiet read.

    Planet MozillaA glance at unified FHR/Telemetry

    Lots is changing in Telemetry land. If you do occasionally run data analyses with our Spark infrastructure you might want to keep reading.

    Background

    The Telemetry and FHR collection systems on desktop are in the process of being unified. Both systems will be sending their data through a common data pipeline which has some features of both the current Telemetry pipeline as well as the Cloud Services one that we use to ingest server logs.

    The goals of the unification are to:

    • avoid measuring the same metric in multiple systems on the client side;
    • reduce the latency from the time a measurement occurs until it can be analyzed on the server;
    • increase the accuracy of measurements so that they can be better correlated with factors in the user environment such as the specific build, enabled add-ons, and other hardware or software characteristics;
    • use a common data pipeline for client telemetry and service log data.

    The unified pipeline is currently sending data for Nightly, Aurora and Beta. The classic FHR and Telemetry pipelines are going to keep sending data at least until the new unified pipeline has been fully validated. The plan is to land this feature in 40 Release. We’ll also continue to respect existing user preferences. If the user has opted out of FHR or Telemetry, we’ll continue to respect that for the equivalent data sets. Similarly, the opt-out and opt-in defaults will remain the same for equivalent data sets.

    Data format

    A Telemetry ping, stored as a JSON object on the client, encapsulates the data sent to our backend. The main differences between the new unified Telemetry ping format (v4) and the classic Telemetry one (v2) are that:

    • multiple ping types are supported beyond the classic saved-session ping, like the main ping;
    • pings have a common top-level structure which contains basic information shared between types, like build-id and channel;
    • pings have an optional environment field which consists of data that is expected to be characteristic for performance and other behavior.

    From an analysis point of view, the most important addition is the main ping, which includes the very same histograms and other performance and diagnostic data as the v2 saved-session pings. Unlike in “classic” Telemetry though, there can be multiple main pings during a single session. A main ping is triggered by different scenarios, which are documented by the reason field (a rough structural sketch follows the list below):

    • aborted-session: periodically saved to disk and deleted at shutdown – if a previous aborted session ping is found at startup it gets sent to our backend;
    • environment-change: generated when the environment changes;
    • shutdown: triggered when the browser session ends;
    • daily: a session split triggered at 24-hour intervals at local midnight; this is needed to make sure we keep receiving data from clients that have very long sessions.
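
    To make that structure concrete, here is a rough sketch of a v4 main ping written as a Python dict. Only the fields called out above (build-id, channel, environment, reason, histograms) are taken from this post; the surrounding field names are approximations rather than the authoritative schema:

    ```python
    # Rough, illustrative shape of a unified ("v4") main ping.
    main_ping = {
        "type": "main",                   # ping type: "main", "saved-session", ...
        "application": {                  # common top-level info shared by all ping types
            "buildId": "20150601000000",
            "channel": "nightly",
        },
        "environment": {                  # optional; characterizes performance/behavior
            "build": {},
            "settings": {},
            "addons": {},
        },
        "payload": {
            "info": {
                "reason": "shutdown",     # why this particular main ping was generated
            },
            "histograms": {},             # same histograms as the v2 saved-session pings
        },
    }
    ```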

    Data access through Spark

    Once you connect to a Spark enabled IPython notebook launched from our self-service dashboard, you will be prompted with a new tutorial based on the v4 dataset. The v4 data is fetched through the get_pings function by passing “v4″ as the schema parameter. The following parameters are valid for the new data format:

    • app: an application name, e.g. "Firefox";
    • channel: a channel name, e.g. "nightly";
    • version: the application version, e.g. "40.0a1";
    • build_id: a build id or a range of build ids, e.g. "20150601000000" or ("20150601000000", "20150610999999");
    • submission_date: a submission date or a range of submission dates, e.g. "20150601" or ("20150601", "20150610");
    • doc_type: ping type, e.g. "main", set to "saved_session" by default;
    • fraction: the fraction of pings to return, set to 1.0 by default.

    Once you have a RDD, you can further filter the pings down by reason. There is also a new experimental API that returns the history of submissions for a subset of profiles, which can be used for longitudinal analyses.
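
    For example, a notebook session might look roughly like the sketch below. The parameters mirror the list above; the import path and the exact location of the reason field inside each ping are assumptions on my part, and `sc` is the SparkContext that the notebook provides:

    ```python
    from moztelemetry import get_pings  # assumed import path for the Spark helpers

    # Fetch a 1% sample of v4 "main" pings from Nightly 40 for a build-id range.
    pings = get_pings(sc,
                      app="Firefox",
                      channel="nightly",
                      version="40.0a1",
                      build_id=("20150601000000", "20150610999999"),
                      submission_date=("20150601", "20150610"),
                      doc_type="main",
                      schema="v4",
                      fraction=0.01)

    # Keep only the main pings generated at shutdown; the payload/info/reason
    # path is assumed from the ping layout sketched earlier.
    shutdown_pings = pings.filter(
        lambda p: p.get("payload", {}).get("info", {}).get("reason") == "shutdown")

    print(shutdown_pings.count())
    ```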


    Planet MozillaGoogle Voice Search and the Appearance of Trustworthiness

    Last week there were several bug reports [1] [2] [3] about how Chrome (the web browser), even in its fully-open-source Chromium incarnation, downloads a closed-source, binary extension from Google’s servers and installs it, without telling you it has done this, and moreover this extension appears to listen to your computer’s microphone all the time, again without telling you about it. This got picked up by the trade press [4] [5] [6] and we rapidly had a full-on Internet panic going.

    If you dig into the bug reports and/or the open source part of the code involved, which I have done, it turns out that what Chrome is doing is not nearly as bad as it looks. It does download a closed-source binary extension from Google, install it, and hide it from you in the list of installed extensions (technically there are two hidden extensions involved, only one of which is closed-source, but that’s only a detail of how it’s all put together). However, it does not activate this extension unless you turn on the voice search checkbox in the settings panel, and this checkbox has always (as far as I can tell) been off by default. The extension is labeled, accurately, as having the ability to listen to your computer’s microphone all the time, but of course it does not get to do this until it is activated.

    As best anyone can tell without access to the source, what the closed-source extension actually does when it’s activated is monitor your microphone for the code phrase OK Google. When it detects this phrase it transmits the next few words spoken to Google’s servers, which convert it to text and conduct a search for the phrase. This is exactly how one would expect a voice search feature to behave. In particular, a voice-activated feature intrinsically has to listen to sound all the time, otherwise how could it know that you have spoken the magic words? And it makes sense to do the magic word detection with code running on the local computer, strictly as a matter of efficiency. There is even a non-bogus business reason why the detector is closed source; speech recognition is still in the land where tiny improvements lead to measurable competitive advantage.

    So: this feature is not actually a massive privacy violation. However, Google could and should have put more care into making this not appear to be a massive privacy violation. They wouldn’t have had mud thrown at them by the trade press about it, and the general public wouldn’t have had to worry about it. Everyone wins. I will now dissect exactly what was done wrong and how it could have been done better.

    It was a diagnostic report, intended for use by developers of the feature, that gave people the impression the extension was listening to the microphone all the time. Below is a screen shot of this diagnostic report (click for full width). You can see it on your own copy of Chrome by typing chrome://voicesearch into the URL bar; details will probably differ a little (especially if you’re not using a Mac).

    Screen shot of Google Voice Search diagnostic report, taken on Chrome 43 running on MacOS X. The most important lines of text are 'Microphone: Yes', 'Audio Capture Allowed: Yes', 'Hotword Search Enabled: No', and 'Extension State: ENABLED'.

    Google’s first mistake was not having anyone check this over for what it sounds like it means to someone who isn’t familiar with the code. It is very well known that when faced with a display like this, people who aren’t familiar with the code will pick out whatever bits they think they understand and ignore everything else, even if that means they completely misunderstand it. [7] In this case, people see Microphone: Yes and Audio Capture Allowed: Yes and maybe also Extension State: ENABLED and assume that this means the extension is actively listening right now. (What the developers know it means is this computer has a microphone, the extension could listen to it if it had been activated, and it’s connected itself to the checkbox in the preferences so it can be activated. And it’s hard for them to realize that anyone could think it would mean something else.)

    They didn’t have anyone check it because they thought, well, who’s going to look at this who isn’t a developer? Thing is, it only takes one person to look at it, decide it looks hinky, mention it online, and now you have a media circus on your hands. Obscurity is no excuse for not doing a UX review.

    Now, mistake number two becomes evident when you consider what this screen ought to say in order not to scare people who haven’t turned the feature on (and maybe this is the first they’ve heard of it even): something like

    Voice Search is inactive.

    (A couple of sentences about what Voice Search is and why you might want it.) To activate Voice Search, go to the preferences screen and check the box.

    It would also be okay to have a duplicate checkbox right there on this screen, and to have all the same debugging information show up after you check the box. But wait—how do developers diagnose problems with downloading the extension, which happens before the box has been checked? And that’s mistake number two. The extension should not be downloaded until the box is checked. I am not aware of any technical reason why that couldn’t have been the way it worked in the first place, and it would go a long way to reassure people that this closed-source extension can’t listen to them unless they want it to. Note that even if the extension were open source it might still be a live question whether it does anything hinky. There’s an excellent chance that it’s a generic machine recognition algorithm that’s been trained to detect OK Google, which training appears in the code as a big lump of meaningless numbers—and there’s no way to know whether those numbers train it to detect anything besides OK Google. Maybe if you start talking about bombs the computer just quietly starts recording…

    Mistake number three, finally, is something they got half-right. This is not a core browser feature. Indeed, it’s hard for me to imagine any situation where I would want this feature on a desktop computer. Hands-free operation of a mobile device, sure, but if my hands are already on a keyboard, that’s faster and less bothersome for other people in the room. So, Google implemented this frill as a browser extension—but then they didn’t expose that in the user interface. It should be an extension, and it should be visible as such. Then it needn’t take up space in the core preferences screen, even. If people want it they can get it from the Chrome extension repository like any other extension. And that would give Google valuable data on how many people actually use this feature and whether it’s worth continuing to develop.

    Planet MozillaReleng & Relops weekly highlights - June 26, 2015

    Friday, foxyeah!

    It’s been a very busy and successful work week here in beautiful Whistler, BC. People are taking advantage of being in the same location to meet, plan, hack, and socialize. A special thanks to Jordan for inviting us to his place in beautiful Squamish for a BBQ!

    (Note: No release engineering folks were harmed by bears in the making of this work week.)

    tl;dr

    Whistler: Keynotes were given by our exec team and we learned we’re focusing on quality, dating our users to get to know them better, and that WE’RE GOING TO SPACE!! We also discovered that at LEGO, Everything is Awesome now that they’re thinking around the box instead of inside or outside of it. Laura’s GoFaster project sounds really exciting, and we got a shoutout from her on the way we manage the complexity of our systems. There should be internal videos of the keynotes up next week if you missed them.

    Internally, we talked about Q3 planning and goals, met with our new VP, David, met with our CEO, Chris, presented some lightning talks, and did a bunch of cross-group planning/hacking. Dustin, Kim, and Morgan talked to folks at our booth at the Science Fair. We had a cool banner and some cards (printed by Dustin) that we could hand out to tell people about try. SHIP IT!

    Taskcluster: Great news; the TaskCluster team is joining us in Platform! There was lots of evangelism about TaskCluster and interest from a number of groups. There were some good discussions about operationalizing taskcluster as we move towards using it for Firefox automation in production. Pete also demoed the Generic Worker!

    Puppetized Windows in AWS: Rob got the nxlog puppet module done. Mark is working on hg and NSIS puppet modules in lieu of upgrading to MozillaBuild 2.0. Jake is working on the metric-collective module. The windows folks met to discuss the future of windows package management. Q is finishing up the performance comparison testing in AWS. Morgan, Mark, and Q deployed runner to all of the try Windows hosts and one of the build hosts.

    Operational: Amy has been working on some additional nagios checks. Ben, Rail, and Nick met and came up with a solid plan for release promotion. Rail and Nick worked on releasing Firefox 39 and two versions of Firefox ESR. Hal spent much of the week working with IT. Dustin and catlee got some work done on migrating treestatus to relengapi. Hal, Nick, Chris, and folks from IT, sheriffs, and dev-services debugged problems with b2g jobs. Callek deployed a new version of slaveapi. Kim, Jordan, Chris, and Ryan worked on a plan for addons. Kim worked with some new buildduty folks to bring them up to speed on operational procedures.

    Thank you all, and have a safe trip home!

    And here are all the details:

    Taskcluster

    • We got to spend some quality time with our new TaskCluster teammates, Greg, Jonas, Wander, Pete, and John. We’re all looking forward to working together more closely.
    • Morgan convinced lots of folks that Taskcluster is super amazing, and now we have a lot of people excited to start hacking on it and moving their workloads to it.
    • We put together a roadmap for TaskCluster in Trello and identified the blockers to turning Buildbot Scheduling off.

    Puppetized Windows in AWS

    • Rob has pushed out the nxlog puppet module to get nxlog working in scl3 (bug 1146324). He has a follow-on bug to modify the ec2config file for AWS to reset the log-aggregator host so that we’re aggregating to the local region instead of where we instantiate the instance (like we do with linux). This will ensure we have Windows system logs in AWS (bug 1177577).
    • The new version of MozillaBuild was released, and our plan was to upgrade to that on Windows (bug 1176111). An attempt at that showed that the way hg was compiled requires an external dll (likely something from cygwin), and needs to be run from bash. Since this would require significant changes, we’re going to install the old version of MozillaBuild and put upgrades of hg (bug 1177740) and NSIS on top of that (like we’re doing with GPO now). Future work will include splitting out all the packages and not using MozillaBuild. Jake is working on the puppet module for metric-collective, our host-level stats gathering software for Windows (similar to collectd on linux/OS X). This will give us Windows system metrics in graphite in AWS (bug 1097356).
    • We met to talk about Windows packaging and how to best integrate with puppet. Rob is starting to investigate using NuGet and Chocolatey to handle this (bugs 1175133 and 1175107).
    • Q spun up some additional instance types in AWS and is in the process of getting some more data for Windows performance after the network modifications we made earlier (bug 1159384).
    • Jordan added a new puppetized path for all windows jobs, fixing a problem we were seeing with failing sendchanges on puppetized machines (bug 1175701).
    • Morgan, Mark, and Q deployed runner to all of the try Windows hosts (bug 1055794).

    Operational

    • The relops team met to perform a triage of their two bugzilla queues and closed almost 20% of the open bugs as either already done or wontfix based on changes in direction.
    • Amy has been working on some additional nagios checks for some Windows services and for AWS subnets filling up (bugs 1164441 and 793293).
    • Ben, Rail, and Nick met and came up with a solid plan for the future of release promotion.
    • Rail and Nick worked on getting Firefox 39 (and the related ESR releases) out to our end users.
    • Hal spent lots of time working with IT and the MOC, improving our relationships and workflow.
    • Dustin and catlee did some hacking to start the porting of treestatus to relengapi (one of the blockers to moving us out of PHX1).
    • Hal, Nick, Chris, and folks from IT, sheriffs, dev-services tracked down an intermittent problem with the repo-tool impacting only b2g jobs (bug 1177190).
    • Callek deployed the new version of slaveapi to support slave loans using the AWS API (bug 1177932).
    • Kim, Jordan, Chris, and Ryan discussed the initial steps for future addon support.
    • Coop (hey, that’s me) held down the buildduty fort while everyone else was in Whistler.

    See you next week!

    Planet Mozilla31.8.0 available (say goodbye)

    31.8.0 is available, the last release for the 31 series (release notes, downloads, hashes). Download it and give it one last spin. 31 wasn't a high water mark for us in terms of features or performance, but it was pretty stable and did the job, so give it a salute as it rides into the sunset. It finalizes Monday PM Pacific time as usual.

    I'm trying very hard to get you the 38.0.1 beta by sometime next week, probably over the July 4th weekend assuming the local pyros don't burn my house down with errant illegal fireworks, but I keep hitting showstoppers while trying to dogfood it. First it was fonts and then it was Unicode input, and then the newtab crap got unstuck again, and then the G5 build worked but the 7450 build didn't, and then, and then, and then. I'm still working on the last couple of these major bugs and then I've got some additional systems to test on before I introduce them to you. There are a couple minor bugs that I won't fix before the beta because we need enough time for the localizers to do their jobs, and MP3 support is present but is still not finished, but there will be a second beta that should address most of these problems prior to our launch with 38.0.2. Be warned of two changes right away: no more tiles in the new tab page (I never liked them anyway, but they require Electrolysis now, so that's a no-no), and Check for Updates is now moved to the Help menu, congruent with regular Firefox, since keeping it in its old location now requires substantial extra code that is no longer worth it. If you can't deal with these changes, I will hurt you very slowly.

    Features that did not make the cut: Firefox Hello and Pocket, and the Cisco H.264 integration. Hello and Pocket are not in the ESR, and I wouldn't support them anyway; Hello needs WebRTC, which we still don't really support, and you can count me in with the people who don't like a major built-in browser component depending exclusively on a third-party service (Pocket). As for the Cisco integration, there will never be a build of those components for Tiger PowerPC, so there. Features that did make the cut, though, are pdf.js and Reader View. Although PDF viewing is obviously pokier compared to Preview.app, it's still very convenient, generally works well enough now that we have IonPower backing it, and is much safer. Reader View, on the other hand, works very well on our old systems. You'll really like it especially on a G3 because it cuts out a lot of junk.

    After that there are two toys you'll get to play with before 38.0.2 since I hope to introduce them widely with the 38 launch. More on that after the beta, but I'll whet your appetite a little: although the MacTubes Enabler is now officially retired, since as expected the MacTubes maintainer has thrown in the towel, thanks to these projects the MTE has not one but two potential successors, and one of them has other potential applications. (The QuickTime Enabler soldiers on, of course.)

    Last but not least, I have decided to move the issues list and the wiki from Google Code to Github, and leave downloads with SourceForge. That transition will occur sometime late July before Google Code goes read-only on August 24th. (Classilla has already done this invisibly but I need to work on a stele so that 9.3.4 will be able to use Github effectively.) In the meantime, I have already publicly called Google a bunch of meaniepants and poopieheads for their shameful handling of what used to be a great service, so my work here is done.

    Planet MozillaPromises: Code vs. Policy

    A software organization wants to make a promise, for example about its data practices: “We don’t store information on your location”. They can keep that promise in two ways: code or policy.

    If they were keeping it in code, they would need to be open source, and would simply make sure the code didn’t transmit location information to the server. Anyone can review the code and confirm that the promise is being kept. (It’s sometimes technically possible for the company to publish source code that does one thing, and binaries which do another, but if that was spotted, there would be major reputational damage.)

    If they were keeping it in policy, they would add “We don’t store information on your location” to their privacy policy or Terms of Service. The documents can be reviewed, but in general you have to trust the company that they are sticking to their word. This is particularly so if the policy states that it does not create a binding obligation on the company. So this is a function of your view of the company’s reputation.

    Geeks like promises kept in code. They can’t be worked around using ambiguities in English, and they can’t be changed without the user’s consent (to a software upgrade). I suspect many geeks think of them as superior to promises kept in policy – “that’s what they _say_, but who knows?”. This impression is reinforced when companies are caught sticking to the letter but not the spirit of their policies.

    But some promises can’t be kept in code. For example, you can’t simply not send the user’s IP address, which normally gives coarse location information, when making a web request. More complex or time-bound promises (“we will only store your information for two weeks”) also require policy by their nature. Policy is also more flexible, and using a policy promise rather than a code promise can speed time-to-market due to reduced software complexity and increased ability to iterate.

    Question: is this distinction, about where to keep your promises, useful when designing new features?

    Question: is it reasonable or misguided for geeks to prefer promises kept in code?

    Question: if Mozilla or its partners are using promises kept in policy for e.g. a web service, how can we increase user confidence that such a policy is being followed?

    Planet MozillaAnnouncing the Content Performance program

    Introduction

    Aaron Klotz, Avi Halachmi and I have been studying Firefox’s performance on Android & Windows over the last few weeks as part of an effort to evaluate Firefox “content performance” and find actionable issues. We’re analyzing and measuring how well Firefox scrolls pages, loads sites, and navigates between pages. At first, we’re focusing on 3 reference sites: Twitter, Facebook, and Yahoo Search.

    We’re trying to find reproducible, meaningful, and common use cases on popular sites which result in noticeable performance problems or where Firefox performs significantly worse than competitors. These use cases will be broken down into tests or profiles, and shared with platform teams for optimization. This “Content Performance” project is part of a larger organizational effort to improve Firefox quality.

    I’ll be regularly posting blog posts with our progress here, but you can also track our efforts on our mailing list and IRC channel:

    Mailing list: https://mail.mozilla.org/listinfo/contentperf
    IRC channel: #contentperf
    Project wiki page: Content_Performance_Program

    Summary of Current Findings (June 18)

    Generally speaking, desktop and mobile Firefox scroll as well as other browsers on reference sites when there is only a single tab loaded in a single window.

    • We compared Firefox vs Chrome and IE:
      • Desktop Firefox scrolling can badly deteriorate when the machine is in power-saver mode [1] (Firefox performance relative to other browsers depends on the site)
      • Heavy activity in background tabs badly affects desktop Firefox’s scrolling performance [1] (much worse than other browsers — we need E10S)
      • Scrolling on infinitely-scrolling pages only appears janky when the page is waiting on additional data to be fetched
    • Inter-page navigation in Firefox can exhibit flicker, similar to other browsers
    • The Firefox UI locks up during page loading, unlike other browsers (need E10S)
    • Scrolling in desktop E10S (with heavy background tab activity) is only as good as the other browsers [1] when Firefox is in the process-per-tab configuration (dom.ipc.processCount >> 1)

    [1] You can see Aaron’s scrolling measurements here: http://bit.ly/1K1ktf2

    Potential scenarios to test next:

    • Check impact of different Firefox configurations on scrolling smoothness:
      • Hardware acceleration disabled
      • Accessibility enabled & disabled
      • Maybe: Multiple monitors with different refresh rate (test separately on Win 8 and Win 10)
      • Maybe: OMTC, D2D, DWrite, display & font scaling enabled vs disabled
        • If we had a Telemetry measurement of scroll performance, it would be easier to determine relevant characteristics
    • Compare Firefox scrolling & page performance on Windows 8 vs Windows 10
      • Compare Firefox vs Edge on Win 10
    • Test other sites in Alexa top 20 and during random browsing
    • Test the various scroll methods on reference sites (Avi has done some of this already): mouse wheel, mouse drag, arrow key, page down, touch screen swipe and drag, touchpad drag, touchpad two finger swipe, trackpoints (special casing for ThinkPads should be re-evaluated).
      • Check impact of pointing device drivers
    • Check performance inside Google web apps (Search, Maps, Docs, Sheets)
      • Examine benefits of Chrome’s network pre-fetcher on Google properties (e.g. Google search)
      • Browse and scroll simple pages when top Google apps are loaded in pinned tabs
    • Compare Firefox page-load & page navigation performance on HTTP/2 sites (Facebook & Twitter, others?)
    • Check whether our cache and pre-connector benefit perceived performance, compare vs competition

    Issues to report to Platform teams

    • Worse Firefox scrolling performance with laptop in power-save mode
    • Scrolling Twitter feed with YouTube HTML5 videos is jankier in Firefox
    • bug 1174899: Scrolling on Facebook profile with many HTML5 videos eventually causes 100% CPU usage on a Necko thread + heavy CPU usage on main thread + the page stops loading additional posts (videos)

    Tooling questions:

    • Find a way to measure when the page is “settled down” after loading, i.e. time until last page-loading event. This could be measured by the page itself (similar to Octane), which would allow us to compare different browsers
    • How to reproduce dynamic websites offline?
    • Easiest way to record demos of bad Firefox & Fennec performance vs other browsers?

    Decisions made so far:

    • Exclusively focus on Android 5.0+ and Windows 7, 8.1 & 10
    • Devote the most attention to single-process Nightly on desktop, but do some checks of E10S performance as well
    • Desktop APZC and network pre-fetcher are a long time away, don’t wait

    Planet MozillaWeb Literacy Map v2.0

    I’m delighted to see that development of Mozilla’s Web Literacy Map is still continuing after my departure a few months ago.

    Read, Write, Participate

    Mark Surman, Executive Director of the Mozilla Foundation, wrote a blog post outlining the way forward and a working group has been put together to drive forward further activity. It’s great to see Mark Lesser being used as a bridge to previous iterations.

    Another thing I’m excited to see is the commitment to use Open Badges to credential Web Literacy skills. We tinkered with badges a little last year, but hopefully there’ll be a new impetus around this.

    The approach to take the Web Literacy Map from version 1.5 to version 2.0 is going to be different from the past few years. It’s going to be a ‘task force’ approach with people brought in to lend their expertise rather than a fully open community approach. That’s probably what’s needed at this point.

    I’m going to give myself some space to, as my friend and former colleague Laura Hilliger said, 'disentangle myself’ from the Web Literacy Map and wider Mozilla work. However, I wish them all the best. It’s important work.


    Comments? Questions? I’m @dajbelshaw on Twitter or you can email: mail@dougbelshaw.com

    Planet MozillaWillie Cheong: Maximum Business Value; Minimum Effort

    Dineapple is an online food delivery gig that I have been working on recently. In essence, a new food item is introduced periodically, and interested customers place orders online to have their food delivered the next day.

    Getting down to the initial build of the online ordering site, I started to think about the technical whats and hows. For this food delivery service, a customer places an order by making an online payment. The business then needs to know of this transaction, and have it linked to the contact information of the customer.

    Oh okay, easy. Of course I’ll set up a database. I’ll store the order details inside a few tables. Then I’ll build a mini application to extract this information and generate a daily report for the cooks and delivery people to operate on. Then I started to build these things in my head. But wait, there is a simpler way to make the operations people aware of orders. We could just send them an email on every successful transaction to notify them of a new incoming order. But this means the business loses visibility and data portability. Scraping for relational data from a bunch of automated emails, although possible, will be a nightmare. The business needs to prepare to scale, and that means analytics.

    Then I saw something that now looks so obvious I feel pretty embarrassed. Payments on the ordering service are processed using Stripe. When the HTTP request to process a payment is made, Stripe provides an option to submit additional metadata that will be tagged to the payment. There is a nice interface on the Stripe site that allows business owners to do some simple analytics on the payment data. There is also the option to export all of that data (and metadata) to CSV for more studying.
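
    To make this concrete, here is a rough sketch of what tagging an order with metadata might look like using Stripe’s Python bindings. The metadata keys and the helper function are hypothetical examples for a Dineapple-style order, not code from the actual site:

    import stripe

    stripe.api_key = "sk_test_..."  # hypothetical test key

    def charge_order(card_token, amount_cents, customer_name, delivery_address, menu_item):
        # The metadata dict is tagged to the payment; it can later be browsed on
        # the Stripe dashboard or exported to CSV along with the charge data.
        return stripe.Charge.create(
            amount=amount_cents,      # amount in the smallest currency unit
            currency="usd",
            source=card_token,        # token from Stripe Checkout / Stripe.js
            description="Dineapple order",
            metadata={
                "customer_name": customer_name,
                "delivery_address": delivery_address,
                "menu_item": menu_item,
            },
        )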

    Forget about ER diagrams, forget about writing custom applications, forget about using automated emails to generate reports. Stripe is capable of doing the reporting for Dineapple, we just had to see a way to adapt the offering to fit the business’s use case.

    Beyond operations reporting through Stripe, there are so many existing web services out there that can be integrated into Dineapple. Just to name a few: an obvious one would be to use Google Analytics to study site traffic, and customers’ reviews of food and service could (and probably should) be handled through Yelp. Note that none of these outsourced alternatives, although significantly easier to implement, compromises on the quality of the solution for the business. Because at the end of the day, all that really matters is that the business gets what it needs.

    So here’s a reminder to my future self. Spend a little more time looking around for simpler alternatives that you can take advantage of before jumping into development for a custom solution.

    Engineers are builders by instinct, but that isn’t always a good thing.

    Planet Mozilla#Mozlove for Tad

    I truly believe that to make Mozilla a place worth ‘hanging your hat’, we need to get better at being ‘forces of good for each other’. I like to think this idea is catching on, but only time will tell.

    This month’s #mozlove post is for Tom Farrow AKA ‘Tad’, a long time contributor in numerous initiatives across Mozilla. Although Tad’s contribution focus is in Community Dev Ops, it’s his interest in teaching youth digital literacy that first led to a crossing of our paths. You’ll probably find it interesting to know that despite being in his sixth(!!) year of contribution to Mozilla, Tad is still a high school student in Solihull, Birmingham, UK.

    Tad started contributing to Mozilla after helping a friend install Firefox on their government-issued laptop, which presented some problems. He found help on SUMO, and through being helped was inspired to become a helper and contributor himself. Tad speaks fondly of starting with SUMO, of finding friends, training and mentorship.

    Originally drawn to IT and DevOps contribution for the opportunity of ‘belonging to something’, Tad has become a fixture in this space, helping design hosting platforms and evolve a multi-tenant WordPress hosting setup. When I asked what was most rewarding about contributing to Community Dev Ops, he pointed to the pride of innovating a quality solution.

    I’m also increasingly curious about the challenges of participation and asked about this as well.  Tad expressed some frustration around ‘access and finding the right people to unlock resources’.  I think that’s probably something that speaks to the greater challenges for the Mozilla community in understanding pathways for support.

    Finally my favorite question:  “How do your friends and family relate to your volunteer efforts? Is it easy or hard to explain volunteering at Mozilla?”.

    I don’t really try to explain it – my parents get the general idea, and are happy I’m gaining skills in web technology.

    I think it’s very cool that in a world of ‘learn to code’ merchandizing, that Tad found his opportunity to learn and grow technical skills in participation at Mozilla :)

    I want to thank Tad for taking the time to chat with me, for being such an amazing contributor, and inspiration to others around the project.

    * I set a reminder in my calendar every month, which this month happens to be during Mozilla’s Work Week in Whistler. Tad is also in Whistler; make sure you look out for him and say hello!

    Planet Mozillaon configuration

    A few people have suggested I look at other packages for config solutions. I thought I'd record some of my thoughts on the matter. Let's look at requirements first.

    Requirements

    1. Commandline argument support. When running scripts, it's much faster to specify some config via the commandline than always requiring a new config file for each config change.

    2. Default config value support. If a script assumes a value works for most cases, let's make it default, and allow for overriding those values in some way.

    3. Config file support. We need to be able to read in config from a file, and in some cases, several files. Some config values are either too long and unwieldy to pass via the commandline, and some config values contain characters that would be interpreted by the shell. Plus, the ability to use diff and version control on these files is invaluable.

    4. Multiple config file type support. json, yaml, etc.

    5. Adding the above three solutions together. The order should be: default config value -> config file -> commandline arguments. (The rightmost value of a configuration item wins.)

    6. Config definition and validation. Commandline options are constrained by the options that are defined, but config files can contain any number of arbitrary key/value pairs.

    7. The ability to add groups of commandline arguments together. Sometimes families of scripts need a common set of commandline options, but also need the ability to add script-specific options. Sharing the common set allows for consistency.

    8. The ability to add config definitions together. Sometimes families of scripts need a common set of config items, but also need the ability to add script-specific config items.

    9. Locking and/or logging any changes to the config. Changing config during runtime can wreak havoc on the debugability of a script; locking or logging the config helps avoid or mitigate this.

    10. Python 3 support, and python 2.7 unicode support, preferably unicode-by-default.

    11. Standardized solution, preferably non-company and non-language specific.

    12. All-in-one solution, rather than having to use multiple solutions.

    Packages and standards

    argparse

    Argparse is the standardized python commandline argument parser, which is why configman and scriptharness have wrapped it to add further functionality. Its main drawbacks are lack of config file support and limited validation.

    1. Commandline argument support: yes. That's what it's written for.

    2. Default config value support: yes, for commandline options.

    3. Config file support: no.

    4. multiple config file type support: no.

    5. Adding the above three solutions together: no. The default config value and the commandline arguments are placed in the same Namespace, and you have to use the parser.get_default() method to determine whether it's a default value or an explicitly set commandline option.

    6. Config definition and validation: limited. It only covers commandline option definition+validation, and there's the required flag but not an "if foo is set, bar is required" type of validation. It's possible to roll your own, but that would be script-specific rather than part of the standard.

    7. Adding groups of commandline arguments together: yes. You can take multiple parsers and make them parent parsers of a child parser, if the parent parsers have specified add_help=False.

    8. Adding config definitions together: limited, as above.

    9. The ability to lock/log changes to the config: no. argparse.Namespace will take changes silently.

    10. Python 3 + python 2.7 unicode support: yes.

    11. Standardized solution: yes, for python. No for other languages.

    12. All-in-one solution: no, for the above limitations.
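
    As a rough illustration of what gluing config files onto plain argparse looks like, here is a minimal sketch covering requirements 1, 3, 5 and 7. The option names and the json file layout are invented for the example; this is not scriptharness's or configman's actual API:

    import argparse
    import json

    # Common options shared by a family of scripts (requirement 7); parent
    # parsers must be created with add_help=False, as noted above.
    common = argparse.ArgumentParser(add_help=False)
    common.add_argument("--config-file", action="append", default=[],
                        help="json config file; may be given more than once")
    common.add_argument("--log-level", default="info")

    # A script-specific parser that inherits the common options.
    parser = argparse.ArgumentParser(parents=[common])
    parser.add_argument("--work-dir", default="build")

    def build_config(argv=None):
        """Merge defaults -> config files -> commandline; rightmost wins."""
        args = parser.parse_args(argv)
        config = vars(parser.parse_args([]))  # defaults only; nothing is required
        for path in args.config_file:
            with open(path) as fileobj:
                config.update(json.load(fileobj))
        for key, value in vars(args).items():
            # argparse cannot distinguish an explicitly passed value that equals
            # the default from the default itself (see point 5 above), so this
            # check is only an approximation of "explicitly set".
            if value != parser.get_default(key):
                config[key] = value
        return config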

    configman

    Configman is a tool written to deal with configuration in various forms, and adds the ability to transform configs from one type to another (e.g., commandline to ini file). It also adds the ability to block certain keys from being saved or output. Its argparse implementation is deeper than scriptharness' ConfigTemplate argparse abstraction.

    Its main drawbacks for scriptharness usage appear to be lack of python 3 + py2-unicode-by-default support, and for being another non-standardized solution. I've given python3 porting two serious attempts, so far, and I've hit a wall on the dotdict __getattr__ hack working differently on python 3. My wip is here if someone else wants a stab at it.

    1. Commandline argument support: yes.

    2. Default config value support: yes.

    3. Config file support: yes.

    4. Multiple config file type support: yes.

    5. Adding the above three solutions together: not as far as I can tell, but since you're left with the ArgumentParser object, I imagine it'll be the same solution to wrap configman as argparse.

    6. Config definition and validation: yes.

    7. Adding groups of commandline arguments together: yes.

    8. Adding config definitions together: not sure, but seems plausible.

    9. The ability to lock/log changes to the config: no. configman.namespace.Namespace will take changes silently.

    10. Python 3 support: no. Python 2.7 unicode support: there are enough str() calls that it looks like unicode is a second class citizen at best.

    11. Standardized solution: no.

    12. All-in-one solution: no, for the above limitations.

    docopt

    Docopt simplifies the commandline argument definition and prettifies the help output. However, it's purely a commandline solution, and doesn't support adding groups of commandline options together, so it appears to be oriented towards relatively simple script configuration. It could potentially be added to json-schema definition and validation, as could the argparse-based commandline solutions, for an all-in-two solution. More on that below.
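
    For readers who haven't used it, docopt builds the parser from the usage text itself, and defaults come from the Options section. A minimal, hypothetical example:

    """Usage:
      myscript.py [--work-dir=<dir>] [--log-level=<level>]

    Options:
      --work-dir=<dir>     working directory [default: build]
      --log-level=<level>  logging level [default: info]
    """
    from docopt import docopt

    if __name__ == "__main__":
        arguments = docopt(__doc__)
        print(arguments)  # e.g. {'--log-level': 'info', '--work-dir': 'build'}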

    json-schema

    This looks very promising for an overall config definition + validation schema. The main drawback, as far as I can see so far, is the lack of commandline argument support.

    A commandline parser could generate a config object to validate against the schema. (Bonus points for writing a function to validate a parser against the schema before runtime.) However, this would require at least two definitions: one for the schema, one for the hopefully-compliant parser. Alternately, the schema could potentially be extended to support argparse settings for various items, at the expense of full standards compatibility.

    There's already a python jsonschema package.

    1. Commandline argument support: no.

    2. Default config value support: yes.

    3. Config file support: I don't think directly, but anything that can be converted to a dict can be validated.

    4. Multiple config file type support: no.

    5. Adding the above three solutions together: no.

    6. Config definition and validation: yes.

    7. Adding groups of commandline arguments together: no.

    8. Adding config definitions together: sure, you can add dicts together via update().

    9. The ability to lock/log changes to the config: no.

    10. Python 3 support: yes. Python 2.7 unicode support: I'd guess yes since it has python3 support.

    11. Standardized solution: yes, even cross-language.

    12. All-in-one solution: no, for the above limitations.
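
    As a sketch of the argparse + json-schema combination suggested above, a parser's output can be converted to a plain dict and validated with the existing python jsonschema package. The schema and options below are invented for the example:

    import argparse

    import jsonschema

    schema = {
        "type": "object",
        "properties": {
            "work_dir": {"type": "string"},
            "retries": {"type": "integer", "minimum": 0},
        },
        "required": ["work_dir"],
    }

    parser = argparse.ArgumentParser()
    parser.add_argument("--work-dir", default="build")
    parser.add_argument("--retries", type=int, default=3)

    config = vars(parser.parse_args())
    # Raises jsonschema.ValidationError if the config violates the schema.
    jsonschema.validate(instance=config, schema=schema)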

    scriptharness 0.2.0 ConfigTemplate + LoggingDict or ReadOnlyDict

    Scriptharness currently extends argparse and dict for its config. It checks off the most boxes in the requirements list currently. My biggest worry with the ConfigTemplate is that it isn't fully standardized, so people may be hesitant to port all of their configs to it.

    An argparse/json-schema solution with enough glue code in between might be a good solution. I think ConfigTemplate is sufficiently close to that that adding jsonschema support shouldn't be too difficult, so I'm leaning in that direction right now. Configman has some nice behind the scenes and cross-file-type support, but the python3 and __getattr__ issues are currently blockers, and it seems like a lateral move in terms of standards.

    An alternate solution may be BYOC. If the scriptharness Script takes a config object that you built from somewhere, and gives you tools that you can choose to use to build that config, that may allow for enough flexibility that people can use their preferred style of configuration in their scripts. The cost of that flexibility is familiarity between scriptharness scripts.

    1. Commandline argument support: yes.

    2. Default config value support: yes, both through argparse parsers and script initial_config.

    3. Config file support: yes. You can define multiple required config files, and multiple optional config files.

    4. Multiple config file type support: no. Mozharness had .py and .json. Scriptharness currently only supports json because I was a bit iffy about execfileing python again, and PyYAML doesn't always install cleanly everywhere. It's on the list to add more formats, though. We probably need at least one dynamic type of config file (e.g. python or yaml) or a config-file builder tool.

    5. Adding the above three solutions together: yes.

    6. Config definition and validation: yes.

    7. Adding groups of commandline arguments together: yes.

    8. Adding config definitions together: yes.

    9. The ability to lock/log changes to the config: yes. By default Scripts use a LoggingDict that logs runtime changes; StrictScript uses a ReadOnlyDict (same as mozharness) that prevents any changes after locking.

    10. Python 3 and python 2.7 unicode support: yes.

    11. Standardized solution: no. Extended/abstracted argparse + extended python dict.

    12. All-in-one solution: yes.
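
    To make requirement 9 a bit more concrete, here is a toy dict subclass that logs runtime changes. It is only a sketch of the idea behind a logging config dict, not scriptharness's actual LoggingDict implementation:

    import logging

    class SimpleLoggingDict(dict):
        """Toy config dict that logs every runtime change (requirement 9)."""

        def __setitem__(self, key, value):
            logging.info("config change: %s: %r -> %r", key, self.get(key, "<unset>"), value)
            super(SimpleLoggingDict, self).__setitem__(key, value)

        def __delitem__(self, key):
            logging.info("config change: deleting %s", key)
            super(SimpleLoggingDict, self).__delitem__(key)

    # A real implementation would also cover update(), pop(), setdefault(), etc.,
    # and a read-only variant would raise instead of logging once locked.
    config = SimpleLoggingDict({"work_dir": "build"})
    config["work_dir"] = "other"  # emits a log message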

    Corrections, additions, feedback?

    As far as I can tell there is no perfect solution here. Thoughts?




    Planet Mozillahyper v0.6

    A bunch of goodies are included in version 0.6 of hyper.

    Highlights

    • Experimental HTTP2 support for the Client! Thanks to the tireless work of @mlalic.
    • Redesigned Ssl support. The Server and Client can accept any implementation of the Ssl trait. By default, hyper comes with an implementation for OpenSSL, but this can now be disabled via the ssl cargo feature.
    • A thread safe Client. As in, Client is Sync. You can share a Client over multiple threads, and make several requests simultaneously.
    • Just about 90% test coverage. @winding-lines has been bumping the number ever higher.

    Also, as a reminder, hyper has been following semver more closely, and so, breaking changes mean bumping the minor version (until 1.0). So, to reduce unplanned breakage, you should probably depend on a specific minor version, such as 0.6, and not *.

    Planet MozillaRust 1.1 stable, the Community Subteam, and RustCamp

    We’re happy to announce the completion of the first release cycle after Rust 1.0: today we are releasing Rust 1.1 stable, as well as 1.2 beta.

    Read on for details of the releases, as well as some exciting new developments within the Rust community.

    What’s in 1.1 Stable

    One of the highest priorities for Rust after its 1.0 release has been improving compile times. Thanks to the hard work of a number of contributors, Rust 1.1 stable provides a 32% improvement in compilation time over Rust 1.0 (as measured by bootstrapping).

    Another major focus has been improving error messages throughout the compiler. Again thanks to a number of contributors, a large portion of compiler errors now include extended explanations accessible using the --explain flag.

    Beyond these improvements, the 1.1 release includes a number of important new features:

    • New std::fs APIs. This release stabilizes a large set of extensions to the filesystem APIs, making it possible, for example, to compile Cargo on stable Rust.
    • musl support. It’s now possible to target musl on Linux. Binaries built this way are statically linked and have zero dependencies. Nightlies are on the way.
    • cargo rustc. It’s now possible to build a Cargo package while passing arbitrary flags to the final rustc invocation.

    More detail is available in the release notes.

    What’s in 1.2 Beta

    Performance improvements didn’t stop with 1.1 stable. Benchmark compilations are showing an additional 30% improvement from 1.1 stable to 1.2 beta; Cargo’s main crate compiles 18% faster.

    In addition, parallel codegen is working again, and can substantially speed up large builds in debug mode; it gets another 33% speedup on bootstrapping on a 4 core machine. It’s not yet on by default, but will be in the near future.

    Cargo has also seen some performance improvements, including a 10x speedup on large “no-op” builds (from 5s to 0.5s on Servo), and shared target directories that cache dependencies across multiple packages.

    In addition to all of this, 1.2 beta includes our first support for MSVC (Microsoft Visual C): the compiler is able to bootstrap, and we have preliminary nightlies targeting the platform. This is a big step for our Windows support, making it much easier to link Rust code against code built using the native toolchain. Unwinding is not yet available – code aborts on panic – but the implementation is otherwise complete, and all rust-lang crates are now testing on MSVC as a first-tier platform.

    Rust 1.2 stable will be released six weeks from now, together with 1.3 beta.

    Community news

    In addition to the above technical work, there’s some exciting news within the Rust community.

    In the past few weeks, we’ve formed a new subteam explicitly devoted to supporting the Rust community. The team will have a number of responsibilities, including aggregating resources for meetups and other events, supporting diversity in the community through leadership in outreach, policies, and awareness-raising, and working with our early production users and the core team to help guide prioritization.

    In addition, we’ll soon be holding the first official Rust conference: RustCamp, on August 1, 2015, in Berkeley, CA, USA. We’ve received a number of excellent talk submissions, and are expecting a great program.

    Contributors to 1.1

    As with every release, 1.1 stable is the result of work from an amazing and active community. Thanks to the 168 contributors to this release:

    • Aaron Gallagher
    • Aaron Turon
    • Abhishek Chanda
    • Adolfo Ochagavía
    • Alex Burka
    • Alex Crichton
    • Alexander Polakov
    • Alexis Beingessner
    • Andreas Tolfsen
    • Andrei Oprea
    • Andrew Paseltiner
    • Andrew Straw
    • Andrzej Janik
    • Aram Visser
    • Ariel Ben-Yehuda
    • Avdi Grimm
    • Barosl Lee
    • Ben Gesoff
    • Björn Steinbrink
    • Brad King
    • Brendan Graetz
    • Brian Anderson
    • Brian Campbell
    • Carol Nichols
    • Chris Morgan
    • Chris Wong
    • Clark Gaebel
    • Cole Reynolds
    • Colin Walters
    • Conrad Kleinespel
    • Corey Farwell
    • David Reid
    • Diggory Hardy
    • Dominic van Berkel
    • Don Petersen
    • Eduard Burtescu
    • Eli Friedman
    • Erick Tryzelaar
    • Felix S. Klock II
    • Florian Hahn
    • Florian Hartwig
    • Franziska Hinkelmann
    • FuGangqiang
    • Garming Sam
    • Geoffrey Thomas
    • Geoffry Song
    • Graydon Hoare
    • Guillaume Gomez
    • Hech
    • Heejong Ahn
    • Hika Hibariya
    • Huon Wilson
    • Isaac Ge
    • J Bailey
    • Jake Goulding
    • James Perry
    • Jan Andersson
    • Jan Bujak
    • Jan-Erik Rediger
    • Jannis Redmann
    • Jason Yeo
    • Johann
    • Johann Hofmann
    • Johannes Oertel
    • John Gallagher
    • John Van Enk
    • Jordan Humphreys
    • Joseph Crail
    • Kang Seonghoon
    • Kelvin Ly
    • Kevin Ballard
    • Kevin Mehall
    • Krzysztof Drewniak
    • Lee Aronson
    • Lee Jeffery
    • Liigo Zhuang
    • Luke Gallagher
    • Luqman Aden
    • Manish Goregaokar
    • Marin Atanasov Nikolov
    • Mathieu Rochette
    • Mathijs van de Nes
    • Matt Brubeck
    • Michael Park
    • Michael Rosenberg
    • Michael Sproul
    • Michael Wu
    • Michał Czardybon
    • Mike Boutin
    • Mike Sampson
    • Ms2ger
    • Nelo Onyiah
    • Nicholas
    • Nicholas Mazzuca
    • Nick Cameron
    • Nick Hamann
    • Nick Platt
    • Niko Matsakis
    • Oliver Schneider
    • P1start
    • Pascal Hertleif
    • Paul Banks
    • Paul Faria
    • Paul Quint
    • Pete Hunt
    • Peter Marheine
    • Philip Munksgaard
    • Piotr Czarnecki
    • Poga Po
    • Przemysław Wesołek
    • Ralph Giles
    • Raphael Speyer
    • Ricardo Martins
    • Richo Healey
    • Rob Young
    • Robin Kruppe
    • Robin Stocker
    • Rory O’Kane
    • Ruud van Asseldonk
    • Ryan Prichard
    • Sean Bowe
    • Sean McArthur
    • Sean Patrick Santos
    • Shmuale Mark
    • Simon Kern
    • Simon Sapin
    • Simonas Kazlauskas
    • Sindre Johansen
    • Skyler
    • Steve Klabnik
    • Steven Allen
    • Steven Fackler
    • Swaroop C H
    • Sébastien Marie
    • Tamir Duberstein
    • Theo Belaire
    • Thomas Jespersen
    • Tincan
    • Ting-Yu Lin
    • Tobias Bucher
    • Toni Cárdenas
    • Tshepang Lekhonkhobe
    • Ulrik Sverdrup
    • Vadim Chugunov
    • Valerii Hiora
    • Wangshan Lu
    • Wei-Ming Yang
    • Wojciech Ogrodowczyk
    • Xuefeng Wu
    • York Xiang
    • Young Wu
    • bors
    • critiqjo
    • diwic
    • gareins
    • inrustwetrust
    • jooert
    • klutzy
    • kwantam
    • leunggamciu
    • mdinger
    • nwin
    • parir
    • pez
    • robertfoss
    • sinkuu
    • tynopex
    • らいどっと

    Planet MozillacuentaFox 3.1.1 available

    A few days after the release of the version that fixed the problem with the service used to fetch quota status, cuentaFox 3.1.1 is here.

    What's new?

    • The list of all users who have stored their passwords in Firefox is now displayed.

    [Screenshot: v31.1-userlist]

    • Usage alerts are now shown, but without icons, since adding an icon prevented them from being displayed (tested on Linux).

    [Screenshot: v3.1.1-alertas]

    • Some minor bugs were also fixed.

    Signing the add-on

    Starting with Firefox 41, some changes will be introduced in how the browser manages add-ons, and only add-ons signed by Mozilla will be installable. Having an add-on signed by Mozilla means more security for users against malicious extensions and third-party programs that try to install add-ons in Firefox.

    To be ready when Firefox 41 arrives, we have submitted cuentaFox for review on AMO and will have it available here soon.

    Many people are still using old versions; update to cuentaFox 3.1.1

    From the AMO statistics dashboard we have noticed that many people keep using old versions that do not work and are not recommended. We call on everyone to update and spread the word about the new release.

    That said, once the add-on is approved, Firefox will update it according to the user's settings. Our idea is for the add-on to be updated from Firefoxmanía rather than from Mozilla, but the self-signed certificate and other problems prevent us from doing that.

    [Screenshot: stats-cuentafox]

    The username or password is incorrect

    Many people have reported that when trying to fetch their data an alert appears saying "The username is not valid or the password is incorrect", and they ask us to fix this, but we cannot. We are not responsible for the service provided by cuotas.uci.cu, nor do we know what it uses to verify that those credentials are correct.

    If you would like to help develop the add-on, you can go to GitLab (UCI) and clone the project or leave a suggestion.

    Install cuentaFox 3.1.1

    Planet WebKitWeb Inspector Console Improvements

    The console is an essential part of Web Inspector. Evaluating expressions in the quick console is one of the primary ways of interacting with the inspected page. Logs, errors, and warnings emitted from the page show up in the console and exploring or interacting with these objects is a given while debugging.

    We recently improved both the Console and Object views in Web Inspector to make it more powerful and fun to use. Our main focus was getting quicker access to useful data and modernizing it to work better with the new changes in JavaScript.

    Basics – Object Previews, Trees, and $n

    Object previews allow you to see the first few properties without needing to expand them. You’ll notice that each evaluation provides you with a “$n” debugger variable to refer back to that object later. These special variables are known only to the tools, so you won’t be cluttering the page with temporary variables. $0 still exists and refers to the current selected node in the DOM Tree.

    Object Preview

    When expanded, the full object tree view cleanly separates properties and API. Again, we use object previews where possible to reveal more data at a glance. The icons for each property correspond to the type of the value for that property. For example, in the image below you’ll see properties with number values have a blue N icon, strings with a red S, functions a green F, etc. The icons give objects a visual pattern, and makes it easy to visually find a particular property or an unexpected change in the normal data an object holds.

    Object Tree

    Supporting New Types

    Web Inspector has always had great support for inspecting certain built-in JavaScript types such as Arrays and DOM types like Nodes. Web Inspector has improved those views and now has comprehensive support for all of the built-in JavaScript types, including the new ES6 types (Symbol, Set, Map, WeakSet, WeakMap, Promises, Classes, Iterators).

    Array, Set, and Map object trees

    WebKit’s tools are most useful when they show internal state of objects, known only to the engine, that is otherwise inaccessible. For example, showing the current status of Promises:

    Promises

    Or upcoming values of native Iterators:

    Iterators

    Other interesting cases are showing values in WeakSets and WeakMaps, or showing the original target function and bound arguments for bound functions.

    API View

    When expanding an object’s prototype you get a great API view showing what methods you can call on the object. The API view always provides parameter names for user functions and even provides curated versions for native functions. The API view makes it really convenient to lookup or discover the ways that you can interact with objects already available to you in the console.

    Array API View, Local Storage Object Tree

    As an added bonus, if you are working with ES6 Classes and log a class by its name or its constructor you immediately get the API view for that class.

    Interactivity

    Object trees are more interactive. Hover a property icon to see the property’s descriptor attributes. Hover the property name to see the exact path you can use to access the property. Getters can be invoked, and their results can be further explored.

    Property Descriptor tooltip, Property Path tooltip

    Context menus also provide more options. One of the most powerful features is that with any value in an Object tree you can use the context menu and select “Log Value” to re-log the value to the Console. This immediately creates a $n reference to the live object, letting you interact with it or easily reference it again later.

    [Video: interactivity demo (console-improvements/interactivity.m4v)]

    Console Messages

    Console messages have also had a UI refresh, making logs, errors, warnings, and their location links stand out more:

    Console Messages

    Feedback

    These enhancements are available to use in WebKit Nightly Builds. We would love to hear your feedback! You can send us quick feedback on Twitter (@JosephPecoraro, @xeenon), file a bug report, or even consider contributing your own enhancements!

    Planet MozillaUpdating our on-ramps for contributors

    I got to sit in on a great debrief / recap of the recent Webmaker go-to-market strategy. A key takeaway: we’re having promising early success recruiting local volunteers to help tell the story, evangelize for the product, and (crucially) feed local knowledge into making the product better. In short:

    Volunteer contribution is working. But now we need to document, systematize and scale up our on-ramps.

    Documenting and systematizing

    It’s been a known issue that we need to update and improve our on-ramps for contributors across MoFo. They’re fragmented, out of date, and don’t do enough to spell out the value for contributors. Or celebrate their stories and successes.

    We should prioritize this work in Q3. Our leadership development work, local user research, social marketing for Webmaker, Mozilla Club Captains and Regional Co-ordinators recruitment, the work the Participation Lab is doing — all of that is coming together at an opportune moment.

    Ryan is a 15-year-old volunteer contributor to Webmaker for Android — and currently the second-most-active Java developer on the entire project.

    Get the value proposition right

    A key learning is: we need to spell out the concrete value proposition for contributors. Particularly in terms of training and gaining relevant work experience.

    Don’t assume we know in advance what contributors actually want. They will tell us.

    We sometimes assume contributors want something like certification or a badge — but what if what they *really* want is a personalized letter of recommendation, on Mozilla letterhead, from an individual mentor at Mozilla that can vouch for them and help them get a job, or get into a school program? Let’s listen.

    An on-boarding and recruiting checklist

    Here are some key steps in the process the group walked through. We can document / systematize / remix these as we go forward.

    • Value proposition. Start here first. What’s in it for contributors? (e.g., training, a letter of recommendation, relevant work experience?) Don’t skip this! It’s the foundation for doing this in a real way.
    • Role description. Get good at describing those skills and opportunities, in language people can imagine adding to their CV, personal bio or story, etc.
    • Open call. Putting the word out. Having the call show up in the right channels, places and networks where people will see and hear about it.
    • Application / matching. How do people express interest? How do we sort and match them?
    • On-boarding and training. These processes exist, but aren’t well-documented. We need a playbook for how newcomers get on-boarded and integrated.
    • Assigning to a specific team and individual mentor. So that they don’t feel disconnected or lost. This could be an expectation for all MoFo: each staff member will mentor at least one apprentice each quarter.
    • Goal-setting / tasking. Tickets or some other way to surface and co-ordinate the work they’re doing.
    • A letter of recommendation. Once the work is done. Written by their mentor. In a language that an employer / admission officer / local community members understand and value.
    • Certification. Could eventually also offer something more formal. Badging, a certificate, something you could share on your linked in profile, etc.

    Next steps

    Internet Explorer blogBuilding Flight Arcade: Behind the scenes with WebGL, WebAudio and GamePad API

    During Microsoft’s //build conference day 2 keynote, we demonstrated some of the advancements in Microsoft Edge’s platform features with a new demo from Pixel Lab, Flight Arcade. Today I’d like to highlight some of the new Web technologies behind Flight Arcade and show developers how the demo came together behind the scenes.

    [Embedded video: http://www.youtube.com/embed/xyaq9TPmXrA]

    Flight Simulator as Inspiration

    A flight demo seemed a natural fit to showcase the new platform features, bringing together 3D graphics, audio modulation, and gamepad control of the plane. Flight Simulator had a huge impact on the PC hardware ecosystem starting 30+ years ago, pushing the boundaries of early personal computers like the Apple II and Commodore 64 and demonstrating how the PC could be used for more than just spreadsheets.

    When our team began development on Flight Arcade, they originally hoped to recreate as much of the Flight Simulator experience as possible, even going to the effort of accessing the original code and 3D assets. It’s fair to say that ambition far outstripped the time and resources we had to bring the demo together in time, as we realized the complexity of the original simulator code base. The original goal of complete simulation included modelling complex factors like wind simulation, voltage drop across on-board circuits, weight and many other aspects that impact real-world flight.

    After evaluating the scope of the simulation problem, we decided instead to hone our focus to exhibit three major new features of the platform, and simplify gameplay to demonstrate those features more effectively.

    Web Platform Features Highlighted

    WebGL

    The team used the popular 3D framework Babylon.JS to build the visual components of Flight Arcade. Some of the challenges they faced included how to lay out terrain, build in heightmaps for relief, and lay down textures on top of the terrain map to make it look as realistic as possible while still working well in the browser. You can learn more about this process on the Flight Arcade detail page.

    WebAudio API

    Prior to the WebAudio API, developers and content authors were limited to using HTML audio tags to embed a sound file for playback within their page, and then had to lay out the page around the control.

    The HTML5 WebAudio API takes audio on the web to another level, with powerful and easy to use APIs providing a broad spectrum of audio manipulation techniques.

    Check out the team’s detailed breakdown of how they used the WebAudio API to modulate the engine sound and distort the flight instructor’s voice, with code samples so you can try out WebAudio for yourself.

    GamePad API

    To add more fidelity to the Arcade flying experience and showcase another web platform feature, the team used the GamePad API and wrote a helper class for other developers to use that maps the button and axis indices to the more familiar names as labeled on the Xbox controller. You can grab the open-sourced code and get a full breakdown of how we used the API with FlightArcade here.

    More to Come

    We’ll be bringing more tech demos to the web on the new Microsoft Edge Dev Test Drive site over the coming months as we continue to add more web platform features to Microsoft Edge. You can keep in touch on Twitter to find out about new projects, and check out our community projects on the Microsoft Edge Dev Site where we’re working to build a better, more interoperable web.

    – Jason McConnell, Microsoft Edge Developer Relations

    Planet WebKitJavier Fernández: Performance analysis of Grid Layout

    Now that we have a quite complete implementation of the CSS Grid Layout specification, it’s time to take care of performance analysis and optimizations. In this essay, which is the first of a series of posts about performance, I’ll briefly introduce how to use the Blink (Chrome) and WebKit (Safari) performance analysis tools, some of the most interesting cases I’ve seen during my work on the implementation of this spec and, finally, a basic case comparing the Flexbox and Grid layout models, which I’d like to evolve and analyze further in the coming months.

    Performance analysis tools

    Both the WebKit and Blink projects provide several useful and easy to use scripts (python) to run a set of test cases and take different measurements and do early analysis. They were written before the fork, which is why the related documentation can be found on WebKit’s trac, but both engines still use them, for the time being.

    Tools/Scripts/run-perf-tests
    Tools/Scripts/webkitpy/performance_tests/

    There are a wide set of performance tests under PerformanceTest folder, at Blink’s/WebKit’s root directory, but even though both engines share a substantial number of tests, there are some differences.

    (blink’s root directory) $ ls PerformanceTests/
    Bindings BlinkGC Canvas CSS DOM Dromaeo Events inspector Layout Mutation OWNERS Parser resources ShadowDOM Skipped SunSpider SVG XMLHttpRequest XSSAuditor

    The Chromium project has introduced a new performance tool, called Telemetry, which in addition to running the above mentioned tests, is designed to execute more complex cases like running specific PageSets or doing benchmarking to compare results with a preset recording (WebPageRelay). It’s also possible to send patches to performance try bots, directly from the gclient or git (depot_tools) command line. There is quite a lot of information available in the following links:

    Regarding profiling tools, it’s possible in both WebKit and Blink to use the --profiler option when running the performance tests so we can collect profiling data. However, while WebKit recommends perf for Linux, Google’s Blink engine provides some alternatives.

    CSS Grid Layout performance tests and current status

    While implementing a new browser feature, it’s not easy to measure performance while the code is evolving so much and so quickly and, what is worse, to catch regressions introduced by new logic. When the feature’s syntax changes or there is missing or incomplete functionality, it’s not always possible to establish a well defined baseline for performance. It’s also a tough decision to determine which use cases we might care about; obviously the faster the better, but adding performance optimizations usually complicates code, it may affect its robustness, and it could lead to unexpected and, even worse, hard-to-find bugs.

    At the time of this writing, we have 3 basic performance tests: auto-sized grids, fixed-sized grids, and fixed-sized grids with stretched items.

    Why have we selected those use cases to measure and keep track of performance regressions? First of all, note that auto-sizing is one of the most expensive branches inside the grid track sizing algorithm, so we are really interested in both improving it and keeping track of regressions on this code path.

    body {
        display: grid;
        grid-template-rows: repeat(100, auto);
        grid-template-columns: repeat(20, auto);
    }
    .gridItem {
        height: 200px;
        width: 200px;
    }

    On the other hand, fixed-sized is the easiest/fastest path of the algorithm, so besides the importance of avoiding regressions (when possible), it’s also a good case to compare with auto-sized.

    body {
        display: grid;
        grid-template-rows: repeat(100, 200px);
        grid-template-columns: repeat(20, 200px);
    }
    .gridItem {
        height: 200px;
        width: 200px;
    }

    Finally, a stretching use case was added because it’s the default alignment value for grid items and the two test cases already described use fixed size items, hence no stretch (even though items fill the whole grid cell area). Given that I implemented CSS Box Alignment support for grid, I was conscious of how expensive the stretching logic is, so I considered it an important use case to analyze and optimize as much as possible. Actually, I’ve already introduced several optimizations because the early implementation was quite slow, around 40% slower than using any other basic alignment (start, end, center). We will talk more about this later when we analyze a case to compare Flexbox and Grid performance in layout.

    body {
        display: grid;
        grid-template-rows: repeat(100, 200px);
        grid-template-columns: repeat(20, 200px);
    }
    .gridItem {
        height: auto;
        width: auto;
    }

    The basic HTML body of these 3 tests is quite simple because we want to analyze the performance of very specific parts of the Grid Layout logic, in order to detect regressions in sensitive code paths. We’d eventually like to have some real use cases to analyze and create many more performance tests, but the Chrome performance platform is definitely not the place to do so. The following graphs show performance evolution during 2015 for the 3 tests we have defined so far.

    [Figure: Grid Layout performance overview graphs]

    Note that the yellow trace shows data taken from a reference build, so we can discount temporary glitches on the machine running the performance tests of the target build, which are shown in the blue trace; this reference trace is also useful to detect invalid regression alerts.

    Why performance is so different for these cases ?

    The 3 tests we have for Grid Layout use runs/second values as a way to measure performance; this is the preferred method for both the WebKit and Blink engines because we can detect regressions with relatively small tests. It’s possible, though, to do other kinds of measurements. Looking at the graphs above we can extract the following data:

    • auto-sized grid: around 650 runs/sec
    • fixed-sized grid: around 1400 runs/sec
    • fixed-sized stretched grid: around 1250 runs/sec

    Before analyzing possible causes of the performance drop for each case, I’ve defined some additional tests to stress these 3 cases even more, so we can see how grid size affects the obtained results. I defined 20 tests for these cases, each one with a different number of grid items, from 10×10 up to 200×200 grids. I ran those tests on my own laptop, so let’s take the absolute numbers of each case with a grain of salt, although the differences between these 3 scenarios should be coherent. The table below shows some numeric results of this experiment.

    grid-fixed-VS-auto-VS-stretch

    First of all, recall that these 3 tests produce the same rendered result, consisting of grids with N×N items of 100px each. The only difference is the grid layout strategy used to produce that result: auto-sizing, fixed-sizing and stretching. Focusing now on the table's data, we can evaluate the cost, in terms of layout performance, of using auto-sized tracks to define the grid (which may be the only solution for certain cases). The performance drop grows with the number of grid items, but we can conclude that it stabilizes around 60%. On the other hand, stretching is also slower but, unlike auto-sizing, its performance drop does not depend much on grid size; it stays more or less constant around 15%.

    grid-performance-graphs-2

    Impact of auto-sized tracks on layout performance

    Basically, the track sizing algorithm can be described in the following 4 steps:

    • 1- Initialize per Grid track variables.
    • 2- Resolve content-based TrackSizingFunctions.
    • 3- Grow all Grid tracks in GridTracks from their baseSize up to their growthLimit value until freeSpace is exhausted.
    • 4- Grow all Grid tracks having a fraction as the MaxTrackSizingFunction.

    These steps are executed twice: a first cycle to determine the column tracks' sizes and a second cycle to set the row tracks' sizes, which may depend on the grid's width. When using just fixed-sized tracks, as in the very simple case we are testing, the only computation required to determine the grid's size is completing step 1 and determining the available free space based on the specified fixed-size value of each track.

    // 1. Initialize per Grid track variables.
    for (size_t i = 0; i < tracks.size(); ++i) {
        GridTrack& track = tracks[i];
        GridTrackSize trackSize = gridTrackSize(direction, i);
        const GridLength& minTrackBreadth = trackSize.minTrackBreadth();
        const GridLength& maxTrackBreadth = trackSize.maxTrackBreadth();
     
        track.setBaseSize(computeUsedBreadthOfMinLength(direction, minTrackBreadth));
        track.setGrowthLimit(computeUsedBreadthOfMaxLength(direction, maxTrackBreadth, track.baseSize()));
     
        if (trackSize.isContentSized())
            sizingData.contentSizedTracksIndex.append(i);
        if (trackSize.maxTrackBreadth().isFlex())
            flexibleSizedTracksIndex.append(i);
    }
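    // Deduct each track's base size from the free space that remains to be distributed.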
    for (const auto& track: tracks) {
        freeSpace -= track.baseSize();
    }

    Focusing now on the auto-sized scenario, we will have the overhead of resolving content-sized functions for all the grid items.

    // 2. Resolve content-based TrackSizingFunctions.
    if (!sizingData.contentSizedTracksIndex.isEmpty())
        resolveContentBasedTrackSizingFunctions(direction, sizingData);

    I didn’t add source code of resolveContentBasedTrackSizingFunctions because it’s quite complex, but basically it implies a cost proportional to the number of grid tracks (minimum of 2x), in order to determine minContent and maxContent values for each grid item. It might imply additional computation overhead when using spanning items; it would require to sort them based on their spanning value and iterate over them again to resolve their content-sized functions.

    Some issues may be interesting to analyze in the future:

    • How much does each content-sized track cost?
    • What is the impact on performance of using flexible-sized tracks? Would it be the worst-case scenario? Considering it requires following all four steps of the track sizing algorithm, it likely would be.
    • What are the performance implications of using spanning items?

    Why is stretching such a performance drain?

    This is an interesting issue, given that stretch is the default value for both Grid and Flexbox items. It's actually the root cause of why Grid beats Flexbox in terms of layout performance in the cases where stretch alignment is used. As I'll explain later, Flexbox doesn't have the optimizations I've implemented for Grid Layout.

    The stretching logic takes place during the grid container's layout operations, after all tracks have their sizes precisely determined and all grid track positions have been computed relative to the grid container. It happens before the alignment logic is executed because stretching may imply changing some grid items' sizes, hence those items will be marked for layout (if they weren't already).

    Obviously, stretching only takes place when the corresponding Self Alignment properties (align-self, justify-self) have either auto or stretch as value, but there are other conditions that must be fulfilled to trigger this operation:

    • box’s computed width/height (as appropriate to the axis) is auto.
    • neither of its margins (in the appropriate axis) are auto
    • still respecting the constraints imposed by min-height/min-width/max-height/max-width

    In that scenario, stretching logic implies the following operations:

    LayoutUnit stretchedLogicalHeight = availableAlignmentSpaceForChildBeforeStretching(gridAreaBreadthForChild, child);
    LayoutUnit desiredLogicalHeight = child.constrainLogicalHeightByMinMax(stretchedLogicalHeight, -1);
     
    bool childNeedsRelayout = desiredLogicalHeight != child.logicalHeight();
    if (childNeedsRelayout || !child.hasOverrideLogicalContentHeight())
        child.setOverrideLogicalContentHeight(desiredLogicalHeight - child.borderAndPaddingLogicalHeight());
    if (childNeedsRelayout) {
        child.setLogicalHeight(0);
        child.setNeedsLayout();
    }
     
    LayoutUnit LayoutGrid::availableAlignmentSpaceForChildBeforeStretching(LayoutUnit gridAreaBreadthForChild, const LayoutBox& child) const
    {
        LayoutUnit childMarginLogicalHeight = marginLogicalHeightForChild(child);
     
        // Because we want to avoid multiple layouts, stretching logic might be performed before
        // children are laid out, so we can't use the child cached values. Hence, we need to
        // compute margins in order to determine the available height before stretching.
        if (childMarginLogicalHeight == 0)
            childMarginLogicalHeight = computeMarginLogicalHeightForChild(child);
     
        return gridAreaBreadthForChild - childMarginLogicalHeight;
    }

    In addition to the extra layout required to change the grid item's size, computing the available space for stretching adds further overhead, above all when we have to compute the grid item's margins because some layout operations are still incomplete.

    Given that the grid container relies on generic block layout operations to determine the stretched width, this specific logic is only executed to determine the stretched height. Hence the performance drop is alleviated compared with the auto-sized tracks scenario.

    Grid VS Flexbox layout performance

    One of the main goals of the CSS Grid Layout specification is to complement the Flexbox layout model in 2 dimensions. It's to be expected that creating grid designs with Flexbox will be less efficient than using a layout model specifically designed for these cases, not only regarding CSS syntax, but also regarding layout performance.

    However, I think it’s interesting to measure Grid Layout performance in 1-dimensional cases, usually managed using Flexbox, so we can have comparable scenarios to evaluate both models. In this post I’ll start with such cases, using a very simple one in this occasion. I’d like to get more complex examples in future posts, the ones more usual in Flexbox based designs.

    So, let’s consider the following simple test case:

    <div class="className">
       <div class="i1">Item 1</div> 
       <div class="i2">Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.</div>
       <div class="i3">Item 3 longer</div>
    </div>

    I evaluated the simple HTML example above with both Flexbox and Grid layouts to measure performance, and used a CPU profiler to figure out where the bottlenecks are for each model, trying to explain where the differences come from. I defined one container class for each layout model, plus shared item classes, as follows:

    .flex {
        background-color: silver;
        display: flex;
        height: 100px;
        align-items: start;
    }
    .grid {
        background-color: silver;
        display: grid;
        grid-template-columns: 100px 1fr auto;
        grid-template-rows: 100px;
        align-items: start;
        justify-items: start;
    }
    .i1 { 
        background-color: cyan;
        flex-basis: 100px; 
    }
    .i2 { 
        background-color: magenta;
        flex: 1; 
    }
    .i3 { 
        background-color: yellow; 
    }

    Given that there is no concept of rows in Flexbox, I evaluated the performance of 100 up to 2000 grid or flex containers, creating 20 tests to be run inside the Chrome performance framework described at the beginning of this post. You can check out the resources and a script to generate them in our github examples repo.

    flexVSgrid

    When comparing both layout models in terms of layout time, we clearly see that Grid Layout beats Flexbox when using the default values of the CSS properties controlling layout and alignment, which for these containers means stretch. As explained before, the stretching logic adds an important computation overhead which, as we can see in the numeric table above, weighs more heavily on Flexbox than on Grid.

    Looking at the plot of the differences in layout time, we see that for the default case Grid's performance improvement stabilizes around 7%. However, when we avoid the stretching logic, for instance by using any other alignment value, layout performance is considerably worse than Flexbox's for this test case, around 15% slower. This makes sense, as this test case is the ideal one for Flexbox, while a bit artificial for Grid; using a single grid with N rows improves performance considerably, producing much better numbers than Flexbox, but we will look at those cases in future analyses.

    Grid Layout's better results for the default case (stretch) are explained by the several optimizations I implemented for Grid. Flexbox should probably do the same, since stretch is its default value too and it could affect many sites using this layout model in their designs.

    Thanks to Bloomberg for sponsoring this work, as part of the efforts that Igalia has been doing all these years pursuing a better and more open web.

    Igalia & Bloomberg logos

    Planet MozillaIntroducing PluotSorbet

    PluotSorbet is a J2ME-compatible virtual machine written in JavaScript. Its goal is to enable users to run J2ME apps (i.e. MIDlets) in web apps without a native plugin. It does this by interpreting Java bytecode and compiling it to JavaScript code. It also provides a virtual filesystem (via IndexedDB), network sockets (through the TCPSocket API), and other common J2ME APIs, like Contacts.

    The project reuses as much existing code as possible, to minimize its surface area and maximize its compatibility with other J2ME implementations. It incorporates the PhoneME reference implementation, numerous tests from Mauve, and a variety of JavaScript libraries (including jsbn, Forge, and FileSaver.js). The virtual machine is originally based on node-jvm.

    PluotSorbet makes it possible to bring J2ME apps to Firefox OS. J2ME may be a moribund platform, but it still has non-negligible market share, not to mention a number of useful apps. So it retains residual value, which PluotSorbet can extend to Firefox OS devices.

    PluotSorbet is also still under development, with a variety of issues to address. To learn more about PluotSorbet, check out its README, clone its Git repository, peruse its issue tracker, and say hello to its developers in irc.mozilla.org#pluotsorbet!

    Dev.OperaInstallable Web Apps and Add to Home screen

    Today, we’re excited to release a special labs build of Opera for Android with an experimental new feature, called “Add to Home screen”, which you can find when clicking the small plus button on the left of the address bar.

    When a user clicks “Add to Home screen” after loading your site in Opera, a shortcut to your site is placed on the Home screen of their device, allowing for direct access and increased visibility.

    There is more though: “Add to Home screen” also supports the new Web Manifest spec, which means that a website added to the Home screen can be configured to open with a particular UI mode, orientation and more. It’s even run as a separate process, just like a native app. We call this “Installable Web Apps”.

    Why?

    We’re very excited about Installable Web Apps, as it bridges the gap between native and web apps in a most elegant way: it allows you to build applications using the full web stack that run in the browser as well as outside of it, without sacrificing crucial functionality like hyperlinking, and without the need for app stores or gatekeepers.

    If you want to read more about why this is an exciting evolution, Alex Russell of the Google Chrome team has an excellent write-up called Progressive Apps: Escaping Tabs Without Losing Our Soul that explains many of the advantages from both a developer and a consumer perspective.

    Here are two more that appeal to us:

    No update distribution lag

    With a centralised app store distribution model, the user receives a notification that a new version of your app is available; they download it and install it. However, in many parts of the world, data is expensive or WiFi is a luxury, so people don’t update over their mobile connections. This means that outdated versions of your app continue to be used long after you’ve released an update. If the outdated version has a security flaw, this is a problem.

    With Installable Web Apps, the app is actually on your web server, so the instant you update it, everybody gets it, at the time they need it — there’s no update distribution lag, and the user isn’t wasting their precious data to download a complete new version of your app.

    Less storage space required

    The average app user has 36 apps on their smartphone (PDF). 25% are used daily (social/comms/gaming); 25% are never used.

    Those occasionally used apps are taking up a lot of storage on a device that may be inexpensive and therefore have little space. We know from the 2015 Google I/O keynote that

    Over a quarter of new Android devices have only 512 MB of RAM

    and, according to Techrepublic,

    Internal storage is particularly important, because this is where your apps are stored. If you buy a budget or entry-level phone, you’ll probably find around 512 MB of internal storage. With this low amount of storage, you’ll only be able to install a few apps.

    Installable Web Apps only store an icon, a text-based JSON manifest and some cached data on the device, which is likely to use less storage.

    Installation mechanisms

    In this labs build, site visitors can add a website to their Home screen by tapping the + icon on the left of the address bar.

    [Video: adding a site to the Home screen in the Opera for Android labs build; source: /blog/installable-web-apps/screen.mp4]

    In a future release, we’ll make it more discoverable: under certain circumstances Opera will prompt the user to add the site they’re visiting to the Home screen.

    Defining icon and start-up characteristics

    To make your site “installable”, you need to declare some characteristics in a special manifest file and make sure it's served over HTTPS. If it's served over HTTP or the manifest is missing, a shortcut is still placed on the Home screen, but you don't get a real installable app: display modes and orientation settings don't take effect, and so on.

    Spec author Marcos Cáceres and Bruce documented what goes in the manifest in an HTML5 Doctor article called HTML App Manifest.

    You can check out the one we made for Dev.Opera on GitHub.
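
    As a rough illustration only (the values below are invented for the example, not the actual Dev.Opera manifest), a minimal manifest is a small JSON file that you reference from your pages with <link rel="manifest" href="/manifest.json">:

    {
        "name": "My Example App",
        "short_name": "Example",
        "start_url": "/",
        "display": "standalone",
        "orientation": "portrait",
        "icons": [
            {
                "src": "/images/icon-128.png",
                "sizes": "128x128",
                "type": "image/png"
            }
        ]
    }

    The display and orientation members control the UI mode and orientation mentioned above; the possible display values are covered in the next section.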

    Display modes

    The specification defines display modes: in essence, these are different ways to show your web app.

    Opera for Android supports

    • fullscreen — the app takes up the whole screen; hardware keys and the status bar are not shown. Note that this is not the same as the HTML5 fullscreen mode.
    • standalone — no browser UI is shown, but the hardware keys and status bar are displayed.
    • browser — the app is shown with the normal browser UI, i.e. as a normal website. Note that custom orientations are not yet supported in this mode.

    The minimal-ui mode is not supported; it falls back to browser as the spec requires.

    If the user follows a link that takes them outside the domain of an installed web app, the browser will flash 68 times, the device will vibrate like a walrus possessed by Satan and a klaxon will sound. Just kidding! In that case, the externally linked site will open in a new tab in Opera, showing the address bar, so you know where you are.

    Conclusion

    At Opera, we’re excited to promote Web Apps to be “first-class citizens” on Android devices and beyond, with visibility alongside native apps; we love the Web and we want to see it thrive.

    Planet MozillaMozilla Tech Speakers: A pilot for technical evangelism

    The six-week pilot version of the Mozilla Tech Speakers program wrapped up at the end of May. We learned a lot, made new friends on several continents, and collected valuable practical feedback on how to empower and support volunteer Mozillians who are already serving their regional communities as technical evangelists and educators. We’ve also gathered some good ideas for how to scale a speaker program that’s relevant and accessible to technical Mozillians in communities all over the world. Now we’re seeking your input and ideas as well.

    During the second half of 2015, we’ll keep working with the individuals in our pilot group (our pilot pilots) to create technical workshops and presentations that increase developer awareness and adoption of Firefox, Mozilla, and the Open Web platform. We’ll keep in touch as they submit talk proposals and develop Content Kits during the second half of the year, work with them to identify relevant conferences and events, fund speaker travel as appropriate, make sure speakers have access to the latest information (and the latest swag to distribute), and offer them support and coaching to deliver and represent!

    Why we did it

    Our aim is to create a strong community-driven technical speaker development program in close collaboration with Mozilla Reps and the teams at Mozilla who focus on community education and participation. From the beginning we benefited from the wisdom of Rosana Ardila, Emma Irwin, Soumya Deb, and other Mozillian friends. We decided to stand up a “minimum viable” program with trusted, invited participants—Mozillians who are active technical speakers and are already contributing to Mozilla by writing about and presenting Mozilla technology at events around the world. We were inspired by the ongoing work of the Participation Team and Speaker Evangelism program that came before us, thanks to the efforts of @codepo8, Shezmeen Prasad, and many others.

    We want this program to scale and stay sustainable, as individuals come and go, and product and platform priorities evolve. We will incorporate the feedback and learnings from the current pilot into all future iterations of the Mozilla Tech Speaker program.

    What we did

    Participants met together weekly on a video call to practice presentation skills and impromptu storytelling, contributed to the MDN Content Kit project for sharing presentation assets, and tried out some new tools for building informative and inspiring tech talks.

    Each participant received one session of personalized one-to-one speaker coaching, using “techniques from applied improvisation and acting methods” delivered by People Rocket’s team of coaching professionals. For many participants, this was a peak experience, a chance to step out of their comfort zone, stretch their presentation skills, build their confidence, and practice new techniques.

    In our weekly meetings, we worked with the StoryCraft technique, and hacked it a little to make it more geek- and tech speaker-friendly. We also worked with ThoughtBox, a presentation building tool to “organize your thoughts while developing your presentation materials, in order to maximize the effectiveness of the content.” Dietrich took ThoughtBox from printable PDF to printable web-based form, but we came to the conclusion it would be infinitely more usable if it were redesigned as an interactive web app. (Interested in building this? Talk to us on IRC. You’ll find me in #techspeakers or #devrel, with new channels for questions and communication coming soon.)

    We have the idea that an intuitive portable tool like ThoughtBox could be useful for any group of Mozillians anywhere in the world who want to work together on practicing speaking and presentation skills, especially on topics of interest to developers. We’d love to see regional communities taking the idea of speaker training and designing the kind of programs and tools that work locally. Let’s talk more about this.

    What we learned

    The pilot was ambitious, and combined several components—speaker training, content development, creating a presentation, proposing a talk—into an aggressive six-week ‘curriculum.’ The team, which included participants in eight timezones, spanning twelve+ hours, met once a week on a video call. We kicked off the program with an introduction by People Rocket and met regularly for the next six weeks.

    Between scheduled meetings, participants hung out in Telegram, a secure cross-platform messaging app, sharing knowledge, swapping stickers (the virtual kind) and becoming friends. Our original ambitious plan might have been feasible if our pilots were not also university students, working developers, and involved in multiple projects and activities. But six weeks turned out to be not quite long enough to get it all done, so we focused on speaking skills—and, as it turned out, on building a global posse of talented tech speakers.

    What’s next

    We’re still figuring this out. We collected feedback from all participants and discovered that there’s a great appetite to keep this going. We are still fine-tuning some of the ideas around Content Kits, and the first kits are becoming available for use and re-use. We continue to support Tech Speakers to present at conferences organize workshops and trainings in their communities. And create their own Mozilla Tech Speakers groups with local flavor and focus.

    Stay tuned: we’ll be opening a Discourse category shortly, to expand the conversation and share new ideas.

    And now for some thank yous…

    I’d like to quickly introduce you to the Mozilla Tech Speakers pilot pilots. You’ll be hearing from them directly in the days, weeks, months ahead, but for today, huge thanks and hugs all around, for the breadth and depth of their contributions, their passion, and the friendships we’ve formed.

    Adrian Crespo, Firefox Marketplace reviewer, Mozilla Rep, student, and technical presenter from Madrid, Spain, creator of the l10n.js Content Kit, for learning and teaching localization through the native JavaScript method.

    Ahmed Nefzaoui, @AhmedNefzaoui, recent graduate and active Mozillian, Firefox OS contributor, Arabic Mozilla localizer, RTL (right-to-left) wizard, and web developer from Tozeur, Tunisia.

    Andre Garzia, @soapdog, Mozilla Rep from Rio de Janeiro, Brazil, web developer, app developer and app reviewer, who will be speaking about Web Components at Expotec at the end of this month. Also, ask him about the Webmaker team LAN Houses program just getting started now in Rio.

    Andrzej Mazur, @end3r, HTML5 game developer, active Hacks blog and MDN contributor, creator of a content kit on HTML5 Game Development for Beginners, active Firefox app developer, Captain Rogers creator, and frequent tech speaker, from Warsaw, Poland.

    István “Flaki” Szmozsánszky, @slsoftworks, Mozillian and Mozilla Rep, web and mobile developer from Budapest, Hungary. Passionate about Rust, Firefox OS, the web of things. If you ask him anything “mildly related to Firefox OS, be prepared with canned food and sleeping bags, because the answer might sometimes get a bit out of hand.”

    Kaustav Das Modak, @kaustavdm, Mozilla Rep from Bengalaru, India; web and app developer; open source evangelist; co-founder of Applait. Ask him about Grouphone. Or, catch his upcoming talk at the JSChannel conference in Bangalore in July.

    Michaela R. Brown, @michaelarbrown, self-described “feisty little scrapper,” Internet freedom fighter, and Mozillian from Michigan. Michaela will share skills in San Francisco next week at the Library Freedom Project: Digital Rights in Libraries event.

    Rabimba Karanjai, @rabimba, a “full-time graduate researcher, part-time hacker and FOSS enthusiast,” and 24/7 Mozillian. Before the month is out, Rabimba will speak about Firefox OS at OpenSourceBridge in Portland and at the Hong Kong Open Source conference.

    Gracias. شكرا. धन्यवाद. Köszönöm. Obrigada. Dziękuję. Thank you. #FoxYeah.

    Planet MozillaWhistler Hike

    I'm at the Mozilla all-hands week in Whistler. Today (Monday) was a travel day, but many of us arrived yesterday, so today I had most of the day free and chose to go on a long hike organized by Sebastian --- because I like hiking, but also because lots of exercise outside should help me adjust to the time zone. We took a fairly new trail, the Skywalk South trail: starting in the Alpine Meadows settlement at the Rick's Roost trailhead at the end of Alpine Way, walking up to connect with the Flank trail, turning up 19 Mile Creek to wind up through forest to Iceberg Lake above the treeline, then south up and over a ridge on the Skywalk South route, connecting with the Rainbow Ridge Loop route, then down through Switchback 27 to finally reach Alta Lake Rd. This took us a bit over 8 hours including stops. We generally hiked quite fast, but some of the terrain was tough, especially the climb up to and over the ridge heading south from Iceberg Lake, which was more of a rock-climb than a hike in places! We had to get through snow in several places. We had a group of eight, four of us who did the long version and four who did a slightly shorter version by returning from Iceberg Lake the way we came. Though I'm tired, I'm really glad we did this hike the way we did it; the weather was perfect, the scenery was stunning, and we had a good workout. I even went for a dip in Iceberg Lake, which was a little bit crazy and well worth it!

    Planet MozillaTrip to Whistler for Mozilla’s work week

    Our work week hasn’t started yet, but since I got to Whistler early I have had lots of adventures.

    First the obligatory nostril-flaring over what it is like to travel with a wheelchair. As we started the trip to Vancouver I had an interesting experience with United Airlines as I tried to persuade them that it was OK for me to fold up my mobility scooter and put it into the overhead bin on the plane. Several gate agents and other people got involved telling me many reasons why this could not, should not, and never has or would happen:

    * It would not fit
    * It is illegal
    * The United Airlines handbook says no
    * The battery has to go into the cargo hold
    * Electric wheelchairs must go in the cargo hold
    * The scooter might fall out and people might be injured
    * People need room for their luggage in the overhead bins
    * Panic!!

    The Air Carrier Access Act of 1986 says,

    Assistive devices do not count against any limit on the number of pieces of carry-on baggage. Wheelchairs and other assistive devices have priority for in-cabin storage space over other passengers’ items brought on board at the same airport, if the disabled passenger chooses to preboard.

    In short I boarded the airplane, and my partner Danny folded up the scooter and put it in the overhead bin. Then, the pilot came out and told me that he could not allow my battery on board. One of the gate agents had told him that I have a wet cell battery (like a car battery). It is not… it is a lithium ion battery. In fact, airlines do not allow lithium batteries in the cargo hold! The pilot, nicely, did not demand proof it is a lithium battery. He believed me, and everyone backed down.

    The reason I am stubborn about this is that I specifically have a very portable, foldable electric wheelchair so that I can fold it up and take it with me. Twice in the past few years, I have had my mobility scooters break in the cargo hold of a plane. That made my traveling very difficult! The airlines never reimbursed me for the damage. Another reason is that the baggage handlers may lose the scooter, or bring it to the baggage pickup area rather than to the gate of the plane.

    Onward to Whistler! We took a shuttle and I was pleasantly (and in a way, sadly) surprised that the shuttle liaison and the driver both just treated me like any other human being. What a relief! It is not so hard! This experience is so rare for me that I am going to email the shuttle company to compliment them and their employees.

    The driver, Ivan, took us through Vancouver, across a bridge that is a beautiful turquoise color with stone lions at its entrance, and through Stanley Park. I particularly noticed the tiny beautiful harbor or lagoon full of boats as we got off the bridge. Then, we went up Highway 99, or the Sea to Sky Highway, to Squamish and then Whistler.

    Sea to sky highway

    When I travel to new places I get very excited about the geology and history and all the geography! I love to read about it beforehand or during a trip.

    The Sea to Sky Highway was improved in preparation for the Winter Olympics and Paralympics in 2010. Before it was rebuilt it was much twistier with more steeply graded hills and had many bottlenecks where the road was only 2 lanes. I believe it must also have been vulnerable to landslides or flooding or falling rocks in places. As part of this deal the road signs are bilingual in English and Squamish. I read a bit on the way about the ongoing work to revitalize the Squamish language.

    The highway goes past Howe Sound, on your left driving up to Squamish. It is a fjord, carved by glaciers that retreated around 11,000 years ago. Take my geological knowledge with a grain of salt (or a cube of ice), but here is a basic narrative of the history. At some point it was a shallow sea here, but a quite muddy one, not one with much of a coral reef system, and the mountains were an archipelago of island volcanoes. So there are ocean floor sediments around, somewhat metamorphosed; a lot of shale.

    There is a little cove near the beginning of the highway with some boats and tumble-down buildings, called Porteau Cove. Interesting history there. Then you will notice a giant building up the side of a hill, the Britannia Mining Museum. That was once the Britannia Mines, producing billions of dollars’ worth of copper, gold, and other metals. The entire hill behind the building is honeycombed with tunnels! While a lot of polluted groundwater has come out of this mine, damaging the coast and the bay waters, it was recently plugged with concrete (the Millennium Plug), and that improved water quality a lot, so that shellfish, fish, and marine mammals are returning to the area. The creek also has trout and salmon returning. That’s encouraging!

    Then you will see huge granite cliffs and Shannon Falls. The giant monolith made me think of El Capitan in Yosemite. And also of Enchanted Rock, a huge pink granite dome in central Texas. Granite weathers and erodes in very distinctive ways. Once you know them you can recognize a granite landform from far away! I haven’t had a chance to look close up at any rocks on this trip…. Anyway, there is a lot of granite and also basalt or some other igneous extrusive rock. Our shuttle driver told me that there is columnar basalt near by at a place called French Fry Hill.

    The mountain is called Stawamus Chief Mountain. Squamish history tells us it was a longhouse turned to stone by the Transformer Brothers. I want to read more about that! Sounds like a good story! Rock climbers love this mountain.

    There are some other good stories, I think one about two sisters turned to stone lions. Maybe that is why there are stone lions on the Vancouver bridge.

    The rest of the drive brought us up into the snowy mountains! Whistler is only 2000 feet above sea level but the mountains around it are gorgeous!

    The “village” where tourists stay is sort of a giant, upscale, outdoor shopping mall with fake streets in a dystopian labyrinth. It is very nice and pretty but it can also feel, well, weird and artificial! I have spent some time wandering around with maps, backtracking a lot when I come to dead ends and stairways. I am also playing Ingress (in the Resistance) so I have another geographical overlay on the map.

    Whistler bridge lost lake

    On Sunday I got some groceries and went down paved and then gravel trails to Lost Lake. It was about an hour long trip to get there. The lake was beautiful, cold, and full of people sunbathing, having picnics, and swimming. Lots of bikes and hikers. I ran out of battery (nearly), then realized that the lake is next to a parking lot. I got a taxi back to the Whistler Village hotel! Better for me anyway since the hour long scooter trip over gravel just about killed me (I took painkiller halfway there and then was just laid flat with pain anyway.) Too ambitious of an expedition, sadly. I had many thoughts about the things I enjoyed when I was younger (going down every trail, and the hardest trails, and swimming a lot) Now I can think of those memories, and I can look at beautiful things and also read all the information about an area which is enjoyable in a different way. This is just how life is and you will all come to it when you are old. I have this sneak preview…. at 46…. When I am actually old, I will have a lot of practice and will be really good at it. Have you thought about what kind of old person you would like to be, and how you will become that person?

    Today I stayed closer to home just going out to Rebagliati Park. This was fabulous since it wasn’t far away, seriously 5 minutes away! It was very peaceful. I sat in a giant Adirondack chair in a flower garden overlooking the river and a covered bridge. Watching the clouds, butterflies, bees, birds, and a bear! And of course hacking the portals (Ingress again). How idyllic! I wish I had remembered to bring my binoculars. I have not found a shop in the Whistler Mall-Village that stocks binoculars. If I find some, I will buy them.

    I also went through about 30 bugs tracked for Firefox 39, approved some for uplift, wontfixed others, emailed a lot of people for work, and started the RC build going. Releng was heroic in fixing some issues with the build infrastructure! But, we planned for coverage for all of us. Good planning! I was working Sunday and Monday while everyone else travelled to get here…. Because of our release schedule for Firefox it made good sense for me to get here early. It also helps that I am somewhat rested from the trip!

    I went to the conference center, found the room that is the home base for the release management and other platform teams, and got help from a conference center setup guy to lay down blue tape on the floor of the room from the doorway to the back of the room. The tape marks off a corridor to be kept clear, not full of backpacks or people standing and talking in groups, so that everyone can freely get in and out of the room. I hope this works to make the space easy for me to get around in, in my wheelchair, and it will surely benefit other people as well.

    Travel lane

    At this work week I hope to learn more about what other teams are doing, any cool projects etc, especially in release engineering and in testing and automated tools and to catch up with the Bugzilla team too. And will be talking a bunch about the release process, how we plan and develop new Firefox features, and so on! Looking forward now to the reception and seeing everyone who I see so much online!


    Planet MozillaThe Canadian, Day 5

    Today I woke up on the outskirts of Greater Vancouver, judging by the signs in this industrial area. The steward has just announced we’ll be arriving in Vancouver in one hour.

    Thus concludes my journey. Though the journey spanned five calendar days, I left late on Thursday night and I’m arriving early Monday morning, so it’s more like four nights and three days. (A total travelling time of 3 days and 14.5 hours.) Because I travelled over a weekend, I only lost one day of office time and gained a scenic weekend.

    The trip roughly breaks down to one day in the forests of Ontario, one day across the plains of Manitoba and Saskatchewan, and one day across the mountains of Alberta and British Columbia.

    In retrospect, I should have done my work on the second day, when my mobile internet connection was good and the scenery was less interesting. (Sorry, Manitoba. Sorry, Saskatchewan.)

    From here I take a bus to Whistler to attend what Mozilla calls a “work week” — a collection of presentations, team meetings, and planning sessions. It ends with a party on Friday night, the theme of which is “lumberjack”. (Between the bears and the hipsters, don’t I already get enough of people dressing like lumberjacks back in Toronto?)

    Because I’m a giant hypocrite, I’ll be flying back to Toronto. But I heartily recommend travel by Via Rail. Their rail passes (good for visiting multiple destinations, or doing a round trip) are an especially good deal.

    I wonder what Kylie Minogue has to say about rail travel.

