Planet Mozilla: Breaking out of the Tetris mindset

This is a longer version of this year’s keynote at the Beyond Tellerrand conference. I posted earlier about the presentation with the video and slides. Here you get the longer transcript and explanation.

It is amazing to work on the web. We have a huge gift in the form of a decentralized network of devices. A network that helps people around the world to communicate and learn. And a huge resource of entertainment on demand. A gift we can tweak to our needs and interests rather than unwrap and consume. Some of us even know the technologies involved in creating interfaces of the web. Interfaces that empower others to consume content and become creators.

And yet there is a common feeling of failure. The web never seems to be good enough. Despite all our efforts, the web doesn’t feel like a professional software platform. Our efforts to standardise the web and make becoming a “web developer” easier don’t seem to work out. We keep re-defining what that even is.

Lately a lot of conferences have talks about mental health. Talks about us working too much and not going anywhere. Talks about how fragmented, insecure and unprofessional our tech stack is.

There’s been a meme going around for a while that summarises this feeling quite well. It is about the game Tetris:

“If Tetris has taught me anything, it’s that errors pile up and accomplishments disappear”

There is some truth to that. We worked for almost two decades on the web and the current state of it isn’t too encouraging. The average web site is too big to consume on mobile devices and too annoying to do so on laptops. We limit things too much on mobile. We cram a proverbial kitchen sink of features, code and intrusive advertising into our desktop solutions.

Let’s see what’s going on there and how we can shift ourselves away from this Tetris mindset. I’ll start by zooming out and talking a bit about the way we communicate about technologies.

Back when this was all fields…

<figure>Tweet: One of my older tweets – This was my old computer setup. Notice the photos of contacts on the wall.</figure>

When I started using computers my setup was basic by current standards. And yet I didn’t use the computer to play. I wanted to use it to communicate. That’s why I reached out to other creators and makers and we shared the things we wrote. We shared tools that made our life easier. We got to know each other writing long notes and explanations. We communicated by mail, sending floppy disks to each other. Once we got to know each other better, we shared photos and visited each other in person. It was a human way to create. Communication came at a high price and you had to put a lot of effort into it. Which is why we ensured that it was high quality, cordial communication. You had to wait weeks for an answer. You made sure that you said everything you wanted to say before you created the package and mailed it.

Speed cheapens communication

Fast forward to now. Communication is a dime a dozen. We’re connected all the time. Instead of spending time and effort to get to know others and explain and lead, we shout. We aim to be the most prolific communicator out there. Or to prove someone wrong who is loud and prolific. Or make fun of everything others spent a lot of effort on.

The immediacy and the ubiquity of communication are cheapening the way we act. We don’t feel like putting in much effort upfront. A stream of information gives you the impression that it is easy to undo a mistake right after you made it. But the constant barrage of information makes others in the conversation tune out. They only prick up their ears when something seems to be out of the ordinary. And this is where polarisation comes in. A drive to impress the audience in a positive way leads to unhealthy competitiveness. Giving out strong negative messages leads to witch hunts and Twitter drama. The systems we use have competition built in. They drive us to have to win and release the most amazing things all the time. Gamification is what social networks are about. You get likes and views and retweets and favorites. Writing more and interacting more gets you to a higher level of access and features. You’re encouraged to communicate using in-built features like emoji, GIFs and memes. It is not about accumulating sensible content and finding solutions. It is about keeping the conversations and arguments going. All to get more clicks, views and interactions to show us more ads – the simple way to monetise on the web.

What this model brings is drama and dogma. The more outrageous your views, the more interaction you cause. The more dogmatic your approach, the more ammunition you have proving others wrong. It is not about finding consensus, it is about winning. Winning meaningless internet points and burning quite a few bridges whilst you do it. And it is addictive. The more you do it, the higher the rush to win. And the more devastating the feeling of non-accomplishment when you don’t win.

The scattered landscape of today

Let’s zoom back in on what we do as web developers. Web development is a spectrum of approaches. I’ll try to explain this using Tetrominoes (that’s the official name of Tetris blocks).

<figure>All Tetris blocks – All the Tetrominoes, depicting the spectrum of web development approaches, from trusting and conservative to innovative and controlling</figure>

At one end we have a conservative group that doesn’t trust any browser or device and keeps everything on the server. At the other we have people who want to run everything on the client and replace web standards with their own abstractions. Conservatives allow the user to customise their device and environment to their needs. The other extreme is OK with blocking users who don’t fulfil certain criteria.

The following are not quotes by me. I encountered them over the years, anonymised and edited them for brevity. Let’s start.

The right, conservative block…

The Right Block

This is a group of people that most likely have been around the web for a long time. They’ve heard a lot of promises and seen new technology prove too flaky to use, over and over again. People who remember a web where we had to support terrible environments. A web where browsers were dumb renderers without any developer tools.

Here are some of the quotes I keep hearing:

You can’t trust the client, but you can trust your server.
Web standards are good, but let’s not go overboard with fancy things.

It is great that there are evergreen browsers, but that doesn’t apply to our customers…

This approach results in bullet-proof, working interfaces. Interfaces that are likely to bore end users. It seems clumsy these days to have to re-load a page on every interaction. And it is downright frustrating on a mobile connection. Mobile devices offer a lot of web browsing features using client-side technologies. What’s even worse is that this approach doesn’t reward end users who keep their environment up to date.

The right-leaning piece…

The right leaning block

These people are more excited about the opportunities client-side functionality gives you. But they tread carefully. They don’t want to break the web or lock out end users based on their abilities or technical setup.

Some quotes may be:

Progressive enhancement is the way to build working solutions. You build on static HTML generated from a backend and you will never break the user experience.
You have no right to block any user. The web is independent of device and ability. It is our job to make sure people can use our products. We do that by relying on standards.

Web products do not have to look and work the same in every environment.
What you get are working, great-looking interfaces that vary with the environment.

This is the best of both worlds. But it also means that you need to test and understand the limits of older environments. Browser makers learned a lot from this group, which wanted to make the web work and to get more control over what browsers do. That helps standards bodies a lot.

The Square

The Square

This is the group most concerned about the HTML we create. They are the people advocating for semantic HTML and sensible structures.

Some common phrases you’d hear from them are:

No matter what you do, HTML is always the thing the browser gets. HTML is fault tolerant and will work wherever and forever.
Using semantic HTML gives you a lot of things for free. Accessibility, caching, fast rendering. You can’t lose.

HTML is the final truth, that’s 100% correct. But we dropped the ball when HTML5 leap-frogged innovation. Advanced form interactions aren’t as reliable as we want them to be. Adding ARIA attributes to HTML elements isn’t a silver bullet to make them accessible. Browser makers should have spent more time fixing this. But instead, we competed with each other trying to be the most innovative browser. When it comes to basic HTML support, there was hardly any demand from developers to improve. HTML is boring to most developers and browsers are forgiving. This is why we drown in DIVs and SPANs. We keep having to remind people of the benefits of using real buttons, anchors and headings.

The straight and narrow…

The straight Block

This is a group advocating for a web that allows developers to achieve more by writing less code. They don’t want to have to worry about browser differences. They use libraries, polyfills and frameworks. These give us functionality not yet supported in browsers today.

Some famous ways of saying this are:

Browser differences are annoying and shouldn’t be in the way of the developer. That’s why we need abstraction libraries to fix these issues.
Understanding standards while they are still in the making is not something we have time for.
$library is so much easier – why don’t we add it to browsers?

There is no doubt that jQuery, Modernizr and polyfills are to thank for the web we have today. The issue is that far too many things depend on stop-gap solutions that never went away. Developers became dependent on them and never looked at the problems they solve. Problems that don’t exist in current browsers and yet we keep their fixes alive. That said, browser makers and standard creators learned a lot from these abstractions. We have quite a few convenience methods in JavaScript now because jQuery paved the way.
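
For example, a pattern that once needed jQuery can now be written with standard DOM APIs in current browsers. A rough sketch (the class names are made up for illustration), not a full drop-in replacement for everything the library does:

// jQuery style:
// $('.menu-item').addClass('active');

// The same thing with standard DOM APIs:
document.querySelectorAll('.menu-item').forEach(function (item) {
  item.classList.add('active');
});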

The T-Block

The T Block

In Tetris, this is the most versatile block. It helps you to remove lines or build a solid foundation for fixing a situation that seems hopeless. On the web, this is JavaScript. It is the only language that spans the whole experience. You can use it on the server and on the client. In an environment that supports JavaScript, you can create CSS and HTML with it.

This leads to a lot of enthusiastic quotes about it:

I can do everything in JavaScript. Every developer on the web should know it.
Using JavaScript, I can test if something I wanted to happen happened. There is no hoping that the browser did it right – we know.

JavaScript was the necessary part to make the web we have now happen. It can make things more accessible and much more responsive. You can even find out what’s possible in a certain environment and cater a fitting solution to it. It is a fickle friend though: many things can go wrong before the script loads and executes. And it is unforgiving. One error and nothing happens.

The innovation piece…

The Left Leaning Block

This group considers JavaScript support a given. They want to have development tool chains. Tooling as rich as that of other programming environments.

This leads to quotes like these:

It is OK to rely on JavaScript to be available. The benefits of computational values without reloads are too many to miss out on.

I don’t want to have to think about older browsers and broken environments. Frameworks and build processes can take care of that.

The concept of starting with a text editor and static files doesn’t exist any more. We get so many more benefits from using a proper toolchain. If that’s too hard for you, then you’re not a web developer.

Web standards like CSS, JavaScript and HTML are conversion targets. Sass, CoffeeScript, Elm, Markdown and Jade give us more reliable control right now. We should not wait until browsers catch up.

It is a waste of time to know about browser differences and broken implementations. It is much more efficient to start with an abstraction.

Developer convenience trumps end user experience here. This can result in bloated solutions. We know about that bloat and we create a lot of technologies to fix that issue. Browser makers can’t help much here, except by creating developer tools that connect the abstraction with the final output (sourcemaps).

The innovative blocker…

The Left Block

These are the bleeding edge developers. They want to re-invent what browsers and standards do as they’re not fast enough.

Common quotes coming from this end of the spectrum are:

Browsers and web standards are too slow and don’t give us enough control. We want to know what’s going on and control every part of the interface.
CSS is broken. The DOM is broken. We have the technologies in our evergreen browsers to fix all that reliably as we have insight into what’s happening and can optimise it.

This approach yields high fidelity, beautiful and responsive interfaces. Interfaces that lock out a large group of users as they demand a lot from the device they run on. We assume that everybody has access to a high end environment and we don’t cater to others. Any environment with a high level of control also comes with high responsibility. If you replace web technologies with your own, you are responsible for everything – including maintenance. Browser makers can take these new ideas and standardise them. The danger is that they will never get used once we have them.

<figure>Explanations – The more innovative you are, the more you have to be responsible for the interface working for everybody. The more conservative you are, the more you have to trust the browser to do the right thing.</figure>

Every single group in this spectrum has its place on the web. In their frame of reference, each results in better experiences for our users. The difference is responsibility and support.

If we create interfaces dependent on JavaScript, we’re responsible for making them accessible. If we rely on preprocessors, it is up to us to explain these to the maintainers of our products. We also demand more of the devices and connectivity of our end users. This can block out quite a large chunk of people.

The less we rely on browsers and devices, the more we allow end users to customise the interface to their needs. Our products run everywhere. But are they also delivering an enjoyable experience? If I have a high-end device with an up-to-date, evergreen browser I don’t want an interface that was hot in 1998.

Who defines what and who is in charge?

Who defines who is allowed to use our products?

Who has the final say how an interface looks and what it is used for?

The W3C addressed this problem quite a long time ago. In its design principles you can find this gem:

Users over authors over implementors over specifiers over theoretical purity…

If our users are the end goal, findings and best practices should go both ways. Advocates of a sturdy web can learn from innovators and vice versa. We’re not quite doing that. Instead we build silos. Each of the approaches has dedicated conferences, blogs, Slack channels and communities. Often our argument for why one approach is better amounts to discrediting the other. This is not helpful in the long run.

This is in part based on the developer mindset. We seem to be hard-wired to fix problems, as fast as possible, and with a technological approach. Often this means that we fix some smaller issue and cause a much larger problem.

How great is the web for developers these days?

It is time to understand that we work in a creative, fast moving space that is in constant flux. It is a mess, but mess can be fun and working in a creative space needs a certain attitude.

Creatives in the film industry know this. In the beautiful documentary about the making of Disney’s Zootopia the creative director explains this without mincing words:

As a storyboard artist at Disney you know that most of your work will be thrown away. The idea is to fail fast and fail often and create and try out a lot of ideas. Using this process you find two or three great ideas that make the movie a great story. (paraphrased)

Instead of trying to fix all the small niggles, let’s celebrate how far we’ve come with the standards and technologies we use. We have solid foundations.

  • JavaScript: JavaScript is eating the world. We use it for everything from creating interfaces on the web to building APIs and web servers. We write lots of tools in it to build our solutions. JavaScript engines are Open Source and embeddable. NodeJS and NPM allow us to build packages and re-use them on demand. Alongside ES6 we got much more solid DOM access and traversal methods. Inspired by jQuery, we have querySelector(), classList and many more convenience methods. We even replaced the unwieldy XMLHttpRequest with fetch(). And the concept of Promises and async/await allows us to build much more responsive systems (a short sketch follows this list).
  • CSS: CSS evolved beyond choosing fonts and setting colours. We have transitions to get from one unknown state to another in a smooth fashion. We have animations to aid our users along an information flow. Both of these fire events to interact with JavaScript. We have Flexbox and Grids to lay out elements and pages. And we have Custom Properties, which are variables in CSS but so much more. Custom Properties are a great way to have CSS and JavaScript interact. You change the property value instead of having to add and remove classes on parent elements. Or – even worse – having to manipulate inline styles.
  • Progressive Web Apps: The concept of Progressive Web Apps is an amazing opportunity. By creating a manifest file we can define that what we built is an app and not a site. That way User Agents and Search Engines can offer install interfaces. And browsers can allow for access to device and Operating System functionality. Service Workers offer offline functionality and work around the problem of unreliable connectivity. They also allow us to convert content on the fly before loading and showing the document. Notifications can make our content stickier without even having to show the apps.
  • Tooling: Our developer tools have also come on leaps and bounds. Every browser is also a debugging environment. We have hackable editors written in web technologies. Toolchains allow us to produce what we need when it makes sense. No more sending code to environments that don’t even know how to execute it.
  • Information availability: Staying up-to-date is also simple these days. Browser makers are available for feedback and information. We have collaboration tools by the truckload. We have more events than we can count and almost all publish videos of their talks.
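
As a small, illustrative sketch of some of these conveniences working together – the element ID and the endpoint URL are invented for the example, not taken from any real project:

// Load a list of posts and show their titles, using fetch() and async/await.
async function showLatestPosts() {
  const list = document.querySelector('#posts');      // jQuery-inspired selector access
  list.classList.add('loading');
  try {
    const response = await fetch('/api/posts.json');  // fetch() instead of XMLHttpRequest
    const posts = await response.json();
    list.textContent = posts.map(post => post.title).join(', ');
  } finally {
    list.classList.remove('loading');                 // runs whether the request worked or not
  }
}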

It is time for us to fill the gaps. It is time we realise that not everything we do has to last forever and needs to add up to a perfect solution. It is OK for our accomplishments to vanish. It is not OK for them to become landfill of the web.

Our job right now is to create interfaces that are simple, human and fun to use. There is no such thing as a perfect user – we need to think inclusive. It isn’t about allowing access but about avoiding barriers.

We have come a long way. We made the world much smaller and more connected. Let’s stop fussing over minor details, and show more love to the web, love to the craft and much more respect for one another. Things are looking up, and I can’t wait to see what we – together – will come up with next.

You don’t owe the world perfection, but you have a voice that should be heard and your input matters. Get creative – no creativity is a waste.


Planet Mozilla: "When in doubt, WebIDL"

Yesterday I was fixing a bug in some IndexedDB code we use for Thimble's browser-based filesystem. When I'm writing code against an API like IndexedDB, I love knowing the right way to use it. The trouble is, it can be difficult to find the proper use of an API, or in my case, its edge cases.

Let's take a simple question: should I wrap certain IndexedDB methods in try...catch blocks? In other words, will this code ever throw?

There are a number of obvious ways to approach this problem, from reading the docs on MDN to looking for questions/answers on StackOverflow to reading open source code on GitHub for projects that do similar things to you.

I did all of the above, and the problem is that people don't always know, or at least don't always bother to do the right thing in all cases. Some IndexedDB wrapper libraries on GitHub I read do use try...catch, others don't; some docs discuss it, others gloss over it. Even DOM tests I read sometimes use it, and sometimes don't bother. I hate that level of ambiguity.

Another option is to ask someone smart, which I did yesterday. I thought the reply I got from @asutherland was a great reminder of something I'd known, but didn't think to use in this case:

"When in doubt, WebIDL"

WebIDL is an Interface Definition Language (IDL) for the web that defines interfaces (APIs), their methods, properties, etc. It also clearly defines what might throw an error at runtime.

While implementing DOM interfaces in Gecko, I've had to work with IDL in the past; however, it hadn't occurred to me to consult it in this case, as a user of an API vs. its implementer. I thought it was an important piece of learning, so I wanted to write about it.

Let's take the case of opening a database using IndexedDB. To do this, you need to do something like this:

var db;  
var request = window.indexedDB.open("data");  
request.onerror = function(event) {  
  handleError(request.error);
};
request.onsuccess = function() {  
  db = request.result;
};

Now, given that indexedDB.open() returns an IDBOpenDBRequest object to which one can attach an onerror handler, it's easy to think that you've done everything correctly simply by handling the error case.

However, window.indexedDB implements the IDBFactory interface, so we can also check its WebIDL definition to see if the open method might also throw. It turns out that it does:

/**
 * Interface that defines the indexedDB property on a window.  See
 * http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#idl-def-IDBFactory
 * for more information.
 */
[Exposed=(Window,Worker,System)]
interface IDBFactory {  
  [Throws, NeedsCallerType]
  IDBOpenDBRequest
  open(DOMString name,
       [EnforceRange] unsigned long long version);
...

This is really useful to know, since we should really be wrapping this call in a try...catch if we care about failing in ways to which a user can properly respond (e.g., showing error messages, retrying, etc.).
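
A defensive version of the earlier snippet might look like the sketch below – handleError and db are the same placeholders used above, and this is one way to structure it rather than the official pattern:

var db;
try {
  var request = window.indexedDB.open("data");
  request.onerror = function() {
    // Asynchronous failures still arrive through the request's error event.
    handleError(request.error);
  };
  request.onsuccess = function() {
    db = request.result;
  };
} catch (err) {
  // Synchronous failures (bad arguments, security errors, ...) are thrown by open() itself.
  handleError(err);
}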

Why might it throw? You can see for yourself in the source for WebKit or Gecko, everything from:

  • Passing a database version number of 0
  • Not specifying a database name
  • Failing a security check (e.g., origin check)
  • The underlying database can't be opened for some reason

The list goes on. Try it yourself, by opening your dev console and entering some code that is known to throw, based on what we just saw in the code (e.g., no database name, using 0 as a version):

indexedDB.open()  
Uncaught TypeError: Failed to execute 'open' on 'IDBFactory': 1 argument required, but only 0 present.  
...
indexedDB.open("name", 0)  
Uncaught TypeError: Failed to execute 'open' on 'IDBFactory': The version provided must not be 0.  

The point is, it might throw in certain cases, and knowing this should guide your decisions about how to use it. The work I'm doing relates to edge case uses of a database, in particular when disk space is low, the database is corrupted, and other things we're still not sure about. I want to handle these errors, and try to guide users toward a solution. To accomplish that, I need try...catch.

When in doubt, WebIDL.

Planet Mozilla: Try to find the patch which caused a crash.

For some categories of crashes, we are automatically able to pinpoint the patch which introduced the regression.

The issue

Developers make mistakes, not because they’re bad but most of the time because the code is complex and sometimes just because the modifications they made are so trivial that they don’t pay too much attention.

In parallel, the sooner we can catch these mistakes, the easier it is for developers to fix them. In the end, this strongly improves the user experience.
Indeed, if developers are quickly informed about new regressions introduced by their changes, it becomes much easier for them to fix issues as they still remember the changes.

How do we achieve that?

When a new crash signature shows up, we retrieve the stack trace of the crash, i.e. the sequence of called functions which led to the crash: https://crash-stats.mozilla.com/report/index/53b199e7-30f5-4c3d-8c8a-e39c82170315#frames .

For each function, we have the file name where it is defined and the mercurial changeset from which Firefox was built, so by querying https://hg.mozilla.org it is possible to know what the last changes on this file were.

The strategy is the following:

  1. we retrieve the crashes which just appeared in the last nightly version (no crash in the last three days);
  2. we bucketize crashes by their proto-signature;
  3. for each bucket, we get a crash report and then get the functions and files which appear in the stack trace;
  4. for each file, we query mercurial to know if a patch has been applied to this file in the last three days.

The last stage is to analyze the stack traces and the corresponding patches to infer that a patch is probably responsible for a crash, and finally to report a bug.
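
A minimal sketch of how these steps could fit together. Every helper here (fetchNewNightlyCrashes, groupBy, fetchCrashReport, lastChangesTo) is a hypothetical placeholder for the real crash-stats and hg.mozilla.org queries, not part of the actual tooling:

// The helpers used below are placeholders, not real APIs.
async function findSuspectPatches() {
  // Step 1: crashes that only appeared in the last Nightly (no crash in the last three days).
  const crashes = await fetchNewNightlyCrashes({ quietDays: 3 });
  // Step 2: bucketize the crashes by their proto-signature.
  const buckets = groupBy(crashes, crash => crash.protoSignature);
  const suspects = [];
  for (const [signature, bucket] of buckets) {
    // Step 3: take one crash report per bucket and walk its stack trace.
    const report = await fetchCrashReport(bucket[0].id);
    for (const frame of report.frames) {
      // Step 4: ask Mercurial whether this file was touched in the last three days.
      const patches = await lastChangesTo(frame.file, { days: 3 });
      if (patches.length) {
        suspects.push({ signature, frame, patches });
      }
    }
  }
  return suspects;
}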

Results

As an example:

https://bugzilla.mozilla.org/show_bug.cgi?id=1347836

The patch https://hg.mozilla.org/mozilla-central/diff/99e3488b1ea4/layout/base/nsLayoutUtils.cpp modified the function nsLayoutUtils::SurfaceFromElement and the crash occurred in this function (https://crash-stats.mozilla.com/report/index/53b199e7-30f5-4c3d-8c8a-e39c82170315#frames), a few lines after the modified line.

Finally the issue was a function which returned a pointer which could be dangling (the patch).

https://bugzilla.mozilla.org/show_bug.cgi?id=1347461

The patch https://hg.mozilla.org/mozilla-central/diff/bf33ec027cea/security/manager/ssl/DataStorage.cpp modified the line where the crash occurred (https://crash-stats.mozilla.com/report/index/c7ba45aa-99a9-448b-91df-37da82170314#frames).

Finally the issue was an attempt to use an uninitialized object.

https://bugzilla.mozilla.org/show_bug.cgi?id=1342217

The patch https://hg.mozilla.org/mozilla-central/diff/e6fa8ff0d0be/dom/media/platforms/wrappers/MediaDataDecoderProxy.cpp added the function where the crash occurred (https://crash-stats.mozilla.com/report/index/6a96375e-5c83-4ebe-9078-2d4472170222#frames).

Finally the issue was just a missing return in a function (the patch).

In these different bugs the crash volume is very low, so almost nobody cares about them, but they reveal real mistakes in the code, and the volume could be higher in beta or release.
For the future, we hope that it will be possible to automate most of that process and file bugs automatically.

Planet Mozilla: I’ve stopped hiking the PCT

Every year’s PCT is different. The path routinely changes, mostly as fresh trail is built to replace substandard trail (or no trail at all, if the prior trail was a road walk). But the reason for an ever-shifting PCT, even within hiking seasons, should be obvious to any California resident: fires.

The 2013 Mountain Fire near Idyllwild substantially impacted the nearby trail. A ten-mile stretch of trail is closed to hikers even today, apparently (per someone I talked to at the outfitter in Idyllwild) because the fire burned so hot that essentially all organic matter was destroyed, so they have to rebuild the entire trail to have even a usable tread. (The entire section is expected to open midsummer next year – too late for northbound thru-hikers, but maybe not for southbounders.)

These circumstances lead to an alternate route for hikers to use. Not all do: many hikers simply hitchhike past the ten closed miles and the remote fifteen miles before them.

But technically, there’s an official reroute, and I’ve nearly completed it. Mostly it involves lots of walking on the side of roads. The forest service dirt roads are rough but generally empty, so not the worst. The walking on a well-traveled highway with no usable shoulder, however, was the least enjoyable hiking I’ve ever done. (I mean ever.) I really can’t disagree with the people who just hitchhiked to Idyllwild and skipped it all.

I’ll be very glad to get back on the real trail several miles from now.

Planet Mozilla: Webdev Beer and Tell: May 2017

Once a month, web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Planet Mozilla: Showcasing your WebVR experiences

WebVR combines the powerful reach of the Internet with the immersive appeal of virtual reality content. With WebVR, a VR experience is never more than one URL away. Nevertheless, VR equipment is still very expensive and not quite fully adopted for consumer use. For this reason, it is useful to be able to record your VR projects for others to experience and enjoy, at least from a viewer perspective.

Recording VR content

This video tutorial teaches you how to record a virtual experience you’ve created using the mirror mode in SteamVR. Capturing one eye allows your audience to enjoy the video in a regular 2D mode but capturing both eyes will enable a more immersive experience thanks to stereoscopic video.

This video tutorial assumes you have a SteamVR compatible setup and Open Broadcast Software installed.

There are other options for capturing your VR experiences. If you’re a Windows 10 user, perhaps you prefer to use Game DVR, which works out of the box.

Extract a GIF from your favorite cut

Now that you have a video with your VR content, you can make a GIF from it with Instagiffer for Windows. Instagiffer is not the fastest software out there, but the output quality of the GIFs is superb.

Start by installing and launching Instagiffer. The UI is split into three sections or steps.

A window with three sections for choosing the video, settings and preview

Click on Load Video in the Step 1 section, and select the video from which you want to extract the GIF.

When clicking load video, the Windows file selection dialog appears

Locate the sequence you want to convert into a GIF and fill the options in the Step 2 section. In this case, I want to extract 5.5 seconds of video starting from second 18; a sequence in which I shot an enemy bullet.

Three slide bars allow you to modify the start time, framerate (smoothness) and frame size. A text box indicates the length of the clip.

Length, Smoothness and Frame Size will affect the size of your GIF: the higher the values, the higher the size of the resulting file.

In Step 3 section, you can crop the image by dragging the square red helpers. In this case, I’m removing the black bands around the video. You can also use it to isolate each eye.

A red rectangle with two handlers in each corner represent the cropping area

Notice that the size of the GIF is shown in the bottom-right corner of the preview. You can adjust this size by moving the Frame Size slider in the Step 2 section.

Finally, click on the Create GIF! button at the bottom of the window to start the conversion.

A progress bar shows how much time remains until completion

One of the things I love about Instagiffer is that, after finishing, it will display compatibility warnings about the GIF, testing on some of the most popular Internet services.

The notice shows warnings for Tumblr, Imgur and Twitter, pointing out problems with sizes and dimensions

Click on the final result to see the animation. It’s really good!

Capture of A-Blast gameplay

If you are more into old-school tools, check out Kevin’s CLI utility Gifpardy and see how it goes.

Make a 3D YouTube video

One of the advantages of recording both eyes is that you can assemble stereoscopic side-by-side 3D videos. You can use YouTube, for instance.

Just upload your video and edit it. Go to the Advanced settings tab inside the Info & Settings view.

Browser content screenshot at Info & Settings tab of a YouTube video

Check the box that says This video is 3D and select Side by side: Left video on the left side in the combo box.

Checkbox for enabling 3D video. A warning reads:

The deprecation warning encourages you to do this step offline, with your favorite video editor.

Once you are done, YouTube will select the best option for displaying 3D content, applying the proper filters or corrections as needed.

For instance, you’ll see an anaglyph representation when viewing your video with the Firefox browser on desktop.

An anaglyph red/green representation of the 3D video

You can switch to a 2D representation as well.

Regular 2D representation chooses only one eye to show

When you view the video with Firefox for Android you will see both eyes side by side.

Video on Firefox Android is shown side by side with no distortion (as the original video)

And if you try with the YouTube native app, an icon for Cardboard/Daydream VR will appear, transporting you to a virtual cinema where you can enjoy the clip.

In the YouTube app, a Cardboard is shown in the bottom-right corner to enter VR mode

Theater mode applies the proper distortion to each eye and provides a cinematic view

In conclusion

Virtual reality is not widely adopted or easily accessible yet, but the tools are available now to reach more people and distribute your creative work by recording your WebVR demos in video. Discover VR galleries on Twitter, GIPHY or Tumblr, choose your best clips and share them!

Do you prefer high quality video? Check out the VR content on YouTube or Vimeo.

At Mozilla, we support the success of WebVR and aim to demonstrate that people can share and enjoy virtual reality experiences on the Web! Please share your WebVR projects with the world. We’d love to see what you’re making. Let us know on Twitter by tagging your project with #aframevr, and we’ll RT it! Follow @AframeVR and @MozillaVR for the latest developments and new creative work.

Planet Mozilla: Gecko And Native Profiler

Ehsan Akhgari: Gecko And Native Profiler. May 18, 2017, with Ehsan and Markus. The etherpad is here: https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Gecko_Profiler_FAQ

Planet Mozilla: curl: 5000 stars

The curl project has been hosted on github since March 2010, and we just now surpassed 5000 stars. A star on github of course being a completely meaningless measurement for anything. But it does show that at least 5000 individuals have visited the page and shown appreciation this low impact way.

5000 is not a lot compared to the really popular projects.

On August 12, 2014 I celebrated us passing 1000 stars with a beer. Hopefully it won’t be another seven years until I can get my 10000 stars beer!

Planet Mozilla: Quantum Flow Engineering Newsletter #10

Let’s start this week’s updates by looking at the ongoing efforts to improve the usefulness of the background hang reports data.  With Ben Miroglio’s help, we confirmed that we aren’t blowing up telemetry ping sizes yet by sending native stack traces for BHR hangs, and as a result we can now capture a deeper call stack depth, which means the resulting data will be easier to analyze.  Doug Thayer has also been hard at work creating a new BHR dashboard based on the perf-html UI.  You can see a sneak peek here, but do note that this is work in progress!  The raw BHR data is still available for your inspection.
Kannan Vijayan has been working on adding some low level instrumentation to SpiderMonkey in order to get some detailed information on the relative runtime costs of various builtin intrinsic operations inside the JS engine in various workloads using the rdtsc instruction on Windows.  He now has a working setup that allows him to take a real world JS workload and get some detailed data on what builtin intrinsics were the most costly in that workload.  This is extremely valuable because it allows us to focus our optimization efforts on these builtins where the most gains are to be achieved first.  He already has some initial results of running this tool on the Speedometer benchmark and on a general browsing workload and some optimization work has already started to happen.
Dominik Strohmeier has been helping with running startup measurements on the reference Acer machine to track the progress of the ongoing startup improvements, using an HDMI video capture card.  For these measurements, we are tracking two numbers: one is the first paint time (the time at which we paint the first frame of the browser window) and the other is the hero element time (the time at which we paint the “hero element”, which is the search box in about:home in this case).  The baseline build here is the Nightly of Apr 1st, as a date before active work on startup optimizations started.  At that time, our median first paint time was 1232.84ms (with a standard deviation of 16.58ms) and our hero element time was 1849.26ms (with a standard deviation of 28.58ms).  On the Nightly of May 18, our first paint time is 849.66ms (with a standard deviation of 11.78ms) and our hero element time is 1616.02ms (with a standard deviation of 24.59ms).
Next week we’re going to have a small work week with some people from the DOM, JS, Layout, Graphics and Perf teams here in Toronto.  I expect to be fully busy at the work week, so you should expect the next issue of this newsletter in two weeks!  With that, it is time to acknowledge the hard work of those who helped make Firefox faster this past week.  I hope I’m not dropping any names by accident!

Planet Mozilla: Teaching Programming: Proactive vs Reactive

I’ve been thinking about this a lot these days. In part because of an idea I had but also due to this twitter discussion.

When teaching most things, there are two non-mutually-exclusive ways of approaching the problem. One is “proactive”1, which is where the teacher decides a learning path beforehand, and executes it. The other is “reactive”, where the teacher reacts to the student trying things out and dynamically tailors the teaching experience.

Most in-person teaching experiences are a mix of both. Planning beforehand is very important whilst teaching, but tailoring the experience to the student’s reception of the things being taught is important too.

In person, you can mix these two, and in doing so you get a “best of both worlds” situation. Yay!

But … we don’t really learn much programming in a classroom setup. Sure, some folks learn the basics in college for a few years, but everything they learn after that isn’t in a classroom situation where this can work2. I’m an autodidact, and while I have taken a few programming courses for random interesting things, I’ve taught myself most of what I know using various sources. I care a lot about improving the situation here.

With self-driven learning we have a similar divide. The “proactive” model corresponds to reading books and docs. Various people have proactively put forward a path for learning in the form of a book or tutorial. It’s up to you to pick one, and follow it.

The “reactive” model is not so well-developed. In the context of self-driven learning in programming, it’s basically “do things, make mistakes, hope that Google/Stackoverflow help”. It’s how a lot of people learn programming; and it’s how I prefer to learn programming.

It’s very nice to be able to “learn along the way”. While this is a long and arduous process, involving many false starts and a lack of a sense of progress, it can be worth it in terms of the kind of experience this gets you.

But as I mentioned, this isn’t as well-developed. With the proactive approach, there still is a teacher – the author of the book! That teacher may not be able to respond in real time, but they’re able to set forth a path for you to work through.

On the other hand, with the “reactive” approach, there is no teacher. Sure, there are Random Answers on the Internet, which are great, but they don’t form a coherent story. Neither can you really be your own teacher for a topic you do not understand.

Yet plenty of folks do this. Plenty of folks approach things like learning a new language by reading at most two pages of docs and then just diving straight in and trying stuff out. The only language I have not done this for is the first language I learned3 4.

I think it’s unfortunate that folks who prefer this approach don’t get the benefit of a teacher. In the reactive approach, teachers can still tell you what you’re doing wrong and steer you away from tarpits of misunderstanding. They can get you immediate answers and guidance. When we look for answers on stackoverflow, we get some of this, but it also involves a lot of pattern-matching on the part of the student, and we end up with a bad facsimile of what a teacher can do for you.

But it’s possible to construct a better teacher for this!

In fact, examples of this exist in the wild already!

The Elm compiler is my favorite example of this. It has amazing error messages

The error messages tell you what you did wrong, sometimes suggest fixes, and help correct potential misunderstandings.

Rust does this too. Many compilers do. (Elm is exceptionally good at it)

One thing I particularly like about Rust is that from that error you can try rustc --explain E0373 and get a terminal-friendly version of this help text.

Anyway, diagnostics basically provide a reactive component to learning programming. I’ve cared about diagnostics in Rust for a long time, and I often remind folks that many things taught through the docs can/should be taught through diagnostics too. Especially because diagnostics are a kind of soapbox for compiler writers — you can’t guarantee that your docs will be read, but you can guarantee that your error messages will. These days, while I don’t have much time to work on stuff myself I’m very happy to mentor others working on improving diagnostics in Rust.

Only recently did I realize why I care about them so much – they cater exactly to my approach to learning programming languages! If I’m not going to read the docs when I get started and try the reactive approach, having help from the compiler is invaluable.

I think this space is relatively unexplored. Elm might have the best diagnostics out there, and as diagnostics (helping all users of a language – new and experienced), they’re great, but as a teaching tool for newcomers; they still have a long way to go. Of course, compilers like Rust are even further behind.

One thing I’d like to experiment with is a first-class tool for reactive teaching. In a sense, clippy is already something like this. Clippy looks out for antipatterns, and tries to help teach. But it also does many other things, and not all are teaching moments are antipatterns.

For example, in C, this isn’t necessarily an antipattern:

struct thingy *result;
if (result = do_the_thing()) {
    frob(*result);
}

Many C codebases use if (foo = bar()). It is a potential footgun if you confuse it with ==, but there’s no way to be sure. Many compilers now have a warning for this that you can silence by doubling the parentheses, though.

In Rust, this isn’t an antipattern either:

fn add_one(mut x: u8) {
    x += 1;
}

let num = 0;
add_one(num);
// num is still 0

For someone new to Rust, they may feel that the way to have a function mutate arguments (like num) passed to it is to use something like mut x: u8. What this actually does is copy num (because u8 is a Copy type), and allow you to mutate the copy within the scope of the function. The right way to make a function that mutates arguments passed to it by-reference would be to do something like fn add_one(x: &mut u8). If you try the mut x thing for non-Copy values, you’d get a "use of moved value" error when you try to access num after calling add_one. This would help you figure out what you did wrong, and potentially that error could detect this situation and provide more specific help.

But for Copy types, this will just compile. And it’s not an antipattern – the way this works makes complete sense in the context of how Rust variables work, and is something that you do need to use at times.

So we can’t even warn on this. Perhaps in “pedantic clippy” mode, but really, it’s not a pattern we want to discourage. (At least in the C example that pattern is one that many people prefer to forbid from their codebase)

But it would be nice if we could tell a learning programmer “hey, btw, this is what this syntax means, are you sure you want to do this?”. With explanations and the ability to dismiss the error.

In fact, you don’t even need to restrict this to potential footguns!

You can detect various things the learner is trying to do. Are they probably mixing up String and &str? Help them! Are they writing a trait? Give a little tooltip explaining the feature.

This is beginning to remind me of the original “office assistant” Clippy, which was super annoying. But an opt-in tool or IDE feature which gives helpful suggestions could still be nice, especially if you can strike a balance between being so dense it is annoying and so sparse it is useless.

It also reminds me of well-designed tutorial modes in games. Some games have a tutorial mode that guides you through a set path of doing things. Other games, however, have a tutorial mode that will give you hints even if you stray off the beaten path. Michael tells me that Prey is a recent example of such a game.

This really feels like it fits the “reactive” model I prefer. The student gets to mold their own journey, but gets enough helpful hints and nudges from the “teacher” (the tool) so that they don’t end up wasting too much time and can make informed decisions on how to proceed learning.

Now, rust-clippy isn’t exactly the place for this kind of tool. This tool needs the ability to globally “silence” a hint once you’ve learned it. rust-clippy is a linter, and while you can silence lints in your code, you can’t silence them globally for the current user. Nor does that really make sense.

But rust-clippy does have the infrastructure for writing stuff like this, so it’s an ideal prototyping point. I’ve filed this issue to discuss this topic.

Ultimately, I’d love to see this as an IDE feature.

I’d also like to see more experimentation in the department of “reactive” teaching — not just tools like this.

Thoughts? Ideas? Let me know!

thanks to Andre (llogiq) and Michael Gattozzi for reviewing this


  1. This is how I’m using these terms. There seems to be precedent in pedagogy for the proactive/reactive classification, but it might not be exactly the same as the way I’m using it.

  2. This is true for everything, but I’m focusing on programming (in particular programming languages) here.

  3. And when I learned Rust, it only had two pages of docs, aka “The Tutorial”. Good times.

  4. I do eventually get around to doing a full read of the docs or a book but this is after I’m already able to write nontrivial things in the language, and it takes a lot of time to get there.

Planet Mozilla: Photon Engineering Newsletter #1

Well, hello there. Let’s talk about the state of Photon, the upcoming Firefox UI refresh! You’ve likely seen Ehsan’s weekly Quantum Flow updates. They’re a great summary of the Quantum Flow work, so I’m just going to copy the format for Photon too. In this update I’ll briefly cover some of the notable work that’s happened up through the beginning of May. I hope to do future updates on a weekly basis.

Our story so far

Up until recently, the Photon work hasn’t been very user-visible. It’s been lots of planning, discussion, research, prototypes, and foundational work. But now we’re at the point where we’re getting into full-speed implementation, and you’ll start to see things changing.

Photon is landing incrementally between now and Firefox 57. It’s enabled by default on Nightly, so you won’t need to enable any special settings. (Some pieces may be temporarily disabled-by-default until we get them up to a Nightly level of quality, but we’ll enable them when they’re ready for testing.) This allows us to get as much testing as possible, even in versions that ultimately won’t ship with Photon. But it does mean that Nightly users will only gradually see Photon changes arriving, instead of a big splash with everything arriving at once.

For Photon work that lands on Nightly-55 or Nightly-56, we’ll be disabling almost all Photon-specific changes once those versions are out of Nightly. In other words, Beta-55 and Beta-56 (and of course the final release versions, Firefox-55 and Firefox-56). That’s not where we’re actively developing or fixing bugs – so if you want to try out Photon as it’s being built, you should stick with Nightly. Users on Beta or Release won’t see Photon until 57 starts to ship on those channels later this year.

The Photon work is split into 6 main areas (which is also how the teams implementing it are organized). These are, briefly:

1. Menus and structure – Replaces the existing application menu (“Hamburger button”) with a simplified linear menu, adds a “page action” menu, changes the bookmarks split-button to be a more general-purpose “library menu”, updates sidebars, and more.

2. Animation – Adds animation to toolbar button actions, and improves animations/transitions of other UI elements (like tabs and menus).

3. Preferences – Reorganizes the Firefox preferences UI to improve organization and adds the ability to search.

4. Visual redesign – This is a collection of other visual changes for Photon. Updating icons, changing toolbar buttons, adapting UI size when using touchscreens, and many other general UI refinements.

5. Onboarding – An important part of the Photon experience is helping new users understand what’s great about Firefox, and showing existing users what’s new and different in 57.

6. Performance – Performance is a key piece throughout Photon, but the Performance team is helping us to identify what areas of Firefox have issues. Some of this work overlaps with Quantum Flow, other work is improve specific areas of Firefox UI jank.

Recent Changes

These updates are going to focus more on the work that’s landing and less on the process that got it there. To start getting caught up, here’s a summary of what’s happened so far in each of the project areas though early May…

Menus/structure: Work is underway to implement the new menus. It’s currently behind a pref until we have enough implemented to turn them on without making Nightly awkward to use. In bug 1355331 we briefly moved the sidebar to the right side of the window instead of the left. But we’ve since decided that we’re only going to provide a preference to allow putting it on the right, and it will remain on the left by default.

Animation: In bug 1352069 we consolidated some existing preferences into a single new toolkit.cosmeticAnimations.enabled preference, to make it easy to disable non-essential animations for performance or accessibility reasons. Bugs 1345315 and 1356655 reduced jank in the tab opening/closing animations. The UX team is finalizing the new animations that will be used in Photon, and the engineering team has built prototypes for how to implement them in a way that performs well.

Preferences: Earlier in the year, we worked with a group of students at Michigan State University to reorganize Firefox’s preferences and add a search function (bug 1324168). We’re now completing some final work, preparing for a second revision, and adding some new UI for performance settings. While this is now considered part of Photon, it was originally scheduled to land in Firefox 55 or 56, and so will ship before the rest of Photon.

Visual redesign: Bug 1347543 landed a major change to the icons in Firefox’s UI. Previously the icons were simple PNG bitmaps, with different versions for OS variations and display scaling factors. Now they’re a vector format (SVG), allowing a single source file to be rendered within Firefox at different sizes or with different colors. You won’t notice this change, because we’re currently using SVG versions of the current pre-Photon icons. We’ll switch to the final Photon icons later, for Firefox 57. Another big foundational piece of work landed in bug 1352364, which refactored our toolbar button CSS so that we can easily update it for Photon.

Onboarding: The onboarding work got started later than other parts of Photon. So while some prototyping has started, most of the work up to May was spent finalizing the scope and design of the project.

Performance: As noted in Ehsan’s Quantum updates, the Photon performance work has already resulted in a significant improvement to Firefox startup time. Other notable fixes have made closing tabs faster, and work to improve how favicons are stored improved performance on our Tp5 page-load benchmark by 30%! Other fixes have reduced awesomebar jank. While a number of performance bugs have been fixed (of which these are just a subset), most of the focus so far has been on profiling Firefox to identify lots of other things to fix. And it’s also worth noting the great Performance Best Practices guide Mike Conley helped put together, as well as his Oh No! Reflow! add-on, which is a useful tool for finding synchronous reflows in Firefox UI (which cause jank).

That’s it for now! The next couple of these Photon updates will catch up with what’s currently underway.


Planet Mozilla: rr Usenix Paper And Technical Report

Our paper about rr has been accepted to appear at the Usenix Annual Technical Conference. Thanks to Dawn for suggesting that conference, and to the ATC program committee for accepting it :-). I'm scheduled to present it on July 13. The paper is similar to the version we previously uploaded to arXiv.

Some of the reviewers requested more material: additional technical details, explanations of some of our design choices compared to alternatives, and reflection on our "real world" experience with rr. There wasn't space for that within the conference page limits, so our shepherd suggested publishing a tech-report version of the paper with the extra content. I've done that and uploaded "Engineering Record And Replay For Deployability: Extended Technical Report". I hope some people find it interesting and useful.

Planet Mozilla: The plan towards offering Adblock Plus for Firefox as a Web Extension

TL;DR: Sometime in autumn this year the current Adblock Plus for Firefox extension is going to be replaced by another, which is more similar to Adblock Plus for Chrome. Brace for impact!

What are Web Extensions?

At some point, Web Extensions are supposed to become a new standard for creating browser extensions. The goal is writing extensions in such a way that they could run on any browser without any or only with minimal modifications. Mozilla and Microsoft are pursuing standardization of Web Extensions based on Google Chrome APIs. And Google? Well, they aren’t interested. Why should they be, if they already established themselves as an extension market leader and made everybody copy their approach.

It isn’t obvious at this point how Web Extensions will develop. The lack of interest from Google isn’t the only issue here; so far the implementation of Web Extensions in Mozilla Firefox and Microsoft Edge shows very significant differences as well. It is worth noting that Web Extensions are necessarily less powerful than the classic Firefox extensions, even though many shortcomings can probably be addressed. Also, my personal view is that the differences between browsers are either going to result in more or less subtle incompatibilities or in an API which is limited by the lowest common denominator of all browsers and not good enough for anybody.

So why offer Adblock Plus as a Web Extension?

Because we have no other choice. Mozilla’s current plan is that Firefox 57 (scheduled for release on November 14, 2017) will no longer load classic extensions, and only Web Extensions are allowed to continue working. So we have to replace the current Adblock Plus by a Web Extension by then or ideally even by the time Firefox 57 is published as a beta version. Otherwise Adblock Plus will simply stop working for the majority of our users.

Mind you, there is no question why Mozilla is striving to stop supporting classic extensions. Due to their deep integration in the browser, classic extensions are more likely to break browser functionality or to cause performance issues. They’ve also been delaying important Firefox improvements due to compatibility concerns. This doesn’t change the fact that this transition is very painful for extension developers, and many existing extensions won’t clear this hurdle. Furthermore, it would have been better if the designated successor of the classic extension platform were more mature by the time everybody is forced to rewrite their code.

What’s the plan?

Originally, we hoped to port Adblock Plus for Firefox properly. While using Adblock Plus for Chrome as a starting point would require far less effort, this extension also has much less functionality compared to Adblock Plus for Firefox. Also, when developing for Chrome we had to make many questionable compromises that we hoped to avoid with Firefox.

Unfortunately, this plan didn’t work out. Adblock Plus for Firefox is a large codebase and rewriting it all at once without introducing lots of bugs is unrealistic. The proposed solution for a gradual migration doesn’t work for us, however, due to its asynchronous communication protocols. So we are using this approach to start data migration now, but otherwise we have to cut our losses.

Instead, we are using Adblock Plus for Chrome as a starting point, and improving it to address the functionality gap as much as possible before we release this version for all our Firefox users. For the UI this means:

  • Filter Preferences: We are working on a more usable and powerful settings page than what is currently being offered by Adblock Plus for Chrome. This is going to be our development focus, but it is still unclear whether advanced features such as listing filters of subscriptions or groups for custom filters will be ready by the deadline.
  • Blockable Items: Adblock Plus for Chrome offers comparable functionality, integrated in the browser’s Developer Tools. Firefox currently doesn’t support Developer Tools integration (bug 1211859), but there is still hope for this API to be added by Firefox 57.
  • Issue Reporter: We have plans for reimplementing this important functionality. Given all the other required changes, this one has lower priority, however, and likely won’t happen before the initial release.

If you are really adventurous you can install a current development build here. There is still much work ahead however.

What about applications other than Firefox Desktop?

The deadline only affects Firefox Desktop for now; in other applications classic extensions will still work. However, it currently looks like by Firefox 57 the Web Extensions support in Firefox Mobile will be sufficient to release a Web Extension there at the same time. If not, we still have the option to stick with our classic extension on Android. Update (2017-05-18): Mozilla announced that Firefox Mobile will drop support for classic extensions at the same time as Firefox Desktop. So the option to keep our classic extension there doesn’t exist; we’ll have to make do with whatever Web Extensions APIs are available.

As to SeaMonkey and Thunderbird, things aren’t looking well there. It’s doubtful that these will have noteworthy Web Extensions support by November. In fact, it’s not even clear whether they plan to support Web Extensions at all. And unlike with Firefox Mobile, we cannot publish a different build for them (Addons.Mozilla.Org only allows different builds per operating system, not per application). So our users on SeaMonkey and Thunderbird will be stuck with an outdated Adblock Plus version.

What about extensions like Element Hiding Helper, Customizations and similar?

Sadly, we don’t have the resources to rewrite these extensions. We just released Element Hiding Helper 1.4, and it will most likely remain as the last Element Hiding Helper release. There are plans to integrate some comparable functionality into Adblock Plus, but it’s not clear at this point when and how it will happen.

Planet MozillaCompatibility Update: Add-ons on Firefox for Android

We announced our plans for add-on compatibility and the transition to WebExtensions in the Road to Firefox 57 blog post. However, we weren’t clear on what this meant for Firefox for Android.

We did this intentionally, since at the time the plan wasn’t clear to us either. WebExtensions APIs are landing on Android later than on desktop. Many of them either don’t apply or need additional work to be useful on mobile. It wasn’t clear if moving to WebExtensions-only on mobile would cause significant problems to our users.

The Plan for Android

After looking into the most critical add-ons for mobile and the implementation plan for WebExtensions, we have decided it’s best to have desktop and mobile share the same timeline. This means that mobile will be WebExtensions-only at the same time as desktop Firefox, in version 57. The milestones specified in the Road to Firefox 57 post now apply to all platforms.

The post Compatibility Update: Add-ons on Firefox for Android appeared first on Mozilla Add-ons Blog.

Planet MozillaMozilla Roadshow Paris

Mozilla Roadshow Paris The Mozilla Roadshow is making a stop in Paris. Join us for a meetup-style, Mozilla-focused event series for people who build the Web. Hear from...

Planet MozillaData Science is Hard: Anomalies Part 3

So what do you do when you have a duplicate data problem and it just keeps getting worse?

You detect and discard.

Specifically, since we already have a few billion copies of pings with identical document ids (which are extremely unlikely to collide), there is no benefit to continue storing them. So what we do is write a short report about what the incoming duplicate looked like (so that we can continue to analyze trends in duplicate submissions), then toss out the data without even parsing it.
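The post doesn’t show the pipeline itself, so here is only a toy sketch of the detect-and-discard idea; the helper names are illustrative, not Mozilla’s actual code:

// Keep a set of seen document ids; report and drop any repeat without parsing its payload.
const seenIds = new Set();
function handlePing(docId, rawBody) {
  if (seenIds.has(docId)) {
    reportDuplicate({ docId, receivedAt: Date.now() }); // hypothetical helper: writes the short report
    return;                                             // toss the duplicate without parsing rawBody
  }
  seenIds.add(docId);
  processPing(docId, rawBody);                          // hypothetical helper: the normal ingestion path
}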

As before, I’ll leave finding out the time the change went live as an exercise for the reader:

[Chart: newplot]

:chutten


Planet MozillaOne Step Closer to a Closed Internet

Today, the FCC voted on Chairman Ajit Pai’s proposal to repeal and replace net neutrality protections enacted in 2015. The verdict: to move forward with Pai’s proposal.

 

We’re deeply disheartened. Today’s FCC vote to repeal and replace net neutrality protections brings us one step closer to a closed internet.  Although it is sometimes hard to describe the “real” impacts of these decisions, this one is easy: this decision leads to an internet that benefits Internet Service Providers (ISPs), not users, and erodes free speech, competition, innovation and user choice.

This vote undoes years of progress leading up to 2015’s net neutrality protections. The 2015 rules properly placed ISPs under “Title II” of the Communications Act of 1934, and through that well-tested basis of legal authority, prohibited ISPs from engaging in paid prioritization and blocking or throttling of web content, applications and services. These rules ensured a more open, healthy Internet.

Pai’s proposal removes the 2015 protections and reclassifies ISPs back under “Title I,” which courts already have determined is insufficient for ensuring a truly neutral net. The result: ISPs would be able to once again prioritize, block and throttle with impunity. This means fewer opportunities for startups and entrepreneurs, and a chilling effect on innovation, free expression and choice online.

Net neutrality isn’t an abstract issue — it has significant, real-world effects. For example, in the past, without net neutrality protections, ISPs have imposed limits on who can FaceTime and determined how we stream videos, and also adopted underhanded business practices.

So what’s next and what can we do?

We’re now entering a 90-day public comment period, which ends in mid-August. The FCC may determine a path forward as soon as October of this year.

During the public comment period in 2015, nearly 4 million citizens wrote to the FCC, many of them demanding strong net neutrality protections.  We all need to show the same commitment again.

We’re already well on our way to making noise. In the weeks since Pai first announced his proposal, more than 100,000 citizens (not bots) have signed Mozilla’s net neutrality petition at mzl.la/savetheinternet. And countless callers (again, not bots) have recorded more than 50 hours of voicemail for the FCC’s ears. We need more of this.

We’re also planning strategic, direct engagement with policymakers, including through written comments in the FCC’s open proceeding. Over the next three months, Mozilla will continue to amplify internet users’ voices and fuel the movement for a healthy internet.

The post One Step Closer to a Closed Internet appeared first on The Mozilla Blog.

Planet MozillaReps Weekly Meeting May 18, 2017

Reps Weekly Meeting May 18, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet Mozilla2017 Global Sprint Ask Me Anything

2017 Global Sprint Ask Me Anything Questions and answers about the upcoming 2017 Mozilla Global Sprint, our worldwide collaboration party for the Open Web

Planet MozillaA genuine medical oddity

My health continues to be an adventure. My neuropathy continues to worsen steadily; I no longer have any significant sensation in many of my toes, and my feet are always in a state of “pins and needles” style numbness. My legs are almost always tingling so hard they burn, or feel like they’re being squeezed in a giant fist, or both. The result is that I have some issues with my feet not always doing exactly what I expect them to be doing, and I don’t usually know exactly where they are.

For example, I have voluntarily stopped driving for the most part, because much of the time, sensation in my feet is so bad that I can’t always tell whether my feet are in the right places. A few times, I’ve found myself pressing the gas and brake pedals together because I didn’t realize my foot was too far to the left.

I also trip on things a lot more than I used to, since my feet wander a bit without my realizing it. On January 2, I tripped over a chair in my office while carrying an old CRT monitor to store it in my supply cabinet. I went down hard on my left knee and landed on the monitor I was carrying, taking it squarely to my chest. My chest was okay, just a little sore, but my knee was badly injured. The swelling was pretty brutal, and it is still trying to finish healing up more than four months later.

Given the increased problems with my leg pain, my neurologist recently had an MRI performed on my lumbar (lower) spine. An instance of severe nerve root compression was found which is possibly contributing to my pain and numbness in my legs. We are working to schedule for them to attempt to inject medication at that location to try to reduce the swelling that’s causing the compression. If successful, that could help temporarily relieve some of my symptoms.

But the neuropathic pain in my neck and shoulders continues as well. There is some discussion of possibly once again looking at using a neurostimulator implant to try to neutralize the pain signals that are being falsely generated. Apparently I’m once again eligible for this after a brief period where my symptoms shifted outside the range of those which are considered appropriate for that type of therapy.

In addition to the neurological issues, I am in the process of scheduling a procedure to repair some vascular leaks in my left leg, which may be responsible for some swelling there that could be in part responsible for some of my leg trouble (although that is increasingly unlikely given other information that’s come to light since we started scheduling that work).

Then you can top all that off with the side effects of all the meds I’m taking. I take at least six medications which have the side effect of “drowsiness” or “fatigue” or “sleepiness.” As a result, I live in a fog most of the time. Mornings and early afternoons are especially difficult. Just keeping awake is a challenge. Being attentive and getting things written is a battle. I make progress, but slowly. Most of my work happens in the afternoons and evenings, squeezed into the time between my meds easing up enough for me to think more clearly and alertly, and time for my family to get together for dinner and other evening activities together.

Balancing work, play, and personal obligations when you have this many medical issues at play is a big job. It’s also exhausting in and of itself. Add the exhaustion and fatigue that come from the pain and the meds, and being me is an adventure indeed.

I appreciate the patience and the help of my coworkers and colleagues more than I can begin to say. Each and every one of you is awesome. I know that my unpredictable work schedule (between having to take breaks because of my pain and the vast number of appointments I have to go to) causes headaches for everyone. But the team has generally adapted to cope with my situation, and that above all else is something I’m incredibly grateful for. It makes my daily agony more bearable. Thank you. Thank you. Thank you.

Thank you.

Planet MozillaFake News and Digital Literacies: some resources

Free Hugs CC BY-NC-ND clement127

In a couple of weeks’ time, on Thursday, 1st June 2017, I’ll be a keynote speaker at an online Library 2.0 event, convened by Steve Hargadon. The title is Digital Literacy and Fake News and you can register for it here. An audience of around 5,000 people from all around the world is expected to hear us discuss the following:

What does “digital literacy” mean in an era shaped by the Internet, social media, and staggering quantities of information? How is it that the fulfillment of human hopes for an open knowledge society seems to have resulted in both increased skepticism of, and casualness with, information? What tools and understanding can library professionals bring to a world that seems to be dominated by fake news?

In preparation for the session, Steve has asked us all to provide a ‘Top 10’ of our own resources on the topic, as well as those from others that we’d recommend. In the spirit of working openly, I’m sharing in advance what I’ve just emailed to him.

I’ll be arguing that ‘Fake News’ is a distraction from more fundamental problems, including algorithmic curation of news feeds, micro-targeting of user groups, and the way advertising fuels the web economy.

 1. My resources

 2. Other resources

I hope you can join us live, or at least watch the recording afterwards! Don’t forget to sign up.


Comments? Questions? I’m off Twitter for May, but you can email me: hello@dynamicskillset.com

Image CC BY-NC-ND clement127

Planet MozillaHacking the food chain in Switzerland

A group has recently been formed on Meetup seeking to build a food computer in Zurich. The initial meeting is planned for 6:30pm on 20 June 2017 at ETH, (Zurich Centre/Zentrum, Rämistrasse 101).

The question of food security underlies many of the world's problems today. In wealthier nations, we are being called upon to trust a highly opaque supply chain and our choices are limited to those things that major supermarket chains are willing to stock. A huge transport and storage apparatus adds to the cost and CO2 emissions and detracts from the nutritional value of the produce that reaches our plates. In recent times, these problems have been highlighted by the horsemeat scandal, the Guacapocalypse and the British Hummus crisis.

One interesting initiative to create transparency and encourage diversity in our diets is the Open Agriculture (OpenAg) Initiative from MIT, summarised in this TED video from Caleb Harper. The food produced is healthier and fresher than anything you might find in a supermarket and has no exposure to pesticides.

An open source approach to food

An interesting aspect of this project is the promise of an open source approach. The project provides hardware plans, a video of the build process, source code and the promise of sharing climate recipes (scripts) to replicate the climates of different regions, helping ensure it is always the season for your favourite fruit or vegetable.

Do we need it?

Some people have commented on the cost of equipment and electricity. Carsten Agger recently blogged about permaculture as a cleaner alternative. While there are many places where people can take that approach, there are also many overpopulated regions and cities where it is not feasible. Some countries, like Japan, have an enormous population and previously productive farmland contaminated by industry, such as the Fukushima region. Growing our own food also has the potential to reduce food waste, as individual families and communities can grow what they need.

Whether it is essential or not, the food computer project also provides a powerful platform to educate people about food and climate issues and an exciting opportunity to take the free and open source philosophy into many more places in our local communities. The Zurich Meetup group has already received expressions of interest from a diverse group including professionals, researchers, students, hackers, sustainability activists and free software developers.

Next steps

People who want to form a group in their own region can look in the forum topic "Where are you building your Food Computer?" to find out if anybody has already expressed interest.

Which patterns from the free software world can help more people build more food computers? I've already suggested using Debian's live-wrapper to distribute a runnable ISO image that can boot from a USB stick, can you suggest other solutions like this?

Can you think of any free software events where you would like to see a talk or exhibit about this project? Please suggest them on the OpenAg forum.

There are many interesting resources about the food crisis, an interesting starting point is watching the documentary Food, Inc.

If you are in Switzerland, please consider attending the meeting at 6:30pm on 20 June 2017 at ETH (Centre/Zentrum), Zurich.

One final thing to contemplate: if you are not hacking your own food supply, who is?

Planet WebKitRelease Notes for Safari Technology Preview 30

Safari Technology Preview Release 30 is now available for download for macOS Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 215859-216643.

Web API

  • Implemented Subresource Integrity (SRI) (r216347)
  • Implemented X-Content-Type-Options:nosniff (r215753, r216195)
  • Added support for Unhandled Promise Rejection events (r215916)
  • Updated document.cookie to only return cookies if the document URL has a network scheme or is a file URL (r216341)
  • Removed the non-standard document.implementation.createCSSStyleSheet() API (r216458)
  • Removed the non-standard Element.scrollByLines() and scrollByPages() (r216456)
  • Changed to allow a null comparator in Array.prototype.sort (r216169)
  • Changed to set the Response.blob() type based on the content-type header value (r216353)
  • Changed Element.slot to be marked as [Unscopable] (r216228)
  • Implemented HTMLPreloadScanner support for <link preload> (r216143)
  • Fixed setting Response.blob() type correctly when the body is a ReadableStream (r216073)
  • Moved offsetParent, offsetLeft, offsetTop, offsetWidth, offsetHeight properties from Element to HTMLElement (r216466)
  • Moved the style property from Element to HTMLElement and SVGElement, then made it settable (r216426)

JavaScript

  • Fixed Arrow function access to this after eval('super()') within a constructor (r216329)
  • Added support for dashed values in unicode locale extensions (r216122)
  • Fixed the behaviour of the .sort(callback) method to match Firefox and Chrome (r216137)

CSS

  • Fixed space-evenly behavior with Flexbox (r216536)
  • Fixed font-stretch:normal to select condensed fonts (r216517)
  • Fixed custom properties used in rgb() with calc() (r216188)

Accessibility

  • Fixed the behavior of aria-orientation="horizontal" on a list (r216452)
  • Prevented exposing empty roledescription (r216457)
  • Propagated aria-readonly to grid descendants (r216425)
  • Changed to ignore aria-rowspan value if a rowspan value is provided for a <td> or <th> (r216167)
  • Fixed an issue causing VoiceOver to skip cells after a cell with aria-colspan (r216134)
  • Changed to treat cells with ARIA table cell properties as cells (r216123)
  • Updated implementation of aria-orientation to match specifications (r216089)

Web Inspector

  • Added resource load error reason text in the details sidebar (r216564)
  • Fixed toggling the Request and Response resource views in certain cases (r216461)
  • Fixed miscellaneous RTL and localization issues (r216465, r216629)
  • Fixed Option-Click on URL behavior in Styles sidebar (r216166)
  • Changed 404 Image Loads to appear as failures in Web Inspector (r216138)

WebDriver

  • Fixed several issues that prevented cookie-related endpoints from working correctly (r216258, r216261, r216292)

Media

  • Removed black background from video layer while in fullscreen (r216472)

Rendering

  • Fixed problem with the CSS Font Loading API’s load() function erroneously resolving promises when used with preinstalled fonts (r216079)
  • Fixed flickering on asynchronous image decoding and ensured the image is incrementally displayed as new data is received (r216471)

Planet MozillaThe Joy of Coding - Episode 100

The Joy of Coding - Episode 100 mconley livehacks on real Firefox bugs while thinking aloud.

Planet MozillaThe Manager’s Path in review

I ordered Camille Fournier’s book on engineering management when it was in pre-order.  I was delighted to read it when it arrived in the mail.  If you work in the tech industry, manager or not, this book is a must read.

The book is a little over 200 pages but it packs in more succinct and useful advice than other, longer books on this topic that I’ve read in the past.  It takes a unique approach in that the first chapter describes what to expect from your manager, as a new individual contributor.  Each chapter then moves on to how to help others as a mentor, as a tech lead, managing other people, managing a team, managing multiple teams, and further up the org chart.  At the end of each chapter, there are questions that relate to the chapter so you can assess your own experience with the material.

IMG_3914

Some of the really useful advice in the book

  • Choose your manager wisely, consider who you will be reporting to when interviewing.  “Strong managers know how to play the game at their company.  They can get you promoted; they can get you attention and feedback from important people.  Strong managers have strong networks, and can get you jobs even after you stop working for them.”
  • How to be a great tech lead – understand the architecture, be a team player, lead technical discussions, communicate “Your productivity is now less important than the productivity of the whole team….If one universal talent separates successful leaders from the pack, it’s communication skills.  Successful leaders write well, they read carefully, and they can get up in front of a group and speak.”
  • On transitioning to a job managing a team “As much as you may want to believe that management is a natural progression of the skills you develop as a senior engineer, it’s really a whole new set of skills and challenges.”  So many people think that and don’t take the time to learn new skills before taking on the challenge of managing a team.
  • One idea I thought was really fantastic was to create a 30/60/90 day plan for new hires or new team members to establish clear goals to ensure they are meeting expectations on getting up to speed.
  • Camille also discusses the perils of micromanagement and how this can be a natural inclination for people who were deeply technical before becoming managers.  Instead of focusing on the technical details, you need to focus on giving people autonomy over their work, to keep them motivated and engaged.
  • On giving performance reviews – use concrete examples from anonymous peer reviews to avoid bias.  Spend plenty of time preparing, start early and focus on accomplishments and strengths.  When describing areas for improvement, keep it focused. Avoid surprises during the review process. If a person is under-performing, the review process should not be the first time they learn this.
  • From the section on debugging dysfunctional teams, the example was given of a team that wasn’t shipping: the team only released once a week, and the release process was very time consuming and painful.  Once the release process was more automated and occurred more regularly, the team became more productive.  In other words, sometimes dysfunctional teams are due to resource constraints or broken processes.
  • Be kind, not nice. It’s kind to tell someone that they aren’t ready for promotion and describe the steps that they need to get to the next level.  It’s not kind to tell someone that they should get promoted, but watch them fail.  It’s kind to tell someone that their behaviour is disruptive, and that they need to change it.
  • Don’t be afraid.  Avoiding conflict because of fear will not resolve the problems on your team.
  • “As you grow more into leadership positions, people will look to you for behavioral guidance. What you want to teach them is how to focus.  To that end, there are two areas I encourage you to practice modeling, right now: figuring out what’s important, and going home.”  💯💯💯
  • Suggestions for roadmapping uncertainty – be realistic about the probability of change, and break down projects into smaller deliverables so that if you don’t implement the entire larger project, you have still implemented some of its intended results.
  • Chapter 9 on bootstrapping culture discusses how the role of a senior engineering leader is not just to set technical direction, but to be clear and thoughtful about setting the culture of the engineering team.
  • I really like this paragraph on hiring for the values of the team

culture

  • The bootstrapping culture chapter finishes with notes about code review, running outage postmortems and architectural reviews which all have very direct and useful advice.

This book describes concrete and concise steps to be an effective leader as you progress through your career.  It also features the voices and perspectives of women in leadership, something some well-known books lack. I’ve read a lot of management books over the years, and while some have jewels of wisdom, I haven’t read one that is this densely packed with useful content.  It really makes you think – what is the most important thing I can be doing now to help others make progress and be happy with their work?

I’ve also found the writing, talks and perspectives of the following people on engineering leadership invaluable


Planet MozillaWorking Together Towards a more Secure Internet through VEP Reform

Today, Mozilla sent a letter to Congress expressing support for an important bill that has just been introduced: the Protecting Our Ability to Counter Hacking Act (PATCH Act). You can read more in this post from Denelle Dixon.

This bill focuses on a relatively unknown, but critical, piece of the U.S. government’s responsibility to secure our internet infrastructure: the Vulnerabilities Equities Process (VEP). The VEP is the government’s process for reviewing and coordinating the disclosure of vulnerabilities to folks who write code – like us – who can fix them in the software and hardware we all use (you can learn more about what we know here). However, the VEP is not codified in law, and lacks transparency and reporting on both the process policymakers follow and the considerations they take into account. The PATCH Act would address these gaps.

The cyberattack over the last week – using the WannaCry exploit from the latest Shadow Brokers release, and exploiting unpatched Windows computers – only emphasizes the need to work together and make sure that we’re all as secure as we can be. As we said earlier this week, these exploits might have been shared with Microsoft by the NSA – and that would be the right way to handle an exploit like this. If the government has exploits that have been compromised, they must disclose them to software companies before they can be used widely putting users at risk. The lack of transparency around the government’s decision-making processes points to the importance of codifying and improving the Vulnerabilities Equities Process.

We’ve said before – many times – how important it is to work together to protect cybersecurity. Reforming the VEP is one key component of that shared responsibility, ensuring that the U.S. government shares vulnerabilities that put swaths of the internet at risk. The process was conceived in 2010 to improve our collective cybersecurity, and implemented in 2014 after the Heartbleed vulnerability put most of the internet at risk (for more information, take a look at this timeline). It’s time to take the next step and put this process into statute.

Last year, we wrote about five important reforms to the VEP we believe are necessary:

  • All security vulnerabilities should go through the VEP.
  • All relevant federal agencies involved in the VEP should work together using a standard set of criteria to ensure all risks and interests are considered.
  • Independent oversight and transparency into the processes and procedures of the VEP must be created.
  • The VEP should be placed within the Department of Homeland Security (DHS), with their expertise in existing coordinated vulnerability disclosure programs.
  • The VEP should be codified in law to ensure compliance and permanence.

Over the last year, we have seen many instances where hacking tools from the U.S. government have been posted online, and then used – by unknown adversaries – to attack users. Some of these included “zero days”, which left companies scrambling to patch their software and protect their users, without prior notice. It’s important that the government defaults to disclosing vulnerabilities, rather than hoarding them in case they become useful later. We hope they will instead work with technology companies to help protect all of us online.

The PATCH Act – introduced by Sen. Gardner, Sen. Johnson, Sen. Schatz, Rep. Farenthold, and Rep. Lieu – aims to codify and make the existing Vulnerabilities Equities Process more transparent. It’s relatively simple – a good thing, when it comes to legislation: it creates a VEP Board, housed at DHS, which will consider disclosure of vulnerabilities that some part of the government knows about. The VEP Board would make public the process and criteria they use to balance the relevant interests and risks – an important step – and publish reporting around the process. These reports would allow the public to consider whether the process is working well, without sharing classified information (saving that reporting for the relevant oversight entities). This would also make it easier to disclose vulnerabilities through DHS’ existing channels.

Mozilla looks forward to working with members of Congress on this bill, as well as others interested in VEP reform – and all the other government actors, in the U.S. and around the world, who seek to take action that would improve the security of the internet. We stand with you, ready to defend the security of the internet and its users.

The post Working Together Towards a more Secure Internet through VEP Reform appeared first on Open Policy & Advocacy.

Planet MozillaImproving Internet Security through Vulnerability Disclosure

Supporting the PATCH Act for VEP Reform

 

Today, Mozilla sent a letter to Congress in support of the Protecting Our Ability to Counter Hacking Act (PATCH Act) that was just introduced by Sen. Cory Gardner, Sen. Ron Johnson, Sen. Brian Schatz, Rep. Blake Farenthold, and Rep. Ted Lieu.

We support the PATCH Act because it aims to codify and make the existing Vulnerabilities Equities Process more transparent. The Vulnerabilities Equities Process (VEP) is the U.S. government’s process for reviewing and coordinating the disclosure of new vulnerabilities it learns about.

The VEP remains shrouded in secrecy, and is in need of process reforms to ensure transparency, accountability, and oversight. Last year, I wrote about five important reforms to the VEP we believe are necessary to make the internet more secure. The PATCH Act includes many of the key reforms, including codification in law to increase transparency and accountability.

For background, a vulnerability is a flaw – in design or implementation – that can be used to exploit or penetrate a product or system. We saw an example this weekend as a ransomware attack took unpatched systems by surprise – and you’d be surprised at how common they are if we don’t all work together to fix them. These vulnerabilities can put users and businesses at significant risk from bad actors. At the same time, exploiting these same vulnerabilities can also be useful for law enforcement and intelligence operations. It’s important to consider those equities when the government decides what to do.

If the government has exploits that have been compromised, they must disclose them to tech companies before those vulnerabilities can be used widely and put users at risk. The lack of transparency around the government’s decision-making processes here means that we should improve and codify the Vulnerabilities Equities Process in law. Read this Mozilla Policy blog post from Heather West for more details.

The internet is a shared resource and securing it is our shared responsibility. This means technology companies, governments, and even users have to work together to protect and improve the security of the internet.

We look forward to working with the U.S. government (and governments around the world) to improve disclosure of security vulnerabilities and better secure the internet to protect us all.

 

 

The post Improving Internet Security through Vulnerability Disclosure appeared first on The Mozilla Blog.

Planet MozillaRust Libs Team Meeting 2017-05-16

Rust Libs Team Meeting 2017-05-16 Rust libs team meeting 2017-05-16

Planet MozillaBuilding an antenna and receiving ham and shortwave stations with SDR

In my previous blog on the topic of software defined radio (SDR), I provided a quickstart guide to using gqrx, GNU Radio and the RTL-SDR dongle to receive FM radio and the amateur 2 meter (VHF) band.

Using the same software configuration and the same RTL-SDR dongle, it is possible to add some extra components and receive ham radio and shortwave transmissions from around the world.

Here is the antenna setup from the successful SDR workshop at OSCAL'17 on 13 May:

After the workshop on Saturday, members of the OSCAL team successfully reconstructed the SDR and antenna at the Debian info booth on Sunday and a wide range of shortwave and ham signals were detected:

Here is a close-up look at the laptop, RTL-SDR dongle (above laptop), Ham-It-Up converter (above water bottle) and MFJ-971 ATU (on right):

Buying the parts

Component – Purpose, Notes – Price/link to source

  • RTL-SDR dongle – Converts radio signals (RF) into digital signals for reception through the USB port. It is essential to buy a dongle for SDR with a TCXO; the generic RTL dongles for TV reception are not stable enough for anything other than TV. – ~ € 25
  • Enamelled copper wire, 25 meters or more – Loop antenna. Thicker wire provides better reception and is more suitable for transmitting (if you have a license) but it is heavier. The antenna I’ve demonstrated at recent events uses 1mm thick wire. – ~ € 10
  • 4 (or more) ceramic egg insulators – Attach the antenna to string or rope. Smaller insulators are better as they are lighter and less expensive. – ~ € 10
  • 4:1 balun – The actual ratio of the balun depends on the shape of the loop (square, rectangle or triangle) and the point where you attach the balun (middle, corner, etc). You may want to buy more than one balun, for example, a 4:1 balun and also a 1:1 balun to try alternative configurations. Make sure it is waterproof, has hooks for attaching a string or rope and an SO-239 socket. – from € 20
  • 5 meter RG-58 coaxial cable with male PL-259 plugs on both ends – If using more than 5 meters or if you want to use higher frequencies above 30MHz, use thicker, heavier and more expensive cables like RG-213. The cable must be 50 ohm. – ~ € 10
  • Antenna Tuning Unit (ATU) – I’ve been using the MFJ-971 for portable use and demos because of the weight. There are even lighter and cheaper alternatives if you only need to receive. – ~ € 20 for receive only or second hand
  • PL-259 to SMA male pigtail, up to 50cm, RG58 – Joins the ATU to the upconverter. The cable must be RG58 or another 50 ohm cable. – ~ € 5
  • Ham It Up v1.3 up-converter – Mixes the HF signal with a signal from a local oscillator to create a new signal in the spectrum covered by the RTL-SDR dongle. – ~ € 40
  • SMA (male) to SMA (male) pigtail – Joins the up-converter to the RTL-SDR dongle. – ~ € 2
  • USB charger and USB type B cable – Used for power to the up-converter. A spare USB mobile phone charger plug may be suitable. – ~ € 5
  • String or rope – For mounting the antenna. A lighter and cheaper string is better for portable use, while a stronger and weather-resistant rope is better for a fixed installation. – € 5

Building the antenna

There are numerous online calculators for measuring the amount of enamelled copper wire to cut.

For example, for a centre frequency of 14.2 MHz on the 20 meter amateur band, the antenna length is 21.336 meters.

Add an extra 24 cm (extra 12 cm on each end) for folding the wire through the hooks on the balun.
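If you would rather compute the length yourself than trust an online calculator, here is a rough sketch using the common full-wave loop approximation of about 1005/f feet; calculators use slightly different constants, which is why this doesn’t land exactly on the 21.336 m figure above:

// Approximate full-wave loop length plus the 24 cm folding margin for the balun hooks.
const FEET_TO_METRES = 0.3048;
function loopWireToCut(freqMHz) {
  const loopLength = (1005 / freqMHz) * FEET_TO_METRES; // electrical length of the loop
  return loopLength + 0.24;                             // 12 cm spare at each end
}
console.log(loopWireToCut(14.2).toFixed(2) + ' m');     // roughly 21.8 m for the 20 meter band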

After cutting the wire, feed it through the egg insulators before attaching the wire to the balun.

Measure the extra 12 cm at each end of the wire and wrap some tape around there to make it easy to identify in future. Fold it, insert it into the hook on the balun and twist it around itself. Use between four and six twists.

Strip off approximately 0.5cm of the enamel on each end of the wire with a knife, sandpaper or some other tool.

Insert the exposed ends of the wire into the screw terminals and screw it firmly into place. Avoid turning the screw too tightly or it may break or snap the wire.

Insert string through the egg insulators and/or the middle hook on the balun and use the string to attach it to suitable support structures such as a building, posts or trees. Try to keep it at least two meters from any structure. Maximizing the surface area of the loop improves the performance: a circle is an ideal shape, but a square or 4:3 rectangle will work well too.

For optimal performance, if you imagine the loop is on a two-dimensional plane, the first couple of meters of feedline leaving the antenna should be on the plane too and at a right angle to the edge of the antenna.

Join all the other components together using the coaxial cables.

Configuring gqrx for the up-converter and shortwave signals

Inspect the up-converter carefully. Look for the crystal and find the frequency written on the side of it. The frequency written on the specification sheet or web site may be wrong so looking at the crystal itself is the best way to be certain. On my Ham It Up, I found a crystal with 125.000 written on it, this is 125 MHz.

Launch gqrx, go to the File menu and select I/O devices. Change the LNB LO value to match the crystal frequency on the up-converter, with a minus sign. For my Ham It Up, I use the LNB LO value -125.000000 MHz.
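A quick sketch of the arithmetic behind that negative value, just to show why it has to match the crystal frequency:

// The up-converter shifts every HF signal up by the crystal frequency before
// the RTL-SDR dongle sees it, so the dongle actually tunes hf + 125 MHz.
const crystalMHz = 125.0;                   // read from the crystal on the Ham It Up
const desiredMHz = 14.2;                    // 20 meter amateur band
const dongleMHz = desiredMHz + crystalMHz;  // what the RTL-SDR really receives
// Entering "LNB LO = -125.000000 MHz" makes gqrx subtract that offset again,
// so the display reads 14.2 MHz instead of 139.2 MHz.
console.log(dongleMHz + ' MHz at the dongle, displayed as ' + desiredMHz + ' MHz');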

Click OK to close the I/O devices window.

On the Input Controls tab, make sure Hardware AGC is enabled.

On the Receiver options tab, change the Mode value. Commercial shortwave broadcasts use AM and amateur transmission use single sideband: by convention, LSB is used for signals below 10MHz and USB is used for signals above 10MHz. To start exploring the 20 meter amateur band around 14.2 MHz, for example, use USB.

In the top of the window, enter the frequency, for example, 14.200 000 MHz.

Now choose the FFT Settings tab and adjust the Freq zoom slider. Zoom until the width of the display is about 100 kHz, for example, from 14.15 on the left to 14.25 on the right.

Click the Play icon at the top left to start receiving. You may hear white noise. If you hear nothing, check the computer's volume controls, move the Gain slider (bottom right) to the maximum position and then lower the Squelch value on the Receiver options tab until you hear the white noise or a transmission.

Adjust the Antenna Tuner knobs

Now that gqrx is running, it is time to adjust the knobs on the antenna tuner (ATU). Reception improves dramatically when it is tuned correctly. Exact instructions depend on the type of ATU you have purchased, here I present instructions for the MFJ-971 that I have been using.

Turn the TRANSMITTER and ANTENNA knobs to the 12 o'clock position and leave them like that. Turn the INDUCTANCE knob while looking at the signals in the gqrx window. When you find the best position, the signal strength displayed on the screen will appear to increase (the animated white line should appear to move upwards and maybe some peaks will appear in the line).

When you feel you have found the best position for the INDUCTANCE knob, leave it in that position and begin turning the ANTENNA knob clockwise looking for any increase in signal strength on the chart. When you feel that is correct, begin turning the TRANSMITTER knob.

Listening to a transmission

At this point, if you are lucky, some transmissions may be visible on the gqrx screen. They will appear as darker colours in the waterfall chart. Try clicking on one of them, the vertical red line will jump to that position. For a USB transmission, try to place the vertical red line at the left hand side of the signal. Try dragging the vertical red line or changing the frequency value at the top of the screen by 100 Hz at a time until the station is tuned as well as possible.

Try and listen to the transmission and identify the station. Commercial shortwave broadcasts will usually identify themselves from time to time. Amateur transmissions will usually include a callsign spoken in the phonetic alphabet. For example, if you hear "CQ, this is Victor Kilo 3 Tango Quebec Romeo" then the station is VK3TQR. You may want to note down the callsign, time, frequency and mode in your log book. You may also find information about the callsign in a search engine.

The video demonstrates reception of a transmission from another country, can you identify the station's callsign and find his location?

[Video: https://video.danielpocock.com/oscal_demo3.mp4]

If you have questions about this topic, please come and ask on the Debian Hams mailing list. The gqrx package is also available in Fedora and Ubuntu but it is known to crash on startup in Ubuntu 17.04. Users of other distributions may also want to try the Debian Ham Blend bootable ISO live image as a quick and easy way to get started.

Planet MozillaAdd-on Compatibility for Firefox 55

Firefox 55 will be released on August 8th. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 55 for Developers, so you should also give it a look. Also, if you haven’t yet, please read our roadmap to Firefox 57.

General

Recently, we turned on a restriction on Nightly that only allows multiprocess add-ons to be enabled. You can use a preference to toggle it. Also, Firefox 55 is the first version to move directly from Nightly to Beta after the removal of the Aurora channel.

XPCOM and Modules

Let me know in the comments if there’s anything missing or incorrect on these lists. We’d like to know if your add-on breaks on Firefox 55.

The automatic compatibility validation and upgrade for add-ons on AMO will happen in a few weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 54.

The post Add-on Compatibility for Firefox 55 appeared first on Mozilla Add-ons Blog.

Planet MozillaFirefox 54 Beta 7 Testday Results

Hello Mozillians!

As you may already know, last Friday – May 12th – we held a Testday event, for Firefox 54 Beta 7.

Thank you all for helping us make Mozilla a better place – Gabi Cheta, Ilse Macías, Juliano Naves, Athira Appu, Avinash Sharma, Iryna Thompson.

From India team: Surentharan R.A, Fahima Zulfath, Baranitharan.M,  Sriram, vignesh kumar.

From Bangladesh team: Nazir Ahmed Sabbir, Rezaul Huque Nayeem, Md.Majedul islam, Sajedul Islam, Saddam Hossain, Maruf Rahman, Md.Tarikul Islam Oashi, Md. Ehsanul Hassan, Meraj Kazi, Sayed Ibn Masud, Tanvir Rahman, Farhadur Raja Fahim, Kazi Nuzhat Tasnem, Md. Rahimul Islam, Md. Almas Hossain, Saheda Reza Antora, Fahmida Noor, Muktasib Un Nur, Mohammad Maruf Islam,  Rezwana Islam Ria, Tazin Ahmed, Towkir Ahmed, Azmina Akter Papeya

Results:

– several test cases executed for the Net Monitor MVP and Firefox Screenshots features;

– 2 new bugs filed: 1364771, 1364773.

– 15 bugs verified: 1167178, 1364090, 1363288, 1341258, 1342002, 1335869, 776254, 1363737, 1363840, 1327691, 1315550, 1358479, 1338036, 1348264 and 1361247.

Again thanks for another successful testday! 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Planet MozillaHaving fun with physics and A-Frame

A-Frame is a WebVR framework to build virtual reality experiences. It comes with some bundled components that allow you to easily add behavior to your VR scenes, but you can download more –or even create your own.

In this post I’m going to share how I built a VR scene that integrates a physics engine via a third-party component. While A-Frame allows you to add objects and behaviors to a scene, if you want those objects to interact with each other or be manipulated by the user, you might want to use a physics engine to handle the calculations you’ll need. If you are new to A-Frame, I recommend you check out the Getting started guide and play with it a bit first.

The scene I created is a bowling alley that works with the HTC Vive headset. You have a ball in your right hand which you can throw by holding the right-hand controller trigger button and releasing it as you move your arm. To return the ball back to your hand and try again, press the menu button. You can try the demo here! (Note: You will need Firefox Nightly and an HTC Vive. Follow the setup instructions in WebVR.rocks)

[Screenshot]

The source code is at your disposal on Github to tweak and have fun with.

Adding a physics engine to A-Frame

I’ve opted for aframe-physics-system, which uses Cannon.js under the hood. Cannon is a pure JavaScript physics engine (not a C/C++ engine compiled to asm.js), so we can easily interface with it – and peek at its code.

aframe-physics-system is middleware that initialises the physics engine and exposes A-Frame components for us to apply to entities. When we use its static-body or dynamic-body components, aframe-physics-system creates a Cannon.Body instance and “attaches” it to our A-Frame entities, so on every frame it adjusts the entity’s position, rotation, etc. to match the body’s.

If you wish to use a different engine, take a look at aframe-physics-system or aframe-physics-components. These components are not very complex and it should not be complicated to mimic their behavior with another engine.
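As a rough illustration of that attach-and-sync pattern, here is a stripped-down sketch of a body-syncing component; it is not the actual aframe-physics-system code, and it assumes a physics system that exposes an addBody method:

// Create a Cannon body on init and copy its transform back onto the entity every frame.
AFRAME.registerComponent('simple-dynamic-body', {
  schema: { mass: { default: 1 } },
  init: function () {
    this.body = new CANNON.Body({
      mass: this.data.mass,
      shape: new CANNON.Sphere(0.2)                      // hard-coded shape, just for the sketch
    });
    this.el.sceneEl.systems.physics.addBody(this.body);  // assumption: the system exposes addBody
  },
  tick: function () {
    this.el.object3D.position.copy(this.body.position);  // CANNON.Vec3 exposes x/y/z, so copy() works
    this.el.object3D.quaternion.copy(this.body.quaternion);
  }
});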

Static and dynamic bodies

Static bodies are those which are immovable. Think of the ground, or walls that can’t be torn down, etc. In the scene, the immovable entities are the ground and the bumpers on each side of the bowling lane.

Dynamic bodies are those which move, bounce, topple etc. Obviously the ball and the bowling pins are our dynamic bodies. Note that since these bodies move and can fall, or collide and knock down other bodies, the mass property will have a big influence. Here’s an example for a bowling pin:

<a-cylinder dynamic-body="mass: 1" ...>

The avatar and the physics world

To display the “hands” of the user (i.e., to show the tracked VR controllers as hands) I used the vive-controls component, already bundled in A-Frame.

<a-entity vive-controls="hand: right" throwing-hand></a-entity>
<a-entity vive-controls="hand: left"></a-entity>

The challenge here is that the user’s avatar (“head” and “hands”) is not part of the physical world –i.e. it’s out of the physics engine’s scope, since the head and the hands must follow the user’s movement, without being affected by physical rules, such as gravity or friction.

In order for the user to be able to “hold” the ball, we need to fetch the position of the right controller and manually set the ball’s position to match this every frame. We also need to reset other physical properties, such as velocity.

This is done in the custom throwing-hand component (which I added to the entity representing the right hand), in its tick callback:

ball.body.velocity.set(0, 0, 0);
ball.body.angularVelocity.set(0, 0, 0);
ball.body.quaternion.set(0, 0, 0, 1);
ball.body.position.set(position.x, position.y, position.z);

Note: a better option would have been to also match the ball’s rotation with the controller.
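If you want to try that, a small addition to the tick above might look like this (a sketch only, reusing the names from the snippet above):

// Also copy the controller's rotation onto the ball so it leaves the hand
// with a matching orientation.
var q = this.el.object3D.quaternion;           // the throwing hand's rotation
ball.body.quaternion.set(q.x, q.y, q.z, q.w);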

Throwing the ball

The throwing mechanism works like this: the user has to press the controller’s trigger and when she releases it, the ball is thrown.

There’s a method in Cannon.Body which applies a force to a dynamic body: applyLocalImpulse. But how much impulse should we apply to the ball and in which direction?

We can get the right direction by calculating the velocity of the throwing hand. However, since the avatar isn’t handled by the physics engine, we need to calculate the velocity manually:

let velocity = currentPosition.vsub(lastPosition).scale(1/delta);

Also, since the mass of the ball is quite high (to give it more “punch” against the pins), I had to add a multiplier to that velocity vector when applying the impulse:

ball.body.applyLocalImpulse(
  velocity.scale(50),
  new CANNON.Vec3(0, 0, 0)
);

Note: If I had allowed the ball to rotate to match the controller’s rotation, I would have needed to apply that rotation to the velocity vector as well, since applyLocalImpulse works with the ball’s local coordinates system.

To detect when the controller’s trigger has been released, the only thing needed is a listener for the triggerup event in the entity representing the right hand. Since I added my custom throwing-hand component there, I set up the listener in its init callback:

this.el.addEventListener('triggerup', function (e) {
  // ... throw the ball
});

A glitch

At the beginning, I was simulating the throw by pressing the space bar key. The code looked like this:

document.addEventListener('keyup', function (e) {
  if (e.keyCode === 32) { // spacebar
    e.preventDefault();
    throwBall();
  }
});

However, this was outside of the A-Frame loop, and the computation of the throwing hand’s lastPosition and currentPosition was out of sync, and thus I was getting odd results when calculating the velocity.

This is why I set a flag instead of calling launch directly, and then, inside of the throwing-hand’s tick callback, throwBall is called if that flag is set to true.
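A sketch of that flag pattern (the variable and function names here are illustrative, not necessarily the project’s):

// The key handler only records that a throw was requested...
var throwRequested = false;
document.addEventListener('keyup', function (e) {
  if (e.keyCode === 32) {    // spacebar
    e.preventDefault();
    throwRequested = true;
  }
});

// ...and the throw happens inside the throwing-hand tick callback, right after
// lastPosition and currentPosition have been updated for the current frame:
// tick: function (time, delta) {
//   ...update lastPosition / currentPosition...
//   if (throwRequested) {
//     throwRequested = false;
//     throwBall();
//   }
// }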

Another glitch: shaking pins

Using the aframe-physics-system’s default settings I noticed a glitch when I scaled down the bowling pins: They were shaking and eventually falling to the ground!

This can happen when using a physics engine if the computations are not precise enough: there is a small error that carries over frame by frame, it accumulates and… you have things crumbling or tumbling down, especially if these things are small –they need less error for changes to be more noticeable.

One workaround for this is to increase the accuracy of the physics simulation — at the expense of performance. You can control this with the iterations setting at the aframe-physics-system’s component configuration (by default it is set to 10). I increased it to 20:

<a-scene physics="iterations: 20">

To better see the effects of this change, here is a comparison side by side with iterations set to 5 and 20:

[Video: the pins with iterations set to 5 vs. 20]

The “sleep” feature of Cannon provides another possible workaround to handle this specific situation without affecting performance. When an object is in sleep mode, physics won’t make it move until it wakes upon collision with another object.

Your turn: play with this!

I have uploaded the project to Glitch as well as to a Github repository in case you want to play with it and make your own modifications. Some things you can try:

  • Allow the player to use both hands (maybe with a button to switch the ball from one hand to the other?)
  • Automatically reset the bowling pins to their original position once they have all fallen. You can check the rotation of their bodies to implement this.
  • Add sound effects! There is a callback for collision events you can use to detect when the ball has collided with another element… You can add a sound effect for when the ball crashes into the pins, or when it hits the ground.

If you have questions about A-Frame or want to get more involved in building WebVR with A-Frame, check out our active community on Slack. We’d love to see what you’re working on.

Planet MozillaYou don’t owe the world perfection! – keynote at Beyond Tellerrand

Yesterday morning I was lucky enough to give the opening keynote at the excellent Beyond Tellerrand conference in Dusseldorf, Germany. I wrote a talk for the occasion that covered a strange disconnect that we’re experiencing at the moment.
Whilst web technology has advanced in leaps and bounds, we still seem to be discontent all the time. I called this the Tetris mind set: all our mistakes are perceived as piling up whilst our accomplishments vanish.

Eva-Lotta Lamm created some excellent sketchnotes on my talk.
Sketchnotes of the talk

The video of the talk is already available on Vimeo:

Breaking out of the Tetris mind set from beyond tellerrand on Vimeo.

You can get the slides on SlideShare:

I will follow this up with a more in-depth article on the subject in due course, but for today I am very happy with how well received the keynote was, and I want to remind people that it is OK to build things that don’t last and that you don’t owe the world perfection. Creativity is a messy process and we should feel at ease about learning from mistakes.

Planet MozillaSubmitting Your First Patch to the Linux kernel and Responding to Feedback

After working on the Linux kernel for Nexus and Pixel phones for nearly a year, and messing around with the excellent Eudyptula challenge, I finally wanted to take a crack at submitting patches upstream to the Linux kernel.

This post is woefully inadequate compared to the existing documentation, which should be preferred.

I figure I’d document my workflow, now that I’ve gotten a few patches accepted (and so I can refer to this post rather than my shell history…). Feedback welcome (open an issue or email me).

Step 1: Setting up an email client

I mostly use git send-email for sending patch files. In my ~/.gitconfig I have added:

[sendemail]
  ; setup for using git send-email; prompts for password
  smtpuser = myemailaddr@gmail.com
  smtpserver = smtp.googlemail.com
  smtpencryption = tls
  smtpserverport = 587

This lets me send patches through my Gmail account. I don’t add my password so that I don’t have to worry about it when I publish my dotfiles; I simply get prompted every time I want to send an email.

I use mutt to respond to threads when I don’t have a patch to send.

Step 2: Make fixes

How do you find a bug to fix? My general approach to finding bugs in open source C/C++ code bases has been to use static analysis, a different compiler, and/or more compiler warnings turned on. The kernel also has an instance of Bugzilla running as an issue tracker. Work out of a new branch, in case you choose to abandon it later, and rebase your branch before submitting (pull early, pull often).
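As a rough sketch of that branch hygiene (the branch and remote names here are placeholders, not a prescribed kernel workflow):

$ git checkout -b my-fix origin/master   # work on a throwaway branch
$ # ... hack, build, test ...
$ git pull --rebase origin master        # rebase onto the latest tree before submitting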

Step 3: Thoughtful commit messages

I always run git log <file I modified> to see some of the previous commit messages on the file I modified.

$ git log arch/x86/Makefile

commit a5859c6d7b6114fc0e52be40f7b0f5451c4aba93
...
    x86/build: convert function graph '-Os' error to warning
commit 3f135e57a4f76d24ae8d8a490314331f0ced40c5
...
    x86/build: Mostly disable '-maccumulate-outgoing-args'

The first words of commit messages in Linux are usually <subsystem>/<sub-subsystem>: <descriptive comment>.

Let’s commit: git commit <files> -s. We use the -s flag to git commit to add our signoff. Signing off your patches is standard and notes your agreement to the Developer’s Certificate of Origin.

Step 4: Generate Patch file

git format-patch HEAD~. You can use git format-patch HEAD~<number of commits to convert to patches> to turn multiple commits into patch files. These patch files will be emailed to the Linux Kernel Mailing List (lkml). They can be applied with git am <patchfile>. I like to back these files up in another directory for future reference, and because I still make a lot of mistakes with git.

Step 5: checkpatch

You’re going to want to run the kernel’s linter before submitting. It will catch style issues and other potential issues.

$ ./scripts/checkpatch.pl 0001-x86-build-don-t-add-maccumulate-outgoing-args-w-o-co.patch
total: 0 errors, 0 warnings, 9 lines checked

0001-x86-build-don-t-add-maccumulate-outgoing-args-w-o-co.patch has no obvious style problems and is ready for submission.

If you hit issues here, fix up your changes, update your commit with git commit --amend <files updated>, rerun format-patch, then rerun checkpatch until you’re good to go.

Step 6: email the patch to yourself

This is good to do when you’re starting off. While I use mutt for responding to email, I use git send-email for sending patches. Once you’ve gotten the hang of the workflow, this step is optional, more of a sanity check.

$ git send-email \
0001-x86-build-require-only-gcc-use-maccumulate-outgoing-.patch

You don’t need to use command line arguments to cc yourself; assuming you set up git correctly, git send-email should add you to the cc line as the author of the patch. Send the patch just to yourself and make sure everything looks ok.

Step 7: fire off the patch

Linux is huge, and has a trusted set of maintainers for various subsystems. The MAINTAINERS file keeps track of these, but Linux has a tool to help you figure out where to send your patch:

$ ./scripts/get_maintainer.pl 0001-x86-build-don-t-add-maccumulate-outgoing-args-w-o-co.patch
Person A <person@a.com> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
Person B <person@b.com> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
Person C <person@c.com> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT))

With some additional flags, we can feed this output directly into git send-email.

$ git send-email \
--cc-cmd='./scripts/get_maintainer.pl --norolestats 0001-my.patch' \
--cc person@a.com \
0001-my.patch

Make sure to cc yourself when prompted. Otherwise, if you don’t subscribe to LKML, it will be difficult to reply to feedback. It’s also a good idea to cc any other author who has touched this functionality recently.

Step 8: monitor feedback

Patchwork for the LKML is a great tool for tracking the progress of patches. You should register an account there. I highly recommend bookmarking your submitter link. In Patchwork, click any submitter, then Filters (hidden in the top left), change submitter to your name, click apply, then bookmark it. Here’s what mine looks like. Not much today, and mostly trivial patches, but hopefully this post won’t age well in that regard.

Feedback may or may not be swift. For my first patch I had to ping a couple of times, but eventually got a response.

Step 9: responding to feedback

Update your file, run git commit <changed files> --amend to update your latest commit, then git format-patch -v2 HEAD~. Edit the patch file to put your v2 change notes below the dashes that follow the Signed-off-by lines (example), rerun checkpatch, and rerun get_maintainer if the set of files you modified has changed since v1. Next, you need to find the Message-ID to respond to the thread properly.
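Before hunting down the Message-ID, the respin itself looks roughly like this in shell form (the file and patch names are placeholders):

$ git commit drivers/foo/bar.c --amend          # fold the requested changes into the latest commit
$ git format-patch -v2 HEAD~                    # regenerate the patch as v2
$ ./scripts/checkpatch.pl v2-0001-my.patch      # re-lint the new patch file
$ ./scripts/get_maintainer.pl v2-0001-my.patch  # recheck recipients if the touched files changed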

In gmail, when viewing the message I want to respond to, you can click “Show Original” from the dropdown near the reply button. From there, copy the MessageID from the top (everything in the angle brackets, but not the brackets themselves). Finally, we send the patch:

$ git send-email \
--cc-cmd='./scripts/get_maintainer.pl --norolestats 0001-my.patch' \
--cc person@a.com \
--in-reply-to 2017datesandletters@somehostname \
0001-my.patch

We make sure to add anyone who may have commented on the patch from the mailing list to keep them in the loop. Rinse and repeat steps 2 through 9 as desired until the patch is signed off/acked or rejected.

Finding out when your patch gets merged is a little tricky; each subsystem maintainer seems to do things differently. For my first patch, I didn’t know it had gone in until a bot at Google notified me. The maintainers for the second and third patches had bots notify me that they got merged into their trees, but when they send Linus a PR and when that gets merged isn’t immediately obvious.

It’s not like Github where everyone involved gets an email that a PR got merged and the UI changes. While there are pros and cons to this fairly decentralized process, and while it kind of is git’s original designed-for use case, I’d be remiss not to mention that I really miss Github. Getting your first patch acknowledged and even merged is intoxicating and makes you want to contribute more; radio silence has the opposite effect.

Happy hacking!

(Thanks to Reddit user /u/EliteTK for pointing out that -v2 was more concise than --subject-prefix="Patch vX").

Planet MozillaAnalyzing Let's Encrypt statistics via Map/Reduce

I've been supplying the statistics for Let's Encrypt since they launched. In Q4 of 2016 their volume of certificates exceeded the ability of my database server to cope, and I moved it to an Amazon RDS instance.

Ow.

Amazon's RDS service is really excellent, but paying out of pocket hurts.

I've been slowly re-building my existing Golang/MySQL tools into a Golang/Python toolchain over the past few months. This switches from a SQL database with flexible, queryable columns to an EBS volume of folders containing certificates.

The general structure is now:

/ct/state/Y3QuZ29vZ2xlYXBpcy5jb20vaWNhcnVz
/ct/2017-08-13/qEpqYwR93brm0Tm3pkVl7_Oo7KE=.pem
/ct/2017-08-13/oracle.out
/ct/2017-08-13/oracle.out.offsets
/ct/2017-08-14/qEpqYwR93brm0Tm3pkVl7_Oo7KE=.pem

Fetching CT

Underneath /ct/state exists the state of the log-fetching utility, which is now a Golang tool named ct-fetch. It's mostly the same as the ct-sql tool I have been using, but rather than interact with SQL, it simply writes to disk.

This program creates folders for each notAfter date seen in a certificate. It appends each certificate to a file named for its issuer, so the path structure for cert data looks like:

/BASE_PATH/NOT_AFTER_DATE/ISSUER_BASE64.pem

The Map step

A Python 3 script ct-mapreduce-map.py processes each date-named directory. Inside, it reads each .pem and .cer file, decoding all the certificates within and tallying them up. When it's done in the directory, it writes out a file named oracle.out containing the tallies, and also a file named oracle.out.offsets with information so it can pick back up later without starting over.

map script running

The tallies contain, for each issuer:

  1. The number of certificates issued each day
  2. The set of all FQDNs issued
  3. The set of all Registered Domains (eTLD + 1 label) issued

The resulting oracle.out is large, but much smaller than the input data.
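To make the shape of that output concrete, here is a minimal Python sketch of the per-issuer tally; parse_certs() and registered_domain() are hypothetical helpers standing in for the real certificate parsing and eTLD+1 logic, and the real script's field names and on-disk format may differ:

import glob
import pickle

def map_directory(date_dir):
    # tallies[issuer] = per-day counts plus sets of FQDNs and registered domains
    tallies = {}
    for pem_path in glob.glob(date_dir + "/*.pem"):
        for issuer, issued_date, fqdns in parse_certs(pem_path):   # hypothetical helper
            t = tallies.setdefault(issuer, {"per_day": {}, "fqdns": set(), "reg_doms": set()})
            t["per_day"][issued_date] = t["per_day"].get(issued_date, 0) + 1
            t["fqdns"].update(fqdns)
            t["reg_doms"].update(registered_domain(f) for f in fqdns)  # hypothetical eTLD+1 helper
    with open(date_dir + "/oracle.out", "wb") as out:
        pickle.dump(tallies, out)   # the real on-disk format may differ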

The Reduce step

Another Python 3 script ct-mapreduce-reduce.py finds all oracle.out files in directories whose names aren't in the past. It reads each of these in and merges all the tallies together.

The result is the aggregate of all of the per-issuer data from the Map step.
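Continuing the sketch from the Map step (same assumed tally structure), the merge is just a sum of the per-day counters and a union of the sets:

def merge_tallies(total, partial):
    # fold one directory's tallies (partial) into the running aggregate (total)
    for issuer, t in partial.items():
        dst = total.setdefault(issuer, {"per_day": {}, "fqdns": set(), "reg_doms": set()})
        for day, count in t["per_day"].items():
            dst["per_day"][day] = dst["per_day"].get(day, 0) + count
        dst["fqdns"] |= t["fqdns"]
        dst["reg_doms"] |= t["reg_doms"]
    return total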

This will then get converted into the data sets currently available at https://ct.tacticalsecret.com/

State

I'm not yet at the step of synthesizing the data sets; each time I compare this data with my existing data sets there are discrepancies - but I believe I've improved my error reporting enough that after another re-processing, the data will match.

Performance

Right now I'm trying to do all this with a free-tier AWS EC2 instance and 100 GB of EBS storage; it's not currently clear to me whether that will be able to keep up with Let's Encrypt's issuance volume. Nevertheless, even an EBS-optimized instance is much less expensive than an RDS instance, even if the approach is less flexible.

performance monitor

Planet MozillaThis Week in Rust 182

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate of the week is PX8, a Rust implementation of an Open Source fantasy console. Thanks to hallucino for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

125 pull requests were merged in the last week.

New Contributors

  • Bastien Orivel
  • Dennis Schridde
  • Eduardo Pinho
  • faso
  • Liran Ringel

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

We're making good progress and the style is coming together. If you want to see the style in practice, check out our example or use the Integer32 Playground and select 'Proposed RFC' from the 'Format' menu. Be aware that implementation is work in progress.

Issues in final comment period:

Good first issues:

We're happy to mentor these, please reach out to us in #rust-style if you'd like to get involved

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Spent the last week learning rust. The old martial arts adage applies. Cry in the dojo, laugh in the battlefield.

/u/crusoe on Reddit.

Thanks to Ayose Cazorla for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Planet MozillaOCSP Telemetry in Firefox

Firefox Telemetry is such an amazing tool for looking at the state of the web. Keeler and I have recently been investigating the state of OCSP, which is a method for checking whether a TLS certificate has been revoked. Firefox is among the last of the web browsers to use OCSP on each secure website by default.

The telemetry keys that are relevant to our OCSP support are:

  • SSL_TIME_UNTIL_HANDSHAKE_FINISHED
  • CERT_VALIDATION_HTTP_REQUEST_SUCCEEDED_TIME
  • CERT_VALIDATION_HTTP_REQUEST_CANCELED_TIME
  • CERT_VALIDATION_HTTP_REQUEST_FAILED_TIME

The first (SSL_TIME_UNTIL_HANDSHAKE_FINISHED) is technically not an OCSP response time metric – it measures the handshake with the server you're connecting to, which may or may not include an OCSP fetch – but I include it as it’s highly correlated with successful responses (r=0.972) [1][2], and clearly related in the code. Before you get too excited though, the correlation likely only indicates that a user's network performance is generally similar between the web servers they're talking to and the OCSP servers being queried. Still, cool.

Otherwise the keys are what they say, and times are in milliseconds.

Speeding Up TLS Handshakes

We've been toying with ideas to speed up SSL_TIME_UNTIL_HANDSHAKE_FINISHED, the first of which was to reduce the timeout for OCSP for Domain-Validated certificates. Looking at the Cumulative Distribution Function of CERT_VALIDATION_HTTP_REQUEST_SUCCEEDED_TIME, only 11% of successful OCSP requests take more than 1 second to complete: https://mzl.la/2ogNaSN
OCSP Success time (ms) CDF

Correspondingly, we decreased our timeout on DV OCSP from 2 seconds to 1 second.

This led to a very modest improvement in SSL_TIME_UNTIL_HANDSHAKE_FINISHED (~2%) in Firefox Nightly; this is probably because after one successful OCSP query, the result is cached, so only the first connection is slower, while the telemetry metric includes all connections - whether or not they had an OCSP query.

We've got some other tricks still to try.

Speed Trends

Among CAs on mailing lists there’s been a lot of talk about their efforts to improve OCSP response times. Looking at the aggregate information (as we can't break this down by CA), we’ve not really seen any improvement in response time over the last 18 months [3].

OCSP success time (ms) over time

Next steps

We're going to try some more ideas to speed up the initial TLS connection for both DV and EV certificates.

Footnotes

  1. Correlation between TLS handshake and OCSP Success https://i.have.insufficient.coffee/handshakevsocsp_success.png
  2. Correlation between TLS handshake and OCSP Failure https://i.have.insufficient.coffee/handshakevsocsp_fail.png
  3. OCSP Successes (median, 95th percentile) over time by date. Note: this took a while to load: https://mzl.la/2ogT8TJ

Planet MozillaMozMEAO SRE Status Report - 5/16/2017

Here’s what happened on the MozMEAO SRE team from May 9th - May 16th.

Current work

Bedrock (mozilla.org)

Work continues on moving Bedrock to our Kubernetes infrastructure.

Postgres/RDS provisioning

A Postgres RDS instance has already been provisioned in us-east-1 for our Virginia cluster, and another was created in ap-northeast-1 to support the Tokyo cluster. Additionally, development, staging, and production databases were created in each region. This process was documented here.

Elastic Load Balancer (ELB) provisioning

We’ve automated the creation of ELB’s for Bedrock in Virginia and Tokyo. There are still a few more wrinkles to sort out, but the infra is mostly in place to begin The Big Move to Kubernetes.

MDN

Work continues to analyze the Apache httpd configuration from the current SCL3 datacenter config.

Downtime incident 2017-05-13

On May 13th, 2017, from 22:49 to 22:55, New Relic reported that MDN was unavailable. The site was slow to respond to page views, and was running long database queries. Log analysis showed a security scan of our database-intensive endpoints.

On May 14th, 2017, there were high I/O alerts on 3 of the 6 production web servers. This was not reflected in high traffic or a decrease in responsiveness.

Basket

The FxA team would like to send events (FXA_IDs) to Basket and Salesforce, and needed SQS queues in order to move forward. We automated the provisioning of dev/stage/prod SQS queues, and passed off credentials to the appropriate engineers.

The FxA team requested cross AWS account access to the new SQS queues. Access has been automated and granted via this PR.

Snippets

Snippets Stats Collection Issues 2017-04-10

A planned configuration change to add a Route 53 Traffic Policy for the snippets stats collection service caused a day’s worth of data to not be collected due to an SSL certificate error.

Careers

Autoscaling

In order to take advantage of Kubernetes cluster and pod autoscaling (which we’ve documented here), app memory and CPU limits were set for careers.mozilla.org in our Virginia and Tokyo clusters. This allows the careers site to scale up and down based on load.

Acceptance tests

Giorgos Logiotatidis added acceptance tests, consisting of a simple bash script and additional Jenkinsfile stages, to check if careers.mozilla.org pages return valid responses after deployment.

Downtime incident 2017-04-11

A typo was merged and pushed to production and caused a couple of minutes of downtime before we rolled-back to the previous version.

Decommission openwebdevice.org status

openwebdevice.org will remain operational in http-only mode until the board approves decommissioning. A timeline is unavailable.

Future work

Nucleus

We’re planning to move nucleus to Kubernetes, and then proceed to decommissioning current nucleus infra.

Basket

We’re planning to move basket to Kubernetes shortly after the nucleus migration, and then proceed to decommissioning existing infra.

Links

Planet MozillaAdd-ons Update – 2017/05

Here’s the state of the add-ons world this month.

The Road to Firefox 57 explains what developers should look forward to in regards to add-on compatibility for the rest of the year. So please give it a read if you haven’t already.

The Review Queues

In the past month, our team reviewed 1,132 listed add-on submissions:

  • 944 in fewer than 5 days (83%).
  • 21 between 5 and 10 days (2%).
  • 167 after more than 10 days (15%).

969 listed add-ons are awaiting review.

For two weeks we’ve been automatically approving add-ons that meet certain criteria. It’s a small initial effort (~60 auto-approvals) which will be expanded in the future. We’re also starting an initiative this week to clear most of the review queues by the end of the quarter. The change should be noticeable in the next couple of weeks.

However, this doesn’t mean we won’t need volunteer reviewers in the future. If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Visit our wiki page for more information.

Compatibility

We published the blog post for 54 and ran the bulk validation script. Additionally, we’ll publish the add-on compatibility post for Firefox 55 later this week.

Make sure you’ve tested your add-ons and either use WebExtensions or set the multiprocess compatible flag in your manifest to ensure they continue working in Firefox. And as always, we recommend that you test your add-ons on Beta.

You may also want to review the post about upcoming changes to the Developer Edition channel. Firefox 55 is the first version that will move directly from Nightly to Beta.

If you’re an add-ons user, you can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • psionikangel
  • lavish205
  • Tushar Arora
  • raajitr
  • ccarruitero
  • Christophe Villeneuve
  • Aayush Sanghavi
  • Martin Giger
  • Joseph Frazier
  • erosman
  • zombie
  • Markus Stange
  • Raajit Raj
  • Swapnesh Kumar Sahoo

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/05 appeared first on Mozilla Add-ons Blog.

Planet MozillaFirefox memory usage with multiple content processes

This is a continuation of my Are They Slim Yet series, for background see my previous installment.

With Firefox’s next release, 54, we plan to enable multiple content processes — internally referred to as the e10s-multi project — by default. That means if you have e10s enabled we’ll use up to four processes to manage web content instead of just one.

My previous measurements found that four content processes are a sweet spot for both memory usage and performance. As a follow up we wanted to run the tests again to confirm my conclusions and make sure that we’re testing on what we plan to release. Additionally I was able to work around our issues testing Microsoft Edge and have included both 32-bit and 64-bit versions of Firefox on Windows; 32-bit is currently our default, 64-bit is a few releases out.

The methodology for the test is the same as previous runs, I used the atsy project to load 30 pages and measure memory usage of the various processes that each browser spawns during that time.

Without further ado, the results:

Graph of browser memory usage, Chrome uses a lot.

So we continue to see Chrome leading the pack in memory usage across the board: 2.4X the memory of Firefox 32-bit and 1.7X of Firefox 64-bit on Windows. IE 11 does well; in fact it was the only one to beat Firefox. Its successor Edge, the default browser on Windows 10, appears to be striving for Chrome-level consumption. On macOS 10.12 we see Safari going the Chrome route as well.

Browsers included are the default versions of IE 11 and Edge 38 on Windows 10, Chrome Beta 59 on all platforms, Firefox Beta 54 on all platforms, and Safari Technology Preview 29 on macOS 10.12.4.

Note: For Safari I had to run the test manually, they seem to have made some changes that cause all the pages from my test to be loaded in the same content process.

Planet MozillaWannaCry is a Cry for VEP Reform

This weekend, a vulnerability in some versions of the Windows operating system resulted in the biggest cybersecurity attack in years. The so-called “WannaCry” malware relied on at least one exploit included in the latest Shadow Brokers release. As we have repeated, attacks like this are a clarion call for reform to the government’s Vulnerabilities Equities Process (VEP).

The exploits may have been shared with Microsoft by the NSA. We hope that happened, as it would be the right way to handle a vulnerability like this. Sharing vulnerabilities with tech companies enables us to protect our users, including the ones within the government. If the government has exploits that have been compromised, they must disclose them to software companies before they can be used widely, putting users at risk. The lack of transparency around the government’s decision-making processes here means that we should improve and codify the Vulnerabilities Equities Process in law.

The WannaCry attack also shows the importance of security updates in protecting users. Microsoft patched the relevant vulnerabilities in a March update, but users who had not updated remain vulnerable. Mozilla has shared some resources to help users update their software, but much more needs to be done in this area.

The internet is a shared resource and securing it is our shared responsibility. This means technology companies, governments, and even users have to work together to protect and improve the security of the internet.

The post WannaCry is a Cry for VEP Reform appeared first on The Mozilla Blog.

Planet WebKitResponsive Design for Motion

WebKit now supports the prefers-reduced-motion media feature, part of CSS Media Queries Level 5, User Preferences. The feature can be used in a CSS @media block or through the window.matchMedia() interface in JavaScript. Web designers and developers can use this feature to serve alternate animations that avoid motion sickness triggers experienced by some site visitors.

To explain who this media feature is for, and how it’s intended to work, we’ll cover some background. Skip directly to the code samples or prefers-reduced-motion demo if you wish.

Motion as a Usability Tool

CSS transforms and animations were proposed by WebKit engineers nearly a decade ago as an abstraction of Core Animation concepts wrapped in a familiar CSS syntax. The standardization of CSS Transforms/CSS Animations and adoption by other browsers helped pave the way for web developers of all skill levels. Richly creative animations were finally within reach, without incurring the security risk and battery cost associated with plug-ins.

The perceptual utility of appropriate, functional motion can increase the understandability and —yes— accessibility of a user interface. There are numerous articles on the benefits of animation to increase user engagement:

In 2013, Apple released iOS 7, which included heavy use of animated parallax, dimensionality, motion offsets, and zoom effects. Animation was used as a tool to minimize visual user interface elements while reinforcing a user’s understanding of their immediate and responsive interactions with the device. New capabilities in web and native platforms like iOS acted as a catalyst, leading the larger design community to a greater awareness of the benefits of user interface animation.

Since 2013, use of animation in web and native apps has increased by an order of magnitude.

Motion is Wonderful, Except When it’s Not

Included in the iOS accessibility settings is a switch titled “Reduce Motion.” It was added in iOS 7 to allow users the ability to disable parallax and app launching animations. In 2014, iOS included public API for native app developers to detect Reduce Motion (iOS, tvOS) and be notified when the iOS setting changed. In 2016, macOS added a similar user feature and API so developers could both detect Reduce Motion (macOS) and be notified when the macOS pref changed. The prefers-reduced-motion media feature was first proposed to the CSS Working Group in 2014, alongside the release of the iOS API.

Wait a minute! If we’ve established that animation can be a useful tool for increasing usability and attention, why should it ever be disabled or reduced?

The simplest answer is, “We’re not all the same.” Preference is subjective, and many power users like to reduce UI overhead even further once they’ve learned how the interface works.

The more important, objective answer is, “It’s a medical necessity for some.” In particular, this change is required for a portion of the population with conditions commonly referred to as vestibular disorders.

Vestibular Spectrum Disorder

Vestibular disorders are caused by problems affecting the inner ear and parts of the brain that control balance and spatial orientation. Symptoms can include loss of balance, nausea, and other physical discomfort. Vestibular disorders are more common than you might guess, affecting as many as 69 million people in the United States alone.

Most people experience motion sickness at some point in their lives, usually while traveling in a vehicle. Consider the last time you were car-sick, sea-sick, or air-sick. Nausea can be a symptom of situations where balance input from your inner ear seems to conflict with the visual orientation from your eyes. If your senses are sending conflicting signals to your brain, it doesn’t know which one to trust. Conflicting sensory input can also be caused by neurotoxins in spoiled food, hallucinogens, or other ingested poisons, so a common hypothesis is that conflicting sensory input due to motion or vestibular responses leads your brain to infer it’s being poisoned, and to seek to expel the poison through vomiting.

Whatever the underlying cause, people with vestibular disorders have an acute sensitivity to certain motion triggers. In extreme cases, the symptoms can be debilitating.

Vestibular Triggers

The following sections include examples of common vestibular motion triggers, and variants. If your site or web application includes similar animations, consider disabling or using variants when the prefers-reduced-motion media feature matches.

Trigger: Scaling and Zooming

Visual scaling or zooming animations give the illusion that the viewer is moving forward or backward in physical space. Some animated blurring effects give a similar illusion.

Note: It’s okay to keep many real-time, user-controlled direct manipulation effects such as pinch-to-zoom. As long as the interaction is predictable and understandable, a user can choose to manipulate the interface in a style or speed that works for their needs.

Example 1: Mouse-Triggered Scaling

How to Shoot on iPhone incorporates a number of video and motion effects, including a slowly scaling poster when the user’s mouse hovers over the video playback buttons.

The Apple.com team implemented prefers-reduced-motion to disable the scaling effect and background video motion.

Example 2: 3D Zoom + Blur

The macOS web site simulates flying away from Lone Pine Peak in the Sierra Nevada mountain range. A three-dimensional dolly zoom and animated blur give the viewer a sense that physical position and focal depth-of-field is changing.

On mobile devices, or in browsers that can’t support the more complicated animation, the effect is reduced to a simpler scroll view. By incorporating similar visual treatment, the simpler variant retains the original design intention while removing the effect. The same variant could be used with prefers-reduced-motion to avoid vestibular triggers.

Trigger: Spinning and Vortex Effects

Effects that use spiraling or spinning movements can cause some people with vestibular disorders to lose their balance or vertical orientation.

Example 3: Spinning Parallax Starfield

Viljami Salminen Design features a spinning, background star field by default.

It has incorporated prefers-reduced-motion to stop the spinning effect for users with vestibular needs. (Note: The following video is entirely motionless.)

Trigger: Multi-Speed or Multi-Directional Movement

Parallax effects are widely known, but other variants of multi-speed or multi-directional movement can also trigger vestibular responses.

Example 4: iOS 10 Site Scrolling

The iOS 10 site features images moving vertically at varying speeds.

A similar variant without the scroll-triggered image offsets could be used with prefers-reduced-motion to avoid vestibular triggers.

Trigger: Dimensionality or Plane Shifting

These animations give the illusion of moving two-dimensional (2D) planes in three-dimensional (3D) space. The technique is sometimes referred to as two-and-a-half-dimensional (2.5D).

Example 5: Plane-Shifted Scrolling

Apple’s Environment site features an animated solar array that tilts as the page scrolls.

The site supports a reduced motion variant where the 2.5D effect remains a still image.

Trigger: Peripheral Motion

Horizontal movement in the peripheral field of vision can cause disorientation or queasiness. Think back to the last time you read a book while in a moving vehicle. The center of your vision was focused on the text, but there was constant movement in the periphery. This type of motion is fine for some, and too much to stomach for others.

Example 6: Subtle, Constant Animation Near a Block of Text

After scrolling to the second section on Apple’s Environment site, a group of 10-12 leaves slowly floats near a paragraph about renewable energy.

In the reduced motion variant, these leaves are stationary to prevent peripheral movement while the viewer focuses on the nearby text content.

Take note that only the animations known to be problematic have been modified or removed from the site. More on that later.

Using Reduce Motion on the Web

Now that we’ve covered the types of animation that can trigger adverse vestibular symptoms, let’s cover how to implement the new media feature into your projects.

CSS @Media Block

An @media block is the easiest way to incorporate motion reductions into your site. Use it to disable or change animation and transition values, or serve a different background-image.

@media (prefers-reduced-motion) {
  /* adjust motion of 'transition' or 'animation' properties */
}
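For example, a decorative animation could be switched off, or swapped for a gentler transition, inside such a block; the class names here are made up for illustration:

@media (prefers-reduced-motion) {
  .hero-parallax {
    animation: none;                  /* drop the vestibular-trigger animation */
  }
  .hero-banner {
    transition: opacity 0.5s ease;    /* keep a simple cross-fade instead */
  }
}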

Review the prefers-reduced-motion demo source for example uses.

MediaQueryList Interface

Animations and DOM changes are sometimes controlled with JavaScript, so you can leverage the prefers-reduced-motion media feature with window.matchMedia and register an event listener to be notified whenever the user setting changes.

var motionQuery = matchMedia('(prefers-reduced-motion)');
function handleReduceMotionChanged() {
  if (motionQuery.matches) {
    /* adjust motion of 'transition' or 'animation' properties */
  } else { 
    /* standard motion */
  }
}
motionQuery.addListener(handleReduceMotionChanged);
handleReduceMotionChanged(); // trigger once on load if needed

Review the prefers-reduced-motion demo source for example uses.

Using the Accessibility Inspector

When refining your animations, you could toggle the iOS Setting or macOS Preference before returning to your app to view the result, but this indirect feedback loop is slow and tedious. Fortunately, there’s a better way.

The Xcode Accessibility Inspector makes it easier to debug your animations by quickly changing any visual accessibility setting on the host Mac or a tethered device such as an iPhone.

  1. Attach your iOS device via USB.
  2. Select the iOS device in Accessibility Inspector.
  3. Select the Settings Tab.

Alternate closed-captioned version of the Accessibility Inspector demo below.

Don’t Reduce Too Much

In some cases, usability can suffer when reducing motion. If your site uses a vestibular trigger animation to convey some essential meaning to the user, removing the animation entirely may make the interface confusing or unusable.

Even if your site uses motion in a purely decorative sense, only remove the animations you know to be vestibular triggers. Unless a specific animation is likely to cause a problem, removing it prematurely only succeeds in making your site unnecessarily boring.

Consider each animation in its context. If you determine a specific animation is likely to be a vestibular trigger, consider serving an alternate, simpler animation, or display another visual indicator to convey the intended meaning.

Takeaways

  1. Motion can be a great tool for increasing usability and engagement, but certain visual effects trigger physical discomfort in some viewers.
  2. Avoid vestibular trigger animations where possible, and use alternate animations when a user enables the “Reduce Motion” setting. Try out these settings, and use the new media feature when necessary. Review the prefers-reduced-motion demo source for example uses.
  3. Remember that the Web belongs to the user, not the author. Always adapt your site to fit their needs.

More Information

Planet MozillaW3C Advisory Board elections

The W3C Advisory Board (AB) election 2017 just started, and I am not running this time. I have said multiple times that the way people are elected is far too conservative, placing a high premium on "big names" or representatives of "big companies" on one hand, and tending to preserve the status quo in terms of AB membership on the other. Newcomers and/or representatives of smaller companies have almost zero chance to be elected. Even with the recent voting system changes, the problem remains.

Let me repeat here my proposal for both AB and TAG: two consecutive mandates only; after two consecutive mandates, elected members cannot run for re-election for at least one year.

But let's focus on current candidates. Or to be more precise, on their electoral program:

  1. Mike Champion (Microsoft), who has been on the AB for years, has a clear program that takes 2/3rds of his AB nominee statement.
    1. increase speed on standards
    2. bridge the gap existing between "fast" implementors and "slow" standards
    3. better position W3C internally
    4. better position W3C externally
    5. help the Web community
  2. Rick Johnson (VitalSource Technologies | Ingram Content Group) does not have a detailed program. He wants to help the Publishing side of W3C.
  3. Charles McCathie Nevile (Yandex) wants
    1. more pragmatism
    2. to take "into account the broad diversity of its membership both in areas of interest and in size and power" but he has "been on the AB longer than any current participant, including the staff", which does not promote diversity at all
  4. Natasha Rooney (GSMA) has a short statement with no program at all.
  5. Chris Wilson (Google Inc.), who has also been elected to the AB twice already, wants :
    1. to engage better developers and vendors
    2. to focus better W3C resources, with more agility and efficiency
    3. to streamline process and policies to let us increase speed and quality
  6. Zhang Yan (China Mobile Communications Corporation) does not really have a clear program besides "focus on WEB technology for 5G, AI and the Internet of things and so on"
  7. Judy Zhu (Alibaba (China) Co., Ltd.) wants:
    1. to make W3C more globalized (good luck on that one...)
    2. to make W3C Process more usable/effective/efficient
    3. increase W3C/industries collaboration (but isn't it an industrial consortium already?)
    4. increase agility
    5. focus more on security and privacy

If I except the mentions of agility and Process, let me express a gut feeling: this is terribly depressing. Candidacy statements from ten years ago look exactly the same. They quote the same goals. They're even phrased the same way... But in the meantime, we have major topics on the meta-radar (non-exhaustive list):

  • the way the W3C Process is discussed, shaped and amended is so incredibly long it's ridiculous. Every single major topic Members raised in the last ten years took at least 2 years (if not six years) to solve, leaving Groups in a shameful mess. The Process is NOT a Technical Report that requires time, stability and implementations. It's our Law, that impacts our daily life as Members. When an issue is raised, it's because it's a problem right now and people raising the issue expect to see the issue solved in less than "years", far less than years.
  • no mention at all of finances! The finances of the W3C are almost a taboo, that only a few well-known zealots like yours truly discuss, but they feed all W3C activities. After years of instability, and even danger, can the W3C afford keeping its current width without cutting some activities and limiting its scope? Can the W3C avoid new revenue streams? Which ones?
  • similarly, no mention of transparency! I am not speaking of openness of our technical processes here, I am very clearly and specifically speaking of the transparency of the management of the Consortium itself. The way W3C is managed is far too vertical and it starts being a real problem, and a real burden. We need changes there. Now.
  • the role of the Director, another taboo inside W3C, must be discussed. It is time to acknowledge the fact the Director is not at the W3C any more. It's time to stop signing all emails "in the name of the Director", handling all transition conference calls "in the name of the Director" but almost never "with the Director". I'm not even sure we need another Director. It's time to acknowledge the fact Tim should become Honorary Director - with or without veto right - and distribute his duties to others.
  • we need a feedback loop and very serious evaluation of the recent reorganization of the W3C. My opinion is as follows: nobody knows who to contact and it's a mess of epic magnitude. The Functional leaders centralize input and then re-dispatch it to others, de facto resurrecting Activities and adding delays to everything. The reorg is IMHO a failure, and a rather expensive one in terms of effectiveness.
  • W3C is still not a legal entity, and it does not start being a burden... it's been a burden for eons. The whole architecture of W3C, with regional feet and a too powerful MIT, is a scandalous reminiscence of the past.
  • our election system for AB and TAG is too conservative. People stay there for ages, while all our technical world seems to be reshaped every twelve months. My conclusion is simple, and more or less matches what Mike Champion said: the Consortium is not tailored any more to match its technical requirements. Where we diverge: I understand Mike prefers evolution to revolution; I think evolution is not enough any more and revolution is not avoidable any more. We probably need to rethink the W3C from the ground up.
  • Incubation has been added to the W3C Process in a way that is perceived by some as a true scandal. I am not opposed at all to Incubation, but W3C has shown a lack of caution, wisdom, consensus and obedience to its own Process that is flabbergasting. W3M acts fast when it needs to remind a Member about the Process, but W3M itself seems to work around the Process all the time. The way Charters under review are modified during the Charter Review itself is the blatant example of that situation.

Given how far the candidacy statements are from what I think are the real and urgent issues of the W3C, I'm not even sure I am willing to vote... I will eventually cast a ballot, sure, but I stand by my opinion above: this is depressing.

I am now 50 years old, I have been contributing to W3C for, er, almost 22 years and that's why I will not run any more. We need younger people, we need different perspectives, we need different ways of doing, we need different futures. We need a Consortium of 2017, we still have a Consortium of 2000, we still have the people of 2000. If I was 20 today, born with the Web as a daily fact, how would I react looking at W3C? I think I would say « oh that old-school organization... » and that alone explains this whole article.

Conclusion for all W3C AB candidates: if you want my vote, you'll have to explain better, much better, where you stand in front of these issues. What do you propose, how do you suggest to implement it, what's your vision for W3C 2020. Thanks.

Planet MozillaGuys! The Mojave Desert is hot and dry. Who knew?

Four days in, 77mi so far, at Julian overnight. Longest waterless stretch was 17.8mi, but I did end up only drinking the water I started with on the first 20mi day, so I suppose it was as if it were a 20mi waterless stretch, even if water was plentiful. (That said, this year was so rainy/snowy that a ton of water sources that usually would be dry, are still running now.)

Starting group picture

First rail crossing

Not a rattlesnake across the trail

A PCT sign that says

Unexploded military ordnance nearby! Woo!

An overlook, with other hiker trash in the foreground

View on a valley

Overnight campsite at sunset - good view, but very windy

Campsite in morning

Prickly pear-looking cactus

And, my overnight lodgings in Julian:

Overnight on the floor of a small restaurant

Planet MozillaCaddy Webserver and MOSS

The team behind the Caddy secure-by-default webserver have written a blog post on their experience with MOSS:

The MOSS program kickstarted a new era for Caddy: turning it from a fairly casual (but promising!) open source project into something that is growing more than we would have hoped otherwise. Caddy is seeing more contributions, community engagement, and development than it ever has before! Our experience with MOSS was positive, and we believe in Mozilla’s mission. If you do too, consider submitting your project to MOSS and help make the Internet a better place.

Always nice to find out one’s work makes a difference. :-)

Planet Mozillatext-shadow in ::selection, still not great

7 years ago I tweeted my only good tweet:

please kill the text-shadow in ::selection. obsessive compulsive text highlighters like myself go blind

screenshot of text selection ugliness

(apologies for hideous screenshot, 2010 was a weird time for web design, I guess)

Some internet hipsters agreed, so they put a default text-shadow: none rule for ::selection in html5 boilerplate's main.css.
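That rule looks roughly like this (a minimal sketch from memory, not the exact boilerplate source):

::selection {
  background: #b3d4fc;   /* the boilerplate also sets a selection color */
  text-shadow: none;
}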

Anyways, we recently got a bug about nearly the same exact issue: if you have a white background and set a white text-shadow on the copy (wat), things can get weird when someone makes a selection:

#wrapper {
  background-color: #fff;
}
.post-meta {
  text-shadow: 2px 0px 1px #fff;
}

So don't do that?

Anyways. The most important takeaway (for me) is that the devs over at thegunmag.com don't follow me on twitter, which is super rude when you think about it.

Planet MozillaThis Week In Servo 102

In the last week, we landed 140 PRs in the Servo organization’s repositories.

Planning and Status

Our roadmap is available online, including the overall plans for 2017. Q2 plans will appear soon; please check it out and provide feedback!

This week’s status updates are here.

Notable Additions

  • fabrice fixed an issue loading stylesheets with unusual MIME types.
  • ferjm allowed retrieving line numbers for CSS rules in Stylo.
  • behnam generated many conformance tests for the unicode-bidi crate.
  • canaltinova shared quirks information between Stylo and Servo.
  • MortimerGoro fixed an unsafe transmute that was causing crashes on Android.
  • mrobinson corrected the behaviour of the scrollBy API to better match the specification.
  • jdm removed incorrect buffer padding in ipc-channel on macOS.
  • kvark fixed an assertion failure when rendering fonts on unix.
  • aneeshusa implemented per-repository labelling actions in highfive.
  • nox refactored the implementation of CSS position values to reduce code duplication.
  • UK992 reenabled all unit tests on TravisCI.
  • jdm extended the cross-origin canvas security tests to cover same-origin redirects.
  • cbrewster made non-initial about:blank navigations asynchronous.
  • jdm fixed a GC hazard stemming from the transitionend event.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Planet MozillaTwo years of Rust

Rust is a language for confident, productive systems programming. It aims to make systems programming accessible to a wider audience, and to raise the ambitions of dyed-in-the-wool systems hackers.

It’s been two years since Rust 1.0 was released. Happy second birthday, Rust!

Group picture from RustFest Berlin

Rustaceans at RustFest Berlin, September 2016. Picture by Fiona Castiñeira

Over these two years, we have demonstrated stability without stagnation, maintaining backwards compatibility with version 1.0 while also making many improvements. Conveniently, Rust’s birthday is a bit under halfway through 2017, which makes this a great time to reflect not only on the progress in the last year but also on the progress of our 2017 Roadmap goals.

After reading this post, if you’d like to give us your feedback on how we’re doing and where Rust should focus next, please fill out our 2017 State of Rust survey.

But first, let’s do the numbers!

Rust in numbers

A lot has happened since Rust’s first birthday:

  • 10,800 commits by 663 contributors (438 of them new this year) added to the core repository;
  • 56 RFCs merged;
  • 9 minor releases and 2 patch releases shipped;
  • 4,405 new crates published;
  • 284 standard library stabilizations;
  • 10 languages rust-lang.org has been translated into;
  • 48 new companies running Rust in production;
  • 4 new teams (Docs, Style, Infrastructure, and the Unsafe Guidelines strike team);
  • 24 occasions of adding people to teams, 6 retirings of people from teams;
  • 3 babies born to people on the Rust teams;
  • 2 years of stability delivered.

On an average week this year, the Rust community merged 1 RFC and published 83 new crates. Rust topped the “most loved language” for the second year in a row in the StackOverflow survey. Also new this year is thanks.rust-lang.org, a site where you can browse contributors by release.

Rust in production

In addition to the 48 new Rust friends, we now have a Rust jobs website! More and more companies are choosing Rust to solve problems involving performance, scaling, and safety. Let’s check in on a few of them.

Dropbox is using Rust in multiple high-impact projects to manage exabytes of data on the back end, where correctness and efficiency are critical. Rust code is also currently shipping in the desktop client on Windows running on hundreds of millions of machines. Jamie Turner recently spoke at the SF Rust Meetup about the details on how Rust helps Dropbox use less RAM and get more throughput with less CPU.

Mozilla, Rust’s main sponsor, has accelerated their use of Rust in production. Not only did Servo start shipping nightly builds, Firefox 48 marked the first Firefox release that included Rust code as part of the Oxidation project. Project Quantum, announced in October 2016, is an effort to incrementally adopt proven parts of Servo into Firefox’s rendering engine, Gecko. Check out this blog series that’s just getting started for a detailed look at Project Quantum.

GNOME, a free and open source desktop environment for Linux, went from experimenting with Rust in librsvg in October 2016 to a hackfest in March to work on the interoperability between GNOME and Rust to enable more GNOME components to be written in Rust. The hackfest participants made good progress, be sure to check out the reports at the bottom of the hackfest page for all the details. We’re all excited about the possibilities of Rust and GNOME working together.

This year, npm started using Rust in production to serve JavaScript packages. The Rust pieces eliminate performance bottlenecks in their platform that serves around 350 million packages a day. Ashley Williams recently gave a talk at RustFest in Ukraine about npm’s experience with Rust in production; check out the video.

This is just a sampling of the success stories accumulating around Rust. If you’re using Rust in production, we want to hear yours too!

Rust in community

Speaking of conferences, we’ve had four Rust conferences in the last year:

And we have at least three conferences coming up!

That’s not even including the 103 meetups worldwide about Rust. Will you be the one to run the fourth conference or start the 104th meetup? Contact the community team for help and support!

Rust in 2017

The 2017 Roadmap goals have been great for focusing community efforts towards the most pressing issues facing Rust today. Of course we’d love for every aspect of Rust to improve all the time, but we don’t have an infinite number of contributors with an infinite amount of time available yet!

Let’s check in on some of the initiatives in each of the goals in the roadmap. The linked tracking issues give even more detail than the summaries here.

Rust should have a lower learning curve

The second edition of The Rust Programming Language Book is one chapter shy of having its initial content complete. There’s lots more editing to be done to get the book ready for publication in October, though. The print version is currently available for preorder from No Starch, and the online version of the second edition has boarded the beta train and will be an option in the documentation shipped with Rust 1.18.0. Steve and I have gotten feedback that the ownership chapter especially is much improved and has helped people understand ownership related concepts better!

The Language Ergonomics Initiative is another part of the lower learning curve goal that has a number of improvements in its pipeline. The language team is eager to mentor people (another goal!) who are interested in getting involved with moving these ergonomic improvement ideas forward by writing RFCs and working with the community to flesh out the details of how these improvements would work. Comment on the tracking issue if you’d like to jump in.

Also check out:

Rust should have a pleasant edit-compile-debug cycle

Waiting on the compiler is the biggest roadblock preventing the Rust development workflow from being described as “pleasant”. So far, a lot of work has been done behind the scenes to make future improvements possible. Those improvements are starting to come to fruition, but rest assured that this initiative is far from being considered complete.

One of the major prerequisites to improvements was adding MIR (Mid-level Intermediate Representation) to the compiler pipeline. This year, MIR became a default part of the compilation process.

Because of MIR, we’re now able to work on adding incremental recompilation. Nightly builds currently offer “beta” support for it, permitting the compiler to skip over code generation for code that hasn’t changed. We are in the midst of refactoring the compiler to support finer-grained incremental computation, allowing us to skip type-checking and other parts of compilation as well. This refactoring should also offer better support for the IDE work (see next section), since it enables the compiler to do things like compile a single function in isolation. We expect to see the next stage of incremental compilation becoming available over the next few months. If you’re interested in getting involved, please check out the roadmap issue #4, which is updated periodically to reflect the current status, as well as places where help is needed.

The February post on the “beta” support showed that recompiling in release mode will often be five times as fast with incremental compilation! This graph shows the improvements in compilation time when making changes to various parts of the regex crate and rebuilding in release mode:

Graph showing improved time with incremental compilation

Try out incremental compilation on nightly Rust with CARGO_INCREMENTAL=1 cargo <command>!

Thanks to Niko Matsakis for this incremental compilation summary!

We’ve also made some progress on the time it takes to do a full compilation. On average, compile times have improved by 5-10% in the last year, but some worst-case behavior has been fixed that results in >95% improvements in certain programs. Some very promising improvements are on the way for later this year; check out perf.rust-lang.org for monitoring Rust’s performance day-to-day.

Rust should provide a basic, but solid IDE experience

As part of our IDE initiative, we created the Rust Language Server project. Its goal is to create a single tool that makes it easy for any editor or IDE to have the full power of the Rust compiler for error checking, code navigation, and refactoring by using the standard language server protocol created by Microsoft and Eclipse.

While still early in its life, today the RLS is available from rustup for nightly users. It provides type information on hover, error messages as you type, and different kinds of code navigation. It even provides refactoring and formatting as unstable features! It works with projects as large as Cargo. We’re excited to watch the RLS continue to grow and hope to see it make its way to stable Rust later this year.

Thanks to Jonathan Turner for this RLS summary!

Rust should have 1.0-level crates for essential tasks, and Rust should provide easy access to high quality crates

The recent post on the Libz Blitz details the Library Team’s initiative to increase the quality of crates for common tasks; that post is excellent so I won’t repeat it here. I will note that many of the issues that the Libs Team is going to create will be great starter issues. For the blitz to be the best it can be, the Libs Team is going to need help from the community– that means YOU! :) They’re willing to mentor people interested in contributing.

To make awesome crates easier to find for particular purposes, crates.io now has categories that let crate authors indicate the use case of their crate. Crates can also now have CI badges, and more improvements to crates.io’s interface are coming that will help you choose the crates that fit your needs.

Rust should be well-equipped for writing robust, high-scale servers

One of the major events in Rust’s ecosystem in the last year was the introduction of a zero-cost futures library, and a framework, Tokio, for doing asynchronous I/O on top of it. These libraries are a boon for doing high-scale, high-reliability server programming, productively. Futures have been used with great success in C++, Scala, and of course JavaScript (under the guise of promises), and we’re reaping similar benefits in Rust. However, the Rust library takes a new implementation approach that makes futures allocation-free. And Tokio builds on that to provide a futures-enabled event loop, and lots of tools for quickly implementing new protocols. A simple HTTP server using Tokio is among the fastest measured in the TechEmpower server benchmarks.
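
To give a flavor of the programming model, here is a minimal, hypothetical sketch against the futures 0.1-style API described above (it assumes futures = "0.1" in Cargo.toml). The chain of combinators is just a value: nothing runs until the future is driven to completion, and a real server would hand that value to a Tokio event loop rather than block with wait as this standalone example does.

    extern crate futures;

    use futures::Future;
    use futures::future;

    fn main() {
        // Describe the work as a chain of combinators. This builds one nested
        // state machine as a plain value; nothing has executed yet.
        let work = future::ok::<u32, ()>(2)
            .map(|n| n + 1)
            .and_then(|n| future::ok(n * 10));

        // Drive the future to completion. A Tokio-based server would instead
        // run futures like this on its event loop.
        assert_eq!(work.wait(), Ok(30));
    }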

Speaking of protocols, Rust’s full-blown HTTP story is solidifying, with Hyper’s master branch currently providing full Tokio support (and official release imminent). Work on HTTP/2 is well under way. And the web framework ecosystem is growing too. For example, Rocket came out this year: it’s a framework that marries the ergonomics and flexibility of a scripting framework with the performance and reliability of Rust. Together with supporting libraries like the Diesel ORM, this ecosystem is showing how Rust can provide slick, ergonomic developer experiences without sacrificing an ounce of performance or reliability.
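
As a taste of those ergonomics, here is a sketch closely following Rocket’s published hello-world of the time; it needs nightly Rust plus the rocket and rocket_codegen crates, and the exact attributes vary by Rocket version, so treat it as illustrative rather than copy-paste-ready.

    #![feature(plugin)]
    #![plugin(rocket_codegen)]

    extern crate rocket;

    // A handler is an ordinary function; the attribute describes the route.
    #[get("/")]
    fn index() -> &'static str {
        "Hello, Rust! Happy birthday!"
    }

    fn main() {
        // Mount the route and start serving; configuration such as the port
        // comes from Rocket.toml or environment variables.
        rocket::ignite().mount("/", routes![index]).launch();
    }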

Over the rest of this year, we expect all of the above libraries to mature significantly; a middleware ecosystem to sprout up; the selection of supported protocols and services to grow; and, quite possibly, an async/await notation that works natively with Rust’s futures to tie all of this together.

Thanks to Aaron Turon for this server-side summary!

Rust should integrate easily into large build systems

Cargo, Rust’s native package manager and build system, is often cited as one of people’s favorite aspects of Rust. But of course, the world runs on many build systems, and when you want to bring a chunk of the Rust ecosystem into a large organization that has its own existing build system, smooth integration is paramount.

This initiative is mostly in the ideas stage; we’ve done a lot of work with stakeholders to understand the challenges in build system integration today, and we think we have a good overall vision for how to solve them. There’s lots of great discussion on the tracking issue, which has already resulted in a few concrete Cargo issues.

There are a lot of details yet to be worked out; keep an eye out for more improvement in this area soon.

Rust’s community should provide mentoring at all levels

The “all levels” part of this roadmap item is important to us: it’s about onboarding first-time contributors as well as adding folks all the way up at the core team level (like me, hi!).

For people just getting started with Rust, we held RustBridge events before RustFest Berlin and Rust Belt Rust. There’s another coming up, planned for the day before RustConf in Portland!

The Mozilla Rust folks are going to have Outreachy and GSoC interns this summer working on a variety of projects.

We’ve also had success involving contributors when there are low-commitment, high-impact tasks to be done. One of those efforts was improving the format of error messages: check out the 82 participants on this issue! The Libz Blitz mentioned in a previous section is set up specifically to be another source of mentoring opportunities.

In January, the Language Team introduced shepherds, which is partly about mentoring a set of folks around the Language Team. The shepherds have been quite helpful in keeping RFC discussions moving forward!

We’ve also been working to grow both the number and size of subteams, to create more opportunities for people to step into leadership roles.

There are also less formal ways that we’ve been helping people get involved with various initiatives. I’ve worked with many people at many places in their Rust journey: helping out with the conferences, giving their first conference talks, providing feedback on the book, working on crates, contributing to Rust itself, and joining teams! While it’s hard to quantify scenarios like these, everywhere I turn, I see Rustaceans helping other Rustaceans, and I’m grateful this is part of our culture.

Rust in the future

At two years old, Rust is finding its way into all corners of programming, from web development to embedded systems and even your desktop. The libraries and the infrastructure are maturing, we’re paving the on-ramp, and we’re supporting each other. I’m optimistic about the direction Rust is taking!

Happy birthday, Rust! Here’s to many more! 🎉
