Planet Mozilla: Firewalling, part 2

I previously wrote about setting up multiple VLANs to segment your home network and improve the security characteristics. Since then I've added more devices to my home network, and keeping everything in separate VLANs was looking like it would be a hassle. So instead I decided to put everything into the same VLAN but augment the router's firewall rules to continue restricting traffic between "trusted" and "untrusted" devices.

The problem is that it didn't work. I set up all the firewall rules, but for some reason they weren't being respected. After (too much) digging I finally discovered that you have to install the kmod-ebtables package to get this to actually work. Without it, the netfilter code in the kernel doesn't filter traffic between hosts on the same VLAN, so any rules you have for that get ignored. After installing kmod-ebtables my firewall rules started working. Yay!
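For reference, on OpenWRT/LEDE the module can be installed with opkg; a minimal sketch (the package and init script names are the standard ones, but may vary between releases):

opkg update
# kernel module that lets netfilter see bridged/same-VLAN traffic
opkg install kmod-ebtables
# reload the firewall so the existing rules are re-applied
/etc/init.d/firewall restart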

Along the way I also discovered that OpenWRT is basically dead now (they haven't had a release in a long time) and the LEDE project is the new fork/successor project. So if you were using OpenWRT you should probably migrate. The migration was relatively painless for me, since the images are compatible.

There's one other complication that I've run into but haven't yet resolved. After upgrading to LEDE and installing kmod-ebtables, for some reason I couldn't connect between two FreeBSD machines on my network via external IP and port forwarding. The setup is like so:

  • Machine A has internal IP address 192.168.1.A
  • Machine B has internal IP address 192.168.1.B
  • The router's external IP address is E
  • The router is set to forward port P to machine A
  • The router is set to forward port Q to machine B

Now, from machine B, if I connect to E:P, it doesn't work. Likewise, from machine A, connecting to E:Q doesn't work. I can connect using the internal IP address (192.168.1.A:P or 192.168.1.B:Q) just fine; it's only via the external IP that it doesn't work. All the other machines on my network can connect to E:P and E:Q fine as well. It's only machines A and B that can't talk to each other. The thing A and B have in common is that they are running FreeBSD; the other machines I tried were Linux/OS X.

Obviously the next step here is to fire up tcpdump and see what's going on. Funny thing is, when I run tcpdump on my router, the problem goes away and the machines can connect to each other. So there's that. I'm sure with more investigation I'll get to the bottom of this but for now I've shelved it under "mysteries that I can work around easily". If anybody has run into this before I'd be interested in hearing about it.

Also if anybody knows of good tools to visualize and debug iptables rules I'd be interested to try them out, because I haven't found anything good yet. I've been using the counters in the tables to try and figure out which rules the packets are hitting but since I'm debugging this "live" there's a lot of noise from random devices and the counters are not as reliable as I'd like.
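For what it's worth, the counters can at least be watched directly from the shell; a rough sketch (chain names will vary with your setup):

# show FORWARD rules with packet/byte counters and rule numbers
iptables -L FORWARD -v -n --line-numbers
# zero the counters so the next test starts from a clean slate
iptables -Z FORWARD
# watch the counters live while reproducing the problem
watch -n 1 'iptables -L FORWARD -v -n'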

Planet Mozilla: Celebrating LCARS With One Last Theme Release

30 years ago, a lot of people were wondering what the new Star Trek: The Next Generation series would bring when it debuted in September 1987. The principal cast had been announced, there was a new Enterprise, and even the pilot's title was known, but - as always with a new production - a lot of questions were open, just like today in 2017 with Star Trek Discovery, which is set to debut in September, almost to the day on the 30th anniversary of The Next Generation.

Given that the story was set 100 years after the original, and what was considered "futuristic" had changed significantly between the late 1960s and 1980s, the design language had to be significantly updated, including the labels and screens on the new Enterprise. Scenic art supervisor and technical consultant Michael Okuda, who had done starship computer displays for The Voyage Home, was hired to do those for the new series, and was instructed by series creator and show runner Gene Roddenberry that this futuristic ship should have "simple and clean" screens and not much animation (the latter probably also due to budget and technology constraints - the "screens" were built out of colored plexiglass with lights behind them).



With that, Okuda created a look that became known as "LCARS" (Library Computer Access and Retrieval System, which actually was the computer system's name). Instead of the huge gray panels with big brightly-colored physical buttons of the original series, The Next Generation had touch-screen panels with a dark background and flat-style buttons in pastel color tones. The flat design, including the fonts and frames, is very similar to quite a few designs we see on touch-friendly mobile apps 30 years later. Touch screens (and even cell phones and tablets) were pretty much unheard of and "future talk" when Mike Okuda created those designs, but he came to pretty similar design conclusions as those who design UIs for modern touch-screen devices (which is pretty awesome when you think of it).

I was always fascinated with that style of UI design even on non-touch displays (and am even more so now that I'm using touch screens daily), and so 18 years ago, when I did my first experiments with Mozilla's new browser-mail all-in-one package and realized that the UI was displayed with the same rendering engine and the same or very similar technologies as websites, I immediately did some CSS changes to see if I could apply LCARS-like styling to this software - and awesomeness ensued when I found out that it worked!

Over the years, I created a full LCARStrek theme from those experiments (the first release, 0.1, was for Mozilla suite nightlies in late 2000), adapted it to Firefox (starting with LCARStrek 2.1 for Firefox 4), refined it, and even made it work with large Firefox redesigns. But as you may have heard, huge changes are coming to Firefox add-ons, and full-blown themes in the manner of LCARStrek cannot be done in the new world as it stands right now, so I'm forced to stop developing this theme.

Given that LCARS has a huge anniversary this year, I want to end my work on this theme on a high note instead of a too-sad one, so right alongside the very awesome Star Trek Las Vegas convention, which just celebrated 30 years of The Next Generation, of course, I'm doing one last LCARStrek release this weekend, with special thanks to Mike Okuda, whose great designs made this theme possible in the first place (picture taken by myself at that convention just two weeks ago, where he was talking about the backlit LCARS panels that were dubbed "Okudagrams" by other crew members):
[Photo: Mike Okuda speaking at the Star Trek Las Vegas convention]

Live long and prosper!

Planet Mozilla: Lantea Maps: GPS Track Upload to OpenStreetMap Broken

During my holidays, when I was using Lantea Maps daily to record my GPS tracks, I suddenly found out one day that upload of the tracks to OpenStreetMap was broken.

I had added that functionality so that people (including myself) could get their GPS tracks out of their mobile devices and into a place from which they can download them anywhere. A bonus was that the tracks were available to the OpenStreetMap project as guides to improve the maps.

After I had wasted about EUR 50 of data roaming costs to verify that it was not only broken on hotel networks but also my mobile network that usually worked, I tried on a desktop Nightly and used the Firefox devtools to find out the actual error message, which was a CORS issue. I filed a GitHub issue but apparently it was an intentional change and OpenStreetMap doesn't support GPS track uploads any more in a way that is simple for pure web apps and also doesn't want to re-add support for that. Find more details in the GitHub issue.
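If you want to see the failure for yourself, a quick way to check whether an endpoint sends CORS headers is a preflight-style request with curl; a sketch (the upload URL and origin below are assumptions, not necessarily the exact ones Lantea Maps uses):

# send an OPTIONS request with an Origin header and look for
# Access-Control-Allow-Origin in the response headers
curl -i -X OPTIONS \
  -H "Origin: https://lantea.kairo.at" \
  -H "Access-Control-Request-Method: POST" \
  https://api.openstreetmap.org/api/0.6/gpx/create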

Because of that, I think that this will mark the end of uploading tracks from Lantea Maps to OpenStreetMap. When I have time, I will probably add a GPS track store on my server instead, where third-party changes can't break stuff while I'm on vacation. If any Lantea Maps user wants their tracks on OpenStreetMap in the future, they'll need to manually upload the tracks themselves.

Planet Mozilla: The State of CRLs Today

Certificate Revocation Lists (CRLs) are a way for Certificate Authorities to announce to their relying parties (e.g., users validating the certificates) that a certificate they issued should no longer be trusted, i.e., that it has been revoked.

As the name implies, they're just flat lists of revoked certificates. This has advantages and disadvantages:

Advantages:

  • It's easy to see how many revocations there are
  • It's easy to see differences from day to day
  • Since processing the list is up to the client, it doesn't reveal what information you're interested in

Disadvantages:

  • They can quickly get quite big, leading to significant latency while downloading a web page
  • They're not particularly compressible
  • There's information in there you probably will never care about

CRLs aren't much used anymore; Firefox stopped checking them in version 28 in 2014, in favor of online status checks (OCSP).

The Baseline Requirements nevertheless still require that CRLs, if published, remain available:

4.10.2 Service availability

The CA SHALL operate and maintain its CRL and OCSP capability with resources sufficient to provide a response time of ten seconds or less under normal operating conditions.

Since much has been written about the availability of OCSP, I thought I'd check in on CRLs.

Collecting available CRLs

When a certificate's status will be available in a CRL, that's encoded into the certificate itself (RFC 5280, 4.2.1.13). If that field is there, we should expect the CRL to survive for the lifetime of the certificate.
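As an aside, you can inspect that field for any given certificate with OpenSSL (certificate.pem is a placeholder for your own file):

# print the certificate and pull out the CRL Distribution Points extension
openssl x509 -in certificate.pem -noout -text | grep -A 4 "CRL Distribution Points"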

I went to Censys.io and after a quick request to them for SQL access, I ran this query:

SELECT parsed.extensions.crl_distribution_points  
   FROM certificates.certificates
WHERE validation.nss.valid = true  
   AND parsed.extensions.crl_distribution_points LIKE 'http%' 
   AND parsed.validity.end >= '2017-07-18 00:00'
GROUP BY parsed.extensions.crl_distribution_points  

Today, this yields 3,035 CRLs, the list of which I've posted on Github.

Downloading those CRLs into a directory downloaded_crls can be done serially using wget quite simply, logging to a file named wget_log-all_crls.txt:

mkdir downloaded_crls  
script wget_log-all_crls.txt wget --recursive --tries 3 --level=1 --force-directories -P downloaded_crls/ --input-file=all_crls.csv  

This took 2h 36m 31s on my Internet connection.

Analyzing the Download Process

Out of 3,035 CRLs, I ended up downloading 2,993 files. The rest failed.

I post-processed the command line wget log (wget_log-all_crls.txt) using a small Python script to categorize each CRL download by how it completed.
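The script itself isn't reproduced here, but the same idea can be roughly approximated in the shell by counting the outcome lines wget writes to its log (the exact message strings may differ slightly between wget versions):

# successful downloads
grep -c "saved \[" wget_log-all_crls.txt
# HTTP errors, grouped by status code
grep -o "ERROR [0-9]*" wget_log-all_crls.txt | sort | uniq -c
# hostnames that never resolved
grep -c "unable to resolve host address" wget_log-all_crls.txt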

Ignoring all the cases where requesting a file simply returned the file straightaway (hey, those cases are boring), here's the graphical breakdown of the other cases:

Problems with CRL Downloads

Missing CRLs

There are 40 CRLs that weren't available to me when I checked, or more simply put, 1% of CRLs appear to be dead.

Some of them are dead in temporary-looking ways, like the load balancer giving a 500 Internal Server Error, some of them have hostnames that aren't resolving in DNS.

These aren't currently resolving for me:

Searching Censys' dataset, these CRLs are only used by intermediate CAs, so presumably if one of the handful of CA certificates covered ever needed to be revoked, their IT staff could fix these links.

Except for http://atospki/, which is clearly an internal name. Certificates with mistakes like that can only be revoked via technologies like OneCRL and CRLSets.

The complete list of 400s, 404s, and timeouts by URL is available in crl_resolutions.csv.

Are the missing CRLs a problem?

This doesn't attempt to eliminate possible false-positives where the CRL was for a certificate which is revoked by its parent. For example, if there is a chain Root -> A -> B -> C, and A is revoked, it may not be important that A's CRL exist. (Thanks, @sleevi for pointing this out!)

Redirects

As could be expected, there were a fair number of CRLs which are now serviced by redirects. Interestingly, while section 7.1.2.2(b) of the Baseline Requirements requires CRLs to have an "HTTP URL", 13 of the CRL fetches redirect to HTTPS, two of them through HSTS headers [1].

There was a recent thread on Mozilla.dev.security.policy about OCSP responders that were only available over HTTPS; these are problematic as OCSP and CRLs are used to decide whether a secure connection is possible. Having to make such a determination for the revocation check leads to a potential deadlock, so most software will refuse to try it.

Interestingly, there's one CRL that is encoded as HTTPS directly in certificates: https://crl.firmaprofesional.com/fproot.crl [Censys.io search][Example at crt.sh] That's pretty clearly a violation of the Baseline Requirements.

Sizes

I've generally understood that most CRLs are small, but some are very large, so I expected some kind of bi-modal distribution. It's really not, though the retrieved CRLs do have a wild size distribution:

Size Distribution of CRLs

In table form [2]:

Size Bucket | # of CRLs
Up to 0.5 KB | 174
0.5 KB to 0.625 KB | 264
0.625 KB to 0.75 KB | 246
0.75 KB to 1 KB | 310
1 KB to 2 KB | 366
2 KB to 4 KB | 237
4 KB to 8 KB | 232
8 KB to 32 KB | 500
32 KB to 64 KB | 297
64 KB to 128 KB | 218
128 KB to 1 MB | 106
1 MB to 8 MB | 33
8 MB to 128 MB | 9

I figured that most CRLs would be tiny, and we'd have a handful of outliers. Indeed, 50% of the CRLs are less than 4 Kbytes, and 75% are less than 32 Kbytes:
Cumulative Distribution of CRL size

On the top end, however, are 9 CRLs larger than 8 MB:

URL | Size
http://www.sk.ee/repository/crls/esteid2011.crl | 66.57 MB
http://crl.godaddy.com/repository/mastergodaddy2issuing.crl | 36.22 MB
http://crl.eid.belgium.be/eidc201208.crl | 16.03 MB
http://crl.eid.belgium.be/eidc201204.crl | 10.84 MB
http://crl.eid.belgium.be/eidc201207.crl | 10.82 MB
http://crl.eid.belgium.be/eidc201202.crl | 10.67 MB
http://crl.eid.belgium.be/eidc201203.crl | 10.66 MB
http://crl.eid.belgium.be/eidc201201.crl | 10.47 MB

Remember, these are part of the WebPKI, not some private hierarchy.[3] For a convenient example of why browsers don't download CRLs when connecting somewhere, just point to these.
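If you mirror the CRLs locally as described above, the tail of that distribution is easy to reproduce; a sketch assuming the files live under downloaded_crls/:

# list the ten largest downloaded CRLs with sizes in MB
find downloaded_crls -type f -printf '%s %p\n' | sort -rn | head -n 10 | \
  awk '{ printf "%.2f MB  %s\n", $1 / 1048576, $2 }'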

Download Latency

Latency matters. I'm on a pretty fast Internet connection, but even so, some of the CRLs that were even reasonable sizes took a while to download. I won't harp on this, but just a quick histogram:

Histogram of CRLs bucketed by download time

CRLs that took longer than 1 second to download on a really fast Internet connection -- 142 of them, or 4.7% -- are clear reasons for users' software to not check them for live revocation status.
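For anyone wanting comparable numbers, curl's timing variables give a quick per-CRL breakdown; the URL here is just one example from the table above:

# print DNS, connect, and total time for one CRL fetch, discarding the body
curl -o /dev/null -s -w 'dns: %{time_namelookup}s connect: %{time_connect}s total: %{time_total}s\n' \
  http://crl.godaddy.com/repository/mastergodaddy2issuing.crl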

Conclusions (such as there are any)

CRLs are not an exciting technology, but they're still used by the Web PKI. Since they're not exciting, it appears that some CAs believe they don't even need to keep their CRLs online; I mean, who checks these things, anyway?

Oh, yeah, me...

Still, with technologies such as CRLSets depending on CRLs as a means for revocation data, they clearly still have a purpose. It's not particularly convenient to make a habit of crawling OCSP responders to figure out the state of revocations on the Web.

Footnotes

[1] Note, that's not found by the Python script; you'll need to grep the log for "URL transformed to HTTPS due to an HSTS policy"
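In other words, something like:

# count fetches that were upgraded to HTTPS by an HSTS policy
grep -c "URL transformed to HTTPS due to an HSTS policy" wget_log-all_crls.txt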

[2] I admit that the buckets are a bit arbitrary, but here's what it looks like without some manual massaging:
Auto-generated buckets

[3] Most of these are not realistically going to be reached by browsers, however. The largest contains revocations that appear to belong to a government's national ID card list. GoDaddy's is a master list, but is only referred to by a revoked cert [crt.sh link].

Planet Mozilla: Photon Engineering Newsletter #13

This week I’m taking over for Dolske as he takes a vacation to view the eclipse. This is issue #13 of the Photon Engineering Newsletter.

This past week the Nightly team has had some fun with the Firefox icon. We’ve seen the following icons grace Nightly builds in the past week:

The icon in the top-left was created in 2011 by Sean Martell. The icon in the top-right was the original Phoenix icon. Phoenix was later renamed Firebird, and the name was later changed again to Firefox. The icon in the bottom left was the first "Firefox" icon, designed by Steven Garrity in 2003. The icon in the bottom-right, well, it is such logo with much browser; we couldn't help but share it.

Recent Changes

Menus/structure:

The Report Site Issue button has been moved to the Page Action menu in Nightly and Dev Edition. This button doesn’t ship to users on Beta or Release.

Probably the biggest visual change this week is that we now have spacers in the toolbar. These help to separate the location bar from the other utility buttons, and also keep the location bar relatively centered within the window. We have also replaced the bookmarks menu button with the Library button (it’s the icon that looks like books on a shelf).

We also widened various panels to help fit more text in them.

Animation:

The Pin to Overflow animation has also been tweaked to not move as far. This will likely be the final adjustment to this animation (seen on the left). The Pocket button has moved to the location bar and the button expands when a page is saved to Pocket (seen on the right).

Preferences:

Preferences has continued to work towards its own visual redesign for Firefox 57. New icons have landed for the various categories within Preferences, and some borders and margins have been adjusted.

Visual redesign:

The tab label is no longer centered on Mac. This now brings Linux, Mac, and Windows to all have the same visual treatment for tabs.

Changing to Compact density within Customize mode changes the toolbar buttons to now use less horizontal space. The following GIF shows the theme changing from Compact to Normal to Touch densities.

[GIF: toolbar density changing from Compact to Normal to Touch]

Onboarding:

New graphics for the onboarding tour have landed.

Performance:

Two of the main engineers focusing on Performance were on PTO this past week so we don’t have an update from them.


Tagged: firefox, photon, planet-mozilla

Planet Mozilla: Resignation as co-chair of the Digital Economy Board of Advisors

For the past year and a half I have been serving as one of two co-chairs of the U.S. Commerce Department Digital Economy Board of Advisors. The Board was appointed in March 2016 by then-Secretary of Commerce Penny Pritzker to serve a two year term. On Thursday I sent the letter below to Secretary Ross.

Dear Secretary Ross,
I am resigning from my position as a member and co-chair of the Commerce Department’s Digital Economy Board of Advisors, effective immediately.
It is the responsibility of leaders to take action and lift up each and every American. Our leaders must unequivocally denounce bigotry, racism, sexism, hate, and violence.
The digital economy is fundamental to creating an economy that offers opportunity to all Americans. It has been an honor to serve as member and co-chair of this board and to work with the Commerce Department staff.
Sincerely,
Mitchell Baker
Executive Chairwoman
Mozilla

Planet Mozilla: Webdev Beer and Tell: August 2017, 18 Aug 2017

Webdev Beer and Tell: August 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Planet Mozilla: JavaScript Binary AST Engineering Newsletter #1

Hey, all cool kids have exciting Engineering Newsletters these days, so it's high time the JavaScript Binary AST got one! Summary JavaScript Binary AST is a joint project between Mozilla and Facebook to rethink how JavaScript source code is stored/transmitted/parsed. We expect that this project will help visibly speed up the loading of large codebases of JS applications and will have a large impact on the JS development community, including web developers, Node developers, add-on developers and ourselves.

Planet Mozilla: Quantum Flow Engineering Newsletter #20

It is hard to believe that we've gotten to the twentieth of these newsletters.  That also means that we're very quickly approaching the finish line for this sprint.  We have a bit more than five weeks to go before Firefox 57 merges to beta.  It may be a good time to start thinking more carefully about what we pay attention to in the remaining time, both in terms of the risk of patches landing, and the opportunity cost of what we decide to put off until 58 and the releases after.

We still have a large number of triaged bugs that are available for someone to pick up and work on.  If you have some spare cycles, we really would appreciate if you consider picking one or two bugs from this list and working on them.  They span many different areas of the codebase so finding something in your area of interest and expertise should hopefully be simple.  Quantum Flow isn’t the kind of project that requires fixing every single one of these bugs to be finished successfully, but at the same time big performance improvements often consist of many small parts, so the cumulative impact of a few additional fixes can make a big impact.

It is worth mentioning that lately while lurking on various tech news and blog sites where Nightly users comment, I have seen quite a few positive comments about Nightly performance from users.  It’s easy to get lost in the details of the work involved in getting rid of synchronous IPCs, synchronous layout/style flushes, unnecessary memory allocations, hashtable lookups, improving data locality, JavaScript JIT performance, making sure code gets inlined better, ship a new CSS engine, etc. etc. but it is reassuring to see people take notice🙂

Moving on to mention one point about Speedometer charts on AWFY which I have gotten a few questions about recently.  We now have Speedometer benchmark numbers on Firefox Beta on the reference hardware reported in addition to inbound optimized and PGO builds.  You may notice that the benchmark score numbers we are getting on Beta are around the same as Nightly (which swings around 83-84 these days).  This doesn’t mean that we haven’t made any improvements on Nightly since the last Beta merge!  We have some Nightly only telemetry code and some features that are only enabled on the Nightly channel, and those add a bit of overhead, which causes us to see a bit of an improvement after an uplift from mozilla-central to mozilla-beta without any code changes.  This means that when the current code on Nightly gets merged to Beta 57, we should expect a bit of an improvement similarly.

And now let me take a moment to acknowledge the work of some of those who helped make Firefox faster last week.  I hope I’m not dropping anyone’s name mistakenly.

Planet Mozilla: About Publishing Code Benchmarks

We often see code benchmarks. Some browser X HTML renderer is faster than browser Y renderer. Some JavaScript engine outperforms the competition twofold.

While these benchmarks give a kind of instant gratification for the product, they always make me dubious coming from anyone. If the target is to outperform another browser, then I sense that nothing useful has really been accomplished. Even as a marketing technique, I don't think it's working.

When/if publishing a benchmark, focus on three things:

  • How does this new code outperform previous versions of the code? It's good to show that we care about our product and that we want to be faster where/when it matters.
  • How does this improve the user experience on some specific sites? Improving speed in a controlled environment like a benchmark is nice, but improving speed on real-world websites is even better. Did it make JavaScript-controlled scrolling faster and smoother?
  • How did we get there? What steps were taken to improve the code performance? The coding tricks and techniques used to make it faster.

Those are the benchmark blog posts I like to read. So, as a summary:

Good benchmarks: 1. outperform your own code, 2. demonstrate improvements on real websites, 3. give technical explanations.

Otsukare!

Planet Mozilla: Intern Presentations: Round 5: Thursday, August 17th

Intern Presentations: Round 5: Thursday, August 17th Intern Presentations 7 presenters Time: 1:00PM - 2:45PM (PDT) - each presenter will start every 15 minutes 3 SF, 1 TOR, 1 PDX, 2 Paris

Planet MozillaIntern Presentations: Round 5: Thursday, August 17th

Intern Presentations: Round 5: Thursday, August 17th Intern Presentations 7 presenters Time: 1:00PM - 2:45PM (PDT) - each presenter will start every 15 minutes 3 SF, 1 TOR, 1 PDX, 2 Paris

Planet Mozilla: The Lightweight Browser: Firefox Focus Does Less, Which Is So Much More

Firefox had a baby and named it Focus! Firefox Focus is the new private browser for iOS and Android, made for those times when you just need something simple and … Read more

The post The Lightweight Browser: Firefox Focus Does Less, Which Is So Much More appeared first on The Firefox Frontier.

Planet Mozilla: I Need Your Open Source Brain

Photo credit: Internet Archive Book Images via Visual Hunt / No known copyright restrictions

Together with help from leaders in Teaching Open Source (TOS), POSSE and others, I'm developing a series of learning modules intended to help Computer Science / technical students gain a holistic understanding of open source, with built-in opportunities to 'learn by doing'.  These modules are intended to enable students in their goals as they build Open Source Clubs (new website coming soon) on their campuses.

And I need your help!

I need your brain, what it knows about open source, and which skills, knowledge, attitudes and visions you think are important and crucial. I also need your brain to, um, brainstorm(!) ideas for real-world value in open source 'open educational' offerings.

There’s a Github task for that!

Did I mention I need your Open Source Brain? I really do…

You’ll find checklists for review at the bottom of each

Planet Mozilla: Samsung Gear VR support lands in Servo

We are happy to announce that Samsung Gear VR headset support is landing in Servo. The current implementation is WebVR 1.1 spec-compliant and supports both the remote and headset controllers available in the Samsung Gear VR 2017 model.

If you are eager to explore, you can download a project template compatible with Gear VR Android phones. Add your Oculus signature file, and run the project to launch the application on your mobile phone.

Alongside the Gear VR support, we worked on other Servo areas in order to provide A-Frame compatibility, WebGL extensions, optimized Android compilations and reduced Servo startup times.

A-Frame Compatibility

Servo now supports Mutation Observers, which enable us to polyfill Custom Elements. Together with a solid WebVR architecture and better texture loading, we can now run any A-Frame content across mobile (Google Daydream, Samsung Gear VR) and desktop (HTC Vive) platforms. All the pieces have fallen into place thanks to all the amazing work that the Servo team is doing.

WebGL Extensions

WebGL Extensions enable applications to get optimal performance by taking advantage of state-of-the-art GPU capabilities. This is even more important in VR because of the extra work required for stereo rendering. We designed the WebGL extension architecture and implemented some of the extensions used by A-Frame/Three.js such as float textures, instancing, compressed textures and VAOs.

Compiling Servo for Android

Recently, the Rust team changed the default Android compilation targets. They added an armv7-linux-androideabi target corresponding to the armeabi-v7a official ABI and changed arm-linux-androideabi to correspond to the armeabi official ABI instead of armeabi-v7a.

This could cause significant performance regressions in Servo because it was using the arm-linux-androideabi target by default. Using the new armv7 compilation target is easy for pure Rust crates. It's not so trivial for cmake- or makefile-based dependencies because they infer the toolchain and compiler names from the target name triple.

We adapted all the problematic dependencies. We took advantage of this work to add arm64 compilation support and provided a simple CLI API to select any Android compilation target in Servo.
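For the pure Rust side, switching targets is mostly a matter of installing the matching standard libraries via rustup; a minimal sketch (Servo's own build wraps the actual target selection in its mach tooling):

# add the armv7 and arm64 Android targets to the Rust toolchain
rustup target add armv7-linux-androideabi
rustup target add aarch64-linux-android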

Reduced startup times

The C-based libfontconfig library was causing long startup times in Servo for Android. We didn't find a way to fix the library itself, so we opted to get rid of it and implement an alternative way to query Android system fonts. Unfortunately, Android doesn't provide an API to query system fonts until Android O, so we were forced to parse the system configuration files and load fonts manually.

Gear VR support on Rust-WebVR Library

We started working on ovr-mobile-sys, the Rust bindings crate for the Oculus Mobile SDK API. We used rust-bindgen to automatically generate the bindings from the C headers but had to manually transpile some of the inline SDK header code since inline functions don’t generate symbols and are not exported by rust-bindgen.

Then we added the SDK integration into the rust-webvr standalone library. The OculusVRService class offers the entry point to access Oculus SDK and handles life-cycle operations such as initialization, shutdown, and VR device discovery. The integration with the headset is implemented in OculusVRDisplay. Gear VR lacks positional tracking, but by using the neck model provided in the SDK, we expose a basic position vector simulating how the human head naturally rotates relative to the base of the neck.

In order to read Gear VR sensor inputs and submit frames to the headset, the Android activity must enter VR Mode by calling the vrapi_EnterVrMode() function. The Oculus Mobile SDK requires precise life-cycle management and handling of some events that may interleave in complex ways. For a correct implementation, the Android activity must enter VR mode in a surfaceChanged() or onResume() event, whichever comes last, and it must leave VR mode in a surfaceDestroyed() or onPause() event, whichever comes first.

In a Glutin based Android NativeActivity, life cycle events are notified using Rust channels. This caused synchronization problems due to non-deterministic event handling in multithreading. We couldn’t guarantee that the vrapi_LeaveVrMode() function was called before NativeActivity’s EGLSurface was destroyed and the app went to background. Additionally, we needed to block the event notifier thread until Gear VR resources are freed, in a different renderer thread, to prevent collisions (e.g. Glutin dropping the EGLSurface at the same time that VR renderer thread was leaving VR mode). We contributed a deterministic event handling implementation to the Rust-android-glue.

The Oculus Mobile SDK allows sending a WebGL context texture directly to the headset. Despite that, we opted for the triple-buffered swap chain recommended in the SDK to avoid potential flickering and performance problems when using the same texture every frame. As we did with the Daydream implementation, we render the VR-ready texture to the current ovrTextureSwapChain using a BlitFramebuffer-based solution, instead of rendering a quad, to avoid implementing the required OpenGL state-change safeguards or context switching.

The Oculus Mobile SDK allowed us to directly attach the NativeActivity's surface to the Gear VR time warp renderer. We were able to run the pure Rust room-scale demo without writing a line of Java. It's nice that the SDK makes a Java-free integration achievable, but our luck changed when we integrated all this work into a full browser architecture.

Gear VR integration into Servo

Our Daydream integration worked inside Servo almost on a first try after it landed on the rust-webvr standalone library. This was not the case with the Gear VR integration…

First, we had to research and fix up to four specific GPU driver issues with the Mali-T880 GPU used in the Samsung Galaxy S7 phone:

As a result, we were able to see WebGL stereo rendering on the screen, but entering VR mode crashed with a JNI assertion failure inside the Oculus VR SDK. This was caused by the fact that, inside the browser context, different threads are used for rendering and for VR device initialization/discovery. This requires the use of a separate Oculus ovrJava instance for each thread.

The assertion failure was gone but we couldn’t see anything on the screen after calling vrapi_EnterVrMode(). The logcat error messages triggered by the Oculus SDK helped to find the cause of the problem. The Gear VR time warp implementation hijacks the explicitly passed Android window surface pointer. We could use the NativeActivity’s window surface in the standalone room-scale demo. In a full browser architecture, however, there is a fight to take over ownership of the Android surface between time warp thread and the browser compositor. We discarded the idea of directly using the NativeActivity’s window surface and decided to switch to a Java SurfaceView VR backend in order to make both the browser’s compositor and Gear VR’s time warp thread happy.

By this means, the VR mode life cycle fit nicely in the browser architecture. There was one final surprise though. The activity entered VR mode correctly, there were no errors in the logcat, time warp thread was showing correct render stats and the headset pose data was correctly fetched. Nevertheless, the VR scene with lens distortion was not yet visible in the Android view hierarchy. This led to a new instance of spending some hours of debugging to change a single line of code. The Android SurfaceView was being rendered correctly but it was composited below the NativeActivity’s browser window because setZOrderOnTop() is not enabled by default on Android:

After this change everything worked flawlessly and it was time to enjoy running some WebVR experiences on the Gear VR ;)

Conclusion

It's been a lot of fun seeing Gear VR support land in Servo and being able to run A-Frame demos in it. We continue to work hard on squeezing WebGL and WebVR performance and expect to land some nice optimizations soon. We are also working on implementing unique WebVR features that no other browser has yet. More news soon ;) Stay tuned!

Planet Mozilla: These Weeks in Firefox: Issue 22

Highlights

  • The main toolbar now has 2 flexible spaces, one on either side of the url/search bar(s). The library button has also replaced the bookmarks menu button in the default toolbar set.

Friends of the Firefox team

  • Resolved bugs (excluding employees): https://mzl.la/2x0m5n4
    • More than one bug fixed:
      • Alejandro Rodriguez Salamanca
      • Dan Banner
      • Hossain Al Ikram [:ikram] (QA Contact)
      • Masatoshi Kimura [:emk]
      • Michael Kohler [:mkohler]
      • Michael Smith [:mismith]
      • Richard Marti (:Paenglab)
      • Rob Wu [:robwu]
      • Tomislav Jovanovic :zombie
      • flyingrub
    • New contributors (🌟 = First Patch!)

Project Updates

Add-ons

Activity Stream

  • Landed pref’ed off in 56 Beta, with localization, snippets, performance telemetry, and Pocket recommendations.
  • Up next
    • Adding “Recent Bookmarks” and “Recently Visited” to Highlights.
    • Adding custom sections via a Web Extension.
    • More customization for Top Sites: Pin/Dismiss, Show More/Less, Add/Edit Top Site.
    • Creating a site summary pipeline (high-res page icons -> Tippytop -> Screenshot + Favicon).
    • Optimizing metadata queries and Tippytop Icon DB improvements.

Firefox Core Engineering

  • Installer
    • Profile cleanup option has landed in the stub installer for 57. Users who are running the stub installer and have an older version of Firefox installed will be presented with the option to clean up their profile.
  • Updater
    • LZMA/SHA384 changes have landed as of 56 beta 3.
  • Quantum & Photon Performance pile-on:
    • Felipe Gomes, Kirk Steuber, Adam Gashlin, Perry Jiang, Doug Thayer, Robert Strong closed 16 bugs and are currently on 11 more bugs.

Form Autofill

Photon

Structure
Animation
Visuals
Preferences

Privacy/Security

Sync / Firefox Accounts

  • We’re wrapping up iOS bidirectional sync work!
  • Form Autofill Address sync is now enabled on Nightly. Enable it in about:preferences#sync

Test Pilot

  • All Test Pilot experiments are off the Add-on SDK now!
  • All Test Pilot add-ons are getting signed through a new signing pipeline (not AMO) to allow for non-WebExtensions in the future.
  • Planning to roll out Screenshots to Release in the next couple of weeks.

Web Payments

Planet Mozilla: Taking a break – and so should you

TL;DR: I am going on holiday for a week and am not taking any computer with me. When I'm back I will cut down on my travels, social media and conference participation and focus more on coaching others, writing and developing with a real production focus.

[Figure: Larry the sleeping dog shows how it is done]

You won’t hear much from me in the next week or so as I am taking a well-deserved vacation. I’m off to take my partner to the Cayman Islands to visit friends who have a house with a spare room as hotels started to feel like work for me. I’m also making the conscious decision to not take any computer with me as I will be tempted to do work whilst I am there. Which would be silly.

Having just been in a lot of meetings with other DevRel people and at a great event about it, I found a pattern: we all have no idea how to measure our success and feel oddly unsatisfied, if not worried, about this. And we are all worried about keeping up to date in a market that changes daily.

I’m doing OK on both of these, but I also suffer from the same worries. Furthermore, I am disturbed about the gap between what we talk about at events and workshops and what gets released in the market afterwards.

The huge gap between publication and application

We have all the information about what not to do to create engaging, fast and reliable solutions. We have all the information about how to automate some of this so it doesn't disrupt fast development processes. And yet I feel a massive lack of longevity or maintainability in all the products I see and use. I even see a really disturbing re-emergence of "this only needs to work on browser $x and platform $y" thinking. As if the last decade hadn't happened. Business decisions dictate what goes into production, less so what we get excited about.

Even more worrying is security. We use a lot of third party code, give it full access to machines and fail to keep it up-to-date. We also happily use new and untested code in production even when the original developers state categorically that it shouldn’t be used in that manner.

When it comes to following the tech news, I see us tumbling in loops. Where in the past there was a monthly cadence of interesting things coming out, more readily available publication channels and a "stream of news" mentality now make it a full-time job just to keep up with what's happening.

Many thoughtpieces show up in several newsletters and get repurposed even if the original authors admitted in commentary that they were wrong. A lot is about being new and fast, not about being right.

There is also a weird premature productisation happening. When JavaScript, Browsers and the web weren’t as ubiquitous as they are now, we showed and explained coding tricks and workarounds in blog posts. Now we find a solution, wrap it in a package or a library and release it for people to use. This is a natural progression in any software, but I miss the re-use and mulling around of the original thought. And I am also pretty sure that the usage numbers and stars on GitHub are pretty inflated.

My new (old) work modus

Instead of speaking at a high number of conferences, I will be much pickier about where I go. My time is more limited now, and I want to use my talents to have a more direct impact. This is for a few reasons:

  • I want to be able to measure more directly what I do – it is a good feeling to be told that you were inspiring and great. But it fails to stay a good feeling when you don’t directly see something coming out of it. That’s why instead of going from event to event I will spend more time developing tools and working directly with people who build products.
  • I joined a new team that is much more data driven – our job is to ensure people can build great apps, fixing our platform and helping them apply best practices instead of just hearing about them. This is exciting – I will be able to see just how applicable what we talk about really is and collect data on its impact. Just like any good trainer should ensure that course attendees really learned what you talked about, this is a full feedback loop for cool technologies like ServiceWorker and Push Notifications.
  • We just hired a truckload of talented people to coach – and I do want to see other people on stage than the usual suspects. It is great to see people grow with help you can give.
  • I just had a cancer growth removed from my face – it was benign but it is kind of a wake-up call to take more care about myself and have my body looked after better on an ongoing basis
  • I am moving to Berlin to exclusively live there with my partner and our dog – I’ve lived out of suitcases for years now and while this is great it is fun to have a proper home with people you care about to look after. I will very much miss London, but I am done with the politics there and I don’t want to maintain two places any longer.
  • I will spend more time coding – I am taking over some of the work on PWAbuilder and other helper tools and try them out directly with partners. Working in the open is great, but there is a huge difference between what Twitter wants and what people really need
  • I will write more – both articles and blog posts. I will also have a massive stab at refreshing the Developer Evangelism Handbook
  • I will work more with my employer and its partners – there is a huge group of gifted, but very busy developers out there that would love to use more state-of-the-art technology but have no time to try it out or to go to conferences.

[Figure: Anke, Larry and Chris - greetings from Berlin]

What this means for events and meetups

Simple.

  • I will attend less – instead I will connect conferences and meetups with other people who are not as in demand but great at what they do. I am also helping and mentoring people inside and outside the company to be invited instead of me. A lot of times a recommendation is all that is needed. And a helping hand in getting over the fear of “not being good enough”.
  • I will stay shorter – I want to still give keynotes and will consider more workshops. But I won’t be booking conferences back-to-back and will not take part in a lot of the social activities. Unless my partner is also coming along. Even better when the dog is allowed, too.
  • I am offering to help others – to review their work to get picked and help conference organisers to pick new, more diverse, talent.

I have a lot of friends who do events and I will keep supporting those I know have their full heart in them. I will also try to be supportive for others that need a boost for their new event. But I think it is a good time to help others step up. As my colleague Charles Morris just said at DevRelConf, “not all conferences need a Chris Heilmann”. It is easy to get overly excited about the demand you create. But it is as important to not let it take over your life.

Planet Mozilla: DevRelSummit was well worth it

Last week I was in Seattle to attend a few meetings and I was lucky to attend DevRelSummit in the Galvanize space. I was invited to cover an “Ask me anything” slot about Developer Outreach in Microsoft and help out Charles Morris of the Edge team who gave a presentation a similar matter.

It feels weird to have a conference that is pretty meta about the subject of Developer relations (and there is even a ConfConf for conference organisers), but I can wholeheartedly recommend DevRelSummit for people who already work in this field and those who want to.

The line-up and presentations were full of people who know their job and shared real information from the trenches instead of advertising products to help you. This is a very common worry when a new field in our job market gains traction. Anyone who runs events or outreach programs drowns in daily offers of “the turn-key solution to devrel success” or similar snake oil.

In short, the presentations were:

  • Bear Douglas of Slack (formerly Twitter and Facebook) sharing wins and fails of developer outreach
  • Charles Morris of Microsoft showing how he scaled from 3 people on the Edge team to a whole group, aligning engineering and outreach
  • Kyle Paul showing how to grow a community in spaces that are not technical cool spots and how to measure DevFest success
  • AJ Glasser of Unity explaining how to deal with and harvest feedback you get showing some traps to avoid
  • Damon Hernandez of Samsung talking about building community around hackathons
  • Linda Xie of Sourcegraph showing the product and growth cycle of a new software product
  • Robert Nyman of Google showing how he got into DevRel and what can be done to stay safe and sound on the road
  • Angel Banks and Beth Laing sharing the road to and the way to deliver an inclusive conference with their “We Rise” event as the example
  • Jessica Tremblay and Sam Richard showing how IBM scaled their developer community

In between the presentations there were breakout discussions, lightning talks and general space and time to network and share information.

As expected, the huge topics of the event were increasing diversity, running events smoothly, scaling developer outreach and measuring devrel success. Also, as expected, there were dozens of ways and ideas how to do these things with consensus and agreeable discourse.

All in all, DevRelSummit was a very well executed event and a superb networking opportunity without any commercial overhead. There was a significant lack of grandstanding and it was exciting to have a clear and open information exchange amongst people who should be in competition but know that when it comes to building communities, this is not helpful. There is a finite amount of people we want to reach doing Developer Relations. There is no point in trying to subdivide this group even further.

I want to thank everyone involved for the flawless execution and the willingness to share. Having an invite-only Slack group with pre-set channels for each talk and session was incredibly helpful and means the conversations are still going on right now.

Slack Channel of the event

DevRelSummit showed that when you get a dedicated group of people together who know their jobs and are willing to share, you can get an event to be highly educational without any of the drama that plagues other events. We have a lot of problems to solve and many of them are very human issues. A common consensus of the event was that we have to deal with humans and relate to them. Numbers and products are good and useful, but not burning out or burning bridges, even with the best of intentions, is even more important.

Planet Mozilla: Intern Presentations: Round 4: Tuesday, August 15th

Intern Presentations: Round 4: Tuesday, August 15th Intern Presentations 6 presenters Time: 1:00PM - 2:30PM (PDT) - each presenter will start every 15 minutes 5 MTV, 1 Berlin

Planet MozillaIntern Presentations: Round 4: Tuesday, August 15th

Intern Presentations: Round 4: Tuesday, August 15th Intern Presentations 6 presenters Time: 1:00PM - 2:30PM (PDT) - each presenter will start every 15 minutes 5 MTV, 1 Berlin

Planet Mozilla: Essential WebVR resources

The general release of Firefox 55 brought a number of cool new features to the Gecko platform, one of which is the WebVR API v1.1. This allows developers to create immersive VR experiences inside web apps, compatible with popular hardware such as HTC VIVE, Oculus Rift, and Google Daydream. This article looks at the resources we’ve made available to facilitate getting into WebVR development.

Support notes

Version 1.1 of the WebVR API is very new, with varying support available across modern browsers:

  • Firefox 55 sees full support on Windows, and more experimental support available for Mac in the Beta/Nightly release channels only, until testing and final work is completed. Supported VR hardware includes HTC VIVE, Oculus Rift, and Google Daydream.
  • Chrome support is still experimental — you can currently only see support out in the wild on Chrome for Android with Google Daydream.
  • Edge fully supports WebVR 1.1, through the Windows Mixed Reality headset.
  • Support is also available in Samsung Internet, via their GearVR hardware.

Note that the 1.0 version of the API can be considered obsolete, and has been (or will be) removed from all major browsers.

Controlling WebVR apps using the full features of VR controllers relies on the Gamepad Extensions API. This adds features to the Gamepad API that provide access to controller features like haptic actuators (e.g. vibration hardware) and position/orientation data (i.e., pose). This currently has even more limited support than the WebVR API; Firefox 55+ has it available in Beta/Nightly channels.

In other browsers, you’ll have to make do for now with basic Gamepad API functionality, like reporting button presses.

vr.mozilla.org

vr.mozilla.org — Mozilla’s new landing pad for WebVR — features demos, utilities, news and updates, and all the other information you’ll need to get up and running with WebVR.

MDN documentation

MDN has full documentation available for both the APIs mentioned above. See:

In addition, we’ve written some useful guides to get you familiar with the basics of using these APIs:

A-Frame and other libraries

WebVR experiences can be fairly complex to develop. The API itself is easy to use, but you need to use WebGL to create the 3D scenes you want to feature in your apps, and this can prove difficult to those not well-versed in low-level graphics programming. However, there are a number of libraries to hand that can help with this.

The hero of the WebVR world is Mozilla’s A-Frame library, which allows you to create nice looking 3D scenes using custom HTML elements, handling all the WebGL for you behind the scenes. A-Frame apps are also WebVR-compatible by default. It is perfect for putting together apps and experiences quickly.

There are a number of other well-written 3D libraries available too, which abstract away the difficulty of working with raw WebGL. Good examples include:

These don’t include VR capabilities out of the box, but it is not too difficult to write your own WebVR rendering code around them.

If you are worried about supporting older browsers that only include WebVR 1.0 (or no VR) as well as newer browsers with 1.1, you’ll be pleased to know that there is a WebVR polyfill available.
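If you go the polyfill route, it is published as an npm package (assuming the usual webvr-polyfill package name; a plain script include from a CDN works as well):

# add the WebVR polyfill to a project
npm install --save webvr-polyfill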

Demos and examples

See also

Planet Mozilla: Reps Program Objectives – Q3 2017

As with every quarter, we define Objectives and Key Results for the Reps Program. We are happy to announce the Objectives for the current quarter.

Objective 1: The Reps program continues to grow its process maturity
KR1: 20 Reps have been trained with the Resource training
KR2: 100% of the budget requests of new Reps are filed by Resource Track Reps
KR3: 30 Reps complete the coaching training
KR4: The number of mentor-less Reps is reduced by 50%
KR5: Increase number of authors for Reps tweets to 10 people

Objective 2: The Reps program is the backbone for any mobilizing needs
KR1: We documented what mobilizing Reps are focusing on
KR2: An implementation roadmap for mobilizers’ recommendations is in place.
KR3: Identified 1 key measure that defines how our Mobilizers add value to the coding and Non-Coding/Enthusiast communities

Objective 3: The Activate Portal is improved for Mobilizer Reps and Functional Areas
KR1: The Rust activity is updated
KR2: The WebExtensions activity update has been tested in 3 pilot events in 3 different countries
KR3: 60 unique Reps have run a MozActivate event
KR4: The website is updated to the new branding

We will work closely with the Community Development Team to achieve our goals. You can follow the progress of these tasks in the Reps Issue Tracker. We also have a dashboard to track the status of each objective.

Which of the above objectives are you most interested in? What key result would you like to hear more about? What do you find intriguing? Which thoughts cross your mind upon reading this? Where would you like to help out? Let’s keep the conversation going! Join the discussion on Discourse.

Planet Mozilla: These Weeks in Dev-tools #1

2017-08-14

Welcome to the first ever issue of 'These Weeks in Dev-Tools'! The dev-tools team is responsible for developer tools for Rust developers. That means any tools a developer might use (or want to use) when reading, writing, or debugging Rust code, such as Rustdoc, IDEs, editors, Racer, bindgen, Clippy, Rustfmt, etc.

These Weeks in Dev-Tools will keep you up to date with all the exciting news in this area. We plan to have a new issue every few weeks. If you have any news you'd like us to report, please comment on the tracking issue.

If you're interested in Rust's developer tools and want to contribute or ask questions, come chat to us in #rust-dev-tools.

Releases

RFCs

Thanks!

  • @photoszzt has been re-writing various ad-hoc computations into fix-point analyses in Bindgen:
    • whether we can add derive(Debug) to a struct: rust-lang-nursery/rust-bindgen#824
    • and whether a struct has a virtual table: rust-lang-nursery/rust-bindgen#850
  • @topecongiro for doing sustained, impressive work on Rustfmt - implementing the new RFC style, fixing (literally) hundreds of bugs, and lots more.
  • Shout out to @TedDriggs for continuing to push Racer forward. Jwilm and the rest of Racer's users continue to appreciate all your hard work!

Meetings

We've had a bunch of meetings. You can find all the minutes here. Some that might be interesting:

Planet Mozilla: Bringing the 4th Amendment into the Digital Age

Today, Mozilla has joined other major technology companies in filing an amicus brief urging the Supreme Court of the United States to reexamine how the 4th Amendment and search warrant requirements should apply in our digital era. We are joining this brief because we believe our laws need to keep up with what we already know to be true: that the Internet is an integral part of modern life, and that user privacy must not be treated as optional.

At the heart of this case is the government’s attempt to obtain “cell site location information” to aid in a criminal investigation. This information is generated continuously when your phone is on. Your phone communicates with nearby cell sites to connect with the cellular network and those sites create a record of your phone’s location as you go about your business. In the case at hand, the government did not obtain a warrant, which would have required probable cause, before obtaining this location information. Instead, the government sought a court order under the Stored Communications Act of 1986, which requires a lesser showing.

Looking at how the courts have dealt with the cell phone location records in this case demonstrates why our laws must be revisited to account for modern technological reality. The district court decided that the government didn’t have to obtain a warrant because people do not have a reasonable expectation of privacy in their cell phone location information. On appeal, the Sixth Circuit acknowledged that similar information, such as GPS monitoring in government investigations, would require a warrant. But it too found no warrant was needed because the location information was a “business record” from a “third party” (i.e., the service providers).

We believe users should not be forced to surrender their expectations of privacy when using their phones and we hope the Court will reconsider the law in this area.

*Brief link updated on August 16

The post Bringing the 4th Amendment into the Digital Age appeared first on Open Policy & Advocacy.

Planet MozillaThis Week in Rust 195

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is exa, a modern ls replacement (with a tree thrown in as well) written in Rust. Thanks to Vikrant for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

128 pull requests were merged in the last week

New Contributors

  • Alexey Tarasov
  • arshiamufti
  • Foucher
  • Justin Browne
  • Natalie Boehm
  • nicole mazzuca
  • Owen Sanchez
  • Ryan Leckey
  • Tej Chajed
  • Thomas Levy

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

The RFC style is now the default style in Rustfmt - try it out and let us know what you think!

Currently being discussed:

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

once you can walk barefoot (C), it’s easy to learn to walk with shoes (go) but it will take time to learn to ride a bike (rust)

/u/freakhill on Reddit.

Thanks to Rushmore for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Planet MozillaMozMEAO SRE Status Report - August 15, 2017

Here’s what happened on the MozMEAO SRE team from August 8th - August 15th.

Current work

MDN Migration to AWS

  • We’ve set up a few cronjobs to periodically sync static files from the current SCL3 datacenter to an S3 bucket. Our Kubernetes development environment runs a cronjob that pulls these files from S3 to a local EFS mount.
    • There was some additional work needed to deal with files in SCL3 that contained unicode characters in their names.
  • A cronjob in Kubernetes has been implemented to backup new files uploaded to our shared EFS volume.

  • We’ve finished our evaluation of hosted Elasticsearch from elastic.co, which we’ll be using for our initial migration in production.

Upcoming Portland Deis 1 cluster decommissioning

The Deis 1 cluster in Portland is tentatively scheduled to be decommissioned later this week.

Links

Dev.OperaWhat’s new in Chromium 60 and Opera 47

Opera 47 (based on Chromium 60) for Mac, Windows, Linux is out! To find out what’s new for users, see our Desktop blog post. Here’s what it means for web developers.

Paint Timing API

While no generalized metric perfectly captures when a page is loaded in all cases, First Paint and First Contentful Paint are invaluable numbers to measure critical user moments during loading. To give developers better insight into their site’s loading performance, the new Paint Timing API exposes metrics that capture First Paint and First Contentful Paint.
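
As a rough sketch of how these metrics can be read (assuming a browser that implements the Paint Timing API; the logging here is purely illustrative):

// Observe paint timing entries as they are recorded.
const paintObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.name is "first-paint" or "first-contentful-paint".
    console.log(`${entry.name}: ${entry.startTime.toFixed(1)} ms`);
  }
});
paintObserver.observe({ entryTypes: ['paint'] });

// Or query entries that were recorded before the observer was set up.
performance.getEntriesByType('paint').forEach((entry) => {
  console.log(entry.name, entry.startTime);
});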

CSS font-display

Downloadable web fonts are often used to create more visually rich web experiences. Historically, Opera has delayed rendering text until the specified font is available, to ensure visual correctness. However, downloading a font can take as long as several seconds on a poor connection, significantly delaying the time until a user sees content. Opera now supports the CSS font-display property as part of an @font-face descriptor, allowing developers to specify how and when Opera displays text content while downloading fonts.

Credential Management API improvements

Starting in Opera 47, PasswordCredential also contains the user’s password, alleviating the need for a custom fetch() to access a stored password.

Some changes have also been made to better align with the work being done in the Web Authentication Working Group. This includes the deprecation of requireUserMediation, which has been renamed to preventSilentAccess.
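
As a hedged sketch of how a login page might use this (the /login endpoint and the form field names are illustrative, not part of the API):

// Ask the browser for a stored credential; the password is now
// available directly on the returned PasswordCredential object.
navigator.credentials.get({ password: true }).then((credential) => {
  if (!credential) {
    return; // Nothing stored; fall back to the regular login form.
  }
  // "/login" and the field names below are hypothetical.
  return fetch('/login', {
    method: 'POST',
    body: new URLSearchParams({
      username: credential.id,
      password: credential.password,
    }),
  });
});

// preventSilentAccess() replaces the deprecated requireUserMediation().
navigator.credentials.preventSilentAccess();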

Other features in this release

  • Object rest & spread properties are now supported, making it simpler to merge and shallow-clone objects and implement various immutable object patterns (see the sketch after this list).
  • The new Web Push Encryption format is now supported and PushManager.supportedContentEncodings can be used to detect where it can be used.
  • PushSubscription.expirationTime is now available, notifying sites when and if a subscription expires.
  • To improve performance and predictability, pointermove and mousemove events are now delivered once per animation frame (analogous to requestAnimationFrame), matching the current functionality of scroll and TouchEvents.
  • The :focus-within CSS pseudo-class is now available, affecting any element the :focus pseudo-class affects, as well as any element with a descendant affected by :focus.
  • The CSS frames() timing function is now available, allowing for proper frame-based animations. (Animations where all frames have exactly the same duration, including the first and last frames.)
  • To provide an improved way to capture editing actions, InputEvent now allows user input to be managed by script, enhancing the details provided to editable elements.
  • To increase security, a beforeunload dialog triggered when the user leaves a site is now only shown if the frame attempting to display it has ever received a user gesture or user interaction, though the beforeunload event is still dispatched regardless.
  • VP9, an open and royalty-free video coding format, can now be used with the MP4 (ISO BMFF) container and requires the new VP9 string format mentioned below.
  • A new VP9 string format is now available and accepted by various media-related APIs, enabling developers to describe the encoding properties that are common in video codecs, but are not yet exposed.
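
As a quick illustration of the object rest & spread support mentioned in the first item above (the values are plain examples, nothing from the release itself):

const defaults = { theme: 'light', fontSize: 14 };
const userPrefs = { fontSize: 16 };

// Spread: merge and shallow-clone objects without Object.assign().
const settings = { ...defaults, ...userPrefs };
// -> { theme: 'light', fontSize: 16 }

// Rest: pull one property out and keep the remainder as a new object.
const { theme, ...rest } = settings;
// theme === 'light', rest is { fontSize: 16 }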

Deprecations and interoperability improvements

  • getElementsByTagName() now accepts qualified names in response to an update to the DOM specification.
  • The /deep/ combinator, which has been removed from the current specification, now behaves like the descendant combinator, making it effectively a no-op. Previously it would enable selecting descendants in shadow trees.
  • To improve the user experience, calls to Navigator.vibrate() now immediately return false if the user hasn’t explicitly tapped on the frame or any embedded frame, matching existing behavior for cross-origin iframes.
  • WEBKIT_KEYFRAME_RULE and WEBKIT_KEYFRAMES_RULE have been removed in favor of the unprefixed standardized APIs, KEYFRAME_RULE and KEYFRAMES_RULE.
  • Support for non-standard WebKitAnimationEvent and WebKitTransitionEvent has been removed from document.createEvent().
  • To better align with spec, NodeIterator.filter and TreeWalker.filter no longer wrap JavaScript objects, and .prototype has been removed from window.NodeFilter.
  • RTCPeerConnection.prototype.getStreamById() is being removed, and a polyfill is recommended as a replacement.
  • SVGPathElement.prototype.getPathSegAtLength() has been deprecated, and will be removed in Opera 49 (Chromium 62), since it has been removed from the SVGPathElement interface in the SVG2 spec.
  • Headers.prototype.getAll() has been removed from the Fetch API in line with its removal from the spec.

What’s next?

If you’re interested in experimenting with features that are in the pipeline for future versions of Opera, we recommend following our Opera Developer stream.

Planet MozillaAdd-ons Update – 2017/08

Here’s the monthly update of the state of the add-ons world.

The Review Queues

In the past month, our team reviewed 1,803 listed add-on submissions:

  • 1,368 in fewer than 5 days (76%).
  • 147 between 5 and 10 days (8%).
  • 288 after more than 10 days (16%).

274 listed add-ons are awaiting review.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Visit our wiki page for more information.

Compatibility Update

We published the blog post for 56 and the bulk validation has been run. This is the last one of these we’ll do, since compatibility is a much smaller problem with the WebExtensions API.

Firefox 57 is now on the Nightly channel, and only accepting WebExtension add-ons by default. Here are some changes we’re implementing on AMO to ease the transition to 57.

We recommend that you test your add-ons on Beta. If you’re an add-ons user, you can install the Add-on Compatibility Reporter. It helps you identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • Apoorva Pandey
  • Neha Tekriwal
  • Swapnesh Kumar Sahoo
  • rctgamer3
  • Tushar Saini
  • vishal-chitnis
  • Cameron Kaiser
  • zombie
  • Trishul Goel
  • Krzysztof Modras
  • Tushar Saini
  • Tim Nguyen
  • Richard Marti
  • Christophe Villeneuve
  • Jan Henning
  • Leni Mutungi
  • dw-dev
  • Dino Herbert

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/08 appeared first on Mozilla Add-ons Blog.

Planet Mozilla64-bit Firefox is the new default on 64-bit Windows

Users on 64-bit Windows who download Firefox will now get our 64-bit version by default. That means they’ll install a more secure version of Firefox, one that also crashes a … Read more

The post 64-bit Firefox is the new default on 64-bit Windows appeared first on The Firefox Frontier.

Planet MozillaA-Frame comes to js13kGames: build a game in WebVR

It’s that time of the year again – the latest edition of the js13kGames competition opened yesterday, on Sunday, August 13th, just like last year and every year going back to 2012, when I started this competition. Every year the contest has a new theme, but this time there's another twist that's a little bit different – a brand new A-Frame VR category, just in time for the arrival of WebVR to Firefox 55 and a desktop browser near you.

Js13kGames is an online competition for HTML5 game developers where the fun part is that the size limit is set to 13 kilobytes. Unlike a 48-hour game jam, you have a whole month to come up with your best idea, create it, polish as much as you can, and submit – deadline is September 13th.

A brief history of js13kgames

It started five years ago from the pure need of having a competition for JavaScript game developers like me – I couldn’t find anything interesting, so I created one myself. Somehow it was cool enough for people to participate, and from what I heard they really enjoyed it, so I kept it going over the years even though managing everything on my own is exhausting and time-consuming.

There have been many great games created since the beginning – you can check GitHub’s recent blog post for a quick recap of some of my personal favourites. Two of the best entries from 2016 ended up on Steam in their post-competition versions: Evil Glitch and Glitch Buster, and keys for both of them are available as prizes in the competition this year.

A-Frame category

The big news this year that I'm really proud of: Virtual Reality has arrived with the new A-Frame category. Be sure to check out the A-Frame landing page for the rules and details. You can reference the minified version of the A-Frame library and you are not required to count its size as part of the 13 kilobyte size limit that defines this contest.

Since the A-Frame library was first announced, I have been really excited about trying it out. I believe it's a real game changer (pun intended) for the WebVR world. With just a few lines of HTML markup you can set up a simple scene with VR mode, controls, and lights. Prototyping is extremely easy, and you can build really cool experiments within minutes. There are many useful components in the Registry that can help you out too, so you don't have to write everything yourself. A-Frame is very powerful, yet so easy to use – I really can't wait to see what you'll come up with this year.

Resources

If WebVR is all brand new to you and you have no idea where to start, read Chris Mills’ recent article “WebVR Essentials”. Then be sure to check out the A-Frame website for useful docs and demos, and a lively community of WebVR creators:

I realize the 13K size limit is very constraining, but these limitations spawn creativity. There have been many cool and inspiring games created over the years, and all their source code is available on GitHub in a readable form for everyone to learn from. There are plenty of A-Frame tutorials out there, so feel free to look for the specific solutions to your ideas. I’m sure you’ll find something useful.

Feedback

Many developers who've participated in this competition in previous years have mentioned expert feedback as a key benefit of the competition. This year's judges for the A-Frame category will focus their full attention on WebVR games only, in order to be able to offer constructive feedback on your entry.

The A-Frame judges include: Fernando Serrano Garcia (WebVR and WebGL developer), Diego Marcos (A-Frame co-creator, API designer and maintainer), Ada Rose Edwards (Senior Engineer and WebVR advocate at Samsung) and Matthew ‘Potch’ Claypotch (Developer Advocate at Mozilla).

Prizes

This year, we’ll be offering custom-made VR cardboards to all participants in the js13kGames competition. These will be shipped for every complete submission, along with the traditional annual t-shirt, and a bunch of cool stickers.

In addition to the physical package that’s shipped for free to your doorstep, there’s a whole bunch of digital prizes you can win – software licenses, engines, editors and other tools, as well as subscription plans for various services and online courses, games and game assets, ebooks, and vouchers.

Prizes for the A-Frame category include PlayCanvas licenses, WebVR video courses, and WebStorm licenses. There are other ways to win more prizes too: Community Awards and Social Specials. You can find all the details and rules about how to enter on the competition website.

A look back

I’m happy to see this competition become more and more popular. I've started many projects, and many have failed. Yet this one is still alive and kicking, even though HTML5 game development itself is a niche, and the size constraint in this contest means you have to mind the size of every resource you want to use. It is indeed a tough competition and not every developer makes it to the finish, but the feeling of submitting an entry minutes before the deadline is priceless.

I’m a programmer, and my wife Ewa is a graphic designer on all our projects, including js13kGames. I guess that makes Enclave Games a family business! With our little baby daughter Kasia born last year, it’s an ongoing challenge to balance work, family and game development. It’s not easy, but if you believe in something you have to try and make it work.

Start your engines

Anyway, the new category in the competition is a great opportunity to learn A-Frame if you haven’t tried it yet, or improve your skills. After all you have a full month, and there’s guaranteed swag for every entry. The theme this year is “lost” – I hope it will help you find a good idea for the game.

Visit js13kGames website for all the details, see the A-Frame category landing page, and follow @js13kgames on Twitter or on Facebook for announcements. The friendly js13kGames community can help you with any problems or issues you’ll face; they can be found on our js13kgames Slack channel. Good luck and have fun!

Planet Mozillakeep finding old security problems

I decided to look closer at security problems and the age of the reported issues in the curl project.

One theory I had when I started to collect this data was that we actually get security problems reported earlier and earlier over time: that bugs would be around in public releases for shorter periods of time nowadays than they were in the past.

My thinking would go like this: Logically, bugs that have been around for a long time have had a long time to get caught. The more eyes we’ve had on the code, the fewer old bugs should be left and going forward we should more often catch more recently added bugs.

The time from a bug’s introduction into the code until the day we get a security report about it, should logically decrease over time.

What if it doesn’t?

First, let’s take a look at the data at hand. In the curl project we have so far reported in total 68 security problems over the project’s life time. The first 4 were not recorded correctly so I’ll discard them from my data here, leaving 64 issues to check out.

The graph below shows the time distribution. The all time leader so far is the issue reported to us on March 10 this year (2017), which was present in the code since the version 6.5 release done on March 13 2000. 6,206 days, just three days away from 17 whole years.

There are no less than twelve additional issues that lingered for more than 5,000 days until reported. Only 20 (31%) of the reported issues had been public for less than 1,000 days. The fastest report arrived on the release day itself: 0 days.

The median time from release to report is a whopping 2541 days.

When we receive a report about a security problem, we want the issue fixed, responsibly announced to the world, and a new release shipped where the problem is gone. The median time to go through this procedure is 26.5 days, and the distribution looks like this:

What stands out here is the TLS session resumption bypass, which happened because we struggled with understanding it and how to address it properly. Otherwise the numbers all look reasonable to me, as we typically do releases at least once every 8 weeks. We rarely ship a release with a known security issue outstanding.

Why are very old issues still found?

I think partly because the tools are gradually improving that aid people these days to find things much better, things that simply wasn’t found very often before. With new tools we can find problems that have been around for a long time.

Every year, the age of the oldest parts of the code gets one year older. So the older the project gets, the older the bugs that can be found, while in the early days there was a smaller share of the code that was really old (if any at all).

What if we instead count age as a percentage of the project’s life time? Using this formula, a bug found at day 100 that was added at day 50 would be 50% but if it was added at day 80 it would be 20%. Maybe this would show a graph where the bars are shrinking over time?

But no. In fact it shows 17 (27%) of them having been present during 80% or more of the project’s life time! The median issue had been in there during 49% of the project’s life time!

It does however make another issue the worst offender, as one of the issues had been around during 91% of the project’s life time.

This counts on March 20 1998 being the birth day. Of course we got no reports the first few years since we basically had no users then!

Specific or generic?

Is this pattern something that is specific to the curl project or can we find it in other projects too? I don’t know. I have not seen this kind of data being presented by others, and I don’t have the same insight into such details for other projects with enough issues to be interesting.

What can we do to make the bars shrink?

Well, if there are old bugs left to find they won’t shrink, because for every such old security issue that’s still left there will be a tall bar. Hopefully though, by doing more tests, using more tools regularly (fuzzers, analyzers etc) and with more eyeballs on the code, we should iron out our security issues over time. Logically that should lead to a project where newly added security problems are detected sooner rather than later. We just don’t seem to be at that point yet…

Caveat

One fact that skews the numbers is that we are much more likely to record issues as security related these days. A decade ago when we got a report about a segfault or something we would often just consider it bad code and fix it, and neither us maintainers nor the reporter would think much about the potential security impact.

These days we’re at the other end of the spectrum, where people are much quicker to jump to a security-issue suspicion or conclusion. Today people report bugs as security issues to a much higher degree than they did in the past. This is basically a good thing though, even if it makes it harder to draw conclusions over time.

Data sources

When you want to repeat the above graphs and verify my numbers:

  • vuln.pm – from the curl web site repository holds security issue meta data
  • releaselog – on the curl web site offers release meta data, even as a CSV download on the bottom of the page
  • report2release.pl – the perl script I used to calculate the report until release periods.

Planet MozillaPhoton Engineering Newsletter #12

Let’s get straight into update #12!

Oh, hey, anyone notice any icon changes recently? Yeah, they’re pretty wonderful. Or maybe I should say funderful? Looking forward to where they end up!


Speaking of looking forward, I’m going to be on vacation for the next two weeks. But fear not! Jared and Mike will be covering Photon updates, so you’ll still be able to get your Photon phix.

Recent Changes

Menus/structure:

Animation:

Preferences:

Visual redesign:

  • Updated the button positions in the navbar, and made them more customizable. (This was a contributor patch – thanks!)
  • Close buttons updated across the UI (also a contributor patch!)
  • The “Compact Light” and “Compact Dark” themes have been renamed to simply “Light” and “Dark”. (The UI density setting is already independent of the theme.)

Onboarding:

Performance:

 


Planet MozillaKindness and Code

It is very easy to think of software development as being an entirely technical activity, where humans don’t really matter and everything is about the computer. However, the opposite is actually true.

Software engineering is fundamentally a human discipline.

Many of the mistakes made over the years in trying to fix software development have been made by focusing purely on the technical aspects of the system without thinking about the fact that it is human beings who write the code. When you see somebody who cares about optimization more than readability of code, when you see somebody who won’t write a comment but will spend all day tweaking their shell scripts to be fewer lines, when you have somebody who can’t communicate but worships small binaries, you’re seeing various symptoms of this problem.

In reality, software systems are written by people. They are read by people, modified by people, understood or not by people. They represent the mind of the developers that wrote them. They are the closest thing to a raw representation of thought that we have on Earth. They are not themselves human, alive, intelligent, emotional, evil, or good. It’s people that have those qualities. Software is used entirely and only to serve people. They are the product of people, and they are usually the product of a group of those people who had to work together, communicate, understand each other, and collaborate effectively. As such, there’s an important point to be made about working with a group of software engineers:

There is no value to being cruel to other people in the development community.

It doesn’t help to be rude to the people that you work with. It doesn’t help to angrily tell them that they are wrong and that they shouldn’t be doing what they are doing. It does help to make sure that the laws of software design are applied, and that people follow a good path in terms of making systems that can be easily read, understood, and maintained. It doesn’t require that you be cruel to do this, though. Sometimes you do have to tell people that they haven’t done the right thing. But you can just be matter of fact about it—you don’t have to get up in their face or attack them personally for it.

For example, let’s say somebody has written a bad piece of code. You have two ways you could comment on this:

“I can’t believe you think this is a good idea. Have you ever read a book on software design? Obviously you don’t do this.”

That’s the rude way—it’s an attack on the person themselves. Another way you could tell them what’s wrong is this:

“This line of code is hard to understand, and this looks like code duplication. Can you refactor this so that it’s clearer?”

In some ways, the key point here is that you’re commenting on the code, and not on the developer. But also, the key point is that you’re not being a jerk. I mean, come on. The first response is obviously rude. Does it make the person want to work with you, want to contribute more code, or want to get better? No. The second response, on the other hand, lets the person know that they’re taking a bad path and that you’re not going to let that bad code into the codebase.

The whole reason that you’re preventing that programmer from submitting bad code has to do with people in the first place. Either it’s about your users or it’s about the other developers who will have to read the system. Usually, it’s about both, since making a more maintainable system is done entirely so that you can keep on helping users effectively. But one way or another, your work as a software engineer has to do with people.

Yes, a lot of people are going to read the code and use the program, and the person whose code you’re reviewing is just one person. So it’s possible to think that you can sacrifice some kindness in the name of making this system good for everybody. Maybe you’re right. But why be rude or cruel when you don’t have to be? Why create that environment on your team that makes people scared of doing the wrong thing, instead of making them happy for doing the right thing?

This extends beyond just code reviews, too. Other software engineers have things to say. You should listen to them, whether you agree or not. Acknowledge their statements politely. Communicate your ideas to them in some constructive fashion.

And look, sometimes people get angry. Be understanding. Sometimes you’re going to get angry too, and you’d probably like your teammates to be understanding when you do.

This might all sound kind of airy-fairy, like some sort of unimportant psychobabble BS. But look. I’m not saying, “Everybody is always right! You should agree with everybody all the time! Don’t ever tell anybody that they are wrong! Nobody ever does anything bad!” No, people are frequently wrong and there are many bad things in the world and in software engineering that you have to say no to. The world is not a good place, always. It’s full of stupid people. Some of those stupid people are your co-workers. But even so, you’re not going to be doing anything effective by being rude to those stupid people. They don’t need your hatred—they need your compassion and your assistance. And most of your co-workers are probably not stupid people. They are probably intelligent, well-meaning individuals who sometimes make mistakes, just like you do. Give them the benefit of the doubt. Work with them, be kind, and make better software as a result.

-Max

Planet MozillaTime to sink the Admiral (or, why using the DMCA to block adblockers is a bad move)

One of the testing steps I have to do, but don't enjoy, is running TenFourFox "naked" (without my typical adblock add-ons) to get an assessment of how it functions drinking from the toxic firehose that is the typical modern ad network. (TL;DR: Power Macs run modern Web ads pretty poorly. But, as long as it doesn't crash.) Now to be sure, as far as I'm concerned sites gets to monetize their pages however they choose. Heck, there's ads on this blog, provided through Google AdSense, so that I can continue to not run a tip jar. The implicit social contract is that they can stick it behind a paywall or run ads beside them and it's up to me/you to decide whether we're going to put up with that and read the content. If we read it, we should pony up in either eyeballs or dinero.

This, of course, assumes that the ads we get served are reasonable and in a reasonable quantity. However, it's pretty hard to make money simply off per-click ads and networks with low CPM, so many sites run a quantity widely referred to as a "metric a$$ton" and the ads they run are not particularly selective. If those ads end up being fat or heavy or run scripts and drag the browser down, they consider that the cost of doing business. If, more sinisterly, they end up spying on or fingerprinting you, or worse, try to host malware and other malicious content, well, it's not their problem because it's not their ad (but don't block them all the same).

What the solution to this problem is not, is begging us to whitelist them because they're a good site. If you're not terribly discriminating about what ads you burden your viewers with, then how good can your site really be? The other non-solution is to offer effectively the Hobson's choice of "ads or paywall." What, the solution to the ads you don't curate is to give you my credit card number so you can be equally as careful with that?

So until this situation changes and sites get a little smarter about how they do sponsorship (let me call out a positive example: The Onion's sponsored content [slightly NSFW related article]), I don't have a moral problem with adblocking because really that's the only way to equalize the power dynamic. Block the ads on this blog if you want; I don't care. Click on them or not, your choice. In fact, for the Power Macs TenFourFox targets, I find an adblocker just about essential and my hats are off to those saints of the church who don't run one. Lots of current sites are molasses in January on barbiturates without it and I can only improve this problem to a certain degree. Heck, they drag on my i7 MacBook Air. What chance does my iMac G4 have?

That's why this egregious abuse of statute is particularly pernicious: a company called Admiral, which operates an anti-adblocker, managed to use a DMCA request to Github to get the address of the site hosting their beacon image (to determine if you're blocking them or not) removed from the EasyList adblock listing. They've admitted it, too.

The legal theory, as I understand it (don't ask me to defend it), is that adblockers allow users to circumvent measures designed to "control access," which is a specific component of the American DMCA. (It is not, in fact, the case in Europe.) It might be more accurate to say that the components of adblockers that block adblocker blocking are primarily what they object to. (Uh, yo dawg.) Since the volunteer maintainers of EasyList are the weak link and the list they maintain is the one most adblockers use as a base, this single action gets them unblocked by most adblock extensions and potentially gives other ad networks a fairly big club to force compliance to boot.

The problem with this view, and it is certainly not universally shared, is that given that adblockers work by preventing certain components of the page from loading, theoretically anything that does not load the website completely as designed is therefore in violation. The famous text browser Lynx, for example, does not display images or run JavaScript, and since most ads and adblocker-blockers are implemented with images and JavaScript, it is now revealed as a sinister tool of the godless communist horde. NoScript blocks JavaScript on sites you select, and for the same reasons will cause the end of the American Republic. Intentionally unplugging your network cable at the exact moment when the site is pushing you a minified blob of JS crap -- or the more technically adept action of blackholing that address in your hosts file or on your router -- prevents the site from loading code to function in the obnoxious manner the ad network wants it to, and as a result is clearly treason. Notice that in all these examples the actual code of the site is not modified, just whether the client will process (or in the last example even just receive) and display it. Are all these examples "circumvention"?

This situation cannot stand and it's time for us independent browser maintainers to fight fire with fire. If Admiral isn't willing to back down, I'll issue the ultimatum that I will write code into TenFourFox to treat any of Admiral's web properties as malicious, and I encourage other browser maintainers to do the same. We already use Safe Browsing to block sites that try to load malicious code and we already generate warnings for sites with iffy credentials or bad certificates, so it's not a stretch to say that a site that actively attacks user choice is similarly harmful. The block will only be by default and a user that really wants to can turn it off, but the point will be made. I challenge Admiral to step up their game and start picking on people their own size if they really believe this is the best course of action.

And hey, even if this doesn't work, I should get lots of ad clicks from this, right? Right?

I'll get my coat.

Planet MozillaFirefox 56 Beta 4 Testday, August 18th

Hello dear Mozillians!

We are happy to let you know that Friday, August 18th, we are organizing Firefox 56 Beta 4 Testday. We’ll be focusing our testing on the following new features: Media Block Autoplay, Preferences Search [Photon] and Photon Preferences reorg V2.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Planet MozillaHonoring Our Friend Bassel: Announcing the Bassel Khartabil Free Culture Fellowship

To honor Bassel Khartabil’s legacy and his lasting impact on the open web, a slate of nonprofits are launching a new fellowship in his name

 

By Katherine Maher (executive director, Wikimedia Foundation), Ryan Merkley (CEO, Creative Commons) and Mark Surman (executive director, Mozilla)

On August 1, 2017, we received the heartbreaking news that our friend Bassel (Safadi) Khartabil, detained since 2012, was executed by the Syrian government shortly after his 2015 disappearance. Khartabil was a Palestinian Syrian open internet activist, a free culture hero, and an important member of our community. Our thoughts are with Bassel’s family, now and always.

Today we’re announcing the Bassel Khartabil Free Culture Fellowship to honor his legacy and lasting impact on the open web.

Bassel Khartabil

Bassel was a relentless advocate for free speech, free culture, and democracy. He was the cofounder of Syria’s first hackerspace, Aiki Lab, Creative Commons’ Syrian project lead, and a prolific open source contributor, from Firefox to Wikipedia. Bassel’s final project, relaunched as #NEWPALMYRA, entailed building free and open 3D models of the ancient Syrian city of Palmyra. In his work as a computer engineer, educator, artist, musician, cultural heritage researcher, and thought leader, Bassel modeled a more open world, impacting lives globally.

To honor that legacy, the Bassel Khartabil Free Culture Fellowship will support outstanding individuals developing the culture of their communities under adverse circumstances. The Fellowship — organized by Creative Commons, Mozilla, the Wikimedia Foundation, the Jimmy Wales Foundation, #NEWPALMYRA, and others — will launch with a three-year commitment to promote values like open culture, radical sharing, free knowledge, remix, collaboration, courage, optimism, and humanity.

As part of this new initiative, fellows can work in a range of mediums, from art and music to software and community building. All projects will catalyze free culture, particularly in societies vulnerable to attacks on freedom of expression and free access to knowledge. Special consideration will be given to applicants operating within closed societies and in developing economies where other forms of support are scarce. Applications from the Levant and wider MENA region are greatly encouraged.

Throughout their fellowship term, chosen fellows will receive a stipend, mentorship from affiliate organizations, skill development, project promotion, and fundraising support from the partner network. Fellows will be chosen by a selection committee composed of representatives of the partner organizations.

Says Mitchell Baker, Mozilla executive chairwoman: “Bassel introduced me to Damascus communities who were hungry to learn, collaborate and share. He introduced me to the Creative Commons community which he helped found. He introduced me to the open source hacker space he founded, where Linux and Mozilla and JavaScript libraries were debated, and the ideas of open collaboration blossomed. Bassel taught us all. The cost was execution. As a colleague, Bassel is gone. As a leader and as a source of inspiration, Bassel remains strong. I am honored to join with others and echo Bassel’s spirit through this Fellowship.”

Fellowship details

Organizational Partners include Creative Commons, #FREEBASSEL, Wikimedia Foundation, GlobalVoices, Mozilla, #NEWPALMYRA, YallaStartup, the Jimmy Wales Foundation, and SMEX.

Amazon Web Services is a supporting partner.

The Fellowships are based on one-year terms, which are eligible for renewal.

The benefits are designed to allow for flexibility and stability both for Fellows and their families. The standard fellowship offers a stipend of $50,000 USD, paid in 10 monthly installments. Fellows are responsible for remitting all applicable taxes as required.

To help offset cost of living, the fellowship also provides supplements for childcare and health insurance, and may provide support for project funding on a case-by-case basis. The fellowship also covers the cost of required travel for fellowship activities.

Fellows will receive:

  • A stipend of $50,000 USD, paid in 10 monthly installments
  • A one-time health insurance supplement for Fellows and their families, ranging from $3,500 for single Fellows to $7,000 for a couple with two or more children
  • A one-time childcare allotment of up to $6,000 for families with children
  • An allowance of up to $3,000 towards the purchase of a laptop computer, digital cameras, recorders and computer software; fees for continuing studies or other courses, research fees or payments, to the extent such purchases and fees are related to the fellowship
  • Coverage in full for all approved fellowship trips, both domestic and international

The first fellowship will be awarded in April 2018. Applications will be accepted beginning February 2018.

Eligibility Requirements. The Bassel Khartabil Free Culture Fellowship is open to individuals and small teams worldwide, who:

  • Propose a viable new initiative to advance free culture values as outlined in the call for applicants
  • Demonstrate a history of activism in the Open Source, Open Access, Free Culture or Sharing communities
  • Are prepared to focus on the fellowship as their primary work

Special consideration will be given to applicants operating under oppressive conditions, within closed societies, in developing economies where other forms of support are scarce, and in the Levant and wider MENA regions.

Eligible Projects. Proposed projects should advance the free culture values of Bassel Khartabil through the use of art, technology, and culture. Successful projects will aim to:

  • Meaningfully increase free public access to human knowledge, art or culture
  • Further the cause of social justice/social change
  • Strive to develop both a local and global community to support its cause

Any code, content or other materials produced must be published and released as free, openly licensed and/or open-source.

Application Process. Project proposals are expected to include the following:

  • Vision statement
  • Bio and CV
  • Budget and resource requirements for the next year of project development

Applicants whose projects are chosen to advance to the next stage in the evaluation process may be asked to provide additional information, including personal references and documentation verifying income.

About Bassel

Bassel Khartabil, a Palestinian-Syrian computer engineer, educator, artist, musician, cultural heritage researcher and thought leader, was a central figure in the global free culture movement, connecting and promoting Syria’s emerging tech community as it existed before the country was ransacked by civil war. Bassel co-founded Syria’s first hackerspace, Aiki Lab, in Damascus in 2010. He was the Syrian lead for Creative Commons as well as a contributor to Mozilla’s Firefox browser and the Red Hat Fedora Linux operating system. His research into preserving Syrian archeology with computer 3D modeling was a seminal precursor to current practices in digital cultural heritage preservation — this work was relaunched as the #NEWPALMYRA project in 2015.

Bassel’s influence went beyond Syria. He was a key attendee at the Middle East’s bloggers conferences and played a vital role in the negotiations in Doha in 2010 that led to a common language for discussing fair use and copyright across the Arab-speaking world. Software platforms he developed, such as the open-source Aiki Framework for collaborative web development, still power high-traffic web sites today, including Open Clip Art and the Open Font Library. His passion and efforts inspired a new community of coders and artists to take up his cause and further his legacy, and resulted in the offer of a research position in MIT Media Lab’s Center for Civic Media; his listing in Foreign Policy’s 2012 list of Top Global Thinkers; and the award of Index on Censorship’s 2013 Digital Freedom Award.

Bassel was taken from the streets in March of 2012 in a military arrest and interrogated and tortured in secret in a facility controlled by Syria’s General Intelligence Directorate. After a worldwide campaign by international human rights groups, together with Bassel’s many colleagues in the open internet and free culture communities, he was moved to Adra’s civilian prison, where he was able to communicate with his family and friends. His detention was ruled unlawful by the United Nations Working Group on Arbitrary Detention, and condemned by international organizations such as Creative Commons, Amnesty International, Human Rights Watch, the Electronic Frontier Foundation, and the Jimmy Wales Foundation.

Despite the international outrage at his treatment and calls for his release, in October of 2015 he was moved to an undisclosed location and executed shortly thereafter — a fact that was kept secret by the Syrian regime for nearly two years.

The post Honoring Our Friend Bassel: Announcing the Bassel Khartabil Free Culture Fellowship appeared first on The Mozilla Blog.

Planet MozillaQuantum Flow Engineering Newsletter #19

As usual, I have some quick updates to share about what we’ve been up to on improving the performance of the browser in the past week or so.  Let’s first look at our progress on the Speedometer benchmark.  Our performance goal for Firefox 57 was to get within 20% of Chrome’s benchmark score on our Acer reference hardware on Win64.  Those of you who watch the Firefox Health Dashboards every once in a while may have noticed that now we are well within that target:

Speedometer Progress Chart from the Firefox Health Dashboard, within 14.86% of Chrome's benchmark score

It’s nice to see the smiley face on this chart, finally!  You can see the more detailed downward slope on the AWFY graph that shows the progress in the past couple of weeks or so (dark red dots are PGO builds, orange dots are non-PGO builds, and of course green in Chrome):

Detailed Speedometer progress in the past couple of weeks on Win64 (Acer reference hardware)

The situation on Win32 is a bit worse, due to Chrome’s recent switch to clang-cl on Windows instead of MSVC, which gave them around a 30% speed boost on the 32-bit Speedometer score, but we have made progress nonetheless. Such is the nature of tracking moving targets!

Speedometer progress chart on Win32

The other performance aspect to have a look at again is our progress at eliminating slow synchronous IPC calls. I last wrote about this about three weeks ago, and since then at least one major change happened: the infamous document.cookie synchronous IPC call was eliminated, so I figured it may be a good time to look at the data again.

Sync IPC Analysis for 2017-08-10

Telemetry data is laggy since it includes data from older versions of Nightly, but if you compare this to the previous chart, there should be a stark difference visible: PCookieService::Msg_GetCookieString is now a much smaller part of the overall data (at around 26.1%). Looking at the list of the top ten messages, the next ones in order are the usual suspects for those who have followed these newsletters for a while: some JS initiated IPC, PAPZCTreeManager::Msg_ReceiveMouseInputEvent, followed by more JS IPC, followed by PBrowser::Msg_NotifyIMEFocus, followed by even more JS IPC, followed by 2 new messages that are now surfacing as we’ve fixed the worst ones of these: PDocAccessible::Msg_SyncTextChangeEvent, which is related to accessibility and which the data shows affects a relatively small number of sessions due to its low submission rate, and PContent::Msg_ClassifyLocal, which probably comes from the Flash plugin being turned click-to-play by default.

Now let’s look at the breakdown of synchronous IPC messages initiated from JS:

JS Sync IPC Analysis for 2017-08-10

The story here remains unchanged: most of the sync IPC messages we’re seeing come from legacy extensions, and there is also the contextmenu sync IPC, which has a patch pending review.  However, the picture here may start changing quite soon.  You may have seen the recent announcement about legacy extensions being disabled on Nightly starting from tomorrow, so hopefully this data (and the C++ sync IPC data) will soon start to shift to reflect more of the performance characteristics that our users on the release channel will experience for Firefox 57.

Now please let me to acknowledge the great work of those who made Firefox faster last week.  I hope I’m not forgetting any names!

Planet MozillaWebExtensions in Firefox 56

Firefox 56 landed in Beta this week, so it’s time for another update on the WebExtensions transition. Because the development period for this latest release was about twice as long as normal, we have many more updates. Documentation for the APIs discussed here can be found on MDN Web Docs.

API changes

The browsingData API can now remove cookies by host. The initial implementation of browsingData has landed for Android with support for the settings and removeCookies APIs.

The contextMenus API also has a few improvements. The text of the link is now included in the onClickData event and text selection is no longer limited to 150 characters. Optional permission requests can now also be triggered from context menus.

An alternative, more general namespace was added, called browser.menus. It supports the same API and all existing menu contexts, plus a new one that allows you to add items to the Tools menu. You can also provide different icons for your menu items. For example:

browser.menus.create({
  id: "sort-tabs",
  title: "A-Z",
  contexts: ["tools_menu"],
  icons: {
   16: "icon-16-context-menu.png",
  },
});



The windows API can now read and preface a window’s title: pass titlePreface to the window object to add a label before the title. This allows extensions to label different windows so they’re easier to distinguish.
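
For example, a minimal sketch (the preface string here is arbitrary):

// Prefix the current window's title, e.g. "[Work] Page Title - Firefox".
browser.windows.getCurrent().then((win) => {
  return browser.windows.update(win.id, { titlePreface: "[Work] " });
});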

The downloads.open API now requires user interaction to be called. This mirrors the Chrome API which also requires user interaction. You can now download a blob created in a background page.

The tabs API has new printing APIs. The tabs.print, tabs.printPreview and tabs.saveAsPDF (not on Mac OS X) methods will bring up the respective print dialogs for the page. The tabs.Tab object now includes the time the tab was lastAccessed.
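
A rough sketch of how these calls might be used, assuming they are triggered from something like a browser action (the empty settings object for saveAsPDF just uses the defaults):

// Open the native print dialog for the active tab.
browser.tabs.print();

// Open the print preview for the active tab.
browser.tabs.printPreview();

// Save the active tab as a PDF (not available on Mac OS X).
browser.tabs.saveAsPDF({}).then((status) => {
  console.log("saveAsPDF finished with status:", status);
});

// Read when the active tab was last accessed.
browser.tabs.query({ active: true, currentWindow: true }).then(([tab]) => {
  console.log(new Date(tab.lastAccessed));
});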

The webRequest API can now monitor web socket connections (but not the messages) by specifying ws:// or wss:// in the match pattern. Similarly, match patterns now support moz-extension URLs; however, this only applies to the same extension. Importantly, an HTTP 302 redirection to a moz-extension page will now work. For example, this was a common use case for extensions that integrated with OAuth.
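
For instance, a listener that logs secure WebSocket handshakes might look like this sketch (it assumes the extension declares the webRequest permission and matching host permissions):

// Log every wss:// connection attempt (the handshake, not the messages).
browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    console.log("WebSocket connection to:", details.url);
  },
  { urls: ["wss://*/*"] }
);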

The pageAction API can now show page actions on a per-tab basis on Android.

The privacy API gained two new APIs. The privacy.services.passwordSavingEnabled API allows an extension to toggle the preferences that control password saving. The privacy.websites.referrersEnabled API allows an extension to toggle the preferences that control the sending of HTTP Referrer headers.
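
A small sketch of how these settings are toggled, assuming the extension has the privacy permission (they behave like other BrowserSetting objects):

// Turn off the built-in password saving prompt.
browser.privacy.services.passwordSavingEnabled.set({ value: false });

// Stop the browser from sending HTTP Referer headers.
browser.privacy.websites.referrersEnabled.set({ value: false });

// Reading a setting back resolves with { value, levelOfControl, ... }.
browser.privacy.websites.referrersEnabled.get({}).then((setting) => {
  console.log(setting.value, setting.levelOfControl);
});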

A new browserSettings API has been added, starting with a setting to disable the browser’s cache. We’ll use this API for similar settings in the future.
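
A minimal sketch, assuming the cache setting is exposed as browserSettings.cacheEnabled and the extension declares the browserSettings permission:

// Disable the browser's cache while the extension needs it off.
browser.browserSettings.cacheEnabled.set({ value: false });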

In WebExtensions, we manage the changing of preferences and their effects when extensions get uninstalled. This management was applied to chrome_url_overrides. The same management now prevents extensions from overriding user-changed preferences.

The theming API gained a reset method which can be called after an update to reset Firefox to the default theme.

The proxy API now has the ability to clear out a previously registered proxy.

If you’d like to store a large amount of data in indexedDB (something we recommend over storage.local), then you can do so by requesting the unlimitedStorage permission. Requesting this will stop indexedDB prompting the user for permission to store a large amount of data.

The management API has added get and getAll commands. This allows extensions to query existing add-ons to spot any potential conflicts with other content.

Finally, the devtools.panels.elements.onSelectionChanged API landed and extensions that use the developer tools will find that their panels open faster.

Out of process extensions

We first mentioned out of process extensions back in the WebExtensions in Firefox 52 blog post. They’ve been a project that started back in 2016, but they have now been turned on for Windows users in Firefox 56. This is a huge milestone and a lot of work from the team.

This means that all the WebExtensions will run in their own process (that’s one process for all extensions). This has many advantages, but chief among them are performance, security, and crash handling. For example, a crash in a WebExtension will no longer bring down Firefox. Content scripts from WebExtensions are still handled by the content process.

With the new WebExtensions architecture this change was completed with zero changes by extension developers, a significant improvement over the legacy extension environment.

There are some remaining bugs on Linux and OS X that prevent us from enabling it there, but we hope to enable those in the coming releases.

Along with measuring the performance of out of process, we’ve added in multiple telemetry signals to measure the performance of WebExtensions.  For example, it was recently found that storage.local.set was slow. With some improvements, we’ve seen a significant performance boost from a median of over 200ms down to around 25ms:

These telemetry measures conform to the standard Mozilla telemetry guidelines.

about:debugging

The about:debugging page got some more improvements:

The add-on ID has been added to the page. If there’s a warning about processing the add-on, that will now be shown next to the extension. Perhaps most useful to those working on their first add-on, if an add-on fails to load because of a problem, then no problem—there’s now an easy “retry” button for you to press:

Contributors

Thank you once again to our many contributors for this release, especially our volunteers including: Cameron Kaiser, dw-dev, Giorgio Maone, Swapnesh Kumar Sahoo, Timothy Johnson, Tushar Saini and Tomislav Jovanovic.

Update: improved the quality of the image for context menus.

The post WebExtensions in Firefox 56 appeared first on Mozilla Add-ons Blog.

Planet MozillaMy Summer Internship with Firefox Test Pilot

As part of my internship, I participated in a range of activities with the Test Pilot team, including in-home interviews with people who use Test Pilot experiments.

This past summer I had the opportunity to work as an intern on the Firefox Test Pilot team. Upon joining the team I was informed of my summer project: a web experiment to allow for private and secure file transfers.

Firefox Test Pilot experiments tend to act as supplemental features available for the browser, but this project was different. It was necessary to move this experiment to the web because we felt it would be too restricting to force both file senders and file recipients to use Firefox. After much reworking and refactoring, the project was finally released as Send.

Defining Standards

One of the most important things we needed to do before we started writing code was to define exactly what we meant by “private” and “secure file transfer”. Different people can have greatly varied perceptions of what constitutes a satisfactory level of privacy, so it was essential to define our standards before starting the project so as to not compromise our users’ privacy.

Our goal was to make a product that would allow users to share files anonymously and without fear that a third party could snoop in on the transfer. At first we considered WebRTC to allow for peer-to-peer connections, but decided against it in the end as it wasn’t entirely reliable for larger file sizes. It also would have been a hassle for users as it would require both the sender and recipient to keep the browser tab open for the entire duration of transfer.

Without peer-to-peer connections, we decided to host files on Amazon’s S3. We also decided to encrypt the file using client-side cryptography libraries to prevent Mozilla or any third-party from ever seeing the contents of the file. We settled on appending secret 128-bit AES-GCM keys as a hash field on a generated URL that a sender could then share to a recipient. We would then pipe the upload of the encrypted file through our servers to an S3 bucket.

The use of the hash parameter in the URL means the key is never sent to the server and ensures that the file stays encrypted until the recipient’s browser downloads the entire file. We believe that the use of client-side encryption and decryption greatly mitigates any possible information leakage while sharing files, and as a result, would be satisfactory for almost all use cases.
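
A minimal sketch of this general pattern using the Web Crypto API follows. It is illustrative only and not the actual Send implementation; uploadEncrypted() and the download URL are placeholders:

async function encryptAndShare(fileBuffer) {
  // Generate a random 128-bit AES-GCM key entirely on the client.
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 128 },
    true,
    ["encrypt", "decrypt"]
  );
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    fileBuffer
  );

  // Export the key so it can be placed in the URL fragment, which
  // browsers never send to the server.
  const rawKey = await crypto.subtle.exportKey("raw", key);
  const keyB64 = btoa(String.fromCharCode(...new Uint8Array(rawKey)));

  // uploadEncrypted() is a placeholder for the actual upload request.
  const { id } = await uploadEncrypted(ciphertext, iv);
  return `https://example.invalid/download/${id}/#${keyB64}`;
}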

We also decided on adding in auto-expiry of files after twenty-four hours to prevent a user’s files from lingering online indefinitely. We felt that this approach would both provide sufficient privacy and also be seamless enough to satisfy all users of Send.

Building an MVP

I spent the next couple of weeks building a minimal viable product that we could provide to the Legal and Security teams for review. This involved multiple rounds of hacking together some code, and then organizing it into logical modules. The modules became especially important as more teams came together to start working on the experiment.

After finishing a basic working version of the experiment, we decided to put it through some UX testing. We video conferenced with several research participants and sent their feedback to our Taipei UX team, who eventually fleshed out the UX that is used in Send today. After building a working version of the application, I wrote several test cases to ensure that future code edits would not break the file transfer modules and the server side API.

Coordinating with Different Teams

One of the most valuable skills this internship helped me develop is the ability to coordinate with multiple teams to get a feature implemented. For example, I worked with the Security and Legal teams to make sure Send met Mozilla’s high standards for release (even for a Test Pilot experiment), and tackled new bugs found by the Quality Assurance team. I also had the opportunity to work with our Operations team to make sure our production environment was set up correctly. Around the same time, I finished writing frontend and backend tests for the Send app, and we were able to finalize a UI.

By early July, most of the major features of Send had been implemented, so it was mostly a matter of adding metrics, refactoring code, and adding localization scripts. I had to add Mozilla’s L20n library, which meant refactoring a great deal of code so the localization team could work with it. At the same time, I was working on anonymous metrics collection, so I had to work with the Operations team to set up error reporting and analytics correctly. I’ve learned a lot of technical skills this summer, but I also learned a great deal of social skills.

The Launch of Send

After around three months of work, Send was released on the Test Pilot website. This was probably the best part of my internship with Test Pilot: I was able to work on a product from its inception all the way to release. The work was fast-paced and I had to make many revisions to get code cleared by various teams, but in the end it was extremely rewarding. In the days leading up to the release, I was nervous, as this was the first publicly released project on which I was a major contributor.

Although I knew the Mozilla name would garner some interest, the release of Send was met with more fanfare than I could have imagined, being reported on by several different large tech-oriented media outlets. I felt very proud of what I had accomplished! Before this summer started, I never would have guessed that a product I developed would be used by people around the world.


My Summer Internship with Firefox Test Pilot was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Planet MozillaHow One Tweet Can Ruin Your Life

This video is pretty awesome throughout, but the pinnacle is at the end:

The great thing about social media was how it gave a voice to voiceless people, but we’re now creating a surveillance society, where the smartest way to survive is to go back to being voiceless. Let’s not do that. — Jon Ronson

Planet MozillaUpcoming Changes in Compatibility Features

Firefox 57 is now on the Nightly channel (along with a shiny new logo!). And while it isn’t disabling legacy add-ons just yet, it will soon. There should be no expectation of legacy add-on support on this or later versions. In preparation for Firefox 57, a number of compatibility changes are being implemented on addons.mozilla.org (AMO) to support this transition.

Upcoming Compatibility Changes

  • All legacy add-ons will have strict compatibility set, with a maximum version of 56.*. This is the end of the line for legacy add-on compatibility. They can still be installed on Nightly with some preference changes, but may break due to other changes happening in Firefox.
  • Related to this, you won’t be able to upload legacy add-ons that have a maximum version set higher than 56.*.
  • It will be easier to find older versions of add-ons when the latest one isn’t compatible. Some developers will be submitting ports to the WebExtensions API that depend on very recent API developments, so they may need to set a minimum version of 56.0 or 57.0. That can make it difficult for users of older versions of Firefox to find a compatible version. To address this, compatibility filters on search will be off by default. Also, we will give more prominence to the All Versions page, where older versions of the add-on are available.
  • Add-ons built with WebExtensions APIs will eventually show up higher in search rankings. This is meant to reduce instances of users installing add-ons that will break within a few weeks.

We will be rolling out these changes in the coming weeks.

Add-on compatibility is one of the most complex AMO features, so it’s possible that some things won’t work exactly right at first. If you run into any compatibility issues, please file them here.

The post Upcoming Changes in Compatibility Features appeared first on Mozilla Add-ons Blog.

Planet MozillaReps Weekly Meeting Aug. 10, 2017

Reps Weekly Meeting Aug. 10, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet MozillaMozilla Science Lab August 2017 Bi-Monthly Community Call

Mozilla Science Lab August 2017 Bi-Monthly Community Call

Planet MozillaOverhead analysis for Vulkan Portability

One of the design goals for the portability API is to keep any overhead (when translating to other APIs) to a minimum, ideally providing a zero-cost abstraction. In this article, we’ll dissect the potential sources of overhead into groups and analyze the prospects of each, suggesting possible solutions. The problem in question is very broad, but we’ll spice it with examples raised in the Vulkan Portability Initiative.

Another unit of compilation

When an indirection layer is added inside a program, given a language with zero-cost abstractions like C++ or Rust, it is possible to have the layer completely optimized away. However, the library will be provided as a static/dynamic binary, which prevents the linker from inlining the calls. That means doubling the cost of a function invocation (as opposed to execution) compared to a native API.
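
As a rough illustration, consider a thin wrapper like the one below (the types are hypothetical, not the actual portability API). With the definition visible to the compiler, or with whole-program/link-time optimization, the wrapper is inlined away; behind a separately compiled static or dynamic library it remains a real extra call:

    // Hypothetical types, for illustration only.
    pub struct Device;
    pub struct Buffer;

    impl Device {
        pub fn native_create_buffer(&self, size: usize) -> Buffer {
            // Imagine this performs the actual native API call.
            let _ = size;
            Buffer
        }
    }

    // Inlined away when the compiler can see both sides; an extra function
    // invocation when called through a prebuilt library without LTO.
    #[inline]
    pub fn create_buffer(device: &Device, size: usize) -> Buffer {
        device.native_create_buffer(size)
    }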

Solutions:

  • whole program optimization
  • pure header library
    • locks into using C/C++
    • inconvenient
    • long compile times

Native API differences

Some aspects of the native APIs don’t match up exactly. This is amplified by the flexible nature of Vulkan, which tends to provide the richest feature set compared to D3D12 and Metal.

For example, Vulkan allows command buffers to be re-used, and so does D3D12. In Metal, however, it’s not directly supported. If this ability is exposed unconditionally, the Metal backend has to record all the encoded commands on the side in order to translate them later to the corresponding MTL*CommandEncoder interface.

When the user requests to use the command buffer again, the Metal backend has to re-encode the native command buffer on the spot, which adds a considerable delay to the otherwise inexpensive operation of submitting a command buffer for execution.
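
A very rough sketch of that strategy, using entirely hypothetical types rather than the actual backend code: the commands are recorded into a plain list so they can be re-encoded into a fresh native command buffer on every submission.

    // Hypothetical sketch of re-usable command buffers on a Metal-like backend.
    enum Command {
        BindPipeline(u32),
        Draw { vertex_count: u32, instance_count: u32 },
    }

    struct CommandBuffer {
        // The backend keeps its own copy of everything the user recorded...
        recorded: Vec<Command>,
    }

    impl CommandBuffer {
        fn submit(&self) {
            // ...and replays it into a fresh native encoder on every submit,
            // which is the extra cost described above.
            for cmd in &self.recorded {
                match cmd {
                    Command::BindPipeline(id) => {
                        let _ = id; // encode a pipeline bind natively
                    }
                    Command::Draw { vertex_count, instance_count } => {
                        let _ = (vertex_count, instance_count); // encode a draw call natively
                    }
                }
            }
        }
    }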

Solutions:

  • more granularity of device capabilities
  • pressure other platforms to add native support for missing features

Skewed idiomaticity

An API is typically associated with a certain way of thinking about and approaching the problems to solve. Providing a Vulkan-like front-end to an “alien” API would skew users toward thinking in terms of Vulkan and what is efficient in it, as opposed to the native APIs.

For example, Vulkan has render sub-passes. Organizing the graphics workload in sub-passes allows tiled hardware (typically found in mobile devices) to re-use intermediate rendering results without a round trip to VRAM. This can be a big optimization, yielding up to a 50% performance increase as well as reduced power usage.

No other API has sub-passes. It is straightforward to emulate them by treating each sub-pass as an independent pass. However, ignoring the fact that intermediate results go back and forth to VRAM would cause the graphics pipeline to wait for these transfers and stall. With a non-multi-pass approach, the user would insert an independent graphics job between the producers and consumers of the data, just to hide the memory latency by not immediately waiting for the producer job to finish.
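
A sketch of that straightforward emulation, again with made-up types rather than any real API: each sub-pass simply becomes an independent pass, which is exactly where the extra VRAM traffic comes from.

    // Hypothetical types; illustrates the naive sub-pass emulation only.
    struct SubPass;

    fn emulate_render_pass(sub_passes: &[SubPass]) {
        for _sub in sub_passes {
            // Each sub-pass is encoded as a full pass of its own: attachments
            // are written out to memory at its end and read back at the start
            // of the next pass, instead of staying in tile memory the way
            // native sub-passes allow.
        }
    }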

When the user writes for Vulkan exclusively, they can have a firm belief that the driver optimizes the sub-passes (e.g. by reordering the work) for non-tiled hardware. When Vulkan is translated to D3D12 and Metal, there is no such luxury.

Solutions:

  • device feature
    • either a soft one, serving as a hint that sub-passes are efficient
    • or a hard one, for allowing the use of multiple sub-passes in general
  • pray that other native APIs will catch up one day

Conclusion

We categorized the possible sources of performance overhead by ascending the ladder of abstraction. We started from low-level compilation units, proceeded through the warts of API translation, and finished on the idiomatic differences between native APIs.

There are no good solutions to these problems. Our task (as a technical subgroup) is to strike a balance between diverging minimally from Vulkan and providing optimal performance on other backends, while keeping the API simple and explicit.

Planet MozillaStatus update, August 2017

Work:

In March I joined the team at eyeo GmbH, the company behind Adblock Plus, as a core developer. Among other things I'm improving the filtering capabilities.

While they are based in Cologne, Germany, I'm still working remotely from Montréal.

It is great to help make the web more user-centric.

Personal project:

I started working again on Niepce, currently implementing the file import. I also started to rewrite the back-end in Rust. The long-term goal is to move completely to Rust; this will happen in parallel with feature implementation.

This and other satellite projects are part of the grand plan I have for digital photography on Linux with GNOME.

'til next time.

Planet MozillaWhat the RLS can do

IDE support for Rust is one of the most requested features in our surveys and is a key part of Rust's 2017 roadmap. Here, I'm going to talk about one of the things we're doing to bring Rust support to IDEs - the RLS.

Programmers can be pretty picky about their editors, so we want to support as broad a selection of editors as possible. A key step towards that goal is implementing the Rust Language Server (RLS). The RLS is a service for providing information about Rust programs. It works with multiple sources of data, primarily the Rust compiler. It communicates with editors using the Language Server Protocol (LSP) so that clients can perform actions such as 'code completion', 'jump to definition', and 'find all references'.
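
For a flavour of what that looks like on the wire, here is a minimal sketch of the JSON-RPC request a client might send for "jump to definition" (field names follow the LSP specification; serde_json is used here purely for illustration):

    // Sketch of an LSP "textDocument/definition" request body (JSON-RPC 2.0).
    use serde_json::json;

    fn goto_definition_request(uri: &str, line: u32, character: u32) -> serde_json::Value {
        json!({
            "jsonrpc": "2.0",
            "id": 1,
            "method": "textDocument/definition",
            "params": {
                "textDocument": { "uri": uri },
                "position": { "line": line, "character": character }
            }
        })
    }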

The intention is that the RLS will support multiple clients. Any editor that wants to provide IDE-type functionality for Rust programs can use the RLS. In fact, many editors can get fairly good support just by using a generic LSP client plugin, but you get the best results by using a dedicated Rust client.

We've been working on two very different clients. One is a Visual Studio Code plugin which makes VSCode a Rust IDE. The other is rustw, an experimental web-app for building and exploring Rust programs. Rustw might become a useful tool in its own right or be used to browse source code in Rustdoc. We're also working on a new version of Rustdoc that uses the RLS, rather than being tightly integrated with the compiler.

Visual Studio Code with RLS

rustw - errors
rustw - code browsing

I plan to follow up this blog post with another going over the RLS internals, for RLS client implementors and RLS contributors. In this post, I'll cover the fun stuff - features that will improve your life as a Rust developer.

Type and docs on hover

Hover over an identifier in VSCode or rustw to see its type and documentation. You'll get a link to rustdoc and the source code for standard library types too.

VSCode - type on hover

Semantic highlighting

When you hover over a name in rustw or click inside one in VSCode, we highlight other uses of the same name. Because this is powered by the compiler, we can be smart about this and show exactly the right uses, for example, skipping different variables with the same name.

VSCode - selection highlighting

Code completion

Code completion is where an IDE suggests variable, field, or method names for you based on what you're typing. The RLS uses Racer behind the scenes for code completion, but clients don't need to be aware of this. In the long-term, the compiler should power code completion.

VSCode - code completion

Jump to definition

Jump from the use of a name to where it is defined. This is a key feature for IDEs and code exploration tools. Use F12 in VSCode or a left click in rustw. This works for variables, fields, methods, functions, modules, and more. Lifetimes and macros should work in the next few months.

Find all references

Find all uses of an item throughout a program. The RLS is smart enough to know about different items with the same name, to understand generic types, see through macros, and see references in different crates.

VSCode - find all refs

Find impls

Find all implementations (impls) for a trait or concrete type.

VSCode - find all refs

Apply suggestions

VSCode displays errors from building your program and highlights the errors in your code with squiggles. For some errors, you can apply a suggestion to quickly fix the error. We're working on expanding the errors which support such suggestions.

VSCode - apply error suggestion

Go to symbol

Search for declarations in a file then jump to them.

VSCode go to symbol

Identifier search

Search for a name - we'll show you definitions and uses.

rustw - identifier search

Renaming

The most fundamental refactoring - rename an item. The RLS will rename the definition and all uses, without touching different items with the same name.

VSCode - rename

Note how both instances of the digest variable are renamed but the field with the same name is not.

Deglob refactoring

Replace a glob import (use foo::*;) with a list import of the names actually imported. You can find this refactoring in the command palette in VSCode.
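
A small sketch of what the refactoring does, using standard-library names purely as an example:

    // Before: a glob import pulls in every public name from the module.
    // use std::collections::*;

    // After the deglob refactoring: only the names the code actually uses.
    use std::collections::{HashMap, HashSet};

    fn example() -> (HashMap<String, u32>, HashSet<u32>) {
        (HashMap::new(), HashSet::new())
    }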

VSCode - deglob

Reformatting

The RLS uses Rustfmt to reformat your code.

VSCode - reformat

Trying out the RLS

The best way to try out the RLS is in Visual Studio Code:

  • download vscode
  • install the Rust (RLS) extension by typing ext install rust into the command palette (ctrl + P)
  • open a Rust project (i.e., a folder containing Cargo.toml), and then a Rust file

You'll need to be using rustup. See the extension's home in the VSCode marketplace for more information.

Future plans

The RLS beta is currently available using Rustup with the nightly toolchain. Soon we should extend that support to the beta and stable toolchains. Note that beta vs 1.0 for the RLS is separate from the Rust toolchain being used.

There's pretty much an infinite number of features we could support in an IDE. Also, there are lots of different ways to construct a Rust program, so there are a lot of edge cases and thus a lot of robustness work to do too. However, I hope we are ready to announce 1.0 for the RLS around the end of 2017 or start of 2018.

The Visual Studio Code extension will continue to evolve. We have no big changes planned, but plan to iteratively improve and make regular releases. Hopefully RLS support will appear in other editors soon. Rustw will hopefully evolve into a more full-featured code browser and will be used in rustdoc as well as some other places.

Helping out

Let us know if you encounter any problems by filing issues on the RLS repo.

If you'd like to help by writing code, tests, or docs, then have a look at the repos for the RLS, our VSCode extension, or rustw. Or come talk to us on IRC in #rust-dev-tools. We would love for you to help us out!

Planet MozillaNew Firefox and Toolkit module peers in Taipei!

Please join me in welcoming three new peers to the Firefox and Toolkit modules. All of them are based in Taipei, and I believe they are our first such peers, which is very exciting as it means we now have more global coverage.

  • Tim Guan-tin Chien
  • KM Lee Rex
  • Fred Lin

I’ve blogged before about the things I expect from the peers, and while I try to keep the lists up to date myself, please feel free to point out folks you think may have been passed over.

Planet MozillaSay Hi to Send 1.1.0

We’re excited to announce the arrival of Send 1.1.0. Send now supports Microsoft Edge and Safari! In addition to expanded browser support, we’ve made several other improvements:

  • You can now send files from iOS (results may vary when receiving on iOS).
  • We no longer send file hashes to the server.
  • We fixed a bug that let users accidentally cancel downloads mid-stream.
  • You can now copy to clipboard from a mobile device, and we detect if copy-to-clipboard is disabled.
  • We now ship in 36 languages!

Right now we’re working on a raft of minor fixes before moving on to larger features such as PIN-protected files and multi-file uploads. We’re hoping to maintain a steady shipping schedule in the coming weeks even though we’re losing our beloved interns. I’ll post about performance and feature improvements as they ship.


Say Hi to Send 1.1.0 was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Planet WebKitMichael Catanzaro: On Firefox Sync

Epiphany 3.26 is, unfortunately, not going to be packed with cool new features like 3.24 was. We’ve just been too busy working on improving WebKit this cycle. But there is one cool new thing: Firefox Sync support. You can sync bookmarks, history, passwords, and open tabs with other Epiphany instances as well as with both desktop and mobile Firefox. This is already enabled in 3.25.90. Just go to the Sync tab in Preferences and sign in or create your Firefox account there. Please test it out and report bugs now, so we can quash problems you find before 3.26.0 rather than after.

Some thank yous are in order:

  • Thanks to Gabriel Ivascu, for writing all the code.
  • Thanks to Google and Igalia for sponsoring Gabriel’s work.
  • Thanks to Mozilla. This project would never have been possible if Mozilla had not carefully written its terms of service to allow such use.

Go forth and sync!

Planet WebKitRelease Notes for Safari Technology Preview Release 37

Safari Technology Preview Release 37 is now available for download for macOS Sierra and betas of macOS High Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 219567-220128.

Web API

  • Added initial support for navigator.sendBeacon behind an experimental feature flag (r220121)
  • Implemented document.elementsFromPoint (r219961)
  • Made cross-origin properties enumerable (r219659)
  • Fixed dispatching the click event to the parent when the child target stops hit testing after mouseDown (r219568)
  • Moved DOMException properties to the prototype and changed to use Error.prototype.toString() (r219607, r219663)

JavaScript

  • Added finally method support to Promise (r219989)
  • Added support for optional catch binding (r220068)

WebAssembly

  • Reduced the size of generated binaries (r219899)

Apple Pay

  • Added "carteBancaire" as a supported payment network (r219896)

CSS

  • Aligned quirky number parsing with other browsers (r219642)

Web Inspector

  • Added a context menu item to the Elements tab for taking a screenshot of a node (r219870)
  • The debugger now captures async stack traces when web content calls addEventListener (r220036)
  • Prevented outputting “No message” for multi-value logs like console.log(x, y) (r219900)
  • Fixed warnings about console.assert lines without semicolons (r219894)
  • Inlined multiple console log values if they are simple (r219893)
  • Fixed inspect(aFunction) to jump to the function definition (r219749)
  • Fixed the page overlay highlight to fade out when a page is constantly updating (r219596)
  • Fixed some controls overlaying the header in the Settings tab (r219851)

WebDriver

  • Fixed an issue where implicit navigations didn’t cause a browsing context switch (r219723)
  • Fixed link and partial link queries if the text link contains trailing or leading whitespaces (r219604)
  • Fixed an issue that caused some script evaluations to be attributed to the wrong frame (r219649)

Rendering

  • Changed to disable async image decoding for large images after the first time a tile is painted (r219876)
  • Fixed the minimum font size preference to affect absolute line-height values and prevent text lines from overlapping (r219665)
  • Fixed getting round-trip stroke-width styles causing text to gain a stroke (r219755)
  • Fixed Reeder’s default font to correctly use San Francisco (r220009)

Accessibility

  • Fixed zoom to follow the keyboard insertion point (r219987)
  • Added a background color for the focus state of the icon buttons in the media controls (r220041)
  • Fixed the incorrect range from index and length on <p> tags with contenteditable (r219949)
  • Changed to dispatch accessibilityPerformPressAction asynchronously on macOS (r219906)
  • Fixed silent VoiceOver or skipping over time values on the media player (r219983)
  • Fixed the web page getting reloaded when a node is labelling multiple child nodes (r219661)

Media

  • Fixed media controls missing content in fullscreen when the document has a scroll offset (r219645)
  • Fixed the mouse pointer not hiding during fullscreen playback (r219625)
  • Fixed pressing the Escape key to not be a valid user gesture to enter fullscreen (r219950)

Planet MozillaThe Joy of Coding - Episode 109

The Joy of Coding - Episode 109 mconley livehacks on real Firefox bugs while thinking aloud.
