Planet Mozilla: Powerful New Additions to the CSS Grid Inspector in Firefox Nightly

CSS Grid is revolutionizing web design. It’s a flexible, simple design standard that can be used across all browsers and devices. Designers and developers are rapidly falling in love with it and so are we. That’s why we’ve been working hard on the Firefox Developer Tools Layout panel, adding powerful upgrades to the CSS Grid Inspector and Box Model. The latest improvements are now available in Firefox Nightly.

Layout Panel Improvements

The new Layout Panel lists all the available CSS Grid containers on the page and includes an overlay to help you visualize the grid itself. Now you can customize the information displayed on the overlay, including grid line numbers and dimensions.

This is especially useful if you’re still getting to know CSS Grid and how it all works.

There’s also a new interactive grid outline in the sidebar. Mouse over the outline to highlight parts of the grid on the page and display size, area, and position information.

The new “Display grid areas” setting shows the bounding areas and the associated area name in every cell. This feature was inspired by CSS Grid Template Builder, which was created by Anthony Dugois.

Finally, the Grid Inspector is capable of visualizing transformations applied to the grid container. This lets developers accurately see where their grid lines are on the page for any grids that are translated, skewed, rotated or scaled.

Improved Box Model Panel

We also added a Box Model Properties component that lists properties that affect the position, size and geometry of the selected element. In addition, you’ll be able to see and edit the top/left/bottom/right position and height/width properties—making live layout tweaks quick and easy.

Finally, you’ll also be able to see the offset parent for any positioned element, which is useful for quickly finding nested elements.

As always, we want to hear what you like or don’t like and how we can improve Firefox Dev Tools. Find us on Discourse or @firefoxdevtools on Twitter.

Thanks to the Community

Many people were influential in shipping the CSS Layout panel in Nightly, especially the Firefox Developer Tools and Developer Relations teams. We thank them for all their contributions to making Firefox awesome.

We also got a ton of help from the amazing people in the community, and participants in programs like Undergraduate Capstone Open Source Projects (UCOSP) and Google Summer of Code (GSoC). Many thanks to all the contributors who helped land features in this release including:

Micah Tigley – Computer science student at the University of Lethbridge, Winter 2017 UCOSP student, Summer 2017 GSoC student. Micah implemented the interactive grid outline and grid area display.

Alex Lockhart – Dalhousie University student, Winter 2017 UCOSP student. Alex contributed to the Box Model panel with the box model properties and position information.

Sheldon Roddick – Student at Thompson Rivers University, Winter 2017 UCOSP student. Sheldon did a quick contribution to add the ability to edit the width and height in the box model.

If you’d like to become a contributor to Firefox Dev Tools, hit us up on GitHub, Slack, or #devtools on irc.mozilla.org. There you will find all the resources you need to get started.

Planet Mozilla: Reps Weekly Meeting Jun. 22, 2017

Reps Weekly Meeting Jun. 22, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet Mozilla: RepsNext – Status Update June 2017

In the past few months we have kept working on the implementation of our RepsNext initiative. RepsNext started more than a year ago with the goal of bringing the Mozilla Reps program to the next level. Back in January we wrote a status update. Almost half a year later, we want to provide a further update. We have also published our OKRs for the current quarter, with goals to further the implementation of RepsNext.

RepsNext overview showing what is done and what is not, explained further down in this article.

Resources

The Resources training is finalized. It’s still a little bit text-heavy, but we want to move forward with the training and iterate based on feedback. For this, we have reached out to a few selected Reps, chosen based on their activity over the past 6 months, to ask them to test the training and give initial feedback about the process and content. Once we have this feedback, we will adjust the training if needed and then open up the Resources track for applications. Applications will most probably be done in a Google Form and will include general info about the Rep as well as a free-text field where the Rep can explain why they are suited for the track and provide links to previous, good budget requests they have filed. You can learn more about the Resources Track on the Resources Wiki page.

Onboarding Process

We have simplified and streamlined the onboarding process for new Reps. Until April we had a lot of applications that had been open for more than 6 months. We are happy to report that we have started to onboard 20 new Reps between April and now. A further 10 Reps are in the administrative process of signing the agreement and creating profiles on the Portal. All of this is thanks to a new webinar, which allows us to give new Reps the much-needed first information about the program and what to expect as a Rep.

Participation Alignment

The Council is working with the Participation team to co-create the quarterly and yearly goals and OKRs for 2017. This has already happened twice this year and we will continue to give our valuable input and feedback in the quarters to come. The program’s goals are also being created based on the team’s goals and priorities. We are also attending the monthly Open Innovation Team calls. Of course this is ongoing work that will continue. The Reps Council is also involved in strategic and operational discussions as representatives of the broader community, giving feedback on the currently ongoing strategic projects. All of this work will continue at the All Hands in San Francisco later this month.

Leadership

At the beginning of our work on RepsNext, we wanted to create a specific Leadership Track that Reps could apply for as a specialization. Throughout the past months it became clear that we want all Reps to improve their leadership skills to help out other Reps as well as their communities. Therefore we created an initial list of good leadership resources for everyone to access and learn from. At first this is a basic list of resources, which will be improved in the future. We want all Reps to be able to improve their leadership skills as soon as possible and later build on top of this knowledge with further resources. Please provide your feedback in the Discourse topic!

Coaching

Previously known as Regional Coaches, Community Coaches will continue to support local communities. In addition to that, we are currently creating a Coaches Training to train new Reps on coaching skills as well as to help existing mentors improve theirs. These coaches will be able to coach Reps on personal development. The idea is to offer the Coaches Training on a self-serve basis, so everyone can take the training and complete a narrative which will be evaluated at the end to graduate from the training. This will help us increase the quality of coaching and mentoring in the Reps program as well as in local communities. It will also reduce the current bottleneck we have in onboarding new Reps, and we will be able to assign a coach to every Rep on a one-year commitment basis, with the option to switch coaches after this period. We are currently reviewing the implementation proposal so we can add the training to Teachable and publish it for all Reps.

Functional areas

We recently asked all Reps to choose their path for the future. This gives us a valuable basis for discussions about functional doers in the Reps program. We will further work out the exact details about functional doers and their interests. The ongoing strategy projects will additionally give us valuable guidance in coming up with the perfect opportunities for functional doers. If you are interested in statistics about this survey, join our discussion on Discourse.

Upcoming work

We are in the final steps of finishing our work on the Resources track and the Coaching training. This allows us to start talks on further improvements in the third quarter of this year. We are also going to the All Hands to discuss Reps, Strategy, Mobilizers and more with the Open Innovation team. We will update you about the outcomes after the All Hands.

You can follow all the Reps program’s goals and progress in the Reps Issue Tracker.

Which thoughts cross your mind upon reading this? Where would you like to help out? Let’s keep the conversation going! Join the discussion on Discourse.

Planet Mozilla: Validating directory inputs?

A quick thought here.  I spent several hours today trying to figure out why a simple Firefox toolkit application wouldn’t work.  (I don’t know what to call “-app application.ini” applications anymore, as “XULRunner” has definitely fallen from favor…)  It took me far too long to realize that the “default” subdirectory should’ve been named “defaults” – something that I already know about these apps, but I only build them from scratch every two years or so…

Catching this sort of rookie mistake is, fundamentally, an argument validation exercise:  the main difference is instead of the argument being an object of some kind, it’s a directory on the filesystem.  If Mozilla has a module or component for validating a directory’s structure in general, I haven’t heard of it…

Which is the point of my post here.  I’m wondering what general-purpose libraries exist for validating a directory tree’s structure and contents at a basic level.  Somebody out there must have run into this problem before and created libraries for this.  I’d love to see libraries written in C++, D, Python, NodeJS and/or privileged JavaScript.  Please reply to my post if you can point me to them.  (For once, a quick search on the world’s most popular search engine fails me…)  Bonus points for libraries that allow passing in callbacks for file-specific validation. (“Is there a syntactically correct .ini file at (root)/application.ini?”)
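To make the idea a bit more concrete, here is a rough sketch of the kind of callback-driven directory validation I mean (written in Rust for brevity; the names and API are entirely made up and do not refer to any existing library):

use std::fs;
use std::path::Path;

/// One expectation about the tree: a relative path that must exist,
/// whether it must be a directory, and an optional content check.
struct Expected {
    rel_path: &'static str,
    must_be_dir: bool,
    check: Option<fn(&Path) -> Result<(), String>>,
}

/// Walk the expectations against `root` and collect every failure.
fn validate_tree(root: &Path, expectations: &[Expected]) -> Vec<String> {
    let mut errors = Vec::new();
    for e in expectations {
        let full = root.join(e.rel_path);
        match fs::metadata(&full) {
            Err(_) => errors.push(format!("missing: {}", full.display())),
            Ok(meta) => {
                if meta.is_dir() != e.must_be_dir {
                    errors.push(format!("wrong kind: {}", full.display()));
                } else if let Some(check) = e.check {
                    // File-specific validation, e.g. "is application.ini well-formed?"
                    if let Err(msg) = check(&full) {
                        errors.push(format!("{}: {}", full.display(), msg));
                    }
                }
            }
        }
    }
    errors
}

fn main() {
    let expectations = [
        Expected { rel_path: "application.ini", must_be_dir: false, check: None },
        // The rookie mistake from above: it's "defaults", not "default".
        Expected { rel_path: "defaults", must_be_dir: true, check: None },
        Expected { rel_path: "defaults/preferences", must_be_dir: true, check: None },
    ];
    for err in validate_tree(Path::new("."), &expectations) {
        println!("{}", err);
    }
}

Something along these lines, with a richer schema and pluggable per-file callbacks, is roughly what I would hope an existing library already provides.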

Dev.Opera: What’s new in Chromium 59 and Opera 46

Opera 46 (based on Chromium 59) for Mac, Windows, Linux is out! To find out what’s new for users, see our Desktop blog post. Here’s what it means for web developers.

Animated PNG

Opera now supports animated PNG, or APNG for short. APNG is a file format that works similarly to GIF. The difference is that APNG is smaller and supports both 24-bit images and 8-bit transparency. It has become quite popular recently, particularly since Apple adopted the APNG file format for the iOS 10 iMessage apps. APNG was also supported in Presto-based Opera 12.

Watch the APNG demo and learn more about this format.

SVG favicons

This was actually added in Opera 44, but we missed writing about it!

This might still be a bit rough around the edges, but in general SVG favicons should now work in Opera! (Not yet in Chromium.) To see this in action, check out some of the WHATWG standards which use SVG favicons. Firefox also supports SVG favicons, and they were also supported in Presto-based Opera 12.

Expensive background tab throttling

Opera now throttles expensive background tabs. This reduces the processing power required for background tabs and improves battery life and browsing performance. Try it out yourself with this background timer throttling demo.

As a web developer, you can also help reduce the work that your page or app does while it is in a background tab by using requestAnimationFrame instead of timers to drive animations, and the Page Visibility API to stop unnecessary work when the page is hidden.

Service worker navigation preload

The Service Worker navigation preload API enables the browser to preload navigation requests while a service worker is starting up. See Speed up Service Worker with Navigation Preloads by Jake Archibald for more information.

Other features in this release

  • Developers can now use MediaError’s message property to obtain greater detail about a MediaError produced by <audio> or <video>.
  • WritableStreams are now available as part of the Streams API for processing streams of data, while providing a standard abstraction for writing streaming data to a sink with built-in backpressure and queuing.
  • The Streams API has been expanded with the ability to pipe between ReadableStreams and WritableStreams via the pipeTo() and pipeThrough() methods, allowing easier consumption of streaming data.
  • The Image Capture API now allows sites to take higher resolution images than before, providing full control over camera settings such as zoom, ISO, and white balance.
  • To provide enhanced privacy, CSS stylesheets can now specify their own referrer policy via the HTTP header, rather than always inheriting the referrer policy of the document that originally referenced them.
  • Touch events are now aligned to requestAnimationFrame, ensuring that input is processed as part of the document lifecycle and creating a more efficient and adaptive input response.
  • The new worker-src Content Security Policy directive restricts which URLs may be loaded as a Worker, SharedWorker, or ServiceWorker.

Deprecations and interoperability improvements

  • The <dialog> element has changed from display: inline to display: block by default to better align with the spec.
  • Following removal from the Media Queries spec, support for the hover: on-demand and any-hover: on-demand media queries has been removed.
  • To better align with spec and help avoid race conditions, decodeAudioData now detaches the given ArrayBuffer before decoding, removing all content from the object and making it unable to be reused or examined.
  • The -internal-media-controls-cast-button CSS selector has been removed in favor of the Remote Playback API.
  • The -internal-media-controls-text-track-list* CSS selectors have been removed in favor of custom-built video controls.
  • The SVGTests.requiredFeatures attribute has been removed following its removal from the spec.
  • initDeviceMotionEvent() and initDeviceOrientationEvent() were removed in favor of DeviceOrientationEvent() and DeviceMotionEvent(), following a spec trend of moving away from initialization functions and toward constructors.
  • To preserve consistency across browsers, the sample property will now be included in a violation report (and associated SecurityPolicyViolationEvent object) if a report-sample expression is present in the violated directive.
  • To increase security, Opera will now block requests for subresources that contain embedded credentials, and instead handle them as network errors.
  • To increase security, Opera will now block requests from HTTP/HTTPS documents to ftp: URLs.
  • To preserve consistency across browsers, injecting JavaScript via AppleScript is no longer supported in Opera for Mac.
  • The ability to call Notification.requestPermission() from non-main frames has been deprecated to align the requirements for notification permission with requirements for push notifications, and ease friction for developers.
  • Support for Shared Dictionary Compression (SDCH) has been disabled until a stable API has been standardized.

What’s next?

If you’re interested in experimenting with features that are in the pipeline for future versions of Opera, we recommend following our Opera Developer stream.

Planet Mozilla: Community Participation Guidelines Revision Brownbag (APAC)

Community Participation Guidelines Revision Brownbag (APAC) A revised version of Mozilla's Community Participation Guidelines was released in May 2017. Please join Larissa Shapiro (Head of D&I) and Lizz Noonan (D&I Coordinator)...

Planet Mozilla: The Joy of Coding - Episode 103

The Joy of Coding - Episode 103 mconley livehacks on real Firefox bugs while thinking aloud.

Planet Mozilla: Upcoming changes for add-on usage statistics

We’re changing the way we calculate add-on usage statistics on AMO so they better reflect real-world usage. This change will go live on the site later this week.

The user count is a very important part of AMO. We show it prominently on listing and search pages. It’s a key factor in determining add-on popularity and search ranking.

Most popular add-ons on AMO

However, there are a couple of problems with it:

  • We count both enabled and disabled installs. This means some add-ons with high disable rates have a higher ranking than they should.
  • It’s an average over a period of several weeks. Add-ons that are rapidly growing in users have user numbers that are lagging behind.

We’ll be calculating the new average based on enabled installs for the past two weeks of activity. We believe this will reflect add-on usage more accurately.

What it means for add-on developers

We expect most add-ons to experience a small drop in their user numbers, due to the removal of disabled installs. Most add-on rankings on AMO won’t change significantly. This change also doesn’t affect the detailed statistics dashboard developers have access to. Only the number displayed on user-facing sections of the site will change.

If you notice any problems with the statistics or anything else on AMO, please let us know by creating an issue.

The post Upcoming changes for add-on usage statistics appeared first on Mozilla Add-ons Blog.

Planet Mozilla: Designing for performance: A data-informed approach for Quantum development

When we announced Project Quantum last October, we talked about how users would benefit from our focus on “performance gains…that will be so noticeable that your entire web experience will feel different.”

We shipped the first significant part of this in Firefox 53, and continue to work on the engineering side. Now let’s dive into the performance side and the work we’re doing to ensure that our users will enjoy a faster Web experience.

What makes work on performance so challenging and why is it so important to include the user from the very beginning?


Performance — a contested subject, to say the least!

Awareness of performance as a UX issue often begins with a negative experience – when things get slow or don’t work as expected. In fact, good performance is already table stakes, something that everyone expects from an online product or service. Outstanding performance will very soon become the new baseline point of reference.

The other issue is that there are different perspectives on performance. For users, performance is about their experience and is very often unspecific. For them, perception of good performance can range from “this is amazingly fast” to “SLOW!”, from “WOW!” to “NO!”. For engineers, performance is about numbers and processes. The probes that collect data in the code often measure one specific task in the pipeline. Measuring and tracking capabilities like Garbage Collection (GC) enables engineers to react to regressions in the data quickly, and work on fixing the root causes.

This is why there can be a disconnect between user experience and engineering efforts at mitigation. We measure garbage collection, but it’s often measured without context, such as whether it runs during page load, while the user interacts with a website, or during event queue idle time. Often, GC is within budget, which means that users will hardly perceive it. More generally, specific aspects of what we measure with our probes can be hard to map to the unspecific experience of performance that users have.

Defining technical and perceived performance

To describe an approach for optimizing performance for users, let us start by defining what performance means. For us, there are two sides to performance: technical performance and perceived performance.

Under technical performance, we include the things that we can measure in the browser: how long page elements take to render, how fast we can parse JavaScript or — and that is often more important to understand — how slow certain things are. Technical performance can be measured and the resulting data can be used to investigate performance issues. Technical performance represents the engineer’s viewpoint.

On the other hand, there is the topic of how users experience performance. When users talk about their browser’s performance, they talk about perceived performance or “Quality of Experience” (QoE). Users express QoE in terms of any perceivable, recognized, and nameable characteristic of the product. In the QoE theory, these are called QoE features. We may assume that these characteristics are related to factors in the product that impact technical performance, the QoE factors, but this is not necessarily given.

A promising approach to user-perceived optimization of performance is to identify those factors that have the biggest impact on QoE features and focus on optimizing their technical performance.

Understanding perception

The first step towards optimizing Quantum for perceived performance is to understand how human perception works. We won’t go into details here, but it’s important to know that there are perception thresholds of duration that we can leverage. The most prominent ones for Web interactions were defined by Jacob Nielsen back in the 1990s, and even today, they are informing user-centric performance models like RAIL. Following Nielsen’s thresholds gives a first good estimate about the budget available for certain tasks to be performed by the browser engine.

With our user research team, we are validating and investigating these perceptual thresholds for modern web content. We are running experiments with users, both in the lab and remotely. Of course, this will only happen with users’ consent and everybody will be able to opt in and opt out of these studies at any time. With tools like Shield, we run a set of experiments that allow us to learn about performance and how to improve it for users.

However, knowing the perceptual thresholds and the respective budget is just an important first step. In the following, we will go into a bit more detail about how we use a data-informed approach for benchmarking and optimizing performance during the development of our new browser engine.

Three pillars of perceived Web performance

The challenge with optimizing perceived performance of a browser engine is that there are many components involved in bringing data from the network to our screens. All these components may have an impact on the perceived performance and on the underlying perceptual thresholds. However, users don’t know about this structure and the engine. From their point of view, we can define three main pillars for how users perceive performance on the Web: page load, smoothness and responsiveness.

  • Page load: This is what people notice each time when loading a new page. Users care about fast page loads, and we have seen in user research that this is often the way users determine good or bad performance in their browser. Key events defining the perceptual budget during page load are: an immediate response to the user request for a new page, also known as “First Render” or “First non-blank Paint“, and the moment when all important elements are displayed, currently discussed as Hero Element Timing.
  • Smoothness: Scrolling and panning have become challenging activities on modern websites, with infinite scrolling, parallax effects, and dynamic sticky elements. Animations create a better user experience when interacting with the page. Our users want to enjoy a smooth experience for scrolling the web and web animations, be it on social media pages or when shopping for the latest gadget. Often, people nowadays also refer to smoothness as “always 60 fps”.
  • Responsiveness: Beyond scrolling and panning, the other big group of user interactions on websites are mouse, touch, and keyboard inputs. As modern web services create a native-like experience, user expectations for web services are more demanding, based on what they have come to expect for native apps on their laptops and desktop computers. Users have become sensitive to input latency, so we are currently looking at an ideal maximum delay of 100ms.

Targeted optimization for the whole Web

But how do we optimize these three pillars for the whole of the Web? It’s a bigger job than optimizing the performance of a single web service. In building Firefox, we face the challenge of optimizing our browser engine without knowing which pages our users visit or what they do on the Web, due to our commitment to user privacy. This also limits us in collecting data for specific websites or specific user tasks. However, we want to create the best Quality of Experience for as many users and sites as possible.

To start, we decided to focus on the types of content that are currently most popular with Web users. These categories are:

  • Search (e.g. Yahoo Search, Google, Bing)
  • Productivity (e.g. Yahoo Mail, Gmail, Outlook, GSuite)
  • Social (e.g. Facebook, LinkedIn, Twitter, Reddit)
  • Media (e.g. YouTube, Netflix, SoundCloud, Amazon Video)
  • E-commerce (e.g. eBay or Amazon)
  • News & Reference (e.g. NYTimes, BBC, Wikipedia)

Our goal is to learn from this initial set of categories and the most used sites within them and extend our work on improvements to other categories over time. But how do we now match technical to perceived performance and fix technical performance issues to improve the perceived ones?

A data-informed approach to optimizing a browser engine

The goal of our approach here is to take what matters to users and apply that knowledge to achieve technical impact in the engine. With the basics defined above, our iterative approach for optimizing the engine is as follows:

  1. Identification: Based on the set of categories in focus, we specify scenarios for page load, smoothness, and responsiveness that exceed the performance budget and negatively impact perceived performance.
  2. Benchmarks: We define test cases for the identified scenarios so that they become reproducible and quantifiable in our benchmarking testbeds.
  3. Performance profiles: We record and analyze performance profiles to create a detailed view into what’s happening in the browser engine and guide engineers to identify and fix technical root causes.

Identification of scenarios exceeding performance budget

Input for identifying those scenarios comes from different sources: scenarios are either informed by results from user research or reported through bugs or user feedback. Here are two examples of such a scenario:

  • Scenario: browser startup
  • Category: a special case for page load
  • Performance budget: 1000ms for First Paint and 1500ms for Hero Element
  • Description: Open the browser by clicking the icon > wait for the browser to be fully loaded as maximized window
  • What to measure: First Paint: browser window appears on Desktop, Hero Element: “Search” placeholder in the search box of the content window
  • Scenario: Open chat window on Facebook
  • Category: Responsiveness
  • Performance budget: 150ms
  • Description: Log in to Facebook > Wait for the homepage to be fully loaded > click on a name in the chat panel to open chat window
  • What to measure: time from mouse-click input event to showing the chat window on screen

Benchmarks

We have built different testbeds that allow us to obtain valid and reproducible results, in order to create a baseline for each of the scenarios, and also to be able to track improvements over time. Talos is a python-driven performance testing framework that, among many other tests, has a defined set of tests for browser startup and page load. It’s been recently updated to match the new requirements and measure events closer to user perception like First Paint.

Hasal, on the other hand, focuses on benchmarks around responsiveness and smoothness. It runs a defined set of scripts that perform the defined scenarios (like the “open chat window” scenario above) and extracts the required timing data through analyzing videos captured during the interaction.

Additionally, there is still a lot of non-automated, manual testing involved, especially for first rounds of baselining new scenarios before scripting them for automated testing. Therefore, we use an HDMI capture card and analyze the recorded videos frame-by-frame manually.

All these testbeds give us data about how critical the identified scenarios are in terms of exceeding their respective perceptual budgets. Running benchmarks regularly (once a week or even more often) for critical scenarios like browser startup also tracks improvements over time and provides good direction when improvements have moved the scenario into the perceptual budget.

Performance profiles

Now that we have defined our scenarios and understand how much improvement is required to create good Quality of Experience, the last step is to enable engineers to achieve these improvements. The way that engineers look at performance problems in the browser engine is through performance profiles. Performance profiles are a snapshot of what happens in the browser engine during a specific user task such as one of our defined scenarios.

A performance profile using the Gecko Profiler. The profile shows Gecko’s main thread, four content threads, and the compositor main thread. Below is the call stack.

 

A profile consists of a timeline with tracing markers, different thread timelines and the call tree. The timeline consists of several rows that indicate interesting events in terms of tracing markers (colored segments). With the timeline, you can also zoom in to get more details for marked areas. The thread timelines show a list of profiled threads, like Gecko’s Main Thread, four content process threads (thanks to multi-process), and the main thread of the compositor process, as seen in the profile above. The x-axis is synced to the timeline above, and the y-axis shows the stack depth at a given point in time. Finally, the call tree shows the collected samples within a given timeframe organized by ‘Running Time’.

It requires some experience to be able to read these performance profiles and translate them into actions. However, because they map critical user scenarios directly to technical performance, performance profiles serve as a good tool to improve the browser engine according to what users care about. The challenge here is to identify root causes to improve performance broadly, rather than focus on specific sites and individual bugs. This is also the reason why we focus on categories of pages and not an individual set of initial websites.

For in-depth information about performance profiles, here is an article and a talk from Ehsan Akhgari about performance profiles. We are continuously working on improving the profiler addon which is now written in React/Redux.

Iterative testing and profiling performance

The initial round of baselining and profiling performance for the scenarios above can help us go from identifying user performance issues to fixing those issues in the browser engine. However, only iterative testing and profiling of performance can ensure that patches that land in the code will also lead to the expected benefits in terms of performance budget.

Additionally, iterative benchmarking will also help identify the impact that a patch has on other critical scenarios. Looking across different performance profiles and capturing comparable interactions or page load scenarios actually leads to fixing root causes. By fixing root causes rather than focusing on one-off cases, we anticipate that we will be able to improve QoE and benefit entire categories of websites and activities.

Continuous performance monitoring with Telemetry

Ultimately, we want to go beyond a specific set of web categories and look at the Web as a whole. We also want to go beyond manual testing, as this is expensive and time-consuming. And we want to apply knowledge that we have obtained from our initial data-driven approach and extend it to monitoring performance across our user base through Telemetry.

We recently added probes to our Telemetry system that will help us to track events that matter to the user, in the wild across all websites, like first non-blank paint during page load. Over time, we will extend the set of probes meaningfully. A good first attempt to define and include probes that are closer to what users perceive has been taken by the Google Chrome team and their Progressive Web Metrics.

A visualization of Progressive Web Metrics during page load and page interaction. The upper field shows the user interaction level and critical interactions related to the technical measures.

 

As mentioned in the beginning, for users performance is table stakes, something that they expect. In this article, we have explored: how we capture issues in perceived performance, how we use benchmarks to measure the criticality of performance issues, and how to fix the issues by looking at performance profiles.

Beyond the scope of the current approach to performance, there’s an even more interesting question: Will improved performance lead to more usage of the browser or changes to how users use their browser? Can performance improvements increase user engagement?

But these are topics that still need more research — and, at some point in time, will be the subject for another blog post.

Meanwhile, if you are now interested to follow along on performance improvements and experience the enhanced performance of the Firefox browser, go download and install the latest Firefox Nightly build and see what you think of its QoE.

Planet Mozilla: Community Participation Guidelines Revision Brownbag (EMEA)

Community Participation Guidelines Revision Brownbag (EMEA) A revised version of Mozilla's Community Participation Guidelines was released in May 2017. Please join Larissa Shapiro (Head of D&I) and Lizz Noonan (D&I Coordinator)...

Planet Mozilla: A $2 Million Prize to Decentralize the Web. Apply Today

We’re fueling a healthy Internet by supporting big ideas that keep the web accessible, decentralized and resilient. What will you build?

 

Mozilla and the National Science Foundation are offering a $2 million prize for big ideas that decentralize the web. And we’re accepting applications starting today.

Mozilla believes the Internet is a global public resource that must be open and accessible to all.  In the 21st century, a lack of Internet access is far more than an inconvenience — it’s a staggering disadvantage. Without access, individuals miss out on substantial economic and educational opportunities, government services and the ability to communicate with friends, family and peers.

Currently, 34 million people in the U.S. — 10% of the country’s population — lack access to high-quality Internet connectivity. This number jumps to 39% in rural communities and 41% on Tribal lands. And when disasters strike, millions more can lose vital connectivity right when it’s needed most.

To connect the unconnected and disconnected across the U.S., Mozilla today is accepting applications for the Wireless Innovation for a Networked Society (WINS) challenges. Sponsored by NSF, a total of $2 million in prize money is available for wireless solutions that get people online after disasters, or that connect communities lacking reliable Internet access.

The details:

Off-the-Grid Internet Challenge

When disasters like earthquakes and hurricanes strike, communications networks are among the first pieces of critical infrastructure to overload or fail. How can we leverage both the Internet’s decentralized design and current wireless technology to keep people connected to each other — and vital messaging and mapping services — in the aftermath of a disaster?

Challenge applicants will be expected to design both the means to access the wireless network (i.e. hardware) and the applications provided on top of that network (i.e. software). Projects should be portable, easy to power and simple to access.

Here’s an example: A backpack containing a hard drive computer, battery and Wi-Fi router. The router provides access, via a Wi-Fi network, to resources on the hard drive like maps and messaging applications.

Smart Community Networks Challenge

Many communities across the U.S. lack reliable Internet access. Sometimes commercial providers don’t supply affordable access; sometimes a particular community is too isolated; sometimes the speed and quality of access is too slow. How can we leverage existing infrastructure — physical or network — to provide high-quality wireless connectivity to communities in need?

Challenge applicants should plan for a high density of users, far-reaching range and robust bandwidth. Projects should also aim for a minimal physical footprint and uphold users’ privacy and security.

Here’s an example: A neighborhood wireless network where the nodes are housed in, and draw power from, disused phone booths or similarly underutilized infrastructure.

These challenges are open to individuals and teams, nonprofits and for-profits. Applicants could be academics, technology activists, entrepreneurs or makers. We’re welcoming anyone with big ideas and passion for a healthy Internet to apply. Prizes will be available for both early-stage design concepts and fully-working prototypes.

To learn more and apply, visit https://wirelesschallenge.mozilla.org. This challenge is one of Mozilla’s open innovation competitions, which also includes the Equal Rating Innovation Challenge.

Related Reading: Internet access is an essential part of life, but the quality of that access can vary wildly, writes Mozilla’s Executive Director Mark Surman in Quartz

The post A $2 Million Prize to Decentralize the Web. Apply Today appeared first on The Mozilla Blog.

Planet Mozilla: These Weeks in Firefox: Issue 19

Highlights

The new Photon hamburger menu

The new hamburger menu

  • We populated the onboarding overlay on about:newtab according to the UI spec. There are 5 tours with call-to-action buttons & you can hide the tour with a checkbox.
The new onboarding panel

Autocomplete 1st Result Time Graph

Mean!

Friends of the Firefox team

Project Updates

Add-ons

Activity Stream

  • June 15th Newsletter
  • Highlights:
    • Graduation Team has landed the about:newtab Preferences Pane, put in place a bunch of performance telemetry, and made some amazing progress on making sure all existing Firefox about:newtab tests pass.
    • The MVP Team has built out controls for collapsing/expanding AS sections, as well as landed the very cool ‘Recent Bookmarks’ section.
      • This iteration, we will be preffing on a minimal version of Activity Stream in Firefox Nightly!
    • Activity Stream Release Schedule

      The Activity Stream Release Schedule

    • Shield Study to examine AS vs. Tiles starting June 26th

Electrolysis (e10s)

  • Windows support for e10s+accessibility has been delayed until Firefox 56+ due to stability issues. Here’s the meta bug for the project. This bug appears to be the major blocker.

Firefox Core Engineering

Form Autofill

Form Autofill Demo

Note the neat form fill preview!

  • We enabled the heuristics algorithm. Form Autofill now supports forms without @autocomplete attributes, i.e. most en-US sites.
  • Enabled the auto-save feature and implemented the door hanger for notifying users of this.
  • Landed preview & highlight feature. You can now preview how addresses will be filled in multiple fields.
  • Landed the footer in the suggestion dropdown, which makes it easier for users to find the preferences page.
  • Landed basic select element support.
  • Polished the dialogs in about:preferences (two-layer in-content dialog and select element dropmark).

Photon

Performance
Structure
Animation
Visuals
Onboarding
  • The Sync tour and tour notification will land soon.
Preferences

Search

Test Pilot

  • A new version of Firefox Screenshots landed in Nightly last night with some nice bug fixes
  • Tab Center only works up to Firefox 55
  • Graduating July 5th:  Page Shot, Activity Stream
  • Graduating “soon” after July 5th:  Pulse, Tab Center
  • Keep your eyes open for three new experiments coming mid-July!

Planet Mozilla: Message Broker: Into String

Strings in most native or performance focused languages tend to present a fair amount of complexity and Rust is no exception to this. There are cases where you have a struct that needs to have a name, such as:

struct ServiceHandle {
    name: String,
}

The first ServiceHandle::new() function I would typically write when first learning Rust would look like this:

fn new(name: String) -> ServiceHandle {
    // other init stuff
    ServiceHandle { name: name }
}

In the real world I generally have a String to pass to ServiceHandle::new(String) so this API works well. But when I’m writing tests for my code, I want to pass hard coded values with type &'static str. In order to do that, I have to call one of the conversion functions.

ServiceHandle::new("listener".into());
ServiceHandle::new("listener".to_owned());
ServiceHandle::new(String::from("listener"));

If I change the signature to something like:

fn new(name: &str) -> ServiceHandle {
    // other init stuff
    ServiceHandle { name: name.to_owned() }
}

Then I have to remember to prefix the String with & when passing it to the function. Another possibility is to provide from_string and from_str constructors, which is what I started to rely on next. Then I don’t have to remember as much, just use the right one for the right type.

fn from_str(name: &str) -> ServiceHandle {
    // skip init stuff, let from_string do it
    ServiceHandle::from_string(name.to_owned())
}

fn from_string(name: String) -> ServiceHandle {
    // other init stuff
    ServiceHandle { name: name }
}

This gets me into a better state where it is easy to remember what string type it takes, but it feels clumsy. Rust provides the From and Into traits that types can implement to enable more generic coding. They are, as their names imply, a way to automatically convert between types. They are “reflexive”, which means that if one of them is implemented, the other can use it. For example, impl From<A> for B would allow you to write a function like fn thing<T: Into<B>>(arg: T), which could be called like thing(A {}).
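To make that relationship concrete, here is a tiny, self-contained example (the Celsius/Fahrenheit types are made up purely for illustration):

struct Celsius(f64);
struct Fahrenheit(f64);

// Implementing From<Celsius> for Fahrenheit automatically gives us
// Into<Fahrenheit> for Celsius through the standard library's blanket impl.
impl From<Celsius> for Fahrenheit {
    fn from(c: Celsius) -> Fahrenheit {
        Fahrenheit(c.0 * 9.0 / 5.0 + 32.0)
    }
}

// Generic over "anything that can be turned into a Fahrenheit".
fn thing<T: Into<Fahrenheit>>(arg: T) -> Fahrenheit {
    arg.into()
}

fn main() {
    let f = thing(Celsius(100.0));
    println!("{}", f.0); // prints 212
}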

So the next iteration of my ServiceHandle::new() went generic:

fn new<S: Into<String>>(name: S) -> ServiceHandle {
    ServiceHandle { name: name.into() }
}

This allows calling with String, &str, or several other types that can automatically be converted into a String. It makes writing test code with &'static str simple, while keeping dynamically generated String objects first-class citizens.
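Putting it all together, a minimal compiling sketch of the final API looks something like this (the real struct would of course carry more state):

struct ServiceHandle {
    name: String,
}

impl ServiceHandle {
    fn new<S: Into<String>>(name: S) -> ServiceHandle {
        ServiceHandle { name: name.into() }
    }
}

fn main() {
    // A hard-coded &'static str, as used in tests...
    let a = ServiceHandle::new("listener");
    // ...and a dynamically built String, as used at runtime.
    let b = ServiceHandle::new(format!("listener-{}", 42));
    println!("{} {}", a.name, b.name);
}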


Planet Mozilla: Opus 1.2 is out!

Opus gets another major upgrade with the release of version 1.2. This release brings quality improvements to both speech and music, while remaining fully compatible with RFC 6716. There are also optimizations, new options, as well as many bug fixes. This Opus 1.2 demo describes a few of the upgrades that users and implementers will care about the most. You can download the code from the Opus website.

Planet Mozilla: Rain of Rust - 3rd online meeting

Rain of Rust -3rd online meeting This event belongs to a series of online Rust events that we run in the month of June, 2017

Planet Mozilla: I think I know what you mean, Kev, and I understand, but if you ask me it is your day too.

I think I know what you mean, Kev, and I understand, but if you ask me it is your day too. Happy Father’s Day, from one father to another.

Planet Mozilla: Martes Mozilleros, 20 Jun 2017

Martes Mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Planet Mozilla: Firefox Focus New to Android, blocks annoying ads and protects your privacy

Last year, we introduced Firefox Focus, a new browser for the iPhone and iPad, designed to be fast, simple and always private. A lot has happened since November; and more than ever before, we’re seeing consumers play an active role in trying to protect their personal data and save valuable megabytes on their data plans.

While we knew that Focus provided a useful service for those times when you want to keep your web browsing to yourself, we were floored by your response  – it’s the highest rated browser from a trusted brand for the iPhone and iPad, earning a 4.6 average rating on the App Store.

Today, I’m thrilled to announce that we’re launching our Firefox Focus mobile app for Android.

Like the iPhone and iPad version, the Android app is free of tabs and other visual clutter, and erasing your sessions is as easy as a simple tap. Firefox Focus allows you to browse the web without being followed by tracking ads, which are notorious for slowing down your mobile experience. Why do we block these ad trackers? Because they not only track your behavior without your knowledge, they also slow down the web on your mobile device.

Check out this video to learn more:

 

New Features for Android

For the Android release of Firefox Focus, we added the following features:

  • Ad tracker counter – For the curious, there’s a counter to list the number of ads that are blocked per site while using the app.
  • Disable tracker blocker – For sites that are not loading correctly, you can disable the tracker blocker to quickly take care of it and get back to where you’ve left off.
  • Notification reminder – When Focus is running in the background, we’ll remind you through a notification and you can easily tap to erase your browsing history.

For Android users we also made Focus a great default browser experience. Since we support both custom tabs and the ability to disable the ad blocking as needed, it works great with apps like Facebook when you just want to read an article without being tracked. We built Focus to empower you on the mobile web, and we will continue to introduce new features that make our products even better. Thanks for using Firefox Focus for a faster and more private mobile browsing experience.

 

Firefox Focus Settings View

You can download Firefox Focus on Google Play and in the App Store.

The post Firefox Focus New to Android, blocks annoying ads and protects your privacy appeared first on The Mozilla Blog.

Planet Mozilla: Firefox 55 Beta 4 Testday, June 23rd

Hello Mozillians,

We are happy to let you know that Friday, June 23rd, we are organizing Firefox 55 Beta 4 Testday. We’ll be focusing our testing on the following new features: Screenshots and Simplify Page.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Planet Mozilla: Resolved: Fixed – A short story about a community-reported bug

Firefox Nightly users, thanks to the telemetry and crash reports they send to Mozilla, are an amazing help to Firefox developers. The aggregated data sent by our community is extremely useful and allows spotting performance or stability regressions at the earliest stages of development. It probably can’t be emphasized enough how just using Nightly is a great way to get involved in Mozilla.

That said, we have in our Nightly community people that also actively hunt and report bugs and regressions and provide us detailed feedback, usually via the opening of a bug report in Bugzilla. These people are our core community, our first line of defense against regressions, they allow us to ship faster and better software and many of them have been involved in Mozilla and Firefox for a long time.

Mozilla Suite nightly start screen

“Have you filed a bug?” is something you often hear open source developers say to people that report them some anomaly or regression, and for the majority of our users, this sounds like a complicated process. Just explaining the bug they experience in terms that make the bug report actionable by the developer that will fix it is a skill in itself. And this is where our core community of power-users on Nightly shines, they have this skill.

But what if the reporter has these skills but is not comfortable communicating in English because it is not her native language? Yes, language can also be a barrier to giving feedback…

A few days ago, a mozillian from our Spanish community (web developer in an IT company in Spain), sent me an email about a regression he was experiencing at work with nightly in the last days. This is the story I want to tell because it illustrates how powerful community work in open source can be and how lucky Mozilla is to have a dedicated global community.

Fernando, or StripTM as we know him in the Mozilla Hispano community, sent me an email about a major performance regression with forms on a page they have on their intranet, clicking in a form field would freeze the browser for seconds, and he wanted to know if I had heard about it. I didn’t so I asked him if there was a way I could see this page.

Intranets are tricky, the content there is by definition not public. But StripTM being a Web developer, he emailed me a reduced anonymized version of the page so as that I could test locally if I could see the bug and yes, I was experiencing it as well.

Since StripTM is not always comfortable writing bug reports in English, I did it for him and filed bug 1372843 a week ago and attached his test case. I fired up mozregression and found out that the bug was caused by the recent activation of Form Autofill in Nightly (see our article Preview Form Autofill in Firefox Nightly). In a nutshell, the performance problem was caused by the fact that this intranet page had 170(!) forms and our heuristics were cycling through all of the input fields in the page instead of only  the ones for the form we had clicked in.

All in all, it took a total of 3 days to discover the performance problem, file a bug and get a patch for it in mozilla-central. This is what happens when you can put passionate and skilled volunteers in contact with our equally passionate and skilled staff!

So thank you Fernando for using Nightly all these years and yes, the publishing date of this post is also a way for us to thank you for your involvement in Mozilla and wish you a happy birthday!

Planet Mozilla: c-ares 1.13.0

The c-ares project may not be very fancy or make a lot of noise, but it steadily moves forward and boasts an amazing 95% code coverage in the automated tests.

Today we release c-ares 1.13.0.

This time there’s basically three notable things to take home from this, apart from the 20-something bug-fixes.

CVE-2017-1000381

Due to an oversight there was an API function that we didn’t fuzz and yes, it was found out to have a security flaw. If you ask a server for a NAPTR DNS field and that response comes back crafted carefully, it could cause c-ares to access memory out of bounds.

All details for CVE-2017-1000381 on the c-ares site.

(Side-note: this is the first CVE I’ve received with a 7(!)-digit number to the right of the year.)

cmake

Now c-ares can optionally be built using cmake, in addition to the existing autotools setup.

Virtual socket IO

If you have a special setup or custom needs, c-ares now allows you to fully replace all the socket IO functions with your own custom set with ares_set_socket_functions.

Planet Mozilla: This Week in Rust 187

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is include_dir, a crate that lets you include entire directory contents in your binary – like include_str!, but on steroids. Thanks to Michael Bryan for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

122 pull requests were merged in the last week.

New Contributors

  • Marco Castelluccio
  • Thomas Lively
  • Wonwoo Choi

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

We're making good progress and the style is coming together. If you want to see the style in practice, check out our example or use the Integer32 Playground and select 'Proposed RFC' from the 'Format' menu. Be aware that implementation is work in progress.

Issues in final comment period:

Good first issues:

We're happy to mentor these, please reach out to us in #rust-style if you'd like to get involved

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

impl<T> Clone for T {
  fn clone(&self) -> T {
    unsafe { std::ptr::read(self) }
  }
}

@horse_rust on twitter.

Thanks to llogiq for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Planet Mozilla: MozMEAO SRE Status Report - June 20, 2017

Here’s what happened on the MozMEAO SRE team from June 13th - June 20th.

Current work

Static site hosting

  • The irlpodcast site now has a staging environment also hosted in S3 with CloudFront. Additionally, Jenkins has been updated to deploy to staging and production via git push.

  • We’re going to move viewsourceconf.org from Kubernetes to S3 and CloudFront hosting. Production and staging environments have been provisioned, but we’ll need to update Jenkins to push changes to these new environments.

Basket move to Kubernetes

Kubernetes (general)

Our DataDog, New Relic and MIG DaemonSets have been configured to use Kubernetes tolerations to schedule pods on master nodes. This allows us to capture metrics from K8s master nodes in addition to worker nodes.

Frankfurt Kubernetes cluster provisioning

Work continues to enable our apps in the new Frankfurt Kubernetes cluster. In addition, we’re working on automating our app installs as much as possible.

MDN

  • ElasticSearch will be upgraded to 2.4 in SCL3 production, June 21 11 AM PST

  • We may reconsider self-hosting ElasticSearch.

Links

Planet Mozilla“OPTIONS *” with curl

(Note: this blog post has been updated as the command line option changed after first publication, based on comments to this very post!)

curl is arguably a “Swiss army knife” of HTTP fiddling. It is one of the available tools in the toolbox with a large set of available switches and options to allow us to tweak and modify our HTTP requests to really test, debug and torture our HTTP servers and services.

That’s the way we like it.

In curl 7.55.0 it will take yet another step into this territory when we finally introduce a way for users to send “OPTIONS *” and similar requests to servers. It has been requested occasionally by users over the years but now the waiting is over. (brought by this commit)

“OPTIONS *” is special and peculiar just because it is one of the few specified requests you can do to a HTTP server where the path part doesn’t start with a slash. Thus you cannot really end up with this based on a URL and as you know curl is pretty much all about URLs.

The OPTIONS method was introduced in HTTP 1.1 already back in RFC 2068, published in January 1997 (even before curl was born) and with curl you’ve always been able to send an OPTIONS request with the -X option, you just were never able to send that single asterisk instead of a path.

In curl 7.55.0 and later versions, you can remove the initial slash from the path part that ends up in the request by using --request-target. So to send an OPTIONS * to example.com for http and https URLs, you could do it like:

$ curl --request-target "*" -X OPTIONS http://example.com
$ curl --request-target "*" -X OPTIONS https://example.com/

In classical curl-style this also opens up the opportunity for you to issue completely illegal or otherwise nonsensical paths to your server to see what it does on them, to send totally weird options to OPTIONS and similar games:

$ curl --request-target "*never*" -X OPTIONS http://example.com

$ curl --request-target "allpasswords" http://example.com

Enjoy!

Planet MozillaMessage Broker: Goals and (De)Motivations

Recently, I read a twitter rant that described message brokers as a poor combination of load balancer, database, and service discovery tool. It hit me hard since I’d just spent a week diving into writing my own message broker. While I had my dislikes of brokers, I think they are handy tools. The tweeter stated that many of these things should be built into the services, the goal being to keep the heavy work out of the center of the system. Message brokers do the opposite when used as a central bus.

Having this description of the problem space is turning out to be nice. It gives me some different framing for the various parts of the message broker I’ll be building and the underlying needs. It also pointed out a real flaw: message brokers used as a central bus can cause trouble in some systems. While that twitter rant dismayed me at first, I now feel even more energized in building this tool.

This framing of load balancer, database, and service discovery reminds me to go read up on that tech as well, sourcing papers for those problems while looking into queuing-related things. That way I can acknowledge these subproblems and make sure they get solved well enough for my intended scale. That will be a key part of my design going forward: keeping my decisions favoring small to medium scale. I’ve seen message brokers work well in those scenarios and want to make an even better one of those.

This doesn’t mean one couldn’t use the broker in a larger scale operation. But I’m architecting it to encourage deliberate clustering beyond medium scale. Clustering acknowledges the fact that there are usually groups of services that are able to meet a work request without speaking outside of their group except for one or two edges. What I hope to discover as part of the development process is how to encourage this: whether documenting and creating examples will be enough, or whether I’ll need more core features.

I think keeping the message broker lightweight will be instrumental in encouraging clustering. If the message broker is heavy, folks wouldn’t want to run too many instances. If it requires a lot of tuning to be useful, folks will want to only tune it once as a central bus. Side note: as I typed this I realized this is why Redis is so good.

Among the lofty design and architecture goals I want to mention my motivations and put the goals in perspective. This project’s main goal is to be a learning project. I want to better understand the internals of message buses. Most green field backend projects will be utilizing a message bus and smaller services. Understanding the internals of the message bus and keeping them in mind will let me design better services.

I also want to build a complex, performance focused, realistic piece of software in Rust. I find the language fun to work with and writing my own thread orchestration that is safe is delightful. As I build up the basics in the broker and client, I’m learning a lot of practical Rust skills. Like many others writing and coding in Rust in their free time, I’m hoping this will help encourage more jobs writing Rust. If I’m lucky enough, I’ll get to secure one of those jobs.


Planet MozillaCommunity Participation Guidelines Revision Brownbag (NALA)

Community Participation Guidelines Revision Brownbag (NALA) A revised version of Mozilla's Community Participation Guidelines was released in May 2017. Please join Larissa Shapiro (Head of D&I) and Lizz Noonan (D&I Coordinator)...

Planet MozillaRep. Eshoo Net Neutrality Roundtable

Rep. Eshoo Net Neutrality Roundtable Congresswoman Anna Eshoo (D-CA) will convene a roundtable to discuss the impacts of net neutrality and the consequence of eviscerating the policy. Eshoo will hear...

Planet MozillaReminder :) Please take part in the Sheriff Survey!

Hi,
just a reminder that our Sheriff Survey is running. Please take part in it; it helps us a lot to improve our work!

Link: https://docs.google.com/a/mozilla.com/forms/d/e/1FAIpQLSfGBZ50zkG9W-Wnk1ACBfFvj1iu8e46I5gs9t-G3ZWDpcy4-A/viewform

 

thanks!

Tomcat

Planet MozillaTenFourFox FPR1 available

TenFourFox Feature Parity Release 1 is available for testing (downloads, hashes, release notes). There are no major changes from the beta except for a couple minor efficiency updates and a font blacklist update, and all remaining applicable security issues have been backported as well.

Chris T reported that old issue 72 (a/k/a bug 641597) has resurfaced in FPR1. Most likely this bug was never actually fixed, just wallpapered over by something or other, and the efficiency improvements in FPR1 have made it easier to trigger again. That said, it has only ever manifested on certain 10.5 systems; it has never been reproduced on 10.4 by anyone, and I can't reproduce it myself on my own 10.5 DLSD PowerBook G4. For that reason I'm proceeding with the release as intended but if your system is affected, please post your steps to replicate and we'll compare them with Chris' (especially if you have a 10.4 system, since that will be much easier for me to personally debug). Please also note any haxies or system extensions as the issue can be replicated on a clean profile, meaning addons or weird settings don't appear to be a factor. If we find a fix and enough people are bitten, it should be possible to spin a point release.

The plan is for a Tuesday/Wednesday release ahead of schedule, so advise if there are any new showstoppers.

Planet MozillaEscaping the economy of souls — starting with Facebook (in 4 steps)

“I think it’s time for a reclamation movement.”

Tim Wu, author of The Attention Merchants, in a talk at Mozilla Toronto last week

A little over two months ago, I removed the web-warping, soul-exploiting goggles of a ‘free’ Facebook account — free as in guinea pig. I lost my best friend and partner to cancer around this time, and Facebook knew that.

I found myself staring at content curated just for me — a Ted Talk about end of life care, cancer foundations, hospital foundations, an ‘inspiring’ story of a boy who survived cancer, and a review of ‘Option B’, Sheryl Sandberg’s book on grief… I had joined her Facebook group, but they knew that too:

And there I was, as if waking in a horror movie finding vile tentacles of a venomous creature wrapped around me, I saw; I witnessed and felt the cost of free. The cost of my well being, of dignity and for all those around me — the cost of my attention, focus and awareness of the world around me.

Was my feed part of an experiment or just really shitty and cruel algorithms? Facebook doesn’t hide the fact it’s learning from people like me during personal crises. Rather, it publishes reports on the findings:

And probably what upset me the most was that Sheryl Sandberg of Facebook, whose book I liked and shared, who should be protective of people in grief, was bringing large numbers of people to her Facebook group — so much heartbreak, so much trauma data. And Sheryl is aware…

“However, the company was widely criticised for manipulating material from people’s personal lives in order to play with user emotions or make them sad.

In response on Thursday, Facebook said that it was introducing new rules for conducting research on users with clearer guidelines, better training for researchers and a stricter review process.

But, it did not state whether or not it would notify users — or seek their consent — before starting a study.”

— BBC News “Facebook admits failings over emotion manipulation study”

The reason I write this is to wake you up as well, although you are likely partially there — you need to get all the way there. Please stumble with me toward some type of reclamation movement; it’s important for humanity (no exaggeration). Facebook, and others in the economy of souls, design addictive technology to keep us there.

I’ve used the same excuses you are using. The spine of Facebook’s business model is your contact list — and this should be the center of the reclamation.

Below are the steps I’ve taken to wean myself off Facebook and my contact list off Facebook for good. I want an empowered online life, and I want that for you too.

Step 1 — Snap out of it!

I really hope you don’t have to lose someone close to you, or go through a trauma or tragedy to see the impact of your data being used against you. If you need inspiration watch Tim Wu’s talk and embrace the message that ‘free’ is not free. Read Facebook’s data policy, and remember they never said they would stop doing this.

Step 2 — Get Facebook Messenger, Disable Facebook

Didn’t see that one coming, did you? As much as Messenger annoyed me, being a separate app, what it provides is the ability to fully disable Facebook itself, but keep messaging for a transition period — which can be as long as you need it to be. You can still talk to, and share photos with, grandma.

Think of Messenger as nicotine gum for FB addiction. Not great, still being tracked, but will likely get you further than cold turkey.

This one step means you’re unplugged from:

  • Fake News
  • Like/Reactions
  • Mindless feed scrolling
  • Interacting in groups
  • Unsolicited emotional reactions to content

But keep:

  • Messaging
  • Sharing photos
  • Group conversations

And slowly start migrating people to other tools for chat and conversation. Let them know why.

Step 3 — Curate Personal Content

Even though you spent a lot of time reading content on Facebook, chances are you’ve read fake news, crappy click bait and remained in a filter bubble of your own opinions. There’s a whole world out there!

  • Subscribe (yes, pay) to actual newspapers with real reporters. I now subscribe to the New York Times, and support local journalism with a subscription as well.
  • Use good tools. I like Flipboard, and organize all ‘read later’ content into Pocket, which is my go-to for the times I would normally have opened Facebook. Remember we’re dealing with addiction — replace habits with new ones.
  • Watch Netflix or read a book. Step away from news and the world and escape. The ‘Attention Theft’ of Facebook really makes sense to me now that I realize how many extended periods of time are available to me.
  • Follow people unlike yourself on Twitter. I know Twitter has issues, but one thing at a time.

Step 4 — Influence others

I feel like a tiny drop in the ocean, but when people tell me their <insert information thing here> is on Facebook, I tell them I’m not on Facebook and so require another way. I see others doing this too. Even public pages on Facebook are not public — they’re draped in a kind of ‘free membership paywall’ that hides half the page if you’re not logged in.

Facebook groups are not good for forums, there are (much, much) better and open source forums. Suggest alternatives.

Tell people why you’re not on Facebook, but not in an arrogant kind of way — more like ‘I quit smoking because my kids need me to live’ kind of way that makes people reflect on their own health.

Public Pages are trapped in a ‘Free Membership Paywall’

Step 5 — Turn off Facebook Messenger

Turn off Facebook Messenger. I haven’t done this yet, but I am using it less and less. I probably use it 3 x a week for people I haven’t moved over to other communications yet.

Go explore the web again.

Go reclaim the web.

Feature Photo by Marco Gomes Attribution-NonCommercial-ShareAlike License

Cross posted to Medium

Planet Mozillacurl doesn’t spew binary anymore

One of the least favorite habits of curl during all these years, I’ve been told, is when users forget to instruct the command line tool where to store the downloaded file and as a direct consequence, curl instead sends a lot of binary “gunk” to the terminal. The end result of that is at best just a busload of weird-looking characters on the screen, but with just a little bit of bad luck it can also lock up the terminal completely or change it in other ways.

Starting in curl 7.55.0 (from this commit), curl will inspect the beginning of each download that has been told to get sent to the terminal (tty!) and attempt to detect and prevent raw binary output from being sent there. The check simply looks for a binary zero in the data.

$ curl https://example.com/image.jpg
Warning: Binary output can mess up your terminal. Use "--output -" to tell curl to output it to your terminal anyway, or consider "--output <FILE>" to save to a file.

As the warning message says, there’s an option to use to switch off this emergency check for when you truly know what you’re doing and you don’t need curl to prevent you from doing this. Then you just tell curl explicitly that you want the output to stdout, with “--output -” (or “-o -” for a shorter version):

$ curl -o - https://example.com/binblob.img

We’re eager to get your input and feedback on how this works. We are aware of the risk of false positives for UTF-16 and UTF-32 outputs, but we think they are rare enough to not make this a huge problem.

This feature should be able to drastically reduce the risk for this:

Pipes

(Update, added after the initial posting.)

So many have remarked or otherwise asked how this affects when stdout is piped into something else. It doesn’t affect that! The whole point of this check is to only show the warning message if the binary output is sent to the terminal. If you instead pipe the output to another program or if you redirect the output with >, that will not trigger this warning but will instead continue just like before. Just like you’d expect it to.
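To make the distinction concrete (using the same hypothetical URL as above), these invocations don’t trigger the check, while the last one overrides it explicitly:

$ curl https://example.com/image.jpg > image.jpg   # redirected to a file: no warning
$ curl https://example.com/image.jpg | file -      # piped to another program: no warning
$ curl --output - https://example.com/image.jpg    # explicit override: binary goes to the tty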

Planet MozillaWebdev Beer and Tell: June 2017

Webdev Beer and Tell: June 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Planet MozillaThe Soloists

Building Firefox is a big endeavor. There are many teams and projects covering initiatives, maintenance, bug fixing, triage, localization, support, understanding feedback, marketing, communication, releasing, supporting infrastructure, crash analysis, and a bazillion other activities all to build a family of browsers and applications.

Teams and projects aren't static. People move around as priorities change and the landscape shifts and projects complete or are scuttled.

Sometimes projects get started up with a single person. Sometimes all the people except one move off a project. Sometimes we find ourselves working alone, in a basement office, with only a stapler equivalent to keep us company.

We are the soloists. You wouldn't believe the list of things we work on. Alone.

Where to find soloists: IRC, Slack

There's an IRC channel #soloists on irc.mozilla.org.

There's also a Slack channel #soloists on the Mozilla Slack [1].

These two places (and whatever other places soloists want to hang out at) are places where we can:

  • find some solace from the weary drudgery of being alone on their projects for days on end
  • ask for help
  • bounce ideas off each other
  • vent frustrations in a friendly forgiving place
  • get advice on dealing with things like code reviews and how to go on vacation
  • get recognition for a job well done

and a variety of other things that alleviate many of the problems we have as soloists.

[1] I just created it, so it's kind of empty. I'm feeling alone in the #soloists Slack channel. So alone.

Stickers at the All Hands!

Over the last month or so, we spent some time figuring out #soloists stickers because we like stickers and you like stickers and everyone likes stickers.

They look like this:

/images/soloist_2017_handdrawn.thumbnail.png

Soloist 2017 sticker.

They're 2" by 2" and round. They're warm to the touch. They make you want to climb things. By yourself. Alone. With appropriate safety gear. [2]

If you're a soloist, come find one of us and get a sticker. Also, consider joining soloist channels.

If you support soloists, come find one of us and get a sticker. Ask us about the things we're working on. We may be solo, but we're working on real projects that almost certainly affect you. As a group, we did great things in the last 6 months. Alone. So alone.

[2] That's how they make me feel, anyhow.

Planet Mozillacurl: read headers from file

Starting in curl 7.55.0 (since this commit), you can tell curl to read custom headers from a file. This is a feature that has been asked for numerous times in the past, and the answer has always been to write a shell script to do it. Like this:

#!/bin/sh
# The old workaround: collect one -H option per line of the headers file,
# then hand them all to curl (run as: ./script < headers)
set --
while read -r line; do
  set -- "$@" -H "$line"
done
curl "$@" "$URL"

That answer is now a thing of the past (except for users stuck on old curl versions). We can now instead tell curl to read headers itself from a file using the curl standard @filename way:

$ curl -H @headers https://example.com

… and this also works if you want to just send custom headers to the proxy you do CONNECT to:

$ curl --proxy-header @headers --proxy proxy:8080 https://example.com/

(this is a pure curl tool change that doesn’t affect libcurl, the library)
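For illustration, the headers file is simply one header per line (the header names below are made up):

$ cat headers
X-Example-Team: backend
Accept-Language: en-US
$ curl -H @headers https://example.com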

Planet MozillaQuantum Flow Engineering Newsletter #13

I’m back with some more updates on another week worth of work on improving various performance aspects of Firefox.

Similar to the past weeks, Speedometer remains a big focus area for performance work.  In addition to the many already identified bugs to work on, we are also still measuring the benchmark quite actively looking for more optimization opportunities.

Another item worthy of an update is Background Hang Reports.  Michael Layzell earlier today enabled collection of native stack traces on Win64 (and Mac) using the Gecko Profiler stack walking backend (Linux support soon to follow).  Because we are now using the Gecko Profiler backend for BHR, we can soon get interleaved native and pseudo-stacks from BHR similar to the ones that we have come to know and love in Gecko Profiler for a long time now!  Also, Doug Thayer has made a lot of progress on hangs.html, his front-end for exploring the native stack traces uploaded from BHR.  This is a nice and super fast tool to explore the hangs that our users are experiencing on the Nightly channel and it shows you the corresponding pseudo-stacks that are extremely helpful if for example the hang is coming from chrome-privileged JS (where we get full call stack information through telemetry).  Please have a look, and send him feedback.

This edition is exceptionally short, but the most interesting part of these is probably the last part anyway, the credits section, where I acknowledge the hard work of the people who worked on improving the performance of Firefox in the past week.  So let’s get to that, and I do hope I’m not dropping any names:

Planet MozillaAnnouncing git-cinnabar 0.5.0 beta 2

Git-cinnabar is a git remote helper to interact with mercurial repositories. It lets you clone, pull and push from/to remote mercurial repositories, using git.
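As a quick illustration, cloning and pulling look like regular git, with the hg:: prefix the remote helper uses (the repository URL here is just an example):

$ git clone hg::https://hg.mozilla.org/mozilla-unified firefox
$ cd firefox
$ git pull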

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.0 beta 1?

  • Enabled support for clonebundles for faster clones when the server provides them.
  • Git packs created by git-cinnabar are now smaller.
  • Added a new git cinnabar upgrade command to handle metadata upgrade separately from fsck.
  • Metadata upgrade is now significantly faster.
  • git cinnabar fsck is also faster.
  • Both now also use significantly less memory.
  • Updated git to 2.13.1 for git-cinnabar-helper.

Planet Mozillacurling over HTTP proxy

Starting in curl 7.55.0 (this commit), curl will no longer try to ask HTTP proxies to perform non-HTTP transfers with GET, except for FTP. For all other protocols, curl now assumes you want to tunnel through the HTTP proxy when you use such a proxy and protocol combination.

Protocols and proxies

curl supports 23 different protocols right now, if we count the S-versions (the TLS based alternatives) as separate protocols.

curl also currently supports seven different proxy types that can be set independently of the protocol.

One type of proxy that curl supports is a so called “HTTP proxy”. The official HTTP standard includes a defined way to speak to such a proxy and ask it to perform the request on behalf of the client. curl supports using that over either HTTP/1.1 or HTTP/1.0, where you’d typically only use the latter version if the first really doesn’t work with your ancient proxy.

HTTP proxy

All that is fine and good. But HTTP proxies were really only defined to handle HTTP, and to some extent HTTPS. When doing plain HTTP transfers over a proxy, the client will send its request to the proxy like this:

GET http://curl.haxx.se/ HTTP/1.1
Host: curl.haxx.se
Accept: */*
User-Agent: curl/7.55.0

… but for HTTPS, which should provide end to end encryption, a client needs to ask the proxy to instead tunnel through the proxy so that it can do TLS all the way, without any middle man, to the server:

CONNECT curl.haxx.se:443 HTTP/1.1
Host: curl.haxx.se:443
User-Agent: curl/7.55.0

When successful, the proxy responds with a “200” which means that the proxy has established a TCP connection to the remote server the client asked it to connect to, and the client can then proceed and do the TLS handshake with that server. When the TLS handshake is completed, a regular GET request is then sent over that established and secure TLS “tunnel” to the server. A GET request that then looks like one that is sent without proxy:

GET / HTTP/1.1
Host: curl.haxx.se
User-Agent: curl/7.55.0
Accept: */*

FTP over HTTP proxy

Things get more complicated when trying to perform transfers over the HTTP proxy using schemes that aren’t HTTP. As already described above, HTTP proxies are basically designed only for doing HTTP over them, but as they have this concept of tunneling through to the remote server it doesn’t have to be limited to just HTTP.

Also, historically, for decades people have deployed HTTP proxies that recognize FTP URLs, and transparently handle them for the client so the client can almost believe it is HTTP while the proxy has to speak FTP to the remote server at the other end and convert it back to HTTP to the client. On such proxies (Squid and Apache both support this mode for example), this sort of request is possible:

GET ftp://ftp.funet.fi/ HTTP/1.1
Host: ftp.funet.fi
User-Agent: curl/7.55.0
Accept: */*

curl knows this and if you ask curl for FTP over an HTTP proxy, it will assume you have one of these proxies. It should be noted that this method of course limits what you can do FTP-wise; for example FTP upload usually doesn’t work, and if you ask curl to do FTP upload over a HTTP proxy it will do that with a HTTP PUT.

HTTP proxy tunnel

curl features an option (--proxytunnel) that lets the user forcibly tell the client to not assume that the proxy speaks this protocol and instead use the CONNECT method to establish a tunnel through the proxy to the remote server.
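For example, this forces curl to CONNECT through the proxy to the FTP server rather than letting the proxy translate the transfer (the host names here are hypothetical):

$ curl --proxytunnel -x proxy.example.com:8080 ftp://ftp.example.com/file.txt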

It should of course be noted that very few deployed HTTP proxies in the wild allow clients to CONNECT to whatever port they like. HTTP proxies tend to only allow connecting to port 443 as that is the official HTTPS port, and if you ask for another port it will respond back with a 4xx response code refusing to comply.

Not HTTP not FTP over HTTP proxy

So HTTP, HTTPS and FTP are sent over the HTTP proxy fine. That leaves us with nineteen more protocols. What happens with them when you ask curl to perform them over a HTTP proxy?

Now we have finally reached the change that has just been merged in curl and changes what curl does.

Before 7.55.0

curl would send all protocols as a regular GET to the proxy if asked to use a HTTP proxy without seeing the explicit proxy-tunnel option. This came from how FTP was done and grew from there without many people questioning it. Of course it wouldn’t ever work, but also very few people would actually attempt it because of that.

From 7.55.0

All protocols that aren’t HTTP, HTTPS or FTP will enable the tunnel-through mode automatically when a HTTP proxy is used. No more sending funny GET requests to proxies when they won’t work anyway. Also, it will prevent users from accidentally leaking credentials to proxies that were intended for the server, which previously could happen if you omitted the tunnel option with a few authentication setups.

HTTP/2 proxy

Sorry, curl doesn’t support that yet. Patches welcome!

Planet MozillaPhoton Engineering Newsletter #6

More exciting progress this week! Here’s Photon update #6!

New Menus

Work on the new Photon menus has reached the point where we’re ready to turn them on by default (for Nightly). Bug 1372309 is tracking the last remaining work (mostly test fixes), and you should see this happen in tomorrow’s Nightly. Up until now you’ve needed to manually enable the “browser.photon.structure.enabled” pref to play with the new menus – you’ll no longer need to flip that pref as it will already be enabled.

The biggest change you’ll notice is that the application menu (a.k.a. the “hamburger menu”) contents look different. Instead of a grid of icons, it’s a linear list of commands. Opening the menu and entering submenus is much snappier than before. Here’s the new look on Windows 10 (left) and macOS (right):

menus

The overflow menu (under the “>>” icon) has existed for a long time now; normally it’s only shown when the window is so narrow that we run out of space to show all the toolbar icons. You can now pin items to it permanently, as the new destination for commands you want easily accessible without taking up toolbar space. (Previously you could do this by adding items to the hamburger menu. That’s no longer customizable.)

overflows

There are also some minor related changes to Customization Mode, which now shows the overflow menu as a customization target instead of the old hamburger menu.

Recent changes

Menus/structure:

  • Enabling the new menus, as mentioned above.
  • The sidebar toolbar button no longer has a panel dropdown, instead it just toggles the display of the sidebar (you can change which sidebar is shown from inside the sidebar itself).
  • Various smaller styling/polish fixes to the different panels and toolbar items have landed and will continue to land this week.
  • WebExtension browser actions will now be pinned to the overflow panel instead of the hamburger menu (though we are aware of at least one remaining issue with this).

 

Animation:

  • The Photon-themed download icon landed, this was spun out of the main download animation bug to start landing pieces as they’re ready.
  • Work continues on animations for downloads toolbar button, stop/reload button, and page loading indicator. We’re working through some performance issues with the latter two — these animations are triggered during our performance test suites, and we see some impact to the measurements.
  • New arrow-panel animations are underway. We’re updating the way panels and menus animate when they’re opened and closed. On macOS we’re temporarily removing the current animation entirely, while we await platform improvements that allow us to get the effect we want in a way that performs well.

 

Preferences:

  • QA-sign off received for the old preferences shipping in Firefox 55 (which have not been the default on Nightly since landing the new preference reorg).
  • Search followups are largely complete, and we are enabling the search feature this week.
    search-prefs-demo

 

Visual redesign:

  • We got some good contributions from community member UK92! Thanks!
    • Updated two of our in-content pages (about:about and about:rights) to use the new Photon style.
    • With maximized windows on Windows 10, the window control buttons now span the entire height of the tabstrip, eliminating a small gap.
  • Landing updates to the sidebar styling (header and search box)
  • Updated the Synced Tabs button icon in the toolbar.
  • Starting work on changing the color of the titlebar on macOS (making it darker, similar to Windows 10).

 

Onboarding:

  • Lots of discussion and decisions, finalized scope and content for Firefox 56 tour.
  • De-scoped automigration, and are instead moving ahead with a manual import option accessible from the new Activity Stream page.
  • Simplified tour and notification logic
  • Outstanding technical issues have been resolved, and a few pieces of the 56 tour content are ready to land this week. No more blank tour overlay in Nightly!

 

Performance:

 

Stay tuned for more updates next week!


Planet MozillaMessage Broker: Channel Naming

I’ve started building a message broker as a learning project. There are several out there, such as RabbitMQ, Kafka, Redis’ pub/sub layer, and some brokerless message queue solutions like 0mq. Having used many of them over the years and studied the topic both from the classic “Enterprise Integration” side and the more modern/agile “Microservices” side, I figured I’d try my hand at implementing one.

In this blog post, I’m going to go over how I designed and implemented channel naming. Channel names fill the role of data descriptor used by the publishers and the role of query language for subscribers. This turned out to be a delightful series of problems to explore. Questions such as “how do people use them”, “what features are expected”, “what limitations are common” came up. Realizing that my message broker is essentially providing a naming framework that will have at least some opinion, I needed to ask “are there practices I want to encourage or discourage” and recognize my influence.

In almost all message queue systems, channels have a string based name. This name may be broken up by delimiters, and that delimiter is sometimes chosen by the client or in a config on the broker. Most support the ability to wild card parts of the channel when subscribing. Sometimes the wild card is purely string based; in delimited channels the wild card is often done by chunk. Wild cards are sometimes restricted in what positions they can be in: almost always allowed at the end, sometimes in the middle and sometimes disallowed at the start.

When deciding between these I also wanted to keep in mind what solutions are relatively easy to code, straightforward to debug, and allow for acceptable performance. Not allowing wild cards at all means that simple string comparison works, but omits a common feature. Using simple strings and a well known text query setup like regular expressions would give huge amounts of flexibility in selecting what messages you’d like, but it comes with a large performance cost, pulls in extra libraries, and is much harder to spot-debug. I decided that since I’m not going to use a common query language, I’d want to make the names as simple as possible to parse.

A Rope is a data structure I’d heard about a few times and I knew it had something to do with making string operations faster, but I was hazy on the details. Since this is a project without deadlines, I set about reading a paper on them to learn more. Shortly into the paper it was clear this wasn’t the solution for me: ropes are designed for larger bodies of text, manipulating that text in a variety of ways, and making common editor features easier to implement. But it shared that part of the problem was breaking down the text, and part of it was comparing text.

Since channel names are often a hierarchy that uses the delimiter to separate the layers, I split the name using that delimiter into an array. But that then left me with a bunch of smaller strings I had to compare, which seemed much slower than just walking through the name and query once, using the wild cards to skip characters. To avoid a char by char parse on every channel name comparison, I drew on the common native layer practice of hashing strings then comparing the hashes. Hashing has an up front cost of processing the string into a numeric form, but then makes comparisons extremely fast. Since a message’s channel would be used to query for appropriate subscribers, it could be hashed once and compared many times. An unfortunate side effect of hashing the substrings, though, is that I wouldn’t be able to allow partial segment matches. The wild card would be all or nothing in a given position: just a.*, not a.b*.

I decided that this side effect of only allowing wild cards at the segment layer was ultimately a good thing. While there may be transition times where partial segment matching would help, it would also let people break the idea that each segment is a complete chunk of data. Similar reasoning is why the only selectors are exact and wild card: no numeric or alpha-only sorts of selection.

After all this research, thinking, note taking, and general meandering through the computer science fields, I started to implement my solution in Rust. My message broker isn’t actually ready for the channel names yet, as I’m still working on how to manage connections and properly do the various forms of store and forward. But this problem tickled me so I set about solving it anyway. I created a file channel.rs to try building these ideas.

I started with a basic struct with the string form of the name for debugging, and a hashed form of the name for comparisons.

struct Channel {
    raw_name: String,
    hashed_name: Vec<Option<u64>>
}

raw_name is a String so the struct can own the string without any lifetime concerns of a str; this value will mostly be used for debugging or admin purposes. The hashed_name is a Vec so it can be variable sized; currently I have no limit on the number of delimiters you can use. Option is what I used to handle the wild card. If it was Some(u64) then you have a hash to compare. If it was None then it was a wild card and you don’t have to compare it. After thinking harder though, I realized that I didn’t want to have the binary Option as my indicator for whether to use a wild card or not. If I added a new type of wild card, for instance one that allowed any number of segments, I’d have to replace my Option usage everywhere. So instead I’ve preemptively changed to using my own ChannelSegment type like so:

struct Channel {
    raw_name: String,
    hashed_name: Vec<ChannelSegment>
}

enum ChannelSegment {
    Wild,
    Hash(u64)
}

Next, I set about parsing the input. I knew that hard coding strings in tests meant that I wanted to have a from_str variant. People will often have hard coded channel names but there will also be generated ones, and for that allowing a from_string is nice. I also knew I was going to turn it into a String anyway to assign to raw_name. So I did the following to enable both:

pub fn from_str(input: &str) -> Result<Channel, String> {
    Channel::from_string(input.to_owned())
}

pub fn from_string(input: String) -> Result<Channel, String> {
    // parsing code goes here
}

Parsing was a pretty simple matter: use String::split(char) to get an iterator returning each segment, then rely on Rust’s pattern matching for the cases I specifically cared about, like the empty string (which I made into an error to prevent mistakes) and "*", the wild card character. Build up the hashed name, then return it.

let mut hashed_name = Vec::new();
for chunk in input.split('.') {
    match chunk {
        "" => {
            return Err("empty entry is invalid".to_owned());
        }
        "*" => {
            hashed_name.push(ChannelSegment::Wild);
        }
        _ => {
            hashed_name.push(ChannelSegment::Hash(calculate_hash(&chunk)));
        }
    }
}
Ok(Channel{
    raw_name: input,
    hashed_name: hashed_name
})

One could easily see using an LRU cache (Least Recently Used cache that removes the oldest entries when it gets too full) to skip parsing the channel name for the most commonly used channels, but I’m not doing that yet until this proves to be a part that is slowing me down.
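To make the idea concrete, here is a minimal sketch of such a cache, using a plain HashMap as a stand-in for a real LRU and the Channel::from_str constructor from above:

use std::collections::HashMap;

struct ChannelCache {
    // A real LRU would cap its size and evict the least recently used entries;
    // this sketch just grows without bound.
    parsed: HashMap<String, Channel>,
}

impl ChannelCache {
    fn get_or_parse(&mut self, name: &str) -> Result<&Channel, String> {
        if !self.parsed.contains_key(name) {
            // Only pay the parsing/hashing cost the first time we see a name.
            let channel = Channel::from_str(name)?;
            self.parsed.insert(name.to_owned(), channel);
        }
        Ok(&self.parsed[name])
    }
}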

To compare between two Channels I added a .matches(&Channel) method. I decided against implementing PartialEq since I wouldn’t be testing for exact match, but instead match when considering wild cards, and most developers expect a more exact match when using ==.

pub fn matches(&self, other: &Channel) -> bool {
    if self.hashed_name.len() != other.hashed_name.len() {
        return false;
    }
    for (a, b) in self.hashed_name.iter().zip(&other.hashed_name) {
        if let (&ChannelSegment::Hash(inner_a), &ChannelSegment::Hash(inner_b)) = (a, b) {
            if inner_a != inner_b {
                return false;
            }
        }
    }
    
    true
}

Since my only wild card is for a single whole segment, I know I can immediately return if they have different lengths. Then I zip the two hashed_name together so I can iterate through them at the same time. In other languages one would commonly just create an integer, increment that and use it to index into both of the sequences at the same time, ensuring to not walk past the end ourselves. In Rust we rely on iterators to give us fast access to our sequences with minimal checking, keeping us safe against others (or ourselves) mutating the sequences in dangerous ways while iterating. Using zip we can create an iterator that walks both sequences at the same time, keeping us in our land of safety and speed.

As a side note, the calculate_hash function is one I pulled from the docs but changed a little bit to fit my style better:

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash anything hashable into a u64 using the standard library's default hasher.
fn calculate_hash<T: Hash>(t: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    t.hash(&mut hasher);
    hasher.finish()
}

A feature of Rust I really enjoyed while developing this was the ability to write tests in the same file as I went. I could quickly write a few examples then make the code pass, focusing on just the higher level parts of the API and not testing the internals so much. Think more “this errors, this does not error” and less “this returns a vector of integers that are ascending…” while you are adding such tests, so that you can quickly change out the implementation while the idea keeps working. Here are the tests I wrote as I was developing:

#[test]
fn create_basic_channel() {
    Channel::from_str("a.b.c").unwrap();
    Channel::from_str("name").unwrap();
}

#[test]
fn create_with_wildcard() {
    Channel::from_str("a.*.b").unwrap();
    Channel::from_str("*").unwrap();
    Channel::from_str("*.end").unwrap();
    Channel::from_str("start.*").unwrap();
}

#[test]
fn create_invalid() {
    assert!(Channel::from_str(".a.b").is_err());
    assert!(Channel::from_str("c.b.").is_err());
    assert!(Channel::from_str("g.l..b").is_err());
    assert!(Channel::from_str("").is_err());
}

#[test]
fn matches_with_self_exact() {
    let channel = Channel::from_str("a.b.c").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("a").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("dabbling.b.c").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("abba.bobble").unwrap();
    assert!(channel.matches(&channel));
}

#[test]
fn not_matches_exact() {
    let channel_full = Channel::from_str("s.t.r").unwrap();
    let channel_sub = Channel::from_str("s.t").unwrap();
    assert!(!channel_full.matches(&channel_sub));
    assert!(!channel_sub.matches(&channel_full));
}

#[test]
fn matches_with_self_wild() {
    let channel = Channel::from_str("a.*.c").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("*").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("*.b.c").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("abba.*").unwrap();
    assert!(channel.matches(&channel));
}

#[test]
fn matches_wild_card_one_side() {
    let wild_channel = Channel::from_str("alpha.beta.*").unwrap();
    let tame_channel = Channel::from_str("alpha.beta.charlie").unwrap();
    assert!(wild_channel.matches(&tame_channel));
    assert!(tame_channel.matches(&wild_channel));
}

Mostly mundane stuff: values all quickly typed in to ensure the general concept works, and common edge cases in parsing (start, end, middle) are covered. But note this isn’t combinatorial testing; there aren’t Unicode or other trouble points covered, and those sorts of tests can come with time. Being able to add in a new test quickly when I noticed an edge case is easily one of my favorite features of Rust.

I’ll be open sourcing my work in progress soon, but for now it mostly lives in my notebook as some scribbles, and a smattering of mostly disconnected code on my laptop. If you have feedback on this blog post, how I’ve set up channels in my message broker, or generally about my Rust code, I’d love to hear it in the comments below. Before the comments roll in though, I know using Strings for errors is not great; I just don’t have an error system set up in my project yet.


Planet WebKitMichael Catanzaro: Debian Stretch ships latest WebKitGTK+

I’ll keep this update short. Debian has decided to ship the latest version of WebKitGTK+, 2.16.3, in its upcoming Stretch release. Since Debian was the last major distribution holding out on providing WebKit security updates, this is a big deal. Huge thanks to Jeremy Bicha for making this possible.

The bad news is that Debian is still considering whether or not to provide periodic security updates after the release, so there might not be any. But maybe there will be. We will have to wait and see. At least releasing with the latest version is a big step in the right direction.

Planet MozillaNetwork Monitor Reloaded (Part 1)

The Network Monitor tool has been available in Firefox since the earliest days of Firefox Dev Tools. It’s an invaluable tool for anyone who cares about page load performance and fast modern web pages. This tool went through extensive refactoring recently (under the project codename Netmonitor.html), and this post is intended as an explanation of how we designed the new architecture and what cool new technologies we used.

See the Network Monitor running inside the Firefox Developer Toolbox:

Goals

One of the main goals of the refactoring was to rebuild the entire tool on top of standard web technologies. We removed all Firefox-specific legacy code like XUL (XML User Interface Language), as well as the code that used Firefox-specific APIs. This is a great step forward since using web standards now allows you to run the entire tool’s code base in two different environments:

  • The Developer Toolbox
  • Any web page

The first case is well known to anyone who’s familiar with Firefox Developer Tools (see also the screenshot above). The Developer Toolbox can easily be opened at the bottom of a browser window with various tools, Network Monitor included, at your fingertips.

The second use case is new. Now the tool can be loaded within a browser tab just like any other standard web application. See how it looks in the next screenshot:

Note that the page is loaded from localhost:8000. This is where the development server is running.

The ability to run the tool as a web app is a big deal! Now we can use all in-browser tools for the development workflow. Although it was possible to use DevTools to debug DevTools before (with the Browser Toolbox), it is now so much easier and more convenient to simply use the in-browser tools. And of course, we can also load the tool in other browsers. The development is also simpler since we don’t have to build Firefox. Instead, a simple tab-refresh is enough to get Network Monitor reloaded and test your code changes.

Architecture

We’ve built the new Network Monitor front-end on top of the following technologies:

Firefox Developer Tools need complex UI features and we are using the popular React & Redux combo for all of our tools to build a clean and consistent code base. The Network Monitor is no exception. We’ve implemented a set of React components that are responsible for rendering the view (UI), a store with all data collected by HTTP interception and finally a set of actions the user might want to execute.

We’ve also changed the way we write tests. Instead of using the Firefox specific test harness we are slowly shifting towards well known libraries like Mocha and Enzyme. This way it is easier to understand our code base and also contribute to it.

We are using Webpack to build a bundle when running inside a web page. The bundle is then served from localhost:8000.

The general architecture is based on the flow introduced by the React & Redux approach.

  • The root component representing the NetMonitorApp can be rendered within Developer Toolbox or a web page.
  • Actions are responsible for things like filtering, clearing the list of requests, sorting and opening a side panel with detailed information.
  • All of our data are stored within a store object. Including all the collected data about HTTP traffic.

New Features

We’ve been focused mostly on codebase refactoring, but there were some new features/UI improvements implemented along the way as well. Let’s see some of them.

Column Picker

There are new columns with additional information about individual requests and the user can use the context menu to select those that are important.

Summary Data

We’ve implemented a better summary for currently displayed requests in the list. It’s now located at the bottom of the panel.

  • Number of requests in the list
  • Size/transferred size of all requests
  • Total time needed to load all requests
  • Time when DomContentLoaded event occurred
  • Time when load event occurred

Filtering By Properties

The existing Filter UI is now a lot more powerful. It’s possible to filter the list of requests according to various properties. For example, you can type: larger-than:50 into the Filter input box to see only those requests that are larger than 50 bytes.

Read more about filtering by properties on MDN.

Learn More in MDN

There are links in many places in the UI pointing to MDN for more information. For example, you can quickly learn how various HTTP headers are used.

Conclusion

We believe that building the new generation of Firefox Developer Tools on top of web standards is the right way to go since it means the tools can run in different environments and integrate more effectively with other projects (e.g., IDEs). Building on web standards makes many things possible: Now we can also think about shipping our tools as an online web service that can benefit from the internet platform. We can share collected data as well as debugging context across the web, opening doors to a real social debugging world.

The Netmonitor.html team has done a tremendous amount of work on the refactoring. Big thanks to the core team:

  • Ricky Chien
  • Fred Lin

But there have been many external contributors as well:

  • Jaroslav Snajdr
  • Leonardo Couto
  • Tim Nguyen
  • Deepjyoti Mondal
  • Locke Chen
  • Michael Brennan
  • Ruturaj Vartak
  • Vangelis Katsikaros
  • Adrien Enault
  • And many more…

Let us know what you think. You can join us on the devtools-html Slack.

Jan ‘Honza’ Odvarko

Read next: Hacking on the Network Monitor

Planet MozillaHacking on the Network Monitor Developer Tool (Part 2)

In the previous post, Network Monitor Reloaded, we walked through the reasoning for refactoring the Network Monitor tool. We also learned that using web standards for building Dev Tools enables us to run them in different environments – loaded either within the Firefox Developer Toolbox or within a browser tab as a standard web application.

In this companion article, we’ll show you how to try these things and see the Network Monitor in action.

Get to the Source

The Firefox Developer Tools code base is currently part of the Firefox source repository, and so downloading it requires downloading the entire repo. There are several ways to get the source code and work on it. You might want to start with our Github docs for detailed instructions.

One option is to use Mercurial and clone the mozilla-central repository to get a local copy.

# This may take a while...
hg clone https://hg.mozilla.org/mozilla-central
cd mozilla-central

Part of our strategy to use web standards to build tools for the web also involves moving our code base from Mercurial to Git (on github.com). So, ultimately, the way to get the source code will change permanently, and it will be easier and faster to clone and work with.

Run Developer Toolbox

For now, if you want to build the Network Monitor and run it inside the Firefox Developer Toolbox, follow these detailed instructions.

Essentially, all you need to do is use the mach command.

cd mozilla-central
./mach build

After the build is complete, start the compiled binary and open the Developer Toolbox (Tools -> Web Developer -> Toggle Tools).

You can rebuild quickly after making changes to the source code as follows:

./mach build faster

Run Development Server

In order to run Net Monitor inside a web page (experimental) you’ll need to install the following packages:

We’ve developed a simple container that allows running Firefox Dev Tools (not only the Network Monitor) inside a web page. This is called Launchpad. The Launchpad is responsible for making a connection to the instance of Firefox being debugged and loading our Network Monitor tool.

The following diagram depicts the entire concept:

  • The Net Monitor tool (client) is running inside a Browser tab just like any other standard web application.
  • The app is served by the development server (server) through localhost:8000
  • The Net Monitor tool (client) is connecting to the target (debugged) Firefox instance through a WebSocket.
  • The target Firefox instance needs to listen on port 6080 to allow the WebSocket connection to be created.
  • The development server is started using yarn start

Let’s take a closer look at how to set up the development environment.

First we need to install dependencies for our development server:

cd mozilla-central
cd devtools/client/netmonitor
yarn install

Now we can run it:

yarn start

If all is ok, you should see the following message:

Development Server Listening at http://localhost:8000

Next, we need to listen for incoming connection in the target Firefox browser we want to debug. Open Developer Toolbar (Tools -> Web Developer -> Developer Toolbar) and type the following command into it. This will start listening so tools can connect to this browser.

listen 6080

The Developer Toolbar UI should be opened at the bottom of the browser window.

Finally, you can load localhost:8000

You should see the Launchpad user interface now. It lists the open browser tabs in the target Firefox browser. You should also see that one of these tabs is the Launchpad itself (the tab running the Network Monitor from localhost:8000).

All you need to do is to click one of the tabs you want to debug. As soon as the Launchpad and Network monitor tools connect to the selected browser tab, you can reload the connected tab and see a list of HTTP requests.

If you change the underlying source code and refresh the page you’ll see your changes immediately.

Check out the following screencast for a detailed walk-through of running the Network monitor tool on top of the Launchpad and utilizing the hot-reload feature to see code changes instantly.

You might also want to read mozilla-central/devtools/client/netmonitor/README.md for more detailed info about how to build and run the Network Monitor tool.

Future Plans

We believe that building tools for the web using standard web technologies is the right way to go! Our tools are for web developers. We’d like you to be able to work with our tools using the same skills and knowledge that you already apply when developing web apps and services.

We are planning many more powerful features for Firefox Dev Tools, and we believe that the future holds a lot of exciting things. Here’s a teaser for what’s ahead on the roadmap.

  • Connecting to Chrome
  • Connecting to NodeJS
  • Integration with existing IDEs

Stay tuned!

Jan ‘Honza’ Odvarko

Planet MozillaReps Weekly Meeting Jun. 15, 2017

Reps Weekly Meeting Jun. 15, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet MozillaBad Tools are Insidious

This is my first job making data tools that other people use. In the past, I've always been a data scientist - a consumer of these tools. I'm learning a lot.

Last quarter, I learned that bad tools are often hard to spot even when they're damaging productivity. I sum this up by saying that bad tools are insidious. This may be obvious to you but I'm excited by the insight.

Bad tools are hard to spot

I spent some time working directly with analysts building ETL jobs. I found some big usability gaps with our tools and I was surprised I wasn't hearing about these problems from our analysts.

I looked back to previous jobs where I was on the other side of this equation. I remember being totally engrossed in a problem and excited to find a solution. All I wanted were tools good enough to get the job done. I didn't care to reflect on how I could make the process smoother. I wanted to explore and iterate.

When I dug into analyses this quarter, I had a different perspective. I was working with the intention of improving our tools and the analysis was secondary. It was much easier to find workflow improvements this way.

In The Design of Everyday Things, Donald Norman notes that users tend to blame themselves when they have difficulty with tools. That's probably part of the issue here as well.

Bad tools hurt

If our users aren't complaining, is it really a problem that needs to get fixed? I think so. We all understand that bad tools hurt our productivity. However, I think we tend to underestimate the value of good tools when we do our mental accounting.

Say I'm working on a new ETL job that takes ~5 minutes to test by hand but ~1 minute to test programmatically. By default, I'd value implementing good tests at 4 minutes per test run.

This is a huge underestimate! Testing by hand introduces a context shift, another chance to get distracted, and another chance to fall out of flow. I'll bet a 5 minute distraction can easily end up costing me 20 minutes of productivity on a good day.

Your tools should be a joy to use. The better they work, the easier it is to stay in flow, be creative, and stay excited.

In Summary

Don't expect your users to tell you how to improve your tools. You're probably going to need to eat your own dogfood.

Planet Mozillatarget-independent libcurl headers

We write libcurl to be very portable. It can be built and run on virtually every operating system with a CPU architecture that is at least 32 bits, from some of the most legacy Unixes from the early 90s to the most recent updates to all the popular systems, including the widespread mobile platforms.

Type sizes on different archs

In the early 2000s we added support to libcurl for “large files” (back in the days when that support wasn’t always present in your operating systems) and large variable types (beyond 32 bits) to work for applications and libcurl alike, and to work the same way for libcurl-using applications independently of which platform you’d compile the code on.

We started out using compiler/system defines to figure out for example the size of the native “off_t” type to know if it was 32 bit or 64 bit. That turned out to be problematic as users accidentally ended up in situations where the library considered a type to be one size and the application considered it to be another, leading to unexpected behaviors at best or downright crashes and misery.

Determine at lib build-time

The fix to that run-time size-of-variables confusion was to generate a fixed “outcome” at build-time that would then be used by both the library and applications so that they could never again disagree on this. The obvious downside here was that we had to generate this target-specific information into public headers for the library (known as curl/curlbuild.h). We didn’t like doing it this way, but this approach was a better situation than before as it caused fewer headaches for users.

Instead, we now created problems for system packagers who wanted to provide a single set of curl headers while allowing users to build, for example, either a 32 bit or a 64 bit version of their application – they had to generate two sets of curl headers. The same goes for keeping the headers on a shared file system used by many different systems. Inconvenient. But as this solution didn’t hurt too many people, was a cumbersome problem to fix and yet possible to work around, it has remained in the curl project since August 7, 2008 (commit 14240e9e109fe6af1).

Determine at app build-time

In March 2017 (commit 9506d01ee5) we introduced a new take on this problem: a new header that checks system defines and determines all the necessary information at the time the application is compiled instead of at the time libcurl is compiled. We call it curl/system.h.

The goal was to replace the generated curlbuild.h header, but since it would cause serious problems if this new header would get any different results (like variable type sizes) than the old header, it was a risky move. We needed extra seat-belts for this.

We therefore added the new header next to the old header, and introduced a test case in the curl test suite that verifies the output from the two systems and makes sure they agree, letting them coexist in the curl source tree. The curl/system.h file was of course not yet used for anything real, but it was tested by everyone who ran the test suite – to make sure it isn’t awful.

We think the new header file has now proven itself worthy. We have not gotten any recent reports on problems with test 1541. It is time to cut out the old header system and launch the new!

Starting in release curl 7.55.0, due to be released on August 9, 2017, the header files will finally again be truly platform agnostic. It took us nine years but we finally did it! The bulk of the change is made in this commit.

Just another detail in the machinery.

Planet WebKitRelease Notes for Safari Technology Preview 33

Safari Technology Preview Release 33 is now available for download for macOS Sierra and betas of macOS High Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 217407-217978.

Performance

  • Fixed an issue that could lead to large memory use in the Safari Technology Preview and web content processes when certain Safari Extensions are installed

JavaScript

  • Fixed for-in optimization static analysis in the bytecode generator (r217438)
  • Improved performance of String.prototype.concat (r217648)
  • Improved the bytecode and type information provided for toLength (r217530)
  • Optimized Map and Set constructors (r217525)

WebRTC

Media Streams and Capture

  • Fixed getUserMedia prompting too often (r217910)
  • Prevented getUserMedia requests from background tabs unless the tab is already capturing (r217930)
  • Prevented getUserMedia from prompting again if the user denied access (r217945)

Media

  • Enabled clients to specify a list of codecs that should require hardware decode support (r217799)
  • Exempted exclusively wall-powered devices from client-required hardware codec support (r217906)
  • Aligned Web Audio implementation to specifications when clients pass a value of 0 for bufferSize to the createScriptProcessor() method (r217919) (see the sketch after this list)
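
Per the Web Audio specification, a bufferSize of 0 asks the implementation to choose a suitable buffer size itself. A minimal sketch of that call, assuming the page falls back to the webkit-prefixed constructor where needed:

const AudioCtx = window.AudioContext || window.webkitAudioContext;
const context = new AudioCtx();

// A bufferSize of 0 lets the implementation pick the buffer size;
// the remaining arguments are 1 input channel and 1 output channel.
const processor = context.createScriptProcessor(0, 1, 1);
processor.onaudioprocess = (event) => {
  // event.inputBuffer and event.outputBuffer are available here
};
processor.connect(context.destination);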

CSS Grid

  • Added support for orthogonally positioned grid items (r217486)
  • Fixed the behavior of positioned items without specific dimensions (r217411)
  • Fixed the logical margin applied in the tracks sizing algorithm of auto tracks (r217709)
  • Fixed margin applied when stretching an orthogonal item in a fixed size track (r217705)

Web API

  • Aligned <col span> and <colgroup span> limits with the latest HTML specification (r217907)
  • Fixed null content-type and content-length when fetching Blob URLs with XHR (r217901)
  • Fixed getComputedStyle() to return pixel values for left, right, top, and bottom, matching the specifications (r217522)
  • Implemented fromFloat32Array, fromFloat64Array, toFloat32Array, and toFloat64Array for DOMMatrix (r217764)
  • Implemented DOMPointReadOnly.matrixTransform() (r217763) (see the sketch of DOMMatrix and DOMPointReadOnly after this list)
  • Enabled script modules to be imported via data URLs (r217760)
  • Updated to slightly stricter rules for custom element names from the recent draft standards (r217864)
  • Used the parent box style to adjust RenderStyle for alignment (r217536)
  • Added conditional support for media preloading and aligned media as values (r217863)
  • Aligned preload implementation to specifications with mandatory as value and other alignments (r217962)
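
A quick sketch of the new geometry APIs mentioned above, combining DOMMatrix and DOMPointReadOnly:

// Build a 2D matrix from six values (a, b, c, d, e, f); this one is a 90° rotation
const matrix = DOMMatrix.fromFloat32Array(new Float32Array([0, 1, -1, 0, 0, 0]));

// Transform a point with it
const point = new DOMPointReadOnly(1, 0);
const result = point.matrixTransform(matrix);
console.log(result.x, result.y); // approximately 0, 1

// And serialize the matrix back out (always 16 elements, column-major)
const values = matrix.toFloat32Array();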

Rendering

  • Changed behavior to remove backing store for layers that are outside the viewport (r217696)
  • Fixed an issue where a frame’s composited content is visible when the frame has visibility:hidden (r217472)
  • Changed behavior to destroy the associated renderer subtree when display:contents node is deleted (r217794)

Web Inspector

  • Added contextual menu item to log a WebSocket object to the console (r217912)
  • Added Debug view to the Settings tab for debug settings and experimental features (r217625)
  • Added the ability for the user to choose stylesheet when creating new rules (r217911)
  • Changed the Node Details Sidebar to allow editable key and values in the Attributes table (r217744)
  • Prevented unnecessary layout triggered when a Sidebar is resized while collapsed (r217452)
  • Fixed performing search on reload for an existing query in the Search tab (r217733)
  • Fixed images dragged from Web Inspector to Desktop being named “Unknown.png” (r217584)
  • Fixed an issue where reloading the page after switching from the Resource tab switches back (r217505)
  • Fixed the CodeMirror instance in the ConsolePrompt getting refreshed each time it is shown (r217746)
  • Fixed showing active Web Sockets when opening Web Inspector (r217721)
  • Fixed an issue preventing the go-to arrow to inspect Web Sockets that receive >50 messages per second (r217690)
  • Improved the reliability of automatically pausing in auto-attach when inspecting a JSContext (r217509)

Bug Fixes

  • Enabled AirPods to work with Netflix (r217858)
  • Fixed stuttering audio in YouTube when the page changes visibility (r217936)

Planet MozillaCroissants, Qatar and a Food Computer Meetup in Zurich

In my last blog, I described the plan to hold a meeting in Zurich about the OpenAg Food Computer.

The Meetup page has been gathering momentum but we are still well within the capacity of the room and catering budget so if you are in Zurich, please join us.

Thanks to our supporters

The meeting now has sponsorship from three organizations, Project 21 at ETH, the Debian Project and Free Software Foundation of Europe.

Sponsorship funds help with travel expenses and refreshments.

Food is always in the news

In my previous blog, I referred to a number of food supply problems that have occurred recently. There have been more in the news this week: a potential croissant shortage in France due to the rising cost of butter and Qatar's efforts to air-lift 4,000 cows from the US and Australia, among other things, due to the Saudi Arabia embargo.

The food computer isn't an immediate solution to these problems but it appears to be a helpful step in the right direction.

Planet MozillaWebExtensions in Firefox 55

Firefox 55 landed in Beta this week, so it’s time for another update on WebExtensions. Because the development period for this latest release was about twice as long as normal, we have many more updates. Documentation for the APIs discussed here can be found on MDN.

APIs

The webRequest API has seen many improvements. Empty types and URLs in webRequest filters are now rejected. Requests can be cancelled before cookie processing occurs. Websockets can be processed through webRequest using the ws:// and wss:// protocols. Requests from the top frame now have the correct frameId, and more error conditions on requests are picked up by the onErrorOccurred event.
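
For example, an extension can now observe WebSocket handshakes with an ordinary webRequest listener. A minimal sketch, assuming the manifest requests the webRequest permission and matching host permissions:

browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    // Log every WebSocket connection the page attempts
    console.log("WebSocket request:", details.url);
  },
  { urls: ["ws://*/*", "wss://*/*"] }
);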

The sidebar API now re-opens automatically if you reload the add-on using about:debugging or web-ext. If you are following along with project Photon, you’ll note that the sidebar works great with the new Photon designs. Shiny!

The runtime.onMessageExternal API has been implemented, which allows WebExtensions add-ons to communicate with other WebExtensions add-ons. The runtime.onInstalled API will now activate if an add-on is installed temporarily, and the event will now include the previousVersion of the extension.
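
A rough sketch of two add-ons talking to each other via this API (the extension ID below is only a placeholder):

// In the receiving extension's background script
browser.runtime.onMessageExternal.addListener((message, sender) => {
  console.log("External message from", sender.id, message);
  return Promise.resolve({ received: true });
});

// In the sending extension ("receiver@example.com" is an assumed ID)
browser.runtime.sendMessage("receiver@example.com", { hello: "world" })
  .then((reply) => console.log(reply));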

In order to limit the amount of CSS that a developer has to write and provide some degree of uniformity, there is a browser_style option for the browserAction API. We’ve also provided this to options V2 and the sidebar APIs.

Context menus now work in browserAction popups. The onClickData event in the context menu also gets the frameID. Context menu clicks can now open browser actions, page actions and sidebars. To do this, specify _execute_browser_action, _execute_page_action or _execute_sidebar_action in the command field for creating a context menu.
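
For example, a context menu item that opens the extension's sidebar could look roughly like this (assuming the extension declares a sidebar_action in its manifest):

browser.contextMenus.create({
  id: "open-my-sidebar",
  title: "Open my sidebar",
  contexts: ["all"],
  // Clicking the item opens the sidebar instead of firing an onClicked event
  command: "_execute_sidebar_action"
});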

If you load a page from your extension, you get a long moz-extension://…. URL in the URL bar. We’ve added a notification in the identity box to indicate which extension loaded the page.

Other changes include:

A new API is now available for the nsiProfiler. This allows the Gecko Profiler to be used without legacy add-on support. This was essential for the Quantum Flow work happening in Firefox. Because of the sensitive nature of the content and the limited appeal of this API, access to this API is currently restricted.

Permissions

With Firefox 55, the user interface for required and optional permissions is now enabled for WebExtensions add-ons. Required permissions and hosts will trigger a prompt on installation for the user. Here’s an example:

When an extension is updated and the hosts or permissions have changed, the current extension remains enabled, but the user has to accept the updated permissions in order to continue.

There is also a new user interface for side loading add-ons that is more consistent with other installation methods. Side loading is when extensions are installed outside of Firefox, by other software. It now appears in the hamburger menu as a notification:

This permissions dialog is slightly different as well:

Once an extension has been installed, if it would like more permissions or hosts, it can ask for those as needed — these are called optional permissions. They are accessible using the browser.permissions.request API. An example of using optional permissions is available in the example repository on github.
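
A minimal sketch of requesting an optional permission, assuming the permission and host are listed under optional_permissions in the manifest (the host below is only a placeholder):

// permissions.request() must be called from a user action handler,
// for example a button click in the extension's options page.
document.getElementById("grant").addEventListener("click", async () => {
  const granted = await browser.permissions.request({
    permissions: ["tabs"],
    origins: ["https://example.com/*"]
  });
  console.log(granted ? "Permission granted" : "Permission refused");
});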

Developer tools

With the introduction of devtools.inspectedWindow.eval bindings, many more add-ons are now able to support WebExtensions APIs. The developer tools team at Mozilla has been reaching out to developers with add-ons that might be affected as you can see on Twitter. For example, the Redux DevTools extension is now a WebExtensions add-on using the same code base as other browsers.

An API for devtools.panels.themeName has been implemented. The devtools panel icon is no longer inverted if a light theme is chosen.

There have been some improvements to the about:debugging page:

These changes are aimed at improving the ease of development. Temporary extensions now appear at the top of the page, a remove button is present, help is shown if the extension has a temporary ID, the location of the extension on the file system is shown, and the internal UUID is shown.

Android

Firefox for Android has gained browserAction support. Currently a textual menu item is added to the bottom of the menu on Android. It supports browserAction.onClicked and setTitle and getTitle. Tabs support was added to pageAction.

Theming

The beginnings of theme support, as detailed in this blog post, has landed in Firefox. In Firefox 55 you can use the browser.theme.update API. The theme API allows you to set some key values in Firefox such as:

browser.theme.update({
 images: {
   headerURL: "header.png",
 },
 colors: {
   accentcolor: "#000",
   textcolor: "#fff",
 }
});

This WebExtensions API will apply the theme to Firefox by setting the header image and some CSS colors. At this point the theme API provides a very similar set of functionality to the existing lightweight themes. However, using this API you can change the theme dynamically from your extension.

Additionally, browser.management APIs have been implemented for themes. These allow you to enable and disable themes by only using the management API. For an example check out the example repository on github.
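
A minimal sketch of toggling a theme through the management API, assuming the extension has the management permission (the theme ID below is only a placeholder):

const themeId = "my-theme@example.com"; // assumed add-on ID of an installed theme

// Disable the theme, then re-enable it
browser.management.setEnabled(themeId, false)
  .then(() => browser.management.setEnabled(themeId, true));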

Proxy

The proxy API allows extension authors to insert proxy configuration files into Firefox. This API implementation is quite different from the one in Chrome to take advantage of some of the improved support in Firefox for proxies. As a result, to prevent confusion, this API is not present in the chrome namespace.

The proxy configuration file will contain a function for dealing with the incoming request:

function FindProxyForURL(url, host) {
 // ...
}

And this will then be registered in the API:

browser.proxy.registerProxyScript(filename).then();

For an example of using the proxy API, please see the repository on github.

Performance

One focus of the Firefox 55 release was the performance of WebExtensions, particularly the scenario where there is at least one WebExtension on startup.

These performance improvements include speeding up host matching, limiting the cloning of add-on messages, and lazily loading APIs only when they are needed. We’ve also added some telemetry measurements to Firefox, such as background page load times and extension start-up times.

The next largest performance gain is the moving of WebExtensions add-ons to their own process. To enable this, we made the debugging tools work seamlessly with out-of-process add-ons. We are hoping to enable this feature for Windows users in Firefox 56 once the remaining graphics issues have been resolved.

You can see some of the results of performance improvements in the Quantum newsletter which Ehsan posts to his blog. These improvements aren’t just limited to WebExtensions add-ons. For example, the introduction of off script decoding brought a large performance improvement to startup measurements for all Firefox users, as well as those with WebExtensions add-ons:

Community

As ever we need to thank the community who contributed to this release. This includes: Tushar Saini, Tomislav Jovanovic, Rob Wu, Martin Giger and Geoff Lankow. Thank you to you all.

The post WebExtensions in Firefox 55 appeared first on Mozilla Add-ons Blog.

Planet MozillaThe Joy of Coding - Episode 102

The Joy of Coding - Episode 102 mconley livehacks on real Firefox bugs while thinking aloud.
