Planet Mozilla: happy bmo push day!

dylanwh:

New Features: Comments are remembered if you cancel an edit or navigate away from the bug page, and private comments are more obviously private.

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1450325] Update email templates with instructions for unsubscribing from all emails
  • [1451599] Checkbox for agreement terms at create account page should be on the left side
  • [1438205] Preserve comments in progress across page reloads
  • [1452531] PhabBugz code should add allow visibility to reviewers when creating custom…


Planet Mozilla: Localization Workshop in Kolkata (November 2017)

In case you’re wondering “Uhm, what year are we in again?”, don’t worry: it’s April 2018, and this is a long overdue post that I owe to our large community of Indic locales.

Last November, Jeff, Peiying and I (flod) headed to Kolkata for the last of our planned localization workshops. The group of languages represented at the event included Bengali (both Bangladesh and India), Gujarati, Hindi, Kannada, Marathi, Nepali, Odia, Tamil and Telugu. If you’re surprised by the number of languages, consider that India alone has 22 languages listed in the Indian Constitution, but that’s only the tip of the iceberg, with a much larger variety of languages spoken, and sometimes officially recognized at the State level.

After successfully testing the unconference approach in previous events, like Berlin during the summer, we decided to push the boundaries and enter the weekend without an agenda: localizers would own the event, propose topics at the beginning of each day, vote on a schedule, and drive the discussions. In hindsight, I think this was a successful experiment: several participants came up with topics and in some cases gave presentations on subjects they cared about. I felt like every person in the room was able to actively participate and be heard.

Here are a few personal takeaways:

  • Locales in the area share the same struggle that other communities express: it’s hard to find new contributors, it requires a lot of time and resources to train them, and they might end up leaving the project shortly after. For sure, we should – and will – invest in making mentorship easier in Pontoon. We want to be able to have discussions about translations directly within the tool, to track quality metrics over time for each contributor, and not risk losing potential contributors when managers are inactive for an existing locale.
  • They feel they are competing with other parts of the Mozilla project when it comes to attracting volunteers across different functional areas. That’s something that we hadn’t heard in previous events, and it might be explained by the success of specific initiatives in the region.
  • The usage of local languages vs English is quite low in India, compared to other areas of the world. We know there are some possible cultural explanations for this, for example knowing English can represent a means to a better job, but we also look forward to the multilingual improvements that we plan for 2018/19, to see if they will change the situation by lowering the barrier to accessing other languages within the browser.
  • It’s always fruitful to spend some time showing how to test the browser, and how to use tools like Pontoon and Transvision. And, this time, I didn’t even have to do a presentation about Transvision (it was proposed and driven by Mak from bn-BD) 🙂

You can also read the report of the event from the localizers’ perspective, with a lot more details on the discussions, by reading the blog posts from Drashti, Bala and Selva.

A big thanks to Biraj for helping with the organization, giving us a brief tour of the city, and making my first trip to India a pleasant experience.

Planet Mozilla: Reps Weekly Meeting, 26 Apr 2018

Reps Weekly Meeting This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet Mozilla: Making a Web Thing on the ESP8266

Today I’m going to walk you through creating a simple Web Thing using an inexpensive off-the-shelf ESP8266 board.

The power of web things comes from their ability to connect the digital world of web pages with the physical world of things. We recently released the Things Framework, a collection of software intended to make it easy to create new web things. The relevant library for this example is the webthing-esp8266 library, which makes it easy to connect Arduino-programmed ESP8266 boards with the Web of Things. We hope that this lowers the barrier to creating compelling experiences with our gateway and the Web Thing API.

Lamp example running on the ESP8266

The first step in our journey is to install the Arduino IDE and its ESP8266 board support. Visit Adafruit’s ESP8266 documentation for a very detailed walkthrough of setting up. At the end of setup, you should have an Arduino IDE installed that has the ESP8266WiFi, ESP8266mDNS, and ESP8266WebServer libraries available.

System diagram

Now that we have all the prerequisites out of the way, let’s get to the fun part. The webthing-esp8266 library works by assembling a collection of components that come together to expose the Web Thing API. The main coordinator is the WebThingAdapter which keeps track of a ThingDevice that in turn has an assortment of ThingProperties.

The WebThingAdapter knows how to speak the Web of Things API with our Gateway and handles all the translation necessary for the Gateway to discover and interact with the ThingDevice. The ThingDevice represents the physical object we want to put on the web. In a simple case, this may be a few LEDs. Once we get more complex, this could be a quadcopter, an OLED display, or even a Tesla coil. Before we get ahead of ourselves, let’s step through a basic example which exposes the ESP8266’s built-in LED.

To start, create a new sketch in the Arduino IDE. Now that we have a place to write code, we need to include all the libraries this sketch uses. These are ESP8266WiFi and WiFiClient for connecting to our WiFi network, Thing for creating Web-of-Things-compatible objects, and WebThingAdapter for translating these objects into a web server.

#include <ESP8266WiFi.h>
#include <WiFiClient.h>
#include <Thing.h>
#include <WebThingAdapter.h>

The next step is to configure the constants we’re using. These are ssid for our WiFi network’s name, password for its password, and lampPin for the pin of the LED we want to control.

const char* ssid = "......";
const char* password = "..........";

const int lampPin = LED_BUILTIN;

Now we get to specify what kind of web thing we’re creating. First, we create the adapter, which sets the name of the board. If you want to have multiple ESP8266 boards on the same network, you’ll need to make sure their names are unique.

WebThingAdapter adapter("esp8266");

Then we need to specify the ThingDevice we want to have on our gateway. In this case, we want to expose the LED as a dimmableLight called “My Lamp” which will allow us to turn it on and control its brightness from the gateway.

ThingDevice lamp("lamp", "My Lamp", "dimmableLight");

Next we define the properties we want the ThingDevice to have. A dimmableLight needs two properties: “on” and “level”.

ThingProperty lampOn("on", "Whether the lamp is turned on", BOOLEAN);
ThingProperty lampLevel("level", "The level of light from 0-100", NUMBER);

In the start of our setup function we initialize the LED, connect to our WiFi network, and turn on the Serial port for debugging.

void setup() {
  pinMode(lampPin, OUTPUT);
  digitalWrite(lampPin, HIGH);
  analogWriteRange(255);

  Serial.begin(115200);
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);
  Serial.println("");

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }

  Serial.println("");
  Serial.print("Connected to ");
  Serial.println(ssid);
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());

With that boilerplate out of the way, we can tie together our ThingProperties, ThingDevice, and WebThingAdapter. The lamp needs to know that it owns the lampOn and lampLevel properties, while the adapter needs to know that the lamp exists.

  lamp.addProperty(&lampOn);
  lamp.addProperty(&lampLevel);
  adapter.addDevice(&lamp);
  adapter.begin();
  Serial.println("HTTP server started");
}

In our continuously running loop function we first update the adapter so it can handle talking to the gateway or any other Web of Things clients. Next, we update the light based on the property values. Note that we map the 0-100 brightness level onto the range 255-0, because the brightness of the ESP8266’s built-in LED is inverted: writing 0 turns it fully on.

void loop() {
  adapter.update();
  if (lampOn.getValue().boolean) {
    int level = map(lampLevel.getValue().number, 0, 100, 255, 0);
    analogWrite(lampPin, level);
  } else {
    analogWrite(lampPin, 255);
  }
}
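The inversion done by that map call can be checked in plain Rust. This is Arduino’s integer map() formula transplanted here purely for illustration, not code from the sketch itself:

```rust
// Arduino's map() does linear interpolation with integer arithmetic:
// (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min.
fn map(x: i64, in_min: i64, in_max: i64, out_min: i64, out_max: i64) -> i64 {
    (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min
}

fn main() {
    // Level 0 maps to 255: LED fully off, since the built-in LED is active-low...
    assert_eq!(map(0, 0, 100, 255, 0), 255);
    // ...and level 100 maps to 0: LED fully on.
    assert_eq!(map(100, 0, 100, 255, 0), 0);
    // The midpoint lands on 128 rather than 127.5 because integer division truncates.
    assert_eq!(map(50, 0, 100, 255, 0), 128);
}
```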

Our sketch is done! If you upload this to your ESP8266 you should be able to see the “My Lamp” thing in your gateway’s Add Things list. You can then click to turn the LED on and off, or click on the “splat” icon to control its brightness level.

Screenshots: adding “My Lamp” in the new-thing list, and “My Lamp” in the thing list.

And, if you need more inspiration, check out the demo of my LED lamp:

You can also control your web thing directly by issuing HTTP requests to http://esp8266.local/. For example, you can use this curl command to turn the LED on from the terminal.

curl -X PUT http://esp8266.local/things/lamp/properties/on -H 'Content-Type: application/json' -d '{"on":true}'

This is just the beginning of what you can do with the Web Thing API. If you’d prefer working in a higher-level language, check out our other web thing examples. If you think on/off and level aren’t cool enough on their own, we also support string properties!

Planet Mozilla: Introducing Hubs: A new way to get together


Today, we’re excited to share a preview release of Hubs by Mozilla, a new way to get together online within Mixed Reality, right in your browser. Hubs is the first experiment we’re releasing as part of our Social Mixed Reality efforts, and we think it showcases the potential for the web to become the best, most accessible platform to bring people together around the world in this new medium.

What we're releasing

The preview of Hubs we are releasing today is a very early version that allows you to easily create web-based rooms to meet with others within Mixed Reality. Create a room with a single click, and then just share the link with someone. It’s that simple.

When they open the link on their phone or PC, they’ll join you in the room as an avatar.


If they have a VR headset, they can enter the room in Mixed Reality. All with no app downloads, walled gardens, or content gatekeepers, and on any device you wish — and most importantly, through open source software that respects your privacy and is built on web standards. We'll be sharing a dedicated post with more details about our approach to ensuring privacy in Mixed Reality soon.

When using a Mixed Reality headset with Hubs, you’ll be able to interact online in a whole new way. Instead of through a screen, you will be spending time together in what feels like a real place. You can make eye contact, high five, laugh together, or just explore. It’s up to you, and it all happens right in your browser just like any other website.

We’ve included support for all VR devices available today, and will support the all-in-one devices coming out this year. We also have an eye towards WebXR, which will bring support for AR to Hubs. And even if you don’t have a headset yet, you can still join in on your PC or phone.

As part of this preview release, we’re including a set of robot-styled avatars and a number of scenes you can choose from when you create your room. Each of these is optimized to be able to run well on a variety of devices, from high end PCs to mobile phones. When in the room, you can see one another, move around, and pick up and throw virtual objects. And of course, you can hear each other’s voices with fully spatialized audio, so it sounds like you are in a real place.

We hope that you’ll try using this preview of Hubs to spend time with far away friends and family, or introduce it to your communities as a new way to get together. We’ve been extremely encouraged by the progress we’ve made so far, but we’re just getting started. If you’d like to be informed of improvements we make to Hubs, please join our mailing list.


What's coming next

There are a number of efforts already underway to extend what you can do with Hubs, and we will be rolling out new releases on a regular basis. Here are some of the things we are working on:

  • Custom Spaces - As mentioned in our original announcement, we think creating your first virtual space to spend time in should be as easy as creating your first website. We’ve already shown off some early work towards a space construction kit to reduce the barrier to creating virtual spaces. Needless to say, spaces created with this kit will be designed to work well with Hubs, so that you can quickly share a link to have others meet you there. You will also be free to use your own fully custom glTF scenes with Hubs.

  • Avatars and Identity - Mixed Reality provides an unprecedented opportunity to design things in a way that ensures people can express themselves on their own terms. As part of this, we think that you should always have as much control as possible over how you appear in Mixed Reality. We are working on ways to reduce the barriers to avatar customization so anyone can create an avatar that embodies how they want to be seen. We’ll have a lot more to say about this later in the year.

  • Existing Tools - We think Mixed Reality communication should complement, not replace, the existing ways we connect on the web, like chat and voice. We will be exploring ways to integrate Hubs with existing communications tools, so people can start experimenting with Mixed Reality as a new way to spend time together alongside the tools they already use.

  • Your Feedback - We are looking forward to gathering your ideas on what we can do to improve Hubs. If you have an idea or want to report an issue, you can submit a Github issue or join us in the #social channel on the WebVR Slack to join the conversation. You can also contact us at hubs@mozilla.com. If you’d like to be kept up to date on improvements we make to Hubs, sign up for our mailing list.

Planet Mozilla: Enabling Social Experiences Using Mixed Reality and the Open Web

Today, Mozilla is sharing an early preview of an experiment we are calling “Hubs by Mozilla”. Hubs is an immersive social experience that is delivered through the browser. You simply click on a web link to begin interacting with others inside virtual reality.

Late last year we announced the creation of a team focused on enabling social experiences using Mixed Reality and the open web. This is one of many experiments we’ll be sharing from that work. Using the web as a platform provides people with better choices and greater access. People shouldn’t have to be locked in to a specific platform or device. They should be able to connect and engage with the web wherever it expands. It is challenging work, and there is still much to do, but we are excited to share our progress with you. Starting today, we are opening up our latest Mixed Reality experiment, Hubs, to anyone that would like to give it a try.

There are other teams and companies out there that are building social VR experiences. What makes Hubs different? Well, Hubs is…

Built for the Browser

When we announced Firefox Reality earlier this month, we reinforced our stance that the web provides the best future for virtual and augmented reality (or “Mixed Reality”). This technology is at a tipping point. If we want to continue to bring immersive experiences into the mainstream, we need to be laser focused on removing friction for the user. The technology needs to step out of the way, and the experiences need to take center stage.

With Hubs, you can create a room with a single click. You can then share and access that room with a URL. No app store. No gatekeepers. No installation process. Just click and you are there.

Built for Every Device

Because we are using web standards (WebVR and eventually WebXR) to deliver this content, we are able to support every single Mixed Reality headset. Every. Single. One. You can enjoy this experience with advanced hardware such as an Oculus Rift or an HTC Vive, or you can use alternatives such as a Daydream or cardboard viewer. You can even use your desktop or mobile phone if you don’t have access to any VR hardware. Everyone can come together and communicate with each other in this online social space. The experience will progressively scale to make use of the hardware that is available to you.

Built for Privacy

Ensuring your privacy when meeting others in Mixed Reality is a guiding principle for our work. You can take a look at our privacy policy here. We want to shape a future for Mixed Reality where users feel safe, and that means we need to create tools, options, and features that empower users to control how their identity is represented in this new medium. We’ll have more to share soon about the work we are doing to protect your privacy in Mixed Reality. We also built this experience with open source software that anyone can view and contribute to.

Built for Scaling

The experience that is available today is an experiment, but the technology that powers it can be extended in exciting ways. In the coming months we will continue to release new tools and features, as we learn together through use and iteration. This includes kits to create your own custom spaces, powerful avatar and identity options, integrations with existing communications tools, and more.

We are excited about the future of Hubs and the potential for social VR experiences, but we need your help to test this and make it better. Check out the link below to try it out. Play with it. Share it. Break it. Contribute to it. If you are looking for even more details, then check out the blog post on our Mixed Reality blog. We’d love to get your feedback as we build this together.

Try Hubs – a WebVR experiment from Mozilla Mixed Reality

The post Enabling Social Experiences Using Mixed Reality and the Open Web appeared first on The Mozilla Blog.

Planet Mozilla: How does dynamic dispatch work in WebAssembly?

WebAssembly is a stack-based virtual machine and instruction set, designed such that implementations can be fast and safe. It is a portable target for the compilation of languages like C, C++, and Rust.

How does WebAssembly make control flow safe? Functions and executable code live in a different namespace from data. A WebAssembly call instruction has a static operand that specifies which function it is calling. WebAssembly implementations can emit native code without dynamic checks after validating these static operands. Furthermore, control flow is structured even within functions. Unlike most native instruction sets, like x86, there are no arbitrary, un-typed jumps.

But C, C++, and Rust all have some capability for dynamic dispatch: function pointers, virtual methods, and trait objects. On native targets like x86, all these forms compile down into a jump to a dynamic address. What do these forms compile down into when targeting WebAssembly?

call_indirect

In addition to the call instruction, WebAssembly also has a call_indirect instruction. The call_indirect instruction indexes into a table of functions to call a dynamically selected function.

The call_indirect instruction takes two static operands:

  1. The type of the function that will be called, e.g. (i32, i32) -> nil. This type is encoded as an index into the “Type” section of a .wasm binary, and is statically validated to be within bounds.

  2. The table of functions to index into. Again, this table is encoded as a statically validated index into the “Table” section of a .wasm binary.0

The index into the table of functions, selecting which function gets called, is provided dynamically via the stack. Any arguments to the function are passed via the stack as well. If the index is outside the table’s bounds, a trap is raised. If the function at that index has a different type from what is expected, a trap is raised. If these dynamic checks pass, then the function at the given index is invoked.

In pseudo-code, the call_indirect instruction’s semantics are approximately:

<figure class="highlight">
call_indirect(const type_idx, const table_idx):
    const expected_type = get_type(type_idx)
    const table = get_table(table_idx)

    let func_idx = pop()
    if func_idx >= len(table):
        raise trap

    let func = table[func_idx]
    if type(func) != expected_type:
        raise trap

    invoke func
</figure>
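Those dynamic checks can be sketched as runnable Rust. This is an illustrative simulation of call_indirect’s semantics, not code from any actual WebAssembly implementation; the `Table`, `FuncType`, and `Trap` names are mine, and the type tags stand in for full function signatures:

```rust
// A sketch of call_indirect's runtime behavior: a table of typed
// function entries, indexed dynamically, trapping on an out-of-bounds
// index or a signature mismatch.

#[derive(Clone, Copy, PartialEq)]
enum FuncType {
    I32ToI32,
    I32ToNil,
}

#[derive(Debug, PartialEq)]
enum Trap {
    OutOfBounds,
    TypeMismatch,
}

struct Table {
    // Each entry pairs a type tag with a function, mirroring the
    // heterogeneously-typed function table in a .wasm binary.
    entries: Vec<(FuncType, fn(i32) -> i32)>,
}

impl Table {
    fn call_indirect(&self, expected: FuncType, func_idx: usize, arg: i32) -> Result<i32, Trap> {
        // if func_idx >= len(table): raise trap
        let &(ty, func) = self.entries.get(func_idx).ok_or(Trap::OutOfBounds)?;
        // if type(func) != expected_type: raise trap
        if ty != expected {
            return Err(Trap::TypeMismatch);
        }
        // invoke func
        Ok(func(arg))
    }
}

fn main() {
    let add_one: fn(i32) -> i32 = |x| x + 1;
    let table = Table { entries: vec![(FuncType::I32ToI32, add_one)] };
    // All checks pass: the function at index 0 is invoked.
    assert_eq!(table.call_indirect(FuncType::I32ToI32, 0, 41), Ok(42));
    // Index outside the table's bounds: trap.
    assert_eq!(table.call_indirect(FuncType::I32ToI32, 9, 0), Err(Trap::OutOfBounds));
    // Expected type disagrees with the entry's type: trap.
    assert_eq!(table.call_indirect(FuncType::I32ToNil, 0, 0), Err(Trap::TypeMismatch));
}
```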

Let’s reify this with an example Rust program that uses dynamic dispatch via trait objects, compiling it to both x86-64 and WebAssembly, and then inspecting the disassemblies.

A Simple Trait Objects Example

To get dynamic dispatch in Rust, we must use trait objects, which means we must begin by defining a trait:

<figure class="highlight">
// Some trait describing a generic class of behavior.
trait MyTrait {
    fn my_trait_method(&self) -> u32;
}
</figure>

Next, we define a couple types that both implement that trait. We must define more than one, or else the optimizer will be sneaky and de-virtualize and inline everything, defeating the purpose of this exercise.

<figure class="highlight">
// The concrete `Uno` type implements `MyTrait`.
struct Uno(char);
impl MyTrait for Uno {
    fn my_trait_method(&self) -> u32 {
        1
    }
}

// And the concrete `Dos` type also implements `MyTrait`.
struct Dos(char);
impl MyTrait for Dos {
    fn my_trait_method(&self) -> u32 {
        2
    }
}
</figure>

We define a function that takes a &MyTrait trait object and dynamically dispatches a call to the my_trait_method method. We cannot allow this function to be inlined, or else the all-too-helpful optimizer will rain on our parade again.

<figure class="highlight">
#[inline(never)]
fn dynamic_dispatch(thing: &MyTrait) {
    thing.my_trait_method();
}
</figure>

Finally, to tie everything together, we construct an instance of Uno and of Dos, convert them into trait objects, and pass these trait objects to dynamic_dispatch. Since we will export this function from our shared library / .wasm binary, we mark it extern and annotate it as #[no_mangle].

<figure class="highlight">
#[no_mangle]
pub extern fn tie_it_all_together() {
    let uno = Uno('1');
    let uno = &uno as &MyTrait;
    dynamic_dispatch(uno);

    let dos = Dos('2');
    let dos = &dos as &MyTrait;
    dynamic_dispatch(dos);
}
</figure>

Comparing x86-64 and WebAssembly Disassembly

We can use this command to compile our example to native code (x86-64 for my machine), and view the disassembly:

rustc -Og -C panic=abort -C lto=fat example.rs \
    && objdump -M intel -d libexample.so

To do the same for WebAssembly, we provide an explicit target flag to rustc and switch out objdump for wasm-objdump:

rustc -Og --target wasm32-unknown-unknown -C panic=abort -C lto=fat example.rs \
    && wasm-objdump -xd example.wasm

Let’s dive in!

The code for <Uno as MyTrait>::my_trait_method and <Dos as MyTrait>::my_trait_method, which just return constant integers, is straightforward in both the native code and the WebAssembly.

Here is the x86-64 for <Uno as MyTrait>::my_trait_method:1

<figure class="highlight">
<<example::Uno as example::MyTrait>::my_trait_method>:
;; Function prologue.
push   rbp
mov    rbp,rsp

;; Move 1 into the return register.
mov    eax,0x1

;; Function epilogue.
pop    rbp
ret
</figure>

And here is the WebAssembly:

<figure class="highlight">
<<example::Uno as example::MyTrait>::my_trait_method>:
;; Push 1 onto the stack.
i32.const 1

;; Return to the caller. The top of the stack is the return value.
end
</figure>

The code for <Dos as MyTrait>::my_trait_method is identical except that it returns 2 instead of 1.

Next, let’s look at the code for tie_it_all_together, which constructs two different trait objects and calls dynamic_dispatch with each of them.

Trait objects are represented as the pair of a pointer to the instance of the concrete type that implements the trait, and a pointer to the vtable for that instance’s type’s implementation of the trait. The x86-64 code breaks the trait object structure up into its component members when calling dynamic_dispatch, passing the pointer to the instance in the rdi register and the pointer to the vtable in the rsi register.2
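A quick way to confirm this two-word representation from within Rust itself (written here with today’s `dyn` syntax; this check is an aside, not part of the original build):

```rust
use std::mem::size_of;

trait MyTrait {
    fn my_trait_method(&self) -> u32;
}

fn main() {
    // A trait object reference is a "fat pointer": one word for the
    // pointer to the concrete instance, one word for the vtable pointer.
    assert_eq!(size_of::<&dyn MyTrait>(), 2 * size_of::<usize>());
    // A plain reference, by contrast, is a single word.
    assert_eq!(size_of::<&u32>(), size_of::<usize>());
}
```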

<figure class="highlight">
<tie_it_all_together>:
;; Function prologue, reserving two words of stack space for `uno` and `dos`.
push   rbp
mov    rbp,rsp
sub    rsp,0x10

;; Store the '1' character (0x31) in `uno`.
mov    DWORD PTR [rbp-0x4],0x31

;; Move a pointer to the `<Uno as MyTrait>` vtable into `rsi`.
lea    rsi,[rip+0x7a]

;; Move a pointer to `uno` into `rdi`.
lea    rdi,[rbp-0x4]

;; Call `dynamic_dispatch` with the `uno` trait object!
call   f60 <_example::dynamic_dispatch>

;; Store the '2' character (0x32) in `dos`.
mov    DWORD PTR [rbp-0x8],0x32

;; Move a pointer to the `<Dos as MyTrait>` vtable into `rsi`.
lea    rsi,[rip+0x83]

;; Move a pointer to `dos` into `rdi`.
lea    rdi,[rbp-0x8]

;; Call `dynamic_dispatch` with the `dos` trait object!
call   f60 <_example::dynamic_dispatch>

;; Function epilogue.
add    rsp,0x10
pop    rbp
ret
</figure>

WebAssembly is a stack machine, rather than a register machine. To pass arguments to a function, we push them onto the WebAssembly stack rather than putting them in registers. A WebAssembly function may also have locals, which are sort of like registers, but with a key restriction: they are not addressable, so we can’t take a pointer to a local. If we need some member of a stack frame to be addressable, we need somewhere else to put it. The Rust compiler emits instructions to maintain its own, second stack in linear memory, dedicated to addressable structures. This linear memory stack maintained by Rust should not be confused with the WebAssembly stack: Rust’s linear memory stack is built on top of WebAssembly’s primitives, while the WebAssembly stack is a fundamental part of WebAssembly’s semantics and execution model.

The tie_it_all_together function uses the linear memory stack to store the uno and dos variables in tie_it_all_together, because their addresses are taken to construct the trait objects:

<figure class="highlight">
<tie_it_all_together>:
;; Function prologue. Global 0 contains the linear memory stack pointer. Because
;; the stack grows down, subtracting 16 from it is reserving 16 bytes of space
;; in the linear memory stack.
get_global 0
i32.const 16
i32.sub
tee_local 0
set_global 0
;; The wasm stack is now []
;; Locals:
;;   0: linear memory stack pointer

;; Store the '1' character (49) into the first linear memory stack frame slot,
;; which is `uno`.
get_local 0
i32.const 49
i32.store 2 8
;; The wasm stack is now []
;; Locals:
;;   0: linear memory stack pointer

;; Push a pointer to `uno` onto the wasm stack.
get_local 0
i32.const 8
i32.add
;; The wasm stack is now [*mut uno]
;; Locals:
;;   0: linear memory stack pointer

;; Push the pointer to the `<Uno as MyTrait>` vtable onto the wasm stack.
i32.const 1024
;; The wasm stack is now [*mut uno, *mut vtable]
;; Locals:
;;   0: linear memory stack pointer

;; Call `dynamic_dispatch` with the `uno` trait object! This consumes both
;; values on the wasm stack as its arguments.
call 4 <example::dynamic_dispatch>
;; The wasm stack is now []
;; Locals:
;;   0: linear memory stack pointer

;; Store the '2' character (50) into the second linear memory stack frame slot,
;; which is `dos`.
get_local 0
i32.const 50
i32.store 2 12
;; The wasm stack is now []
;; Locals:
;;   0: linear memory stack pointer

;; Push the pointer to `dos` onto the wasm stack.
get_local 0
i32.const 12
i32.add
;; The wasm stack is now [*mut dos]
;; Locals:
;;   0: linear memory stack pointer

;; Push the pointer to the `<Dos as MyTrait>` vtable onto the wasm stack.
i32.const 1040
;; The wasm stack is now [*mut dos, *mut vtable]
;; Locals:
;;   0: linear memory stack pointer

;; Call `dynamic_dispatch` with the `dos` trait object!
call 4 <example::dynamic_dispatch>
;; The wasm stack is now []
;; Locals:
;;   0: linear memory stack pointer

;; Function epilogue.
get_local 0
i32.const 16
i32.add
set_global 0
end
</figure>

Finally, let’s examine the code for dynamic_dispatch, which takes the &MyTrait trait object argument and makes the dynamically dispatched call to the trait object’s my_trait_method function.

The x86-64 code indexes into the vtable with an offset of 24 (or three words) to get the appropriate my_trait_method function pointer and does a tail-call to it via a direct jump.

<figure class="highlight">
<example::dynamic_dispatch>:
;; Function prologue.
push   rbp
mov    rbp,rsp

;; Function epilogue, except we don't `ret` because we're doing a
;; tail-call.
pop    rbp

;; Offset the vtable pointer by three words, load the function pointer at that
;; location within the vtable, and jump to the function!
jmp    QWORD PTR [rsi+0x18]
</figure>

Now let’s compare that to the WebAssembly. Here is the annotated WebAssembly code for the dynamic_dispatch function:

<figure class="highlight">
;; This function has the type (i32, i32) -> nil
<example::dynamic_dispatch>:
;; This gets the first parameter to the function, the pointer to the trait
;; object's concrete instance, and pushes it onto the wasm stack.
get_local 0
;; The wasm stack is now [*mut instance]

;; Push the second parameter, the pointer to the vtable, onto the wasm stack.
get_local 1
;; The wasm stack is now [*mut instance, *mut vtable]

;; Pop the vtable pointer from the top of the stack, add 12 to it to offset for
;; the vtable's `my_trait_method` member, load that location from memory, and
;; push the result onto the stack.
i32.load 2 12
;; The wasm stack is now [*mut instance, index]

;; Do an indirect call of type 1, which is (i32) -> i32, from function
;; table 0. The `call_indirect` instruction directly consumes the index from the
;; stack, and because the function type we're calling takes an argument, the
;; instance pointer is popped off the stack as well. Because the function type
;; has a return value, the return value will be pushed onto the stack when control
;; returns to this function.
call_indirect 1 0
;; The wasm stack is now [return_value]

;; Because the dynamic_dispatch function does not use the return value, it gets
;; explicitly dropped.
drop
;; The wasm stack is now []

;; Return!
end
</figure>

It turns out that the WebAssembly and the x86-64 code to do dynamic dispatch are pretty similar:

  • Just like the x86-64, the WebAssembly has also exploded the trait object into its component members.
  • Just like the x86-64, the WebAssembly indexes into the vtable by three words (it uses an offset of 12, and a wasm32 word is 4 bytes) to get the “function pointer” for the instance’s my_trait_method function.

But the similarities stop there.

The x86-64 code uses a single, familiar stack. The WebAssembly code has two stacks:

  1. The WebAssembly stack directly manipulated by instructions, which is a fundamental part of WebAssembly’s semantics, and is maintained by the WebAssembly implementation.
  2. A second stack for addressable Rust locals, which lives in the linear memory heap, and is maintained by function prologue and epilogue instructions emitted by the Rust compiler.
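The prologue/epilogue bookkeeping for that second stack can be sketched in Rust. The sizes and values here mirror the 16-byte frame in the tie_it_all_together disassembly above, but the code is purely illustrative:

```rust
fn main() {
    // Stand-in for wasm linear memory; real modules size this in 64 KiB pages.
    let mut memory = vec![0u8; 64];
    // Global 0 in the disassembly: the shadow stack pointer. It starts
    // at the top of the region because this stack grows downward.
    let mut stack_ptr = memory.len();

    // Prologue: `get_global 0; i32.const 16; i32.sub; ...; set_global 0`
    // reserves a 16-byte frame of addressable slots.
    stack_ptr -= 16;

    // `i32.store 2 8`: store '1' (49) at offset 8 within the frame.
    memory[stack_ptr + 8] = 49;

    // Because the slot lives in linear memory, it has a real address that
    // can be handed to dynamic_dispatch as the instance pointer.
    let uno_addr = stack_ptr + 8;
    assert_eq!(memory[uno_addr], b'1');

    // Epilogue: `i32.const 16; i32.add; set_global 0` releases the frame.
    stack_ptr += 16;
    assert_eq!(stack_ptr, memory.len());
}
```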

When WebAssembly does dynamic dispatch, it is rather different than what x86-64 does. There are essentially two different address spaces for data and functions, and the same 32 bit integers are used to index into both spaces, but this works because the instructions are typed and designed for static validation before execution begins. This multi-space design is forced by WebAssembly’s hard requirements for safety and portability. It is pretty neat that languages like C++ and Rust — despite being low-level and having historically targeted architectures with a single address space for both executable code and data — are still abstract enough that we can represent them with this alternative architecture that separates functions and data into distinct spaces!
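To make the two-address-space idea concrete, here is a purely illustrative Python model (not how any real engine works): functions live in an integer-indexed table, while instance data and vtables live in a separate word-addressed "memory", so the wasm byte offset of 12 becomes a word offset of 3.

```python
# Toy model of wasm-style dynamic dispatch. Data lives in a linear
# "memory"; functions live in a separate integer-indexed table, so a
# "function pointer" stored in memory is really just a table index.
memory = [0] * 64
memory[0] = 42        # the instance's only field
memory[16 + 3] = 1    # vtable at word 16; slot 3 holds a table index
                      # (the wasm uses byte offset 12 = 3 words)

function_table = [
    lambda instance_ptr: 0,                     # index 0: unrelated function
    lambda instance_ptr: memory[instance_ptr],  # index 1: my_trait_method
]

def dynamic_dispatch(instance_ptr, vtable_ptr):
    # Like the `i32.load` above: fetch the table index from the vtable...
    index = memory[vtable_ptr + 3]
    # ...then, like `call_indirect`: call through the function table.
    return function_table[index](instance_ptr)

print(dynamic_dispatch(0, 16))
```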

Many thanks to Alex Crichton, Jeena Lee, and Luke Wagner for reading early drafts of this text!


0 Currently, there is only a single table of functions with heterogeneous types, so the only valid value for this operand is 0. The WebAssembly engine’s embedder (e.g. a Web browser) can also expose APIs to mutate these tables, so it isn’t strictly true that the functions in a table are always present in the “Table” section of the .wasm binary.

1 It is a bit surprising that rustc/LLVM didn’t optimize this into mov eax,0x1; ret, and that it left some unnecessary prologue and epilogue instructions in there. As Rothon pointed out, rustc will currently force-enable frame pointers when building with debug info.

2 We can verify this pair-of-pointers representation is used by running dwarfdump to inspect the DWARF debugging information describing the physical layout of the trait object:

...
< 2><0x00000180>      DW_TAG_structure_type
                        DW_AT_name                  &MyTrait
                        DW_AT_byte_size             0x00000010
                        DW_AT_alignment             0x00000008
< 3><0x00000187>        DW_TAG_member
                          DW_AT_name                  pointer
                          DW_AT_type                  <GOFF=0x000002a9>
                          DW_AT_alignment             0x00000008
                          DW_AT_data_member_location  DW_OP_plus_uconst 0
                          DW_AT_artificial            yes(1)
< 3><0x00000199>        DW_TAG_member
                          DW_AT_name                  vtable
                          DW_AT_type                  <GOFF=0x000002ec>
                          DW_AT_alignment             0x00000008
                          DW_AT_data_member_location  DW_OP_plus_uconst 8
                          DW_AT_artificial            yes(1)
...

You can also use dwarfdump to double-check your reading of how parameters are passed to a given function.

Planet MozillaBlinkOn 9: Working on the Web Platform from a cooperative

Last week, I attended BlinkOn 9. I was very happy to spend some time with my colleagues working on Chromium, including a new developer who will join my team next week (to be announced soon!).

This edition had the usual format with presentations, brainstorming, lightning talks and informal chats with Chromium developers. I attended several interesting presentations on web platform standardization, implementation and testing. It was also great to talk to Googlers in order to coordinate on some of Igalia’s projects such as the collaboration with AMP or MathML in Chromium.

In the previous edition, I realized that one can propose non-technical talks (e.g. the one about inclusion and diversity) and some people seemed curious about Igalia. Hence I proposed a presentation “Working on the Web Platform from a Cooperative” that gives:

  • An introduction to Igalia and its activities.
  • A description of our non-hierarchical model and benefits it brings.
  • An overview of Igalia’s contribution to the Web Platform.

Presenting my talk in West Sycamore

From the feedback I got, people appreciated the presentation and liked to get more insight on Igalia. Unfortunately, I was not able to record the talk due to technical issues. Of course, thirty minutes is a bit short to develop all the ideas and reply properly to all the questions. But for those who are interested here are more pointers:

  • About “equal salary” VS “cost of living”, you might want to read Andy Wingo’s blog posts “time for money” and “a simple (local) solution to the pay gap”. Several years ago, Robert O’Callahan had already wondered whether it really made sense to take into account the cost of living to determine salaries. Personally, I believe that as long as our “target salary” is high enough for all places where we work, we don’t really need to worry about this issue and can instead spend time focusing on more interesting agreements to keep making Igalia a great working place.

  • About dependency on the customers, see the last paragraph of “work groups” in Andy’s blog post “but that would be anarchy!”, especially treating customers as partners. As I said during the talk, as long as we have enough customers, we have some freedom to accept contracts that are more interesting for our strategy and aligned with our values, or negotiate improvements to existing contracts, without worrying about instability and uncertainty.

  • About the meaning of “Igalia”, the simple answer is “it does not mean anything”. If you join Igalia and get the opportunity to learn about the company history, there is a more complete answer about how the name was found…

  • Regarding founders of Igalia in 2001: Dape (who attended BlinkOn), Alex, Juanjo, Xavi, Berto and Chema are indeed still working at Igalia and in general, very few people have left Igalia since its creation.

Finally we had two related tricky questions from Google employees:

  • How do you sync with the browser vendors’ own agenda?

  • What could Google (or any other browser vendor) do to facilitate involvement of third-party contributors?

One could enumerate different situations but unfortunately there is no generic answer. In some cases collaboration worked very well and was quite successful. In other cases, things were more complicated and we had to “fight” to convince browser vendors to keep some existing code or accept new features.

Communication is very important. We try to sync with browser vendors using video conferencing or by attending conferences, but some companies/teams are more or less inclined to reveal information (especially when strategic products are involved). In general, I have the impression that the closer teams work to the Web Platform, the more they are used to the democratic and open-source culture and welcome third-party contributions.

Although the ideal is to work upstream, we have recently been developing skills to manage separate forks and rebase them regularly against the main branch. This is a good option to find a balance between the request of the customer to implement features and the need of the browser vendors to focus on their own tasks. Chromium for Wayland is a good example of that approach.

Hence probably one way to help third-party contributors is to improve communication. We had some issues with developers not even willing to talk to us or not taking time to review or comment on our patches/CLs. If browser vendors could indicate as soon as possible that they don’t like an approach, or that they won’t accept patches until some refactoring is complete, that would greatly help us discuss with clients, properly schedule our tasks and consider the option of an experimental branch.

Another way to help third-party contributors would be to advertise more that such contributions are actually possible. Indeed, many people think that “everything is implemented by browser vendors”, which can make it difficult to find clients for web platform development. When companies rant about Google not implementing feature X, fixing bug Y or participating in standard Z, instead of ignoring them or denying the importance of the request, it would probably be more constructive to mention that they can actually pay consulting companies to do that job. As I indicated in the talk, we recently had such successful collaborations with Bloomberg, Metrological or AMP and we would be happy to find more!

There is probably more to say on these questions, but those are my quick thoughts on the matter for now. I’ll try discussing with my colleagues and see if we have more ideas to share.

Planet MozillaFirefox release speed wins

Sylvestre wrote about how we were able to ship new releases for Nightly, Beta, Release and ESR versions of Firefox for Desktop and Android in less than a day in response to the pwn2own contest.

People commented on how much faster the Beta and Release releases were compared to the ESR release, so I wanted to dive into the releases on the different branches to understand if this really was the case, and if so, why?

Chemspill timings

                    | Firefox ESR 52.7.2 | Firefox 59.0.1  | Firefox 60.0b4
 ------------------ | ------------------ | --------------- | --------------
 Fix landed in HG   | 23:33:06           | 23:31:28        | 23:29:54
 en-US builds ready | 03:19:03 +3h45m    | 01:16:41 +1h45m | 01:16:47 +1h46m
 Updates ready      | 08:43:03 +5h24m    | 04:21:17 +3h04m | 04:41:02 +3h25m
 Total              | 9h09m              | 4h49m           | 5h11m

(All times UTC from 2018-03-15 -> 2018-03-16)
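The deltas in the table can be double-checked with a few lines of Python; for example, the ESR 52.7.2 column (the en-US builds and updates completed the following day, UTC):

```python
from datetime import datetime

# ESR 52.7.2 timeline from the table above (all times UTC).
fix_landed    = datetime(2018, 3, 15, 23, 33, 6)
builds_ready  = datetime(2018, 3, 16, 3, 19, 3)
updates_ready = datetime(2018, 3, 16, 8, 43, 3)

print(builds_ready - fix_landed)     # landing -> en-US builds
print(updates_ready - builds_ready)  # builds -> updates
print(updates_ready - fix_landed)    # total
```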

Summary

via GIPHY

We can see that Firefox 59 and 60.0b4 were significantly faster to run than ESR 52 was! What's behind this speedup?

Release Engineering have been busy migrating release automation from buildbot to taskcluster. Much of ESR52 still runs on buildbot, while Firefox 59 is mostly done in Taskcluster, and Firefox 60 is entirely done in Taskcluster.

In ESR52 the initial builds are still done in buildbot, which has been missing out on many performance gains from the build system and AWS side. Update testing is done via buildbot on slower mac minis or windows hardware.

The Firefox 59 release had much faster builds, and update verification is done in Taskcluster on fast linux machines instead of the old mac minis or windows hardware.

The Firefox 60.0b4 release also had much faster builds, and ended up running in about the same time as Firefox 59. It turns out that we hit several intermittent infrastructure failures in 60.0b4 that caused this release to be slower than it could have been. Also, because we had multiple releases running simultaneously, we did see some resource contention for tasks like signing.

For comparison, here's what 60.0b11 looks like:

                    | Firefox 60.0b11
 ------------------ | --------------- 
 Fix landed in HG   | 18:45:45
 en-US builds ready | 20:41:53 +1h56m
 Updates ready      | 22:19:30 +1h37m
 Total              | 3h33m

Wow, down to 3.5 hours!

In addition to the faster builds and faster update tests, we're seeing a lot of wins from increased parallelization that we can do now using taskcluster's much more flexible scheduling engine. There's still more we can do to speed up certain types of tasks, fix up intermittent failures, and increase parallelization. I'm curious just how fast this pipeline can be :)

Planet WebKitWeb Inspector Styles Sidebar Improvements

In Web Inspector that recently shipped with Safari 11.1 on macOS High Sierra, the Elements tab sidebar panels and the styles editor got a lot of attention from the Web Inspector team. We’ve re-written the styles editor to provide an editing experience more familiar to web developers, and rearranged the sidebar panels to improve access to the most used tools. The design refresh brings new behaviors and fine-tuning to enhance web developers’ ability to access and understand the elements they’re inspecting.

Tabs Layout

styles tabs before & afterBefore / After

In the Elements tab, Styles and Computed are the most commonly used panels. We made them top-level tabs, so it takes a single click to switch between them.

Styles Panel

styles panel information density before & afterBefore / After

The redesigned Styles panel now fits more data in the same amount of space:

  • Selectors are no longer generated for style attributes
  • The default “Media: all” header is no longer shown
  • The icons were removed to save some horizontal space

Syntax Highlighting

styles syntax highlighting

Property values are now black to make them easier to distinguish from property names. Strings are still red, but links are now blue.

We added curly braces back so copying CSS would produce valid CSS. Curly braces, colons, and semicolons are light gray so they won’t distract from the content.

Styles Editor

We rewrote the styles editor from scratch. This is the first major overhaul of the styles editor since 2013. Instead of a free-form text editor, we changed to cell-based style editing.

Styles tab and shift-tab behavior

CSS property names and values are now separate editable fields. Pressing Tab (⇥) or Return navigates to the next editable field. Pressing Shift-Tab (⇧⇥) navigates to the previous editable field.

Also, typing a colon (“:”) when focused on the property name focuses on the corresponding property value. Typing semicolon (“;”) at the end of the property value navigates to the next property.

Styles add new property behavior

To add a new property, you can click on the white space before or after an existing property. Pressing Tab (⇥) or Return when focused on the last property value also adds a new property.

Styles remove property behavior

To remove a property, you can remove either a property name or a property value.

Styles up and down arrow behavior

Completion suggestions display right away when focused on the value field. Completion values apply right away when selecting using Up and Down arrow keys.

Styles more arrow key behaviors

While editing a property field, Up and Down arrow keys now can increment and decrement values. You can change the interval by holding modifier keys:

  • Option (⌥): 0.1
  • Shift (⇧): 10
  • Command (⌘): 100

Legacy Settings

Legacy settings screen with the Legacy Style Editor setting

The previous version of the styles editor is still available in the Web Inspector settings, but it’s no longer maintained.

The Visual Styles Panel never gained enough traction to remain in Elements tab by default. It is no longer maintained. Along with the Legacy Style Editor, the Visual Styles Panel can still be enabled in the Experimental settings.

Contributing

Please report bugs and feature requests regarding the new styles editor on webkit.org/new-inspector-bug. If you’re interested in contributing or have any questions, please stop by the #webkit-inspector IRC channel.

Web Inspector is primarily written in HTML, JavaScript, and CSS, which means that web developers already have the skills needed to jump in and contribute a bug fix, enhancement or a new feature.

Planet MozillaThe Joy of Coding - Episode 137

The Joy of Coding - Episode 137 mconley livehacks on real Firefox bugs while thinking aloud.

Planet MozillaNYU MSPN Webinar Series - Product and Project Lifecycle Management

NYU MSPN Webinar Series - Product and Project Lifecycle Management Each talk should be 10 to 15 minutes and there will be student questions after. The entire timeline is an hour to an hour and...

Planet MozillaEnterprise Policy Support in Firefox

Last year, Mozilla ran a survey to find out top enterprise requirements for Firefox. Policy management (especially Windows Group Policy) was at the top of that list.

For the past few months we’ve been working to build that support into Firefox in the form of a policy engine. The policy engine adds desktop configuration and customization features for enterprise users to Firefox. It works with any tool that wants to set policies including Windows Group Policy.

I’m excited to announce that our work on the policy engine has reached a major milestone and is available in the latest Firefox 60 beta.

We’d really like for folks to take a look at what we’ve done and provide feedback. We would especially like to know what kinds of things folks are doing that require AutoConfig, so we can investigate adding those things to the policy engine. This is important because we are planning to sandbox AutoConfig to only its original API in Rapid Release, probably in version 62. You can get more detail about that in bug 1455601.

We’ve set up a survey to get a lot more details about requirements. Click here for that. (Yes, I know we’ve been doing lots of surveys. We appreciate your help as we define requirements.)

If you run into specific problems, you can open bugs on GitHub or in Bugzilla.

For a detailed list of all the policies that are available and how to use them in a policies.json file, you can check out the README.

It also includes information on which policies only work on the ESR.

If you’re using Windows, you can download the ADMX templates.

We’re currently in the process of standing up more documentation and a support forum on support.mozilla.org.

In the meantime, we have some initial documentation.

Folks are also asking what this means for the future of CCK2. I’m planning to make as much CCK2 functionality as I can available for Firefox 60. I’ll be doing another blog post soon about that.

Planet MozillaMozilla publishes recommendations on government vulnerability disclosure in Europe

As we’ve argued on many occasions, effective government vulnerability disclosure (GVD) review processes can greatly enhance cybersecurity for governments, citizens, and companies, and help mitigate risk in an ever-broadening cyber threat landscape.  In Europe, the EU is currently discussing a new legislative proposal to enhance cybersecurity across the bloc, the so-called ‘EU Cybersecurity Act’. In that context, we’ve just published our policy recommendations for lawmakers, in which we call on the EU to seize the opportunity to set a global policy norm for government vulnerability disclosure.  

Specifically, our policy recommendations for lawmakers focus predominantly on the elements of the legislative proposal that concern the enhanced mandate for ENISA (the EU Cybersecurity agency), namely articles three to eleven. Therein, we recommend that the EU co-legislators include within ENISA’s reformed responsibilities a mandate to assist Member States in establishing and implementing policies and practices for the responsible management and coordinated disclosure of vulnerabilities in ICT products and services that are not publicly known.

As the producer of one of the world’s most popular web browsers, it is essential for us that vulnerabilities in our software are quickly identified and patched. Simply put, the safety and security of our users depend on it. More broadly, as witnessed in the recent Petya, and WannaCry cyberattacks, vulnerabilities can be exploited by cybercriminals to cause serious damage to citizens, enterprises, public services, and governments.

Vulnerability disclosure (and the processes that underpin it) is particularly important with respect to governments. Governments often have unique knowledge of vulnerabilities, and learn about vulnerabilities in many ways: through their own research and development, by purchasing them, through intelligence work, or by reports from third parties. Crucially, governments can face conflicting incentives as to whether to disclose the existence of such vulnerabilities to the vendor immediately, or to delay disclosure in order to support offensive intelligence-gathering and law enforcement activities (so-called government hacking).

In both the US and the EU, Mozilla has long led calls for governments to codify and improve their policies and processes for handling vulnerability disclosure, including speaking out strongly in favor of the Protecting Our Ability to Counter Hacking Act (PATCH Act) in the United States. Mozilla is also a member of the Centre for European Policy Studies’ Task Force on Software Vulnerability Disclosure, a multistakeholder effort dedicated to advancing thinking on this important topic, including mapping current practices and developing a model for government vulnerability disclosure review. We strongly believe that by putting in place such frameworks, governments can contribute to greater cybersecurity for their citizens, their businesses, and even themselves.

As our policy recommendation contends, the proposed EU Cybersecurity Act offers a unique opportunity to advance the norm that Member States should have robust, accountable, and transparent government vulnerability disclosure review processes, thereby fostering greater cybersecurity in Europe. Indeed, through its capacity to assist and advise on the development of policy and practices, a reformed ENISA is well-placed to support the EU Member States in developing government vulnerability disclosure review mechanisms and sharing best practices.

Over the coming months, we’ll be working closely with EU lawmakers to explain this issue and highlight its importance for cybersecurity in Europe.

If you’re interested in reading our recommendations in full, you can access them here.

The post Mozilla publishes recommendations on government vulnerability disclosure in Europe appeared first on Open Policy & Advocacy.

Planet Mozillaany.js

Thanks to Ms2ger web-platform-tests is now even more awesome (not in the American sense). To avoid writing HTML boilerplate, web-platform-tests supports .window.js, .worker.js, and .any.js resources, for writing JavaScript that needs to run in a window, dedicated worker, or both at once. I very much recommend using these resource formats as they ease writing and reviewing tests and ensure APIs get tested across globals.

Ms2ger extended .any.js to also cover shared and service workers. To test all four globals, create a single your-test.any.js resource:

// META: global=window,worker
promise_test(async () => {
  const json = await new Response(1).json();
  assert_equals(json, 1);
}, "Response object: very basic JSON parsing test");

And then you can load it from your-test.any.html, your-test.any.worker.html, your-test.any.sharedworker.html, and your-test.https.any.serviceworker.html (requires enabling HTTPS) to see the results of running that code in those globals.

The default globals for your-test.any.js are a window and a dedicated worker. You can unset the default using !default. So if you just want to run some code in a service worker:

// META: global=!default,serviceworker

Please give this a try and donate some tests for your favorite API annoyances.

Planet MozillaThings Gateway - a Virtual Weather Station

Today, I'm going to talk about creating a Virtual Weather Station using the Things Gateway from Mozilla and a developer account from Weather Underground.  The two combined enable home automation control from weather events like temperature, wind, and precipitation.

I've already written the code and this blog is about how to use it.  In the next blog posting, I'll talk about how the code actually works.


Goal: create a virtual Web thing to get weather data into the Things Gateway for use in rules.  Specifically, make a rule that turns a green light on when the wind speed is high enough to fly a kite.

You will need the following:

  • an RPi running the Things Gateway: it's our target to have the weather station provide values to the Things Gateway. General Download & Install Instructions, or see my own install instructions: General Install & Zigbee setup, Philip Hue setup, IKEA TRÅDFRI setup, Z-Wave setup, TP-Link setup.
  • A laptop or desktop PC: the machine to run the Virtual Weather Station. You can use the RPi itself. My examples will be for a Linux machine.
  • a couple things set up on the Things Gateway to control: this could be bulbs or switches. I'm using Aeotec Smart Switches to run red and green LED bulbs.
  • the webthing and configman Python3 packages: these are libraries used by the Virtual Weather Station. See the pip install directions below.
  • a clone of the pywot github repository: it is where the Virtual Weather Station code lives. See the git clone directions below.
  • a developer key for online weather data: this gives you the ability to download data from Weather Underground. It's free from Weather Underground.

Step 1: Download and install the configman and webthing Python 3 packages.  Clone the pywot github repository in a local directory appropriate for software development. While this can be done directly on the RPi, I'm choosing to use my Linux workstation. I like its software development environment better.
        
$ sudo pip3 install configman
$ sudo pip3 install webthing
$ git clone https://github.com/twobraids/pywot.git
$ cd pywot
$ export PYTHONPATH=$PYTHONPATH:$PWD
$ cd demo


So what is configman?

This is an obscure library for configuration that I wrote years and years ago.  I continue to use it because it is really handy.  It combines command line, config files, the environment or anything conforming to the abstract type collections.Mapping to universally manage program configuration.  Configuration requirements can be spread across classes and then used for dynamic loading and dependency injection.  For more information, see my slides for my PyOhio 2014 talk: Configman.
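configman's actual API is its own (see the slides linked above). Purely to illustrate the layering idea it implements (built-in defaults, overridden by the environment, overridden by the command line), here is a stdlib-only sketch that is NOT configman:

```python
import argparse

# NOT configman's API: a stdlib-only sketch of layered configuration,
# where the environment overrides defaults and the command line
# overrides both.
def resolve_config(argv, environ, defaults):
    parser = argparse.ArgumentParser()
    for key, value in defaults.items():
        # The environment value, if present, replaces the built-in default...
        parser.add_argument(f"--{key}", default=environ.get(key, value))
    # ...and explicit command-line flags win over everything.
    return vars(parser.parse_args(argv))

if __name__ == "__main__":
    defaults = {"city_name": "Missoula", "state_code": "MT"}
    env = {"state_code": "OR"}
    print(resolve_config(["--city_name=Portland"], env, defaults))
```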

What is webthing?

webthing is a Mozilla package for Python 3 that implements the Web Thing API.  It provides a set of classes that represent devices and their properties, giving them an implementation that can be controlled over an HTTP connection.

What is pywot?

pywot is my project to create a wrapper around webthing that offers a more Pythonic interface than webthing does alone.  webthing closely follows a reference implementation written in JavaScript, so it offers an API with a distinctly different idiom than most Python modules.  pywot is an attempt to pave over the idiomatic differences.

Step 2:  In the …/pywot/demo directory, there are several example files.  virtual_weather_station.py is our focus today.  In this posting, we're just going to run it, then we'll tear it apart and analyze it in the next posting.

Get a developer account for Weather Underground.  Take note of the API key they assign to you.  You'll need it in the next step.
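For context, the Weather Underground API serves current conditions as JSON from a URL built out of the API key, state, and city. Here is a hypothetical sketch of the two pieces a station like this needs, building the URL and pulling the wind speed out of the response (the real virtual_weather_station.py differs; the nesting under "current_observation" follows WU's documented response format):

```python
import json

# Hypothetical sketch (not the real virtual_weather_station.py): build
# the Weather Underground "conditions" URL and extract the wind speed.
def conditions_url(api_key, state_code, city_name):
    return ("http://api.wunderground.com/api/"
            f"{api_key}/conditions/q/{state_code}/{city_name}.json")

def extract_wind_mph(raw_json):
    # WU nests current readings under "current_observation".
    return json.loads(raw_json)["current_observation"]["wind_mph"]

if __name__ == "__main__":
    print(conditions_url("YOUR_WU_API_KEY", "MT", "Missoula"))
    sample = '{"current_observation": {"wind_mph": 6.5}}'
    print(extract_wind_mph(sample))
```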

Step 3: Using your WU API key, your city and state, run the program like this:
        
$ ./virtual_weather_station.py -K YOUR_WU_API_KEY --city_name=Missoula --state_code=MT



Step 4: We're going to assume that there are two light bulbs already configured and named: Red, Green.  Add the virtual weather station to the Things Gateway by pressing the "+" key.


Sometimes, I've noticed that the Things Gateway doesn't immediately find my Virtual Weather Station.  I've not nailed it down as to why, but something about mDNS on my network can be very slow to update - sometimes up to ten minutes.  In this case, you don't have to wait, just press "Add by URL..." and then enter the IP address of the machine running the Virtual Weather Station with this URL template: "http://IP_ADDRESS:8888"

Step 5: The Virtual Weather Station is now fetching weather data every five minutes (as controlled by the configuration value called seconds_between_polling; you can change that on the command line).  The Things Gateway should have that data immediately: press the "splat" on the "THING" icon for the weather station:


Step 6: Now we can make a rule to turn on the "Green" light whenever the wind speed exceeds the minimum rated speed for our kite.

Select RULES from the drop down menu.  Drag the Weather Station up into the top half of the screen; select "Wind Speed" from the drop down box; change the "<" to ">"; use the up/down buttons to set the minimum wind speed threshold.  I'm choosing 5.


Step 7: Drag the "Green" light into the other half of the blank pane, use the drop down box to select the "ON" property.


Step 8: Go to the top of the page, set a useful name to your rule, press <enter> and then use the left arrow to leave the rule editor.

Step 9:  You've now seen how to make a rule based on properties of the Weather Station.  Your task is to now make the rule for the Red light.  I made mine turn on the red light when the wind is less than 5mph - I call that calm winds.  You can make your red light rule do whatever you want.

That should be about it.

Remember that making a rule implies the creation of a converse rule.  The rule that I made above says the Green light should come on when the wind speed is greater than 5mph.  The converse rule says that wind speeds below 5mph, the light will go out.

If the wind speed was greater than five at the moment that the rule was created, there may be some counterintuitive behavior.  It appears that rules aren't applied immediately as they're created.  They trigger on an "event" that happens when a property changes value.  If the wind was greater than 5mph when the rule was created, the rule didn't yet exist when the "event" happened.  The kite light will still work once the wind speed changes again at the next five minute polling point.  Be patient.
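The pair of rules and their implied converses boil down to simple threshold logic; as a sketch (illustrative only, not Things Gateway code):

```python
def kite_lights(wind_mph, threshold=5.0):
    # Green rule: on when wind > threshold (the converse turns it off).
    # Red rule: on when wind < threshold (my "calm winds" light).
    return {"green": wind_mph > threshold, "red": wind_mph < threshold}

print(kite_lights(6.5))  # kite weather: green on
print(kite_lights(3.0))  # calm winds: red on
```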


Bonus Step:  want to run the Virtual Weather Station, but don't want to include the WU API key on the command line?  Try this:
        
$ ./virtual_weather_station.py -K YOUR_WU_API_KEY --admin.dump_conf=config.ini

That created a config file called: ./config.ini
Open up ./config.ini in an editor and uncomment the line that has your WU API key. Save the file.  You can specify the config file on the command line when you run the Virtual Weather Station.  Any of the parameters can be loaded from the ini file.
        
$ ./virtual_weather_station.py --admin.conf=config.ini --city_name=Missoula --state_code=MT

Still too much typing? Instead of the config file, you could just set any/all of the parameters as environment variables:
        
$ export weather_underground_api_key=YOUR_WU_API_KEY
$ export city_name=Missoula
$ export state_code=MT
$ ./virtual_weather_station.py


In my next blog post, I'm going to explain the code that runs the Virtual Weather Station in great detail.

Planet MozillaTesting Strategies for React and Redux

When the Firefox Add-ons team ported addons.mozilla.org to a single page app backed by an API, we chose React and Redux for powerful state management, delightful developer tools, and testability. Achieving the testability part isn’t completely obvious since there are competing tools and techniques.

Below are some testing strategies that are working really well for us.

Testing must be fast and effective

We want our tests to be lightning fast so that we can ship high-quality features quickly and without discouragement. Waiting for tests can be discouraging, yet tests are crucial for preventing regressions, especially while restructuring an application to support new features.

Our strategy is to only test what’s necessary and only test it once. To achieve this we test each unit in isolation, faking out its dependencies. This is a technique known as unit testing and in our case, the unit is typically a single React component.

Unfortunately, it’s very difficult to do this safely in a dynamic language such as JavaScript since there is no fast way to make sure the fake objects are in sync with real ones. To solve this, we rely on the safety of static typing (via Flow) to alert us if one component is using another incorrectly — something a unit test might not catch.

A suite of unit tests combined with static type analysis is very fast and effective. We use Jest because it too is fast, and because it lets us focus on a subset of tests when needed.

Testing Redux connected components

The dangers of testing in isolation within a dynamic language are not entirely alleviated by static types, especially since third-party libraries often do not ship with type definitions (creating them from scratch is cumbersome). Also, Redux-connected components are hard to isolate because they depend on Redux functionality to keep their properties in sync with state. We settled on a strategy where we trigger all state changes with a real Redux store. Redux is crucial to how our application runs in the real world so this makes our tests very effective.

As it turns out, testing with a real Redux store is fast. The design of Redux lends itself very well to testing because actions, reducers, and state are decoupled from one another, and the tests give the right feedback as we make changes to application state. Beyond testing, the Redux architecture is also great for debugging, scaling, and day-to-day development.

Consider this connected component as an example: (For brevity, the examples in this article do not define Flow types but you can learn about how to do that here.)

import React from 'react';
import { connect } from 'react-redux';
import { compose } from 'redux';

// Define a functional React component.
export function UserProfileBase(props) {
  return (
    <span>{props.user.name}</span>
  );
}

// Define a function to map Redux state to properties.
function mapStateToProps(state, ownProps) {
  return { user: state.users[ownProps.userId] };
}

// Export the final UserProfile component composed of
// a state mapper function.
export default compose(
  connect(mapStateToProps),
)(UserProfileBase);

You may be tempted to test this by passing in a synthesized user property but that would bypass Redux and all of your state mapping logic. Instead, we test by dispatching a real action to load the user into state and make assertions about what the connected component rendered.

import React from 'react';
import { mount } from 'enzyme';
import UserProfile from 'src/UserProfile';

describe('<UserProfile>', () => {
  it('renders a name', () => {
    const store = createNormalReduxStore();
    // Simulate fetching a user from an API and loading it into state.
    store.dispatch(actions.loadUser({ userId: 1, name: 'Kumar' }));

    // Render with a user ID so it can retrieve the user from state.
    const root = mount(<UserProfile userId={1} store={store} />);

    expect(root.find('span').text()).toEqual('Kumar');
  });
});

Rendering the full component with Enzyme’s mount() makes sure mapStateToProps() is working and that the reducer did what this specific component expected. It simulates what would happen if the real application requested a user from the API and dispatched the result. However, since mount() renders all components including nested components, it doesn’t allow us to test UserProfile in isolation. For that we need a different approach using shallow rendering, explained below.

Shallow rendering for dependency injection

Let’s say the UserProfile component depends on a UserAvatar component to display the user’s photo. It might look like this:

export function UserProfileBase(props) {
  const { user } = props;
  return (
    <div>
      <UserAvatar url={user.avatarURL} />
      <span>{user.name}</span>
    </div>
  );
}

Since UserAvatar will have unit tests of its own, the UserProfile test just has to make sure it calls the interface of UserAvatar correctly. What is its interface? The interface to any React component is simply its properties. Flow helps to validate property data types but we also need tests to check the data values.

With Enzyme, we don’t have to replace dependencies with fakes in a traditional dependency injection sense. We can simply infer their existence through shallow rendering. A test would look something like this:

import React from 'react';
import UserProfile, { UserProfileBase } from 'src/UserProfile';
import UserAvatar from 'src/UserAvatar';
import { shallowUntilTarget } from './helpers';

describe('<UserProfile>', () => {
  it('renders a UserAvatar', () => {
    const user = {
      userId: 1, avatarURL: 'https://cdn/image.png',
    };
    const store = createNormalReduxStore();
    store.dispatch(actions.loadUser(user));

    const root = shallowUntilTarget(
      <UserProfile userId={1} store={store} />,
      UserProfileBase
    );

    expect(root.find(UserAvatar).prop('url'))
      .toEqual(user.avatarURL);
  });
});

Instead of calling mount(), this test renders the component using a custom helper called shallowUntilTarget(). You may already be familiar with Enzyme’s shallow() but that only renders the first component in a tree. We needed to create a helper called shallowUntilTarget() that will render all “wrapper” (or higher order) components until reaching our target, UserProfileBase.

Hopefully Enzyme will ship a feature similar to shallowUntilTarget() soon, but the implementation is simple. It calls root.dive() in a loop until root.is(TargetComponent) returns true.
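
The core of the helper is just a bounded loop. Here is a simplified, Enzyme-free sketch of that loop (in the real helper, `root` would be the result of Enzyme's `shallow()` and the predicate would be `root.is(TargetComponent)`; the `maxTries` cap guards against diving forever if the target is never rendered):

```javascript
// Dive through wrapper (higher order) components until the predicate
// matches. `root` is an Enzyme-style wrapper exposing dive(); in real
// usage the predicate would be `(root) => root.is(TargetComponent)`.
function diveUntil(root, isTarget, maxTries = 10) {
  for (let tries = 0; tries < maxTries; tries++) {
    if (isTarget(root)) {
      return root;
    }
    // Render one level deeper, unwrapping the next wrapper component.
    root = root.dive();
  }
  throw new Error('Could not find the target component; is it rendered?');
}
```

Because the loop only relies on the wrapper interface, it can itself be unit tested with stub objects, no React rendering required.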

With this shallow rendering approach, it is now possible to test UserProfile in isolation yet still dispatch Redux actions like a real application.

The test looks for the UserAvatar component in the tree and simply makes sure UserAvatar will receive the correct properties (the render() function of UserAvatar is never executed). If the properties of UserAvatar change and we forget to update the test, the test might still pass, but Flow will alert us about the violation.

The elegance of both React and shallow rendering just gave us dependency injection for free, without having to inject any dependencies! The key to this testing strategy is that the implementation of UserAvatar is free to evolve on its own in a way that won’t break the UserProfile tests. If changing the implementation of a unit forces you to fix a bunch of unrelated tests, it’s a sign that your testing strategy may need rethinking.

Composing with children, not properties

The power of React and shallow rendering really comes into focus when you compose components using children instead of passing JSX via properties. For example, let’s say you wanted to wrap UserAvatar in a common InfoCard for layout purposes. Here’s how to compose them together as children:

export function UserProfileBase(props) {
  const { user } = props;
  return (
    <div>
      <InfoCard>
        <UserAvatar url={user.avatarURL} />
      </InfoCard>
      <span>{user.name}</span>
    </div>
  );
}

After making this change, the same assertion from above will still work! Here it is again:

expect(root.find(UserAvatar).prop('url'))
  .toEqual(user.avatarURL);

In some cases, you may be tempted to pass JSX through properties instead of through children. However, common Enzyme selectors like root.find(UserAvatar) would no longer work. Let’s look at an example of passing UserAvatar to InfoCard through a content property:

export function UserProfileBase(props) {
  const { user } = props;
  const avatar = <UserAvatar url={user.avatarURL} />;
  return (
    <div>
      <InfoCard content={avatar} />
      <span>{user.name}</span>
    </div>
  );
}

This is still a valid implementation but it’s not as easy to test.

Testing JSX passed through properties

Sometimes you really can’t avoid passing JSX through properties. Let’s imagine that InfoCard needs full control over rendering some header content.

export function UserProfileBase(props) {
  const { user } = props;
  return (
    <div>
      <InfoCard header={<Localized>Avatar</Localized>}>
        <UserAvatar url={user.avatarURL} />
      </InfoCard>
      <span>{user.name}</span>
    </div>
  );
}

How would you test this? You might be tempted to do a full Enzyme mount() as opposed to a shallow() render. You might think it will provide you with better test coverage but that additional coverage is not necessary — the InfoCard component will already have tests of its own. The UserProfile test just needs to make sure InfoCard gets the right properties. Here’s how to test that.

import React from 'react';
import { shallow } from 'enzyme';
import InfoCard from 'src/InfoCard';
import Localized from 'src/Localized';
import UserProfile, { UserProfileBase } from 'src/UserProfile';
import { shallowUntilTarget } from './helpers';

describe('<UserProfile>', () => {
  it('renders an InfoCard with a custom header', () => {
    const user = {
      userId: 1, avatarURL: 'https://cdn/image.png',
    };
    const store = createNormalReduxStore();
    store.dispatch(actions.loadUser(user));

    const root = shallowUntilTarget(
      <UserProfile userId={1} store={store} />,
      UserProfileBase
    );

    const infoCard = root.find(InfoCard);

    // Simulate how InfoCard will render the
    // header property we passed to it.
    const header = shallow(
      <div>{infoCard.prop('header')}</div>
    );

    // Now you can make assertions about the content:
    expect(header.find(Localized).text()).toEqual('Avatar');
  });
});

This is better than a full mount() because it allows the InfoCard implementation to evolve freely so long as its properties don’t change.

Testing component callbacks

Aside from passing JSX through properties, it’s also common to pass callbacks to React components. Callback properties make it very easy to build abstractions around common functionality. Let’s imagine we are using a FormOverlay component to render an edit form in a UserProfileManager component.

import React from 'react';
import { connect } from 'react-redux';
import { compose } from 'redux';

import FormOverlay from 'src/FormOverlay';

export class UserProfileManagerBase extends React.Component {
  onSubmit = () => {
    // Pretend that the inputs are controlled form elements and
    // their values have already been connected to this.state.
    this.props.dispatch(actions.updateUser(this.state));
  }

  render() {
    return (
      <FormOverlay onSubmit={this.onSubmit}>
        <input id="nameInput" name="name" />
      </FormOverlay>
    );
  }
}

// Export the final UserProfileManager component.
export default compose(
  // Use connect() from react-redux to get props.dispatch()
  connect(),
)(UserProfileManagerBase);

How do you test the integration of UserProfileManager with FormOverlay? You might be tempted once again to do a full mount(), especially if you’re testing integration with a third-party component, something like Autosuggest. However, a full mount() is not necessary.

Just like in previous examples, the UserProfileManager test can simply check the properties passed to FormOverlay. This is safe because FormOverlay will have tests of its own and Flow will validate the properties. Here is an example of testing the onSubmit property.

import React from 'react';
import sinon from 'sinon';
import FormOverlay from 'src/FormOverlay';
import UserProfileManager, { UserProfileManagerBase } from 'src/UserProfileManager';
import { shallowUntilTarget } from './helpers';

describe('<UserProfileManager>', () => {
  it('updates user information', () => {
    const store = createNormalReduxStore();
    // Create a spy of the dispatch() method for test assertions.
    const dispatchSpy = sinon.spy(store, 'dispatch');

    const root = shallowUntilTarget(
      <UserProfileManager store={store} />,
      UserProfileManagerBase
    );

    // Simulate typing text into the name input.
    const name = 'Faye';
    const changeEvent = {
      target: { name: 'name', value: name },
    };
    root.find('#nameInput').simulate('change', changeEvent);

    const formOverlay = root.find(FormOverlay);

    // Simulate how FormOverlay will invoke the onSubmit property.
    const onSubmit = formOverlay.prop('onSubmit');
    onSubmit();

    // Make sure onSubmit dispatched the correct action.
    const expectedAction = actions.updateUser({ name });
    sinon.assert.calledWith(dispatchSpy, expectedAction);
  });
});

This tests the integration of UserProfileManager and FormOverlay without relying on the implementation of FormOverlay. It uses sinon to spy on the store.dispatch() method to make sure the correct action is dispatched when the user invokes onSubmit().

Every change starts with a Redux action

The Redux architecture is simple: when you want to change application state, dispatch an action. In the last example of testing the onSubmit() callback, the test simply asserted a dispatch of actions.updateUser(...). That’s it. This test assumes that once the updateUser() action is dispatched, everything will fall into place.

So how would an application like ours actually update the user? We would connect a saga to the action type. The updateUser() saga would be responsible for making a request to the API and dispatching further actions when receiving a response. The saga itself will have unit tests of its own. Since the UserProfileManager test runs without any sagas, we don’t have to worry about mocking out the saga functionality. This architecture makes testing very easy; something like redux-thunk may offer similar benefits.
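
To illustrate why this architecture tests so well, here is a minimal saga-style sketch. The `call` and `put` helpers below are simplified stand-ins for redux-saga's effect creators (like the real ones, they return plain objects describing work to be done), and `loadUser` is a hypothetical action creator, not the actual addons.mozilla.org code:

```javascript
// Simplified stand-ins for redux-saga's call() and put() effect
// creators; they return plain description objects, which is exactly
// what makes sagas easy to assert against.
const call = (fn, ...args) => ({ type: 'CALL', fn, args });
const put = (action) => ({ type: 'PUT', action });

// Hypothetical action creator for loading a user into state.
const loadUser = (user) => ({ type: 'LOAD_USER', payload: user });

// The saga: ask the API to update the user, then dispatch the result.
// A test can step through the generator without performing any I/O.
function* updateUserSaga(api, action) {
  const user = yield call(api.updateUser, action.payload);
  yield put(loadUser(user));
}
```

A unit test simply steps the generator with next(), asserting on each yielded effect and feeding in a fake API response, so no network or mock server is ever needed.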

Summary

These examples illustrate patterns that work really well at addons.mozilla.org for solving common testing problems. Here is a recap of the concepts:

  • We dispatch real Redux actions to test application state changes.
  • We test each component only once using shallow rendering.
  • We resist full DOM rendering (with mount()) as much as possible.
  • We test component integration by checking properties.
  • Static typing helps validate our component properties.
  • We simulate user events and make assertions about what action was dispatched.

Want to get more involved in the Firefox Add-ons community? There are a host of ways to contribute to the add-ons ecosystem – and plenty to learn, whatever your skills and level of experience.

Planet Mozilla: Martes Mozilleros, 24 Apr 2018

Martes Mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Planet Mozilla: Supporting Same-Site Cookies in Firefox 60

Firefox 60 will introduce support for the same-site cookie attribute, which allows developers to gain more control over cookies. Since browsers will include cookies with every request to a website, most sites rely on this mechanism to determine whether users are logged in.

Attackers can abuse the fact that cookies are automatically sent with every request to force a user to perform unwanted actions on the site where they are currently logged in. Such attacks, known as cross-site request forgeries (CSRF), allow attackers who control third-party code to perform fraudulent actions on the user’s behalf. Unfortunately, the current web architecture does not allow web applications to reliably distinguish between actions initiated by the user and those initiated by any of the third-party gadgets or scripts they rely on.

To compensate, the same-site cookie attribute allows a web application to advise the browser that cookies should only be sent if the request originates from the website the cookie came from. Requests triggered from a URL different from the one that appears in the URL bar will not include any of the cookies tagged with this new attribute.

The same-site attribute can take one of two values: ‘strict’ or ‘lax’. In strict mode, same-site cookies will be withheld for any kind of cross-site usage. This includes all inbound links from external sites to the application. Visitors clicking on such a link will initially be treated as ‘not being logged in’ whether or not they have an active session with the site.

The lax mode caters to applications which are incompatible with these restrictions. In this mode, same-site cookies will be withheld on cross-domain subrequests (e.g. images or frames), but will be sent whenever a user navigates safely from an external site, for example by following a link.
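
For example, a site opting in to the lax policy would send a response header like this (the cookie name and value are illustrative):

```http
Set-Cookie: session=0a1b2c3d; SameSite=Lax; Secure; HttpOnly
```

With SameSite=Strict instead of SameSite=Lax, the cookie would also be withheld when the user follows a link to the site from elsewhere.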

For the Mozilla Security Team:
Christoph Kerschbaumer, Mark Goodwin, Francois Marier

The post Supporting Same-Site Cookies in Firefox 60 appeared first on Mozilla Security Blog.

Planet Mozilla: Firefox DevEdition 60 Beta 14 Testday Results

Hello Mozillians!

As you may already know, last Friday – April 20th – we held a new Testday event, for Firefox DevEdition 60 Beta 14.

Thank you all for helping us make Mozilla a better place: gaby2300, micde, Jarrod Michell, Thomas Brooks. 

From India team: Surentharan.R.A and Suren, Fahima Zulfath.

From Bangladesh team: Tanvir Rahman, Saddam Hossain, Maruf Rahman, Md. Raihan Ali, Tanvir Mazharul, Nazmul Hossain, Moniruzzaman, Rakibul Yeasin Totul, Mizanur Rahman Rony, Sajedul Islam, Nayeem Nazmul, Saddam Hossain, Rubayet Hossain, Nazir Ahmed Sabbir, Saheda Reza Antora.

Results:

– several test cases executed for Search Suggestions, Site Storage Redesign UI and Web Compatibility.

– 7 bugs verified: 1430672, 1441825, 1439371, 1424880, 1438696, 1439841, 1437890

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Planet Mozilla: This Week in Rust 231

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is human-panic, a crate to make Rust's error handling usable to end users. Thanks to Vikrant for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

132 pull requests were merged in the last week

New Contributors

  • Dylan MacKenzie
  • Johannes Nixdorf
  • Kerem
  • krk
  • Nathaniel McCallum
  • Nicholas Rishel
  • Russell Cohen

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

The community team is trying to improve outreach to meetup organisers. Please fill out their call for contact info if you are running or used to run a meetup.

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

I’ve become fearless in Rust, but it’s made me fear every other language…

u/bluejekyll on reddit.

Thanks to nasa42 for the suggestion!

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Planet Mozilla: Rust pattern: Precise closure capture clauses

This is the second in a series of posts about Rust compiler errors. Each one will talk about a particular error that I got recently and try to explain (a) why I am getting it and (b) how I fixed it. The purpose of this series of posts is partly to explain Rust, but partly just to gain data for myself. I may also write posts about errors I’m not getting – basically places where I anticipated an error, and used a pattern to avoid it. I hope that after writing enough of these posts, I or others will be able to synthesize some of these facts to make intermediate Rust material, or perhaps to improve the language itself.

Other posts in this series:

The error: closures capture too much

In some code I am writing, I have a struct with two fields. One of them (input) contains some data I am reading from; the other is some data I am generating (output):

use std::collections::HashMap;

struct Context {
  input: HashMap<String, u32>,
  output: Vec<u32>,
}

I was writing a loop that would extend the output based on the input. The exact process isn’t terribly important, but basically for each input value v, we would look it up in the input map and use 0 if not present:

impl Context {
  fn process(&mut self, values: &[String]) {
    self.output.extend(
      values
        .iter()
        .map(|v| self.input.get(v).cloned().unwrap_or(0)),
    );
  }
}

However, this code will not compile:

error[E0502]: cannot borrow `self` as immutable because `*self.output` is also borrowed as mutable
  --> src/main.rs:13:22
     |
  10 |         self.output.extend(
     |         ----------- mutable borrow occurs here
 ...
  13 |                 .map(|v| self.input.get(v).cloned().unwrap_or(0)),
     |                      ^^^ ---- borrow occurs due to use of `self` in closure
     |                      |
     |                      immutable borrow occurs here
  14 |         );
     |         - mutable borrow ends here

As the various references to “closure” in the error may suggest, it turns out that this error is tied to the closure I am creating in the iterator. If I rewrite the loop to not use extend and an iterator, but rather a for loop, everything builds:

impl Context {
  fn process(&mut self, values: &[String]) {
    for v in values {
      self.output.push(
        self.input.get(v).cloned().unwrap_or(0)
      );
    }
  }
}

What is going on here?

Background: The closure desugaring

The problem lies in how closures are desugared by the compiler. When you have a closure expression like this one, it corresponds to deferred code execution:

|v| self.input.get(v).cloned().unwrap_or(0)

That is, self.input.get(v).cloned().unwrap_or(0) doesn’t execute immediately – rather, it executes later, each time the closure is called with some specific v. So the closure expression itself just corresponds to creating some kind of “thunk” that will hold on to all the data it is going to need when it executes – this “thunk” is effectively just a special, anonymous struct. Specifically, it is a struct with one field for each local variable that appears in the closure body; so, something like this:

MyThunk { this: &self }

where MyThunk is a dummy struct name. Then MyThunk implements the Fn trait with the actual function body, but each place that we wrote self it will substitute self.this:

impl Fn for MyThunk {
  fn call(&self, v: &String) -> u32 {
    self.this.input.get(v).cloned().unwrap_or(0)
  }
}

(Note that you cannot, today, write this impl by hand, and I have simplified the trait in various ways, but hopefully you get the idea.)

So what goes wrong?

So let’s go back to the example now and see if we can see why we are getting an error. I will replace the closure itself with the MyThunk creation that it desugars to:

impl Context {
  fn process(&mut self, values: &[String]) {
    self.output.extend(
      values
        .iter()
        .map(MyThunk { this: &self }),
        //   ^^^^^^^^^^^^^^^^^^^^^^^
        //   really `|v| self.input.get(v).cloned().unwrap_or(0)`
    );
  }
}

Maybe now we can see the problem more clearly; the closure wants to hold onto a shared reference to the entire self variable, but then we also want to invoke self.output.extend(..), which requires a mutable reference to self.output. This is a conflict! Since the closure has shared access to the entirety of self, it might (in its body) access self.output, but we need to be mutating that.

The root problem here is that the closure is capturing self but it is only using self.input; this is because closures always capture entire local variables. As discussed in the previous post in this series, the compiler only sees one function at a time, and in particular it does not consider the closure body while checking the closure creator.

To fix this, we want to refine the closure so that instead of capturing self it only captures self.input – but how can we do that, given that closures only capture entire local variables? The way to do that is to introduce a local variable, input, and initialize it with &self.input. Then the closure can capture input:

impl Context {
  fn process(&mut self, values: &[String]) {
    let input = &self.input; // <-- I added this
    self.output.extend(
      values
        .iter()
        .map(|v| input.get(v).cloned().unwrap_or(0)),
        //       ----- and removed the `self.` here
    );
  }
}

As you can verify for yourself, this code compiles.

To see why it works, consider again the desugared output. In the new version, the desugared closure will capture input, not self:

MyThunk { input: &input }

The borrow checker, meanwhile, sees two overlapping borrows in the function:

  • let input = &self.input – shared borrow of self.input
  • self.output.extend(..) – mutable borrow of self.output

No error is reported because these two borrows affect different fields of self.

A more general pattern

Sometimes, when I want to be very precise, I will write closures in a stylized way that makes it crystal clear what they are capturing. Instead of writing |v| ..., I first introduce a block that creates a lot of local variables, with the final thing in the block being a move closure (move closures take ownership of the things they use, instead of borrowing them from the creator). This gives complete control over what is borrowed and how. In this case, the closure might look like:

{
  let input = &self.input;
  move |v| input.get(v).cloned().unwrap_or(0)
}

Or, in context:

impl Context {
  fn process(&mut self, values: &[String]) {
    self.output.extend(values.iter().map({
      let input = &self.input;
      move |v| input.get(v).cloned().unwrap_or(0)
    }));
  }
}

In effect, these let statements become like the “capture clauses” in C++, declaring how precisely variables from the environment are captured. But they give added flexibility by also allowing us to capture the results of small expressions, like self.input, instead of local variables.

Another time that this pattern is useful is when you want to capture a clone of some data versus the data itself:

{
  let data = data.clone();
  move || ... do_something(&data) ...
}
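
A self-contained sketch of that clone-capturing pattern, using std::thread to force an owning, 'static closure (the function name and types here are invented for illustration, not taken from the post):

```rust
use std::thread;

// Sum the numbers on another thread. The block creates a clone for the
// `move` closure to own, so the caller keeps full use of `data`.
fn sum_in_thread(data: &Vec<i32>) -> i32 {
    let handle = thread::spawn({
        let data = data.clone();
        move || data.iter().sum::<i32>()
    });
    handle.join().unwrap()
}
```

Without the clone, thread::spawn would reject the closure: it cannot borrow `data` for longer than the caller's stack frame.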

How we could accept this code in the future

There is actually a pending RFC, RFC #2229, that aims to modify closures so that they capture entire paths rather than local variables. There are various corner cases though that we have to be careful of, particularly with moving closures, as we don’t want to change the times that destructors run and hence change the semantics of existing code. Nonetheless, it would solve this particular case by changing the desugaring.

Alternatively, if we had some way for functions to capture a reference to a “view” of a struct rather than the entire thing, then closures might be able to capture a reference to a “view” of self rather than capturing a reference to the field input directly. There is some discussion of the view idea in this internals thread; I’ve also tinkered with the idea of merging views and traits, as described in this internals post. I think that once we tackle NLL and a few other pending challenges, finding some way to express “views” seems like a clear way to help make Rust more ergonomic.

Planet Mozilla: Thunderbird April News Update: GSoC, 60 Beta 4, New Thunderbird Council

Due to lots of news coming out of the Thunderbird project, I’ve decided to combine three different blog posts I was working on into one news update that gives people an idea of what has been happening in the Thunderbird community this month. Enjoy and comment to let me know if you like or dislike this kind of post!

Enigmail GSoC Student Selected

Great news! A student has been selected for the Enigmail/Thunderbird Google Summer of Code (GSoC) project. Enigmail, the OpenPGP privacy extension for Thunderbird, submitted its project to GSoC seeking a student to help update user interface elements and assist with other design work.

Thunderbird 60, Beta 4 Released

A new version of the Thunderbird 60 Beta is out, with version four beginning to roll out. Users of the Beta are testing what will ultimately be the next Extended Support Release (ESR), which acts as our stable release and is what most of our users see. There are a lot of changes between Thunderbird 52, the last ESR, and this release. Some of these changes include: an updated “Photon” UI (like that seen in Firefox), various updates to Thunderbird’s “Lightning” calendar, a new “Message from Template” command, and various others. You can find a full list here.

As with every Beta, but especially this one given it will become the new stable release, we hope that you will download it and give us feedback on your experience.

A New Thunderbird Council

A new Thunderbird Council was elected this month. This new council of seven members will serve for a year. The members of the new council are as follows:

  • Philipp Kewisch
  • Magnus Melin
  • Patrick Cloke
  • Wayne Mery
  • Philippe Lieser
  • Jorg Knobloch
  • Ryan Sipes

This blog will try to lay out the new council’s visions and priorities in future posts.

Planet Mozilla: New Mozilla Poll: Support for Net Neutrality Grows, Trust in ISPs Dips

“Today marks the ostensible effective date for the FCC’s net neutrality repeal order, but it does not mark the end of net neutrality,” says Denelle Dixon, Mozilla COO. “And not just because some procedural steps remain before the official overturning of the rules — but because Mozilla and other supporters of net neutrality are fighting to protect it in the courts and in Congress.”

Also today: Mozilla is publishing results from a nationwide poll that reveals where Americans stand on the issue. Our survey reinforces what grassroots action has already demonstrated: The repeal contradicts most Americans’ wishes. The nation wants strong net neutrality rules.

“The new Mozilla and Ipsos poll shows once again that Americans across the political spectrum overwhelmingly want strong net neutrality protections, and that they don’t trust their ISPs to provide it for them without oversight,” says Gigi Sohn, Mozilla Fellow and former FCC counselor.

“What should make policymakers stand up and take notice is that 78% of Americans, including 84% of adults under the age of 35, believe that equal access to the internet is a right, and not a luxury,” Sohn continues.

~

Mozilla and Ipsos conducted this public opinion poll in February of 2018, surveying 1,007 American adults from across 50 states. Among our key findings:

Outside of Washington, D.C., net neutrality isn’t a partisan issue. Americans from red and blue states alike agree that equal access to the internet is a right, including: 79% of Colorado residents, 81% of Arizona residents, and 80% of North Carolina residents.

91% of Americans believe consumers should be able to freely and quickly access their preferred content on the internet. Support for net neutrality is growing: When Mozilla and Ipsos asked this same question in 2017, 86% of Americans believed this.

78% of Americans believe equal access to the internet is a right. This opinion is most common among younger Americans (84% of adults under the age of 35).

76% of Americans believe internet service providers (ISPs) should treat all consumer data the same, and not speed up or slow down specific content. This opinion is most common among older Americans (80% of adults ages 55+) and Americans with a college degree (81%).

63% of Americans do not think that ISPs will voluntarily look out for consumers’ best interests, compared to 32% who agree with this statement. Faith in ISPs is declining: When Mozilla and Ipsos asked this same question in 2017, 37% of Americans trusted ISPs.

See the full results from our poll here. See results from the 2017 Mozilla/Ipsos net neutrality poll here.

~

What’s ahead?

“Today could be the start of a shift away from freedom and innovation,” adds Denelle Dixon. “Some opponents of net neutrality will say our concerns are misplaced, and that when April 24 fails to see a wave of blocking, throttling, and fast lanes, that they were right in their claims. But that’s not how the world without net neutrality will develop. The impact won’t be immediate, like a lightswitch. Instead, we’ll see more of a gradual chipping away — an erosion into a discriminatory internet, with ultimately a far worse experience for any users and businesses who don’t pay more for special treatment.”

“There is an active lawsuit on this matter in the case titled ‘Mozilla v. FCC’ — and today is also the last day that others can file additional challenges against the FCC, following Mozilla’s lead,” Dixon concludes. “We’ve been encouraged by the support we’ve seen with allies filing suit in the industry, and we hope to see more organizations joining us in the fight to protect net neutrality.”

At some point in the coming months, the Senate will likely vote on whether to undo the FCC’s repeal of net neutrality. Per the Congressional Review Act, lawmakers can veto the FCC’s decision with a majority vote. If the Congressional Review Act resolution passes the Senate, it will move to the House, then (maybe) the president’s desk. (Learn more about the Congressional Review Act and net neutrality here.)

“Members of Congress can restore net neutrality protections right now by passing the Joint Resolution of Disapproval that has been introduced in both houses,” Gigi Sohn says. “Voters will make their displeasure known to anyone who doesn’t support this measure in November.”

In the meantime, Mozilla will continue its fierce advocacy for a free, open internet. Earlier this year, we sued the FCC for its decision to gut net neutrality. And right now, we’re running a campaign that makes calling your elected official easy. Visit https://advocacy.mozilla.org/en-US/net-neutrality/, pick up the phone, and urge your representative to save net neutrality.

The post New Mozilla Poll: Support for Net Neutrality Grows, Trust in ISPs Dips appeared first on The Mozilla Blog.

Planet MozillaSUMO Community Meeting

SUMO Community Meeting SUMO - 04.13.2018

Planet MozillaTaskcluster migration update: we're finished!

We're done!

Over the past few weeks we've hit a few major milestones in our project to migrate all of Firefox's CI and release automation to taskcluster.

Firefox 60 and higher are now 100% on taskcluster!

Tests

At the end of March, our Release Operations and Project Integrity teams finished migrating Windows tests onto new hardware machines, all running taskcluster. That work was later uplifted to beta so that CI automation on beta would also be completely done using taskcluster.

This marked the last usage of buildbot for Firefox CI.

Periodic updates of blocklist and pinning data

Last week we switched off the buildbot versions of the periodic update jobs. These jobs keep the in-tree blocklist, HSTS, and HPKP lists up to date.

These were the last buildbot jobs running on trunk branches.

Partner repacks

And to wrap things up, yesterday the final patches landed to migrate partner repacks to taskcluster. Firefox 60.0b14 was built yesterday and shipped today 100% using taskcluster.

A massive amount of work went into migrating partner repacks from buildbot to taskcluster, and I'm really proud of the whole team for pulling this off.

So, starting today, Firefox 60 and higher will run completely on taskcluster and will not rely on buildbot.

It feels really good to write that :)

We've been working on migrating Firefox to taskcluster for over three years! Code archaeology is hard, but I think the first Firefox jobs to start running in Taskcluster were the Linux64 builds, done by Morgan in bug 1155749.

Into the glorious future

It's great to have migrated everything off of buildbot and onto taskcluster, and we have endless ideas for how to improve things now that we're there. First we need to spend some time cleaning up after ourselves and paying down some technical debt we've accumulated. It's a good time to start ripping out buildbot code from the tree as well.

We've got other plans to make release automation easier for other people to work with, including doing staging releases on try(!!), making the nightly release process more similar to the beta/release process, and exposing different parts of the release process to release management so that releng doesn't have to be directly involved with the day-to-day release mechanics.

Planet MozillaBuilding Bold New Worlds With Virtual Reality

 

“I wanted people to feel the whole story with their bodies, not just with their minds. Once I discovered virtual reality was the place to do that, it was transformative.”
– Nonny de la Peña, CEO of Emblematic

 

Great creators can do more than just tell a story. They can build entirely new worlds for audiences to experience and enjoy.

From rich text to video to podcasts, the Internet era offers an array of new ways for creators to build worlds. Here at Mozilla, we are particularly excited about virtual reality. Imagine moving beyond watching or listening to a story; imagine also feeling that story. Imagine being inside it with your entire mind and body. Now imagine sharing and entering that experience with something as simple as a web URL. That’s the potential before us.

To fully realize that potential, we need people who think big. We need artists and developers and engineers who are driven to push the boundaries of the imagination. We need visionaries who can translate that imagination into virtual reality.

The sky is the limit with virtual reality, and we’re driven to serve as the bridge that connects artists and developers. We are also committed to providing those communities with the tools and resources they need to begin building their own worlds. Love working with Javascript? Check out the A-Frame framework. Do you prefer building with Unity? We have created a toolkit to bring your VR Unity experience to the web with WebVR.

We believe browsers are the future of virtual and augmented reality. The ability to click on a link and enter into an immersive, virtual world is a game-changer. This is why we held our ‘VR the People’ panel at the Sundance Film Festival, and why we will be at the Tribeca Film Festival in New York next week. We want to connect storytellers with this amazing technology. If you’re at Tribeca (or just in the area), please reach out. We’d love to chat.

This concludes our four-part series about virtual reality, storytelling, and the open web. It’s our mission to empower creators, and we hope these posts have left you inspired. If you’d like to watch our entire VR the People panel, check out the video below.

 

Be sure to visit https://mixedreality.mozilla.org/ to learn more about the tools and resources Mozilla offers to help you build new worlds from your imagination.

Read more on VR the People

The post Building Bold New Worlds With Virtual Reality appeared first on The Mozilla Blog.

Planet MozillaThis Week in Mixed Reality: Issue 3

This Week in Mixed Reality: Issue 3

This week we’re heads down focusing on adding features in the three broad areas of Browsers, Social and the Content Ecosystem.

Browsers

This week we focused on building Firefox Reality and we’re excited to announce additional features:

  • Implemented private tabs
  • Tab overflow popup list
  • Added contextual menu for “more options” in the header
  • Improvements for SVR based devices:
    • Update SDK to v2.1.2 for tracking improvements
    • Fall back to head-tracking-based input when no controllers are available
    • Implement scrolling using wheel and trackpad input buttons in ODG devices
  • Working on the virtual keyboard across the Android platform
  • We are designing the transitions for WebVR immersive mode

Check out the video clip of additional features we added this week of the contextual menu and private tabs:

Firefox Reality private browsing from Imanol Fernández Gorostizaga on Vimeo.

Social

We're working on a web-based social experience for Mixed Reality.

In the last week, we have:

  • Landed the next 2D UX pass, which cleans up a bunch of CSS and design inconsistencies and prompts users for avatar and name customization before entry until they have set a name.
  • Ongoing work for final push of in-VR UX: unified 3D cursor, “pause/play” mode for blocking UX, finalized HUD design and positioning, less error-prone teleporting component should all land this week.
  • Worked through remaining issues with deployments, cleaned up bugs and restart issues with Habitat (as well as filed a number of bugs.)
  • Set-up room member capping and room closing.

Join our public WebVR Slack #social channel to join in the discussion!

Content ecosystem

This week, Blair MacIntyre released a new version of the iOS WebXR Viewer app that includes support for experimenting with Computer Vision.

Check out the video below:

Stay tuned next week for some exciting news!

Planet MozillaNew to me: the Taskcluster team

All entities move and nothing remains still. – Heraclitus, as referenced by Plato

At this time last year, I had just moved on from Release Engineering to start managing the Sheriffs and the Developer Workflow teams. Shortly after the release of Firefox Quantum, I also inherited the Taskcluster team. The next few months were *ridiculously* busy as I tried to juggle the management responsibilities of three largely disparate groups.

By mid-January, it became clear that I could not, in fact, do it all. The Taskcluster group had the biggest ongoing need for management support, so that’s where I chose to land. This sanity-preserving move also gave a colleague, Kim Moir, the chance to step into management of the Developer Workflow team.

Meet the Team

Let me start by introducing the Taskcluster team. We are:

We are an eclectic mix of curlers, snooker players, pinball enthusiasts, and much else besides. We also write and run continuous integration (CI) software at scale.

What are we doing?

Socrates gets booked
The part I understand is excellent, and so too is, I dare say, the part I do not understand… – Socrates, in reference to Heraclitus

One of the reasons why I love the Taskcluster team so much is that they have a real penchant for documentation. That includes their design and post-mortem processes. Previously, I had only managed others who were using Taskcluster…consumers of their services. The Taskcluster documentation made it really easy for me to plug in quickly and help provide direction.

If you’re curious about what Taskcluster is at a foundational level, you should start with the tutorial.

The Taskcluster team currently has three, big efforts in progress.

1. Redeployability

Many Taskcluster team members initially joined the team with the dream of building a true, open source CI solution. Dustin has a great post explaining the impetus behind redeployability. Here’s the intro:

Taskcluster has always been open source: all of our code is on Github, and we get lots of contributions to the various repositories. Some of our libraries and other packages have seen some use outside of a Taskcluster context, too.

But today, Taskcluster is not a project that could practically be used outside of its single incarnation at Mozilla. For example, we hard-code the name taskcluster.net in a number of places, and we include our config in the source-code repositories. There’s no legal or contractual reason someone else could not run their own Taskcluster, but it would be difficult and almost certainly break next time we made a change.

The Mozilla incarnation is open to use by any Mozilla project, although our focus is obviously Firefox and Firefox-related products like Fennec. This was a practical decision: our priority is to migrate Firefox to Taskcluster, and that is an enormous project. Maintaining an abstract ability to deploy additional instances while working on this project was just too much work for a small team.

The good news is, the focus is now shifting. The migration from Buildbot to Taskcluster is nearly complete, and the remaining pieces are related to hardware deployment, largely by other teams. We are returning to work on something we’ve wanted to do for a long time: support redeployability.

We’re a little further down that path than when he first wrote about it in January, but you can read more about our efforts to make Taskcluster more widely deployable in Dustin’s blog.

2. Support for packet.net

packet.net provides some interesting services, like baremetal servers and access to ARM hardware, that other cloud providers are only starting to offer. Experiments with our existing emulator tests on the baremetal servers have shown incredible speed-ups in some cases. The promise of ARM hardware is particularly appealing for future mobile testing efforts.

Over the next few months, we plan to add support for packet.net to the Mozilla instance of Taskcluster. This lines up well with the efforts around redeployability, i.e. we need to be able to support different and/or multiple cloud providers anyway.

3. Keeping the lights on (KTLO)

While not particularly glamorous, maintenance is a fact of life for software engineers supporting code that is running in production. That said, we should actively work to minimize the amount of maintenance work we need to do.

One of the first things I did when I took over the Taskcluster team full-time was halt *all* new and ongoing work to focus on stability for the entire month of February. This was precipitated by a series of prolonged outages in January. We didn’t have an established error budget at the time, but if we had, we would have completely blown through it.

Our focus on stability had many payoffs, including more robust deployment stories for many of our services, and a new IRC channel (#taskcluster-bots) full of deployment notices and monitoring alerts. We needed to put in this stability work to buy ourselves the time to work on redeployability.

What are we *not* doing?

With all the current work on redeployability, it’s tempting to look ahead to when we can incorporate some of these improvements into the current Firefox CI setup. While we do plan to redeploy Firefox CI at some point this year to take advantage of these systemic improvements, it is not our focus…yet.


One of the other things I love about the Taskcluster team is that they are really good at supporting community contribution. If you’re interested in learning more about Taskcluster or even getting your feet wet with some bugs, please drop by the #taskcluster channel on IRC and say Hi!

Planet MozillaDev-tools in 2018

This is a bit late (how is it the middle of April already?!), but the dev-tools team has lots of exciting plans for 2018 and I want to talk about them!

Our goals for 2018

Here's a summary of our goals for the year.

Ship it!

We want to ship high quality, mature, 1.0 tools in 2018. Including,

  • Rustfmt (1.0)
  • Rust Language Server (RLS, 1.0)
  • Rust extension for Visual Studio Code using the RLS (non-preview, 1.0)
  • Clippy (1.0, though possibly not labeled that, including addressing distribution issues)

Support the epoch transition

2018 will bring a step change in Rust with the transition from 2015 to 2018 epochs. For this to be a smooth transition it will need excellent tool support. Exactly what tool support will be required will emerge during the year, but at the least we will need to provide a tool to convert crates to the new epoch.

We also need to ensure that all the currently existing tools continue to work through the transition. For example, that Rustfmt and IntelliJ can handle new syntax such as dyn Trait, and the RLS copes with changes to the compiler internals.

Cargo

The Cargo team have their own goals. Some things on the radar from a more general dev-tools perspective are integrating parts of Xargo and Rustup into Cargo to reduce the number of tools needed to manage most Rust projects.

Custom test frameworks

Testing in Rust is currently very easy and natural, but also very limited. We intend to broaden the scope of testing in Rust by permitting users to opt in to custom testing frameworks. This year we expect the design to be complete (and an RFC accepted) and for a solid and usable implementation to exist (though stabilisation may not happen until 2019). The current benchmarking facilities will be reimplemented as a custom test framework. The framework should support testing for WASM and embedded software.

Doxidize

Doxidize is a successor to Rustdoc. It adds support for guide-like documentation as well as API docs. This year there should be an initial release and it should be practical to use for real projects.

Maintain and improve existing tools

Maintenance and consistent improvement is essential to avoid bit-rot. Existing mature tools should continue to be well-maintained and improved as necessary. This includes

  • debugging support,
  • Rustdoc,
  • Rustup,
  • Bindgen,
  • editor integration.

Good tools info on the Rust website

The Rust website is planned to be revamped this year. The dev-tools team should be involved to ensure that there is clear and accurate information about key tools in the Rust ecosystem and that high quality tools are discoverable by new users.

Organising the team

The dev-tools team should be reorganised to continue to scale and to support the goals in this roadmap. I'll outline the concrete changes next.

Re-organising the dev-tools team

The dev-tools team has always been large and somewhat broad - there are a lot of different tools at different levels of maturity with different people working on them. There has always been a tension between having a global, strategic view vs having a detailed, focused view. The peers system was one way to tackle that. This year we're trying something new - the dev-tools team will become something of an umbrella team, coordinating work across multiple teams and working groups.

We're creating two new teams - Rustdoc, and IDEs and editors - and going to work more closely with the Cargo team. We're also spinning up a bunch of working groups. These are more focused, less formal teams; they are dedicated to a single tool or task, rather than to strategy and decision making. Primarily they are a way to let people working on a tool work more effectively. The dev-tools team will continue to coordinate work and keep track of the big picture.

We're always keen to work with more people on Rust tooling. If you'd like to get involved, come chat to us on Gitter in the following rooms:

The teams

Dev-tools

Manish Goregaokar, Steve Klabnik, and Without Boats will be joining the dev-tools team. This will ensure the dev-tools team covers all the sub-teams and working groups.

IDEs and editors

The new IDEs and editors team will be responsible for delivering great support for Rust in IDEs and editors of every kind. That includes the foundations of IDE support such as Racer and the Rust Language Server. The team is Nick Cameron (lead), Igor Matuszewski, Vlad Beskrovnyy, Alex Butler, Jason Williams, Junfeng Li, Lucas Bullen, and Aleksey Kladov.

Rustdoc

The new Rustdoc team is responsible for the Rustdoc software, docs.rs, and related tech. The docs team will continue to focus on the documentation itself, while the Rustdoc team focuses on the software. The team is QuietMisdreavus (lead), Steve Klabnik, Guillaume Gomez, Oliver Middleton, and Onur Aslan.

Cargo

No change to the Cargo team.

Working groups
  • Bindgen
    • Bindgen and C Bindgen
    • Nick Fitzgerald and Emilio Álvarez
  • Debugging
    • Debugger support for Rust - from compiler support, through LLVM and debuggers like GDB and LLDB, to the IDE integration.
    • Tom Tromey, Manish Goregaokar, and Michael Woerister
  • Clippy
    • Oliver Schneider, Manish Goregaokar, llogiq, and Pascal Hertleif
  • Doxidize
    • Steve Klabnik, Andy Russel, Michael Gatozzi, QuietMisdreavus, and Corey Farwell
  • Rustfmt
    • Nick Cameron and Seiichi Uchida
  • Rustup
    • Nick Cameron, Alex Crichton, Without Boats, and Diggory Blake
  • Testing
    • Focused on designing and implementing custom test frameworks.
    • Manish Goregaokar, Jon Gjengset, and Pascal Hertleif
  • 2018 edition tooling
    • Using Rustfix to ease the edition transition; ensure a smooth transition for all tools.
    • Pascal Hertleif, Manish Goregaokar, Oliver Schneider, and Nick Cameron

Thank you to everyone for the fantastic work they've been doing on tools, and for stepping up to be part of the new teams!

Planet MozillaEmerging Tech Speaker Series Talk with Rian Wanstreet

Emerging Tech Speaker Series Talk with Rian Wanstreet Precision Agriculture, or high tech farming, is being heralded as a panacea solution to the ever-growing demands of an increasing global population - but the...

Planet MozillaReps Weekly Meeting, 19 Apr 2018

Reps Weekly Meeting This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Planet MozillaNonny de la Peña & the Power of Immersive Storytelling

 

“I want you to think: if she can walk into that room and change her entire life and help create this whole energy and buzz, you can do it too.”
– Nonny de la Peña

 

This week, we’re highlighting VR’s groundbreaking potential to take audiences inside stories with a four part video series. There aren’t many examples of creators doing that more effectively and powerfully than Nonny de la Peña.

Nonny de la Peña is a former correspondent for Newsweek, the New York Times and other major outlets. For more than a decade now, de la Peña has been focused on merging her passion for documentary filmmaking with a deep-seated expertise in VR. She essentially invented the field of “immersive journalism” through her company, Emblematic Group.

What makes de la Peña’s work particularly noteworthy (and a primary reason we’ve been driven to collaborate with her), is that her journalism often uses virtual reality to bring attention to under-served and overlooked groups.

To that end, our panel at this year’s Sundance Festival doubled as another installation in Nonny’s latest project, Mother Nature.

Mother Nature is an open and collaborative project that amplifies the voices of women and creators working in tech. It rebuts the notion that women are underrepresented in positions of power and in engineering roles in tech because of anything inherent in their gender.

It’s a clear demonstration of how journalists and all storytellers can use VR to create experiences that can change minds and hearts, and help move our culture towards a more open and human direction.

For more on Nonny de la Peña and her immersive projects, visit Emblematic Group. I’d also encourage you to access our resources and open tools at https://mixedreality.mozilla.org/ and learn how you can use virtual reality and the web to tell your own stories.

Read more on VR the People

The post Nonny de la Peña & the Power of Immersive Storytelling appeared first on The Mozilla Blog.

Planet MozillaFirefox Performance Update #6

Hi there folks, just another Firefox Performance update coming at you here.

These updates are going to shift format slightly. I’m going to start by highlighting the status of some of the projects the Firefox Performance Team (the front-end team working to make Firefox snappy AF), and then go into the grab-bag list of improvements that we’ve seen landing in the tree.

But first a word from our sponsor: arewesmoothyet.com!

This performance update is brought to you by arewesmoothyet.com! On Nightly versions of Firefox, a component called BackgroundHangReporter (or “BHR”) notices any time the main thread hangs for too long, and then collects a stack to send via Telemetry. We’ve been doing this for years, but we’ve never really had a great way of visualizing or making use of the data1. Enter arewesmoothyet.com by Doug Thayer! Initially a fork of perf.html, arewesmoothyet.com lets us see graphs of hangs on Nightly broken down by category2, and then also lets us explore the individual stacks that have come in using a perf.html-like interface! (You might need to be patient on that last link – it’s a lot of data to download).

Hot damn! Note the finer-grain categories showing up on April 1st.
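The watchdog idea behind BHR (a monitor thread that notices when a watched thread stops posting heartbeats, then grabs that thread's current stack) can be sketched in a few lines of Python. This is a toy illustration only, not Firefox's actual C++ implementation; the threshold value and all names here are invented:

```python
import sys
import threading
import time
import traceback

class HangMonitor:
    """Toy BHR-style watchdog: the watched thread posts heartbeats,
    and the monitor captures its stack whenever it goes quiet."""

    def __init__(self, threshold=0.2):
        self.threshold = threshold      # seconds of silence that count as a hang
        self.last_beat = time.monotonic()
        self.hang_stacks = []           # captured stacks, akin to BHR telemetry

    def heartbeat(self):
        self.last_beat = time.monotonic()

    def watch(self, thread, duration):
        """Poll for `duration` seconds, grabbing `thread`'s stack on each hang."""
        end = time.monotonic() + duration
        while time.monotonic() < end:
            time.sleep(self.threshold / 4)
            if time.monotonic() - self.last_beat > self.threshold:
                frame = sys._current_frames().get(thread.ident)
                if frame is not None:
                    self.hang_stacks.append("".join(traceback.format_stack(frame)))
                self.heartbeat()        # reset so one hang isn't reported forever

def simulate():
    monitor = HangMonitor(threshold=0.2)

    def busy_work():
        monitor.heartbeat()
        time.sleep(1.0)                 # a long blocking job: no heartbeats
        monitor.heartbeat()

    worker = threading.Thread(target=busy_work)
    worker.start()
    monitor.watch(worker, duration=1.2)
    worker.join()
    return monitor.hang_stacks

stacks = simulate()
print(f"hangs detected: {len(stacks)}")
```

The real BHR works in-process with far lower overhead, but the shape is the same: long silences on the main thread become stack reports.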

Early first blank paint (led by Florian Quèze)

This is a start-up perceived performance project where early in the executable life-cycle, long before we’ve figured out how to lay out and paint the browser UI, we show a white “blank” area on screen that is overtaken with the UI once it’s ready. The idea here is to avoid having the user stare at nothing after clicking on the Firefox icon. We’ll also naturally be working to reduce the amount of time that the blank window appears for users, but our research shows users feel like the browser starts up faster when we show something rather than nothing. Even if that nothing is… well, mostly nothing. Florian recently landed a Telemetry probe for this feature, made it so that we can avoid initting the GPU process for the blank window, and is in the midst of fixing an issue where the blank window appears for too long. We’re hoping to have this ready to ship enabled on some platforms (ideally Linux and Windows) in Firefox 61.

Faster content process start-up time (led by Felipe Gomes)

Explorations are just beginning here. Felipe has been examining the scripts that are running for each tab on creation, and has a few ideas on how to both reduce their parsing overhead, as well as making them lazier to load. This project is mostly at the research stage. Expect concrete details on sub-projects and linked bugs soon!

Get ContentPrefService init off of the main thread (led by Doug Thayer)

This is so, so close to being done. The patch is written and reviewed, but landing it is being stymied by a hard-to-reproduce locally but super-easy-to-reproduce-in-automation shutdown leak during test runs. Unfortunately, the last 10% sometimes takes 90% of the effort, and this looks like one of those cases.

Blocklist improvements (led by Gijs Kruitbosch)

Gijs is continuing to make our blocklist asynchronous. Recently, he made the getAddonBlocklistEntry method of the API asynchronous, which is a big-deal for start-up, since it means we drop another place where the front-end has to wait for the blocklist to be ready! The getAddonBlocklistState method is next on the list.

As a fun exercise, you can follow the “true” value for the BLOCKLIST_SYNC_FILE_LOAD probe via this graph, and watch while Gijs buries it into the ground.

LRU cache for tab layers (led by Doug Thayer)

Doug Thayer is following up on some research done a few years ago which suggests that we can make ~95% of our users’ tab switches feel instantaneous by implementing an LRU cache for the painted layers. This is a classic space-time trade-off, as the cache will necessarily consume memory in order to hold onto the layers. Research is currently underway to see how we can continue to improve our tab switching performance without losing out on the memory advantage that we tend to have over other browsers.
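The LRU policy itself is simple enough to sketch in a few lines of Python. This is purely illustrative (Firefox's layer cache is native code, and the capacity and names here are made up), but it shows the trade-off: recently used tabs stay "warm" at the cost of held memory:

```python
from collections import OrderedDict

class LayerCache:
    """Minimal LRU cache: keeps the N most recently used entries,
    evicting the least recently used once the budget is exceeded."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self._entries = OrderedDict()

    def get(self, tab_id):
        if tab_id not in self._entries:
            return None                     # miss: the tab would need a repaint
        self._entries.move_to_end(tab_id)   # mark as most recently used
        return self._entries[tab_id]

    def put(self, tab_id, layers):
        self._entries[tab_id] = layers
        self._entries.move_to_end(tab_id)
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict the least recently used

cache = LayerCache(capacity=2)
cache.put("tab1", "painted-layers-1")
cache.put("tab2", "painted-layers-2")
cache.get("tab1")                      # touch tab1, so tab2 becomes the LRU
cache.put("tab3", "painted-layers-3")  # over budget: tab2 is evicted
print(cache.get("tab2"))  # → None (evicted, would need a repaint)
print(cache.get("tab1"))  # → painted-layers-1 (still warm)
```

The "95% of switches feel instantaneous" claim is exactly a statement about the hit rate of a cache like this for typical tab-switching patterns.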

Tab warming (led by Mike Conley)

Tab warming has been enabled on Nightly for a few weeks, and besides one rather serious glitch that we’ve fixed, we’ve been pretty pleased with the result! There’s one issue on macOS that’s been mulled over, but at this point I’m starting to lean towards getting this shipped on at least Windows for the Firefox 61 release.

Firefox’s Most Wanted: Performance Wins (led by YOU!)

Before we go into the grab-bag list of performance-related fixes – have you seen any patches landing that should positively impact Firefox’s performance? Let me know about it so I can include it in the list, and give appropriate shout-outs to all of the great work going on! That link again!

Grab-bag time

And now, without further ado, a list of performance work that took place in the tree:

(🌟 indicates a volunteer contributor)

Thanks to all of you! Keep it coming!


  1. Pro-tip: if you’re collecting data, consider figuring out how you want to visualize it first, and then make sure that visualization work actually happens. 

  2. since April 1st, these categories have gotten a lot finer-grain 

Planet MozillaAnnouncing cargo src (beta)

cargo src is a new tool for exploring your Rust code. It is a cargo plugin which runs locally and lets you navigate your project in a web browser. It has syntax highlighting, jump to definition, type on hover, semantic search, find uses, find impls, and more.

Today I'm announcing version 0.1, our first beta; you should try it out! (But be warned, it is definitely beta quality - it's pretty rough around the edges).

To install: cargo install cargo-src, to run: cargo src --open in your project directory. You will need a nightly Rust toolchain. See below for more directions.

overview

When cargo src starts up it will need to check and index your project. If it is a large project, that can take a while. You can see the status in the bottom left of the web page (this is currently not live, it'll update when you load a file). Build information from Cargo is displayed on the console where you ran cargo src. While indexing, you'll be able to see your code with syntax highlighting, but won't get any semantic information or be able to search.

Actionable identifiers are underlined. Click on a reference to jump to the definition. Click on a definition to search for all references to that definition. Right click on a link to see more options (such as 'find impls').

right-click

Hover over an identifier to see its type, documentation, and fields (or similar info).

type-on-hover

On the left-hand side there are tabs for searching and for browsing files and symbols (though, to be honest, the symbol browser is not working that well yet). Searching is for identifiers only and is case-sensitive. I hope to support text search and fuzzy search in the future.

ident-search

A big thank you to Nicole Anderson and Zahra Traboulsi for their work - they've helped tremendously with the frontend, making it look and function much better than my attempts. Thanks to everyone who has contributed by testing or coding!

Cargo src is powered by the same tech as the Rust Language Server, taking its data straight from the compiler. The backend is a Rust web server using Hyper. The frontend uses React and is written in Javascript with a little TypeScript. I think it's a fun project to work on because it's at the intersection of so many interesting technologies. It grew out of an old project - rustw - which was a web-based frontend for the Rust compiler.

Contributions are welcome! It's really useful to file issues if you encounter problems. If you want to get more involved, the code is on GitHub; come chat on Gitter.

Planet MozillaThings Gateway - Series 2, Episode 1

In my previous seven-part series of posts on the Things Gateway from Mozilla, I explored the various built-in options for connecting with existing home automation technologies.  While interesting, at that point the Things Gateway hadn't really broken any new ground.  The features could be found in other Home Automation projects, arguably in more mature environments.

With the release of version 0.4, the Things Gateway introduces something entirely new that the other products in the field don't yet do. Mozilla is thinking about the Internet of Things in a different way: a way that plays directly to the company's strengths. What if all these home automation devices (switches, plugs, bulbs) spoke a protocol that already exists and is cross-platform, cross-language, and fully open: the Web protocols?  Imagine if each plug or bulb responded to HTTP requests as if it were a Web app.  You could just use a browser to control them: no need for proprietary software stacks and phone apps.  This could be revolutionary.
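To make that idea concrete, here is a toy "bulb" built with nothing but the Python standard library. This is an illustration of the concept, not the actual Web Thing API: the /properties paths and the property names are invented for the sketch.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical state for a "smart bulb" -- property names are made up.
bulb = {"on": False, "level": 50}

class BulbHandler(BaseHTTPRequestHandler):
    """A toy device that answers plain HTTP, the way a Web thing would."""

    def _reply(self, payload):
        body = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        if self.path == "/properties":       # report the bulb's current state
            self._reply(bulb)
        else:
            self.send_error(404)

    def do_PUT(self):
        if self.path == "/properties/on":    # flip the bulb on or off
            length = int(self.headers["Content-Length"])
            bulb.update(json.loads(self.rfile.read(length)))
            self._reply(bulb)
        else:
            self.send_error(404)

    def log_message(self, *args):            # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), BulbHandler)   # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

base = f"http://127.0.0.1:{server.server_port}"
req = urllib.request.Request(base + "/properties/on",
                             data=json.dumps({"on": True}).encode(),
                             method="PUT")
urllib.request.urlopen(req).close()

with urllib.request.urlopen(base + "/properties") as resp:
    state = json.loads(resp.read())
print(state)                                  # the bulb now reports on=True

server.shutdown()
```

Anything that can issue an HTTP request (a browser, curl, another device) can now control the "bulb", which is exactly the property that makes the Web of Things approach so appealing.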

In this, the beginning of Series Two of my blog posts about the Things Gateway, I'm going to show how to use the Things Framework to create virtual Web things.

Right now, the Mozilla team on this project is focused intensely on making the Web Things Framework easy for hardware manufacturers to implement.  Targeting the Maker Movement, the team is pushing to make it easy for small Arduino and similar tiny computers to speak the Web of Things (WoT) protocol.  They've created libraries and modules that implement the Things Framework in various languages: JavaScript, Java and Python 3 implementations have been written, with C++ and Rust on the horizon.

I'm going to focus on the Python implementation of the Things Framework.  It is pip installable with this command on a Linux machine:


$ sudo pip3 install webthing

The webthing-python github repo provides some programming examples on how to use the module.

One of the first things a Python programmer will notice about this module is that it closely tracks the structure of a reference implementation, which is written in JavaScript. As such, it imposes a rather JavaScript-like style and structure onto the Python API. For some who can roll with the punches, this is not a problem; others, like myself, would rather have a more Pythonic API to deal with. So I've wrapped the webthing module with my own pywot (Python Web of Things) module.

pywot paves over some of the awkward syntax exposed in the Python webthing implementation and offers some services that further reduce the amount of code it takes to create a Web thing.

For example, I don't have one of those fancy home weather stations in my yard.  However, I can make a virtual weather station that fetches data from Weather Underground with the complete set of current conditions for my community.  Since I can access a RESTful API from Weather Underground in a Python program, I can wrap that API as a Web Thing.  The Thing Gateway then sees it as a device on the network and integrates it into the UI as a sensor for multiple values.

Weather Underground offers a software developer's license that allows up to 500 API calls per day at no cost.  All you have to do is sign up and they'll give you an API key.  Embed that key in a URL and you can fetch data from just about any weather station on their network.  The license agreement says that if you publicly post data from their network, you must provide attribution. However, this application of their data is totally private.  Of course, it could be argued that turning your porch light blue when Weather Underground says the temperature is cold may constitute a public display of WU data.
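As a sketch, the current-conditions URL can be built from the key and a location. The key and station below are placeholders, and you should check Weather Underground's API documentation for the exact endpoint you need:

```python
def wu_conditions_url(api_key: str, state: str, city: str) -> str:
    # Weather Underground's classic REST endpoint for current conditions;
    # the key is embedded directly in the URL path.
    return ("http://api.wunderground.com/api/"
            "{}/conditions/q/{}/{}.json".format(api_key, state, city))

# e.g. wu_conditions_url("YOUR_KEY", "OR", "Portland")
```

Fetching that URL returns a JSON document whose `current_observation` object carries the temperature, wind speed and other values used below.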

There is really very little programming that needs to be done to make a Web Thing this way.  Error handling and configuration boilerplate (omitted here) outweigh the actual code that defines my Virtual Weather Station:
# Imports needed by this snippet; error handling and the `config` object
# come from the boilerplate omitted above.
import json

import aiohttp
import async_timeout

from pywot import WoTThing


class WeatherStation(WoTThing):

    async def get_weather_data(self):
        async with aiohttp.ClientSession() as session:
            async with async_timeout.timeout(config.seconds_for_timeout):
                async with session.get(config.target_url) as response:
                    self.weather_data = json.loads(await response.text())
        self.temperature = self.weather_data['current_observation']['temp_f']
        self.wind_speed = self.weather_data['current_observation']['wind_mph']

    temperature = WoTThing.wot_property(
        'temperature',
        initial_value=0.0,
        description='the temperature in ℉',
        value_source_fn=get_weather_data,
        metadata={
            'units': '℉'
        }
    )
    wind_speed = WoTThing.wot_property(
        'wind speed',
        initial_value=0.0,
        description='the wind speed in MPH',
        value_source_fn=get_weather_data,
        metadata={
            'units': 'MPH'
        }
    )

The ability to change the status of devices in my home based on the weather is very useful.  I could turn off the irrigation system if there's been enough rain. I could have a light give warning if frost is going to endanger my garden. I could have a light tell me that it is windy enough to go fly a kite.



Is the wind calm or is it perfect kite flying weather?

If you want to jump right in, you can see the full code in my pywot GitHub repo.  The demo directory has several examples.  In my next posting, I'm going to explain the virtual weather station in detail.

A few words about security: as I said before, the Things Gateway and Things Framework are experimental software.  They are not yet hardened enough for more than experimental use.  Under no circumstances should a Web Thing be exposed directly to the Internet - they are for trusted local network use only.  Standards for authentication and authorization have not yet been engineered into the product.  They are in the works, hopefully arriving by the next iteration, version 0.5.

From Ben Francis of the Mozilla ET IoT team: ...there is currently no authentication and while HTTPS support is provided, it can only really be used with self-signed certificates on a local network. We're not satisfied with that level of security and are exploring ways to provide authentication (in discussions with the W3C WoT Interest Group) and a solution for HTTPS on local networks (via the HTTPS in Local Network Community Group https://www.w3.org/community/httpslocal/). This means that for the time being we would strongly recommend against exposing native web things directly to the Internet using the direct integration pattern unless some form of authentication is used.


Planet MozillaThe ticking time bomb: Fake ad blockers in Chrome Web Store

People searching for a Google Chrome ad blocking extension have to choose from dozens of similarly named extensions. Only a few of these are legitimate; most are forks of open source ad blockers trying to attract users with misleading extension names and descriptions. What are these up to? Thanks to Andrey Meshkov we now know what many people already suspected: these extensions are malicious. He found obfuscated code carefully hidden within a manipulated jQuery library that accepted commands from a remote server.

As it happens, I checked out some fake ad blockers as recently as February. Quite remarkably, all of them turned up clean: the differences from their respective open source counterparts were all minor, mostly limited to renaming and added Google Analytics tracking. One of them was the uBlock Plus extension, which now shows up on Andrey’s list of malicious extensions and has been taken down by Google. So at some point in the past two months this extension was updated to add malicious code.

And that appears to be the point here: somebody creates these extensions and carefully measures user counts. Once the user count gets high enough, the extension gets an “update” that attempts to monetize the user base by spying on them. Stealing browsing history is the malicious functionality that Andrey could see; additional code could be pushed out by the server at will. That’s what I suspected all along, but this is the first time there is actual proof.

Chrome Web Store has traditionally been very permissive as far as the uploaded content goes. Even taking down extensions infringing trademarks took forever; extensions with misleading names and descriptions, on the other hand, were always considered “fine.” You have to consider that updating extensions on Chrome Web Store is a fully automated process; there is no human review as with Mozilla or Opera. So nobody stops you from turning an originally harmless extension bad.

On the bright side, I doubt that Andrey’s assumption of 20 million compromised Chrome users is correct. There are strong indicators that the user numbers of these fake ad blockers have been inflated by bots, simply because the user count is a contributing factor to the search ranking. I assume that this is also the main reason behind the Google Analytics tracking: whoever is behind these extensions, they know exactly that their Chrome Web Store user numbers are bogus.

For reference, the real ad blocking extensions are:

Planet MozillaFirefox Data engineering newsletter Q1 / 2018

As the Firefox data engineering teams we provide core tools for using data to other teams. This spans from collection through Firefox Telemetry, storage & processing in our Data Platform to making data available in Data Tools.

To make new developments more visible we aim to publish a quarterly newsletter. As we skipped one, some important items from Q4 are also highlighted this time.

This year our teams are putting their main focus on:

  • Making experimentation easy & powerful.
  • Providing a low-latency view into product release health.
  • Making it easy to work with events end-to-end.
  • Addressing important user issues with our tools.

Usage improvements

Last year we started to investigate how our various tools are used by people working on Firefox in different roles. From that we started addressing some of the main issues users have.

Most centrally, the Telemetry portal is now the main entry point to our tools, documentation and other resources. When working with Firefox data you will find all the important tools linked from there.

We added the probe dictionary to make it easy to find what data we have about Firefox usage.

For STMO, our Redash instance, we deployed a major UI refresh from the upstream project.

There is new documentation on prototyping and optimizing STMO queries.

Our data documentation saw many other updates, from cookbooks on how to see your own pings and sending new pings to adding more datasets. We also added documentation on how our data pipeline works.

Enabling experimentation

For experimentation, we have focused on improving tooling. Test Tube will soon be our main experiment dashboard, replacing the experiments viewer. It displays the results of multivariant experiments that are being conducted within Firefox.

We now have St. Moab as a toolkit for automatically generating experiment dashboards.

Working with event data

To make working with events easier, we improved multiple stages in the pipeline. Our documentation has an overview of the data flow.

On the Firefox side, events can now be recorded through the events API, from add-ons, and from whitelisted Firefox content. From Firefox 61, all recorded events are automatically counted into scalars, to easily get summary statistics.
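Conceptually - this is only an illustration, not Firefox's actual implementation - counting recorded events into scalar-style summary statistics amounts to tallying each distinct category/method/object triple:

```python
from collections import Counter

def summarize_events(events):
    # events: iterable of (category, method, object) triples,
    # loosely mirroring the shape of Telemetry events.
    return Counter("{}#{}#{}".format(c, m, o) for c, m, o in events)

counts = summarize_events([
    ("navigation", "search", "urlbar"),
    ("navigation", "search", "urlbar"),
    ("pip", "create", "player"),
])
# counts["navigation#search#urlbar"] == 2
```

The scalar view throws away per-event timestamps and payloads, which is exactly what makes it cheap to collect and easy to summarize.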

Event data is available for analysis in Redash in different datasets. We can now also connect more event data to Amplitude, a product analytics tool. A connection for some mobile events to Amplitude is live, for Firefox Desktop events it will be available soon.

Low-latency release health data

To enable low-latency views into release health data, we are working on improving Mission Control, which will soon replace arewestableyet.com.

It has new features that enable comparing quality measures like crashes release-over-release across channels.

Firefox Telemetry tools

For Firefox instrumentation we expanded on the event recording APIs. To make build turnaround times faster, we now support adding scalars in artifact builds and will soon extend this to events.

Following the recent Firefox data preferences changes, we adapted Telemetry to only differentiate between “release” and “prerelease” data.

This also impacted the measurement dashboard and telemetry.js users as the current approach to publishing this data from the release channel does not work anymore.

The measurement dashboard got some smaller usability improvements thanks to a group of contributors. We also prototyped a use counter dashboard for easier analysis.

Datasets & analysis tools

To power LetsEncrypt stats, we publish a public Firefox SSL usage dataset.

The following datasets are newly available in Redash or through Spark:

  • client_counts_daily — This is useful for estimating user counts over a few dimensions and a long history with daily precision.
  • first_shutdown_summary — A summary of the first main ping of a client’s lifetime. This accounts for clients that do not otherwise appear in main_summary.
  • churn — A pre-aggregated dataset for calculating the 7-day churn for Firefox Desktop.
  • retention — A pre-aggregated dataset for calculating retention for Firefox Desktop. The primary use-case is 1-day retention.

For analysis tooling we now have Databricks available. This offers instant-on notebooks with no more waiting for clusters to spin up, and supports Scala, SQL and R. If you’re interested, sign up to the databricks-discuss mailing list.

We also got the probe info service into production, which scrapes the probe data in Firefox code and makes a history of it available to consumers. This is what powers the probe dictionary, but can also be used to power other data tooling.

Getting in touch

Please reach out to us with any questions or concerns.

Cheers from

  • The data engineering team (Katie Parlante), consisting of
  • The Firefox Telemetry team (Georg Fritzsche)
  • The Data Platform team (Mark Reid)
  • The Data Tools team (Rob Miller)

Firefox Data engineering newsletter Q1 / 2018 was originally published in Georg Fritzsche on Medium, where people are continuing the conversation by highlighting and responding to this story.

Planet WebKitRelease Notes for Safari Technology Preview 54

Safari Technology Preview Release 54 is now available for download for macOS Sierra and macOS High Sierra. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 230029-230521.

Clipboard API

  • Fixed copying a list from Microsoft Word to TinyMCE when mso-list is the first property (r230120)
  • Prioritized file promises over filenames during drag and drop (r230221)

Beacon API

  • Fixed Beacon redirect responses to be CORS validated (r230495)

Web API

  • Implemented createImageBitmap(Blob) (r230350)

WebRTC

  • Added a special software encoder mode when a compression session is not using a hardware encoder and VCP is not active (r230451)
  • Added experimental support for MDNS ICE candidates in WebRTC data channel peer-to-peer connections (r230290, r230307)

Web Inspector

  • Fixed the errors glyph to fully change to blue when active (r230372)
  • Tinted all pixels drawn by a shader program when hovering over a tree element in the Canvas Tab (r230127)

Planet MozillaThe Joy of Coding - Episode 136

mconley livehacks on real Firefox bugs while thinking aloud.

Planet MozillaWorking for Good: Metalwood Salvage of Portland

The web should be open to everyone, a place for unbridled innovation, education, and creative expression. That’s why Firefox fights for Net Neutrality, promotes online privacy rights, and supports open-source … Read more

The post Working for Good: Metalwood Salvage of Portland appeared first on The Firefox Frontier.

Planet MozillaFriend of Add-ons: Viswaprasath Ks

Please meet our newest Friend of Add-ons, Viswaprasath Ks! Viswa began contributing to Mozilla in January 2013, when he met regional community members while participating in a Firefox OS hackathon in Bangalore, India. Since then, he has been a member of the Firefox Student Ambassador Board, a Sr. Firefox OS app reviewer, and a Mozilla Rep and Tech Speaker.

In early 2017, Viswa began developing extensions for Firefox using the WebExtensions API. From the start, Viswa wanted to invite his community to learn this framework and create extensions with him. At community events, he would speak about extension development and help participants build their first extensions. These presentations served as a starting point for creating the Activate campaign “Build Your Own Extension.” Viswa quickly became a leader in developing the campaign and testing iterations with a variety of different audiences. In late 2017, he collaborated with community members Santosh Viswanatham and Trishul Goel to re-launch the campaign with a new event flow and more learning resources for new developers.

Viswa continues to give talks about extension development and help new developers become confident working with WebExtensions APIs. He is currently creating a series of videos about the WebExtensions API to be released this summer. When he isn’t speaking about extensions, he mentors students in the Tamilnadu region in Rust and Quality Assurance.

These experiences have translated into skills Viswa uses in everyday life. “I learned about code review when I became a Sr. Firefox OS app reviewer,” he says. “This skill helps me a lot at my office. I am able to easily point out errors in the product I am working on. The second important thing I learned by contributing to Mozilla is how to build and work with a diverse team. The Mozilla community has a lot of amazing people all around the world, and there are unique things to learn from each and every one.”

In his free time, Viswa watches tech-related talks on YouTube, plays chess online, and explores new Mozilla-related projects like Lockbox.

He’s also quick to add, “I feel each and every one who cares about the internet should become Mozilla contributors so the journey will be awesome in future.”

If that describes you and you would like get more involved with the add-ons community, please take a look at our wiki for some opportunities to contribute to the project.

Thank you so much for all of your contributions, Viswa! We’re proud to name you Friend of Add-ons.

 

The post Friend of Add-ons: Viswaprasath Ks appeared first on Mozilla Add-ons Blog.

Planet MozillaVirtual Reality at the Intersection of Art & Technology

 

“If someone can imagine a world…they can create an experience.”
– Reggie Watts

 

This is the second video in our four part series around creators, virtual reality, and the open web. As we laid out in the opening post of this series, virtual reality is more than a technology, and it is far more than mere eye-candy. VR is an immensely powerful tool that is honed and developed every day. In the hands of a creator, that tool has the potential to transport audiences into new worlds and provide new perspectives.

It’s one thing to read about the crisis in Sudan, but being transported inside that crisis is deeply affecting in a way we haven’t seen before.

The hard truth is that all the technological capabilities in the world won’t matter if creators don’t have the proper tools to shape that technology into experiences. To make a true impact, technology and art can’t live parallel lives. They must intersect. Bringing together those worlds was the thrust for our VR the People panel at the Sundance Festival.

“You’re gonna end up finding someone who’s a 16-year-old in the basement with an open-source VR headset and some crappy computer and they download free software so they can build [an experience].”
– Brooks Brown, Global Director of Virtual Reality, Starbreeze Studios

 

That quote above is exactly why Mozilla spent years working to build WebVR, and why we held our panel at Sundance. It’s why we are writing these posts. We’re hoping they reach someone out there – anyone, anywhere – who has a world in their head and a story to tell. We’re hoping they pick up the tools our engineers built and use them in ways that inspire and force those same engineers to build new tools that keep pace with the evolving creative force.

So go ahead, check out our resources and tools at https://mixedreality.mozilla.org/. We promise you won’t be creating alone. You bring the art, we’ll bring the technology, and together we can make something special.

Read more on VR the People

Planet MozillaHello wasm-pack!

2 panels, one showing ferris the crab with assorted rust and wasm packages and one with the npm wombat with assorted js wasm and css/html packages. the crab is throwing a package over towards the wombat

As Lin Clark emphasizes in her article about Rust and WebAssembly: the goal of WebAssembly is not to replace JavaScript, but to be an awesome tool to use with JavaScript. Lots of amazing work has been done to simplify crossing the language boundary between JavaScript and WebAssembly, and you can read all about that in Alex Crichton’s post on wasm-bindgen. This post focuses on a different type of JavaScript/Rust integration: package ecosystem and developer workflows.

Both Rust and JavaScript have vibrant package ecosystems. Rust has cargo and crates.io. JavaScript has several CLI tools, including the npm CLI, that interface with the npm registry. In order for WebAssembly to be successful, we need these two systems to work well together, specifically:

  • Rust developers should be able to produce WebAssembly packages for use in JavaScript without requiring a Node.js development environment
  • JavaScript developers should be able to use WebAssembly without requiring a Rust development environment

✨📦 Enter: wasm-pack.

wasm-pack is a tool for assembling and packaging Rust crates that target WebAssembly. These packages can be published to the npm registry and used alongside other packages. This means you can use them side-by-side with JS and other packages, and in many kinds of applications, be it a Node.js server-side app, a client-side application bundled by Webpack, or any other sort of application that uses npm dependencies. You can find wasm-pack on crates.io and GitHub.

Development of this tooling has just begun and we’re excited to get more developers from both the Rust and JavaScript worlds involved. Both the JavaScript and Rust ecosystems are focused on developer experience. We know first hand that the key to a productive and happy ecosystem is good tools that automate the boring tasks and get out of the developer’s way. In this article, I’ll talk about where we are, where we’re headed, how to get started using the tooling now, and how to get involved in shaping its future.

💁 What it does today

ferris stands between 2 open packages, one labelled rust, one labelled npm. there is a flow from the Rust package to the npm package with 4 stages. first stage: a lib.rs and cargo.toml file, then a .wasm file, then a .wasm and a .js file, then a .wasm, .js, package.json and a readme

Today, wasm-pack walks you through four basic steps to prepare your Rust code to be published as a WebAssembly package to the npm registry:

1. Compile to WebAssembly

wasm-pack will add the appropriate WebAssembly compilation target using rustup and will compile your Rust to WebAssembly in release mode.

To do this, wasm-pack will:

  • Add the wasm32-unknown-unknown compilation target, if needed
  • Compile your Rust project for release using the wasm target

2. Run wasm-bindgen

wasm-pack wraps the CLI portion of the wasm-bindgen tool and runs it for you! This does things like wrapping your WebAssembly module in JS wrappers which make it easier for people to interact with your module. wasm-bindgen supports both ES6 modules and CommonJS and you can use wasm-pack to produce either type of package!

To do this, wasm-pack will:

  • If needed, install and/or update wasm-bindgen
  • Run wasm-bindgen, generating a new .wasm file and a .js file
  • Move the generated files to a new pkg directory

3. Generate package.json

wasm-pack reads your Cargo.toml and generates the package.json file necessary to publish your package to the npm registry.

To do this, wasm-pack will:

  • Copy over your project name and description
  • Link to your Rust project’s repository
  • List the generated JavaScript files in the files key. This ensures that those files, and only those files, are included in your npm package. This is particularly important for ensuring good performance if you intend to use this package, or a bundle including this package, in the browser!

4. Documentation

wasm-pack will copy your Rust project’s README.md to the npm package it produces. We’ve got a lot of great ideas about extending this further to support the Rust ecosystem’s documentation feature, rustdoc - more on this in the next section!

🔮 Future Plans

Integrate with rustdoc

The crates.io team surveyed developers and learned that good documentation was the number one feature developers looked for when evaluating whether to use a crate. Contributor Yoshua Wuyts introduced the brilliant idea of generating further README.md content by integrating wasm-pack with the Rust API documentation tool, rustdoc. The Rust-wasm team is committed to making Rust a first-class way to write WebAssembly. Offering documentation for Rust-generated WebAssembly packages that’s both easy to write and easy to discover aligns neatly with our goals. Read more about the team’s thoughts in this issue and join in the discussion!

Manage and Optimize your Rust and JS dependency graphs

The next large piece of development work on wasm-pack will focus on using custom segments in compiled WebAssembly to declare dependencies on local Javascript files or other npm packages.

The preliminary work for this feature has already landed in wasm-bindgen, so the next step will be integrating it into wasm-pack. The naive integration won’t be too tricky, but we’re excited to explore the opportunities we have to streamline and optimize Rust dependency trees that contain npm dependencies on several levels! This work will be similar to the optimizations that bundlers like webpack deliver, but on the level of Rust dependencies.

There are a lot of questions we still have to answer, and there’s going to be a lot of neat engineering work to do. In a few weeks there will be a full post on this topic, so keep an eye out!

ferris is sitting in a package on a scale, in the distance several interconnected and dependent packages are linked with lines flowing into the package. the scale says "heavy"

Grow Node.js toolchain in Rust

The largest and most ambitious goal of this project is to rewrite the required npm login, npm pack and npm publish steps in Rust so that the dependency on a Node.js development environment becomes optional for those who don’t currently use Node.js in their workflow. As we’ve said before, we want to ensure that both WebAssembly package producers and users can remain in their familiar workflows. Currently, that is true for JavaScript developers: they do not need to have a Rust development environment or any knowledge of Rust to get started using a Rust-generated WebAssembly module that’s been published with wasm-pack. However, Rust developers still need to install Node.js and npm to publish with wasm-pack. We’re excited to change that by writing an npm package publisher in Rust - and who knows, perhaps we can eventually integrate some Rust elements (perhaps compiled to WebAssembly!) into the npm client!

Further collaboration with npm and bundlers

We’re always communicating with the npm CLI team members Kat Marchan and Rebecca Turner, as well as the folks who work on webpack and Parcel, and we’re excited to keep working with them to make it easy for developers to release and use WebAssembly code!

🛠 Start using it today!

wasm-pack is currently a command line tool distributed via Cargo. To install it, setup a Rust development environment, and then run:

cargo install wasm-pack

If you aren’t sure where to start, we have a tutorial for you! This tutorial, by Michael Gattozzi and the Rust-wasm working group, walks you through:

  • writing a small Rust library
  • compiling it to WebAssembly, packaging, and publishing with wasm-pack
  • bundling with webpack to produce a small website

a gif of the wasm pack CLI tool. first we ls a directory with a rust crate, then we run wasm pack. it completes in one minute, then we ls the target directory to see that a wasm binary was compiled, then we ls the pkg directory to see that an npm package was created

👯‍♀️Contribute

The key to all excellent developer tooling is a short feedback cycle between developers of the tool and developers using the tool in their day to day workflows. In order to be successful with wasm-pack, and all of our WebAssembly developer tooling, we need developers of all skill levels and backgrounds to get involved!

Take a look at our Contributor Guidelines and our Issue Tracker (we regularly label things as “good first issue” and provide mentors and mentoring instructions!)- we’re excited to work with you!

Planet MozillaNo-Judgment Digital Definitions: App vs Web App

Just when you think you’ve got a handle on this web stuff, things change. The latest mixup? Apps vs Web Apps. An app should be an app no matter what, … Read more

The post No-Judgment Digital Definitions: App vs Web App appeared first on The Firefox Frontier.

Planet Mozilla2018 Global Sprint Orientation Webinar 3 - April 17th, 2018

Learn about working open at the Global Sprint and hear stories and tips from past participants.

Planet MozillaHolochain Meetup

Holochain Meetup, 4/17/2018

Planet MozillaDecision in Oracle v. Google Fair Use Case Could Hinder Innovation in Software Development

The technology industry was dealt a major setback when the Federal Circuit recently decided in Oracle v. Google that Google’s use of Java “declaring code” was not a fair use. The copyright doctrine of Fair Use affects a developer’s ability to learn from and improve on the work of others, which is a crucial part of software development. Because of this ruling, copyright law is now at odds with how software is developed.*

This is the second time in this eight year case that the Federal Circuit’s ruling has diverged from how software is written. In 2014, the court decided that declaring code can be copyrighted, a ruling with which we disagreed. Last year we filed another amicus brief in this case, advocating that Google’s implementation of the APIs should be considered a fair use. In this recent decision, the court found that copying the Java declaring code was not a protected fair use of that code.

We believe that open source software is vital to security, privacy, and open access to the internet. We also believe that Fair Use is critical to developing better, more secure, more private, and more open software because it allows developers to learn from each other and improve on existing work. Even the Mozilla Public License explicitly acknowledges that it “is not intended to limit any rights” under applicable copyright doctrines such as fair use.

The Federal Circuit’s decision is a big step in the wrong direction. We hope Google appeals to the Supreme Court and that the Supreme Court sets us back on a better course.

 

* When Google released its Android operating system, it incorporated some code from Sun Microsystem’s Java APIs into the software. Google copied code in those APIs that merely names functions and performs other general housekeeping functions (called “declaring code”) but wrote all the substantive code (called “implementing code”) from scratch. Software developers generally use declaring code to define the names, format, and organization ideas for certain functions, and implementing code to do the actual work (telling the program how to perform the functions). Developers specifically rely on “declaring code” to enable their own programs to interact with other software, resulting in code that is efficient and easy for others to use.

The post Decision in Oracle v. Google Fair Use Case Could Hinder Innovation in Software Development appeared first on Open Policy & Advocacy.

Footnotes

Updated: .  Michael(tm) Smith <mike@w3.org>