This Week In Rust: This Week in Rust 257 |
Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.
This week's crate is static-assertions, a crate that does what it says on the tin – it allows you to write static assertions. Thanks to llogiq for the suggestion!
Submit your suggestions and votes for next week!
Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
If you are a Rust project owner and are looking for contributors, please submit tasks here.
115 pull requests were merged in the last week. Highlights include updating rustc_on_unimplemented and rewording Iterator E0277 errors, the unused_patterns lint, and a copysign function for f32 and f64.
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Tweet us at @ThisWeekInRust to get your job offers listed here!
Panic is “pulling over to the side of the road” whereas crash is “running into a telephone pole”.
– /u/zzzzYUPYUPphlumph on /r/rust
Thanks to KillTheMule for the suggestion!
Please submit your quotes for next week!
This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.
https://this-week-in-rust.org/blog/2018/10/23/this-week-in-rust-257/
|
Cameron Kaiser: TenFourFox FPR10 available |
The changes for FPR11 (December) and FPR12 will be smaller in scope, mostly because of the holidays and my parallel work on the POWER9 JIT for Firefox on the Talos II. For the next couple of FPRs I'm planning to do more ES6 work (mostly Symbol and whatever else I can shoehorn in), to enable unique data URI origins, and possibly to get requestIdleCallback into a releasable state. Despite the slower pace, however, we will still be tracking the Firefox release schedule as usual.
http://tenfourfox.blogspot.com/2018/10/tenfourfox-fpr10-available.html
|
The Servo Blog: RGSoC wrap-up - Supporting Responsive Images in Servo |
|
Mozilla Future Releases Blog: Testing new ways to keep you safe online |
Mozilla has long played an important role in the online world, and we’re proud of the impact we’ve had. But we want to do even more, and that means exploring ways to keep you safe beyond the browser’s reach. Across numerous studies we’ve consistently heard from our users that they want Firefox to protect their privacy on public networks like cafes and airports. With that in mind, over the next few months we will be running an experiment in which we’ll offer a virtual private network (VPN) service to a small group of Firefox users.
This experiment is also important to Mozilla’s future. We believe that an innovative, vibrant, and sustainable Mozilla is critical to the future of the open Internet, and we plan to be here over the long haul. To do that with confidence, we also need to have diverse sources of revenue. For some time now, Mozilla has largely been funded by our search partnerships. With this VPN experiment, which kicks off Wednesday, October 24th, we’re starting the process of exploring new, additional sources of revenue that align with our mission.
What is a VPN?
A VPN is an online service and a piece of software that work together to secure your internet connection against monitoring and eavesdropping. By encrypting all your internet traffic and routing it through a secure server, a VPN prevents your ISP (internet service provider), school, or government from seeing which websites you visit and tracing your online activity back to your IP address. A VPN can also offer valuable peace of mind when you’re using an unsecured public Wi-Fi network, like the one at the airport or your local coffee shop.
How will it work?
A small, random group of US-based Firefox users will be presented with an offer to purchase a monthly subscription to a VPN service that’s been vetted and approved by Mozilla. After signing up for a subscription (billed securely using payment services Stripe and Recurly) they will be able to download and install the VPN software. Windows, macOS, Linux, iOS, and Android are all supported. The VPN can be easily turned on or off as needed, and the subscription can be cancelled at any time.
Partnership with ProtonVPN
Using a VPN service means placing a great deal of trust in its provider because you depend upon both the safety of its technology and its commitment to protecting your privacy. There are many VPN vendors out there, but not all of them are created equal. We knew that we could only offer our users a VPN product if it met or exceeded our most rigorous standards. We also knew that the practices, policies, and character of these vendors would be just as important in our decision.
We therefore set out to conduct a thorough evaluation of a long list of market-leading VPN services. Our team looked closely at a wide variety of factors, ranging from the design and implementation of each VPN service and its accompanying software, to the security of the vendor’s own network and internal systems. We examined each vendor’s privacy and data retention policies to ensure they logged as little user data as possible. And we considered numerous other factors, including local privacy laws, company track record, transparency, and quality of support.
As a result of this evaluation we’ve selected ProtonVPN for this experiment. ProtonVPN offers a secure, reliable, and easy-to-use VPN service and is operated by the makers of ProtonMail, a respected, privacy-oriented email service. Based in Switzerland, ProtonVPN has a strict privacy policy and does not log any data about your usage of their service. As a company they have a track record of fighting for online privacy and they share our dedication to internet safety and security.
Your purchase supports Mozilla’s work
ProtonVPN will be providing the service in this experiment. Mozilla will be the party collecting payment from Firefox users who decide to subscribe. A portion of these proceeds will be shared with ProtonVPN, to offset their costs in operating the service, and a portion will go to Mozilla. In this way, subscribers will be directly supporting Mozilla while benefiting from one of the very best VPN services on the market today.
We’re looking forward to this experiment and we are excited to bring the protection of a VPN to more people. We’ll be watching both user and community feedback closely as this experiment runs.
https://blog.mozilla.org/futurereleases/2018/10/22/testing-new-ways-to-keep-you-safe-online/
|
Hacks.Mozilla.Org: WebAssembly’s post-MVP future: A cartoon skill tree |
People have a misconception about WebAssembly. They think that the WebAssembly that landed in browsers back in 2017—which we called the minimum viable product (or MVP) of WebAssembly—is the final version of WebAssembly.
I can understand where that misconception comes from. The WebAssembly community group is really committed to backwards compatibility. This means that the WebAssembly that you create today will continue working on browsers into the future.
But that doesn’t mean that WebAssembly is feature complete. In fact, that’s far from the case. There are many features that are coming to WebAssembly which will fundamentally alter what you can do with WebAssembly.
I think of these future features kind of like the skill tree in a videogame. We’ve fully filled in the top few of these skills, but there is still this whole skill tree below that we need to fill in to unlock all of the applications.
So let’s look at what’s been filled in already, and then we can see what’s yet to come.
The very beginning of WebAssembly’s story starts with Emscripten, which made it possible to run C++ code on the web by transpiling it to JavaScript. This made it possible to bring large existing C++ code bases, for things like games and desktop applications, to the web.
The JS it automatically generated was still significantly slower than the comparable native code, though. But Mozilla engineers found a type system hiding inside the generated JavaScript, and figured out how to make this JavaScript run really fast. This subset of JavaScript was named asm.js.
Once the other browser vendors saw how fast asm.js was, they started adding the same optimizations to their engines, too.
But that wasn’t the end of the story. It was just the beginning. There were still things that engines could do to make this faster.
But they couldn’t do it in JavaScript itself. Instead, they needed a new language—one that was designed specifically to be compiled to. And that was WebAssembly.
So what skills were needed for the first version of WebAssembly? What did we need to get to a minimum viable product that could actually run C and C++ efficiently on the web?
The folks working on WebAssembly knew they didn’t want to just support C and C++. They wanted many different languages to be able to compile to WebAssembly. So they needed a language-agnostic compile target.
They needed something like the assembly language that things like desktop applications are compiled to—like x86. But this assembly language wouldn’t be for an actual, physical machine. It would be for a conceptual machine.
That compiler target had to be designed so that it could run very fast. Otherwise, WebAssembly applications running on the web wouldn’t keep up with users’ expectations for smooth interactions and game play.
In addition to execution time, load time needed to be fast, too. Users have certain expectations about how quickly something will load. For desktop applications, that expectation is that they will load quickly because the application is already installed on your computer. For web apps, the expectation is also that load times will be fast, because web apps usually don’t have to load nearly as much code as desktop apps.
When you combine these two things, though, it gets tricky. Desktop applications are usually pretty large code bases. So if they are on the web, there’s a lot to download and compile when the user first goes to the URL.
To meet these expectations, we needed our compiler target to be compact. That way, it could go over the web quickly.
These languages also needed to be able to use memory differently from how JavaScript uses memory. They needed to be able to directly manage their memory—to say which bytes go together.
This is because languages like C and C++ have a low-level feature called pointers. You can have a variable that doesn’t have a value in it, but instead has the memory address of the value. So if you’re going to support pointers, the program needs to be able to write and read from particular addresses.
But you can’t have a program you downloaded from the web just accessing bytes in memory willy-nilly, using whatever addresses they want. So in order to create a secure way of giving access to memory, like a native program is used to, we had to create something that could give access to a very specific part of memory and nothing else.
To do this, WebAssembly uses a linear memory model. This is implemented using TypedArrays. It’s basically just like a JavaScript array, except this array only contains bytes of memory. When you access data in it, you just use array indexes, which you can treat as though they were memory addresses. This means you can pretend this array is C++ memory.
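To make that concrete, here is a minimal TypeScript sketch (not from the article) of the JavaScript side of linear memory: a WebAssembly.Memory viewed through a typed array, with indexes standing in for addresses.

```ts
// One WebAssembly page is 64 KiB; this creates a single page of linear memory.
const memory = new WebAssembly.Memory({ initial: 1 });

// A typed-array view over the exact same bytes a wasm module would see.
const bytes = new Uint8Array(memory.buffer);

// "Addresses" are just indexes into this array.
const addr = 16;
bytes[addr] = 42;         // write one byte at address 16
console.log(bytes[addr]); // read it back: 42
```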
So with all of these skills in place, people could run desktop applications and games in the browser as if they were running natively on their own computers.
And that was pretty much the skill set that WebAssembly had when it was released as an MVP. It was truly an MVP—a minimum viable product.
This allowed certain kinds of applications to work, but there were still a whole host of others to unlock.
The next achievement to unlock is heavier weight desktop applications.
Can you imagine if something like Photoshop were running in your browser? If you could instantaneously load it on any device like you do with Gmail?
We’ve already started seeing things like this. For example, Autodesk’s AutoCAD team has made their CAD software available in the browser. And Adobe has made Lightroom available through the browser using WebAssembly.
But there are still a few features that we need to put in place to make sure that all of these applications—even the heaviest of heavy weight—can run well in the browser.
First, we need support for multithreading. Modern-day computers have multiple cores. These are basically multiple brains that can all be working at the same time on your problem. That can make things go much faster, but to make use of these cores, you need support for threading.
Alongside threading, there’s another technique that utilizes modern hardware, and which enables you to process things in parallel.
That is SIMD: single instruction, multiple data. With SIMD, it’s possible to take a chunk of memory and split it up across different execution units, which are kind of like cores. Then you have the same bit of code—the same instruction—run across all of those execution units, but they each apply that instruction to their own bit of the data.
Another hardware capability that WebAssembly needs to take full advantage of is 64-bit addressing.
Memory addresses are just numbers, so if your memory addresses are only 32 bits long, you can only have so many memory addresses—enough for 4 gigabytes of linear memory.
But with 64-bit addressing, you have 16 exabytes. Of course, you don’t have 16 exabytes of actual memory in your computer. So the maximum is subject to however much memory the system can actually give you. But this will take the artificial limitation on address space out of WebAssembly.
For these applications, we don’t just need them to run fast. We need load times to be even faster than they already are. There are a few skills that we need specifically to improve load times.
One big step is to do streaming compilation—to compile a WebAssembly file while it’s still being downloaded. WebAssembly was designed specifically to enable easy streaming compilation. In Firefox, we actually compile it so fast—faster than it is coming in over the network—that it’s pretty much done compiling by the time you download the file. And other browsers are adding streaming, too.
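From a developer’s point of view, opting into streaming compilation just means handing the in-flight response straight to the engine. A minimal sketch follows; the file name, the import object, and the exported run function are placeholders, not anything from the article.

```ts
// Compile and instantiate while the bytes are still arriving over the network.
async function loadStreaming(): Promise<void> {
  const imports = { env: { log: (x: number) => console.log(x) } };
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("module.wasm"), // placeholder URL
    imports
  );
  // Assumes the module exports a function named `run`.
  (instance.exports.run as () => void)();
}
```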
Another thing that helps is having a tiered compiler.
For us in Firefox, that means having two compilers. The first one—the baseline compiler—kicks in as soon as the file starts downloading. It compiles the code really quickly so that it starts up quickly.
The code it generates is fast, but not 100% as fast as it could be. To get that extra bit of performance, we run another compiler—the optimizing compiler—on several threads in the background. This one takes longer to compile, but generates extremely fast code. Once it’s done, we swap out the baseline version with the fully optimized version.
This way, we get quick start up times with the baseline compiler, and fast execution times with the optimizing compiler.
In addition, we’re working on a new optimizing compiler called Cranelift. Cranelift is designed to compile code quickly, in parallel, at a function-by-function level. At the same time, the code it generates gets even better performance than our current optimizing compiler.
Cranelift is in the development version of Firefox right now, but disabled by default. Once we enable it, we’ll get to the fully optimized code even quicker, and that code will run even faster.
But there’s an even better trick we can use to make it so we don’t have to compile at all most of the time…
With WebAssembly, if you load the same code on two page loads, it will compile to the same machine code. It doesn’t need to change based on what data is flowing through it, like the JS JIT compiler needs to.
This means that we can store the compiled code in the HTTP cache. Then when the page is loading and goes to fetch the .wasm file, it will instead just pull out the precompiled machine code from the cache. This skips compiling completely for any page that you’ve already visited that’s in cache.
Many discussions are currently percolating around other ways to improve this, skipping even more work, so stay tuned for other load-time improvements.
Where are we with supporting these heavyweight applications right now?
Even though this is all still in progress, you already see some of these heavyweight applications coming out today, because WebAssembly already gives these apps the performance that they need.
But once these features are all in place, that’s going to be another achievement unlocked, and more of these heavyweight applications will be able to come to the browser.
But WebAssembly isn’t just for games and for heavyweight applications. It’s also meant for regular web development… for the kind of web development folks are used to: the small modules kind of web development.
Sometimes you have little corners of your app that do a lot of heavy processing, and in some cases, this processing can be faster with WebAssembly. We want to make it easy to port these bits to WebAssembly.
Again, this is a case where some of it’s already happening. Developers are already incorporating WebAssembly modules in places where there are tiny modules doing lots of heavy lifting.
One example is the parser in the source map library that’s used in Firefox’s DevTools and webpack. It was rewritten in Rust and compiled to WebAssembly, which made it 11x faster. And WordPress’s Gutenberg parser is on average 86x faster after the same kind of rewrite.
But for this kind of use to really be widespread—for people to be really comfortable doing it—we need to have a few more things in place.
First, we need fast calls between JS and WebAssembly, because if you’re integrating a small module into an existing JS system, there’s a good chance you’ll need to call between the two a lot. So you’ll need those calls to be fast.
But when WebAssembly first came out, these calls weren’t fast. This is where we get back to that whole MVP thing—the engines had the minimum support for calls between the two. They just made the calls work, they didn’t make them fast. So engines need to optimize these.
We’ve recently finished our work on this in Firefox. Now some of these calls are actually faster than non-inlined JavaScript-to-JavaScript calls. And other engines are also working on this.
That brings us to another thing, though. When you’re calling between JavaScript and WebAssembly, you often need to pass data between them.
You need to pass values into the WebAssembly function or return a value from it. This can also be slow, and it can be difficult too.
There are a couple of reasons it’s hard. One is because, at the moment, WebAssembly only understands numbers. This means that you can’t pass more complex values, like objects, in as parameters. You need to convert that object into numbers and put it in the linear memory. Then you pass WebAssembly the location in the linear memory.
That’s kind of complicated. And it takes some time to convert the data into linear memory. So we need this to be easier and faster.
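As a rough illustration of what this looks like today, here is a hedged sketch in which the JS side encodes a string into linear memory and passes a pointer and length across the boundary. The alloc and processBytes exports are hypothetical, not part of any real module.

```ts
// Hypothetical wasm exports: alloc(size) returns an offset into linear memory,
// and processBytes(ptr, len) reads `len` bytes starting at offset `ptr`.
declare const instance: WebAssembly.Instance;
const exports = instance.exports as {
  memory: WebAssembly.Memory;
  alloc: (size: number) => number;
  processBytes: (ptr: number, len: number) => number;
};

function callWithString(s: string): number {
  const bytes = new TextEncoder().encode(s);  // string -> UTF-8 bytes
  const ptr = exports.alloc(bytes.length);    // ask the wasm side for space
  new Uint8Array(exports.memory.buffer, ptr, bytes.length).set(bytes); // copy in
  return exports.processBytes(ptr, bytes.length); // only numbers cross over
}
```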
Another thing we need is integration with the browser’s built-in ES module support. Right now, you instantiate a WebAssembly module using an imperative API. You call a function and it gives you back a module.
But that means that the WebAssembly module isn’t really part of the JS module graph. In order to use import and export like you do with JS modules, you need to have ES module integration.
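Here is a small sketch of the imperative API as it exists today, with a comment showing the kind of import statement that ES module integration is meant to enable. The file name, imports, and doWork export are placeholders.

```ts
// Today: fetch, compile, and instantiate by hand with the imperative API.
async function loadImperatively(): Promise<void> {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("lib.wasm"), // placeholder URL
    { env: { now: () => performance.now() } }
  );
  (instance.exports.doWork as () => void)();
}

// With ES module integration, the goal is to instead write something like:
//   import { doWork } from "./lib.wasm";
// so that the wasm module becomes part of the ordinary JS module graph.
```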
Just being able to import and export doesn’t get us all the way there, though. We need a place to distribute these modules, and download them from, and tools to bundle them up.
What’s the npm for WebAssembly? Well, what about npm?
And what’s the webpack or Parcel for WebAssembly? Well, what about webpack and Parcel?
These modules shouldn’t look any different to the people who are using them, so there’s no reason to create a separate ecosystem. We just need tools to integrate with them.
There’s one more thing that we need to really do well in existing JS applications—support older versions of browsers, even those browsers that don’t know what WebAssembly is. We need to make sure that you don’t have to write a whole second implementation of your module in JavaScript just so that you can support IE11.
So where are we on this?
As I mentioned before, one reason you have to use linear memory for more complex kinds of data is because WebAssembly only understands numbers. The only types it has are ints and floats.
With the reference types proposal, this will change. This proposal adds a new type that WebAssembly functions can take as arguments and return. And this type is a reference to an object from outside WebAssembly—for example, a JavaScript object.
But WebAssembly can’t operate directly on this object. To actually do things like call a method on it, it will still need to use some JavaScript glue. This means it works, but it’s slower than it needs to be.
To speed things up, there’s a proposal that we’ve been calling the host bindings proposal. It lets a wasm module declare what glue must be applied to its imports and exports, so that the glue doesn’t need to be written in JS. By pulling glue from JS into wasm, the glue can be optimized away completely when calling built-in Web APIs.
There’s one more part of the interaction that we can make easier. And that has to do with keeping track of how long data needs to stay in memory. If you have some data in linear memory that JS needs access to, then you have to leave it there until the JS reads the data. But if you leave it in there forever, you have what’s called a memory leak. How do you know when you can delete the data? How do you know when JS is done with it? Currently, you have to manage this yourself.
Once the JS is done with the data, the JS code has to call something like a free function to free the memory. But this is tedious and error prone. To make this process easier, we’re adding WeakRefs to JavaScript. With this, you will be able to observe objects on the JS side. Then you can do cleanup on the WebAssembly side when that object is garbage collected.
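As a rough sketch of how that could look, assuming a hypothetical wasmFree export and using the FinalizationRegistry API from the WeakRefs proposal (the final shape of the API may differ):

```ts
// Hypothetical wasm export that releases a block of linear memory.
declare const wasmFree: (ptr: number) => void;

// When a JS wrapper object is garbage collected, free the linear-memory
// block it was wrapping.
const registry = new FinalizationRegistry<number>((ptr) => wasmFree(ptr));

class WasmBuffer {
  constructor(public readonly ptr: number) {
    registry.register(this, ptr); // remember which block to free later
  }
}
```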
So these proposals are all in flight. In the meantime, the Rust ecosystem has created tools that automate this all for you, and that polyfill the proposals that are in flight.
One tool in particular is worth mentioning, because other languages can use it too. It’s called wasm-bindgen. When it sees that your Rust code should do something like receive or return certain kinds of JS values or DOM objects, it will automatically create JavaScript glue code that does this for you, so you don’t need to think about it. And because it’s written in a language independent way, other language toolchains can adopt it.
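From the JavaScript side, the result is that a wasm-bindgen-generated package can be consumed like any other module. A hedged sketch, with the package and function names made up:

```ts
// Hypothetical package produced by wasm-bindgen (for example via wasm-pack).
// The generated glue copies the string into linear memory and decodes the
// result back out; the call site stays ordinary TypeScript.
import { greet } from "hypothetical-wasm-crate";

console.log(greet("WebAssembly"));
```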
There’s also a tool called wasm-pack in the Rust ecosystem which automatically runs everything you need to package your code for npm. And the bundlers are also actively working on support.
And for those older browsers, there’s the wasm2js tool. That takes a wasm file and spits out the equivalent JS. That JS isn’t going to be fast, but at least that means it will work in older versions of browsers that don’t understand WebAssembly.
So we’re getting close to unlocking this achievement. And once we unlock it, we open the path to another two.
One is rewriting large parts of things like JavaScript frameworks in WebAssembly.
The other is making it possible for statically-typed compile-to-js languages to compile to WebAssembly instead—for example, having languages like Scala.js, or Reason, or Elm compile to WebAssembly.
For both of these use cases, WebAssembly needs to support high-level language features.
We need integration with the browser’s garbage collector for a couple of reasons.
First, let’s look at rewriting parts of JS frameworks. This could be good for a couple of reasons. For example, in React, one thing you could do is rewrite the DOM diffing algorithm in Rust, which has very ergonomic multithreading support, and parallelize that algorithm.
You could also speed things up by allocating memory differently. In the virtual DOM, instead of creating a bunch of objects that need to be garbage collected, you could use a special memory allocation scheme. For example, you could use a bump allocator scheme, which has extremely cheap allocation and all-at-once deallocation. That could potentially help speed things up and reduce memory usage.
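To see why that is cheap, here is a toy bump-allocator sketch, written in TypeScript only for readability (in a real framework it would live on the wasm side): allocation is a pointer bump, and deallocation happens all at once.

```ts
// Toy bump allocator: allocating advances an offset, and freeing everything
// at once just resets that offset to zero.
class BumpAllocator {
  private offset = 0;
  constructor(private readonly capacity: number) {}

  alloc(size: number): number {
    if (this.offset + size > this.capacity) {
      throw new Error("arena exhausted");
    }
    const ptr = this.offset;
    this.offset += size; // O(1): no free lists, no per-object bookkeeping
    return ptr;
  }

  reset(): void {
    this.offset = 0; // all-at-once "deallocation"
  }
}

// e.g. allocate scratch space while diffing one frame, then throw it all away.
const arena = new BumpAllocator(64 * 1024);
const a = arena.alloc(256);  // offset 0
const b = arena.alloc(1024); // offset 256
console.log(a, b);
arena.reset();
```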
But you’d still need to interact with JS objects—things like components—from that code. You can’t just continually copy everything in and out of linear memory, because that would be difficult and inefficient.
So you need to be able to integrate with the browser’s GC so you can work with components that are managed by the JavaScript VM. Some of these JS objects need to point to data in linear memory, and sometimes the data in linear memory will need to point out to JS objects.
If this ends up creating cycles, it can mean trouble for the garbage collector. It means the garbage collector won’t be able to tell if the objects are used anymore, so they will never be collected. WebAssembly needs integration with the GC to make sure these kinds of cross-language data dependencies work.
This will also help statically-typed languages that compile to JS, like Scala.js, Reason, Kotlin, or Elm. These languages use JavaScript’s garbage collector when they compile to JS. Because WebAssembly can use that same GC—the one that’s built into the engine—these languages will be able to compile to WebAssembly instead and use that same garbage collector. They won’t need to change how GC works for them.
We also need better support for handling exceptions.
Some languages, like Rust, do without exceptions. But in other languages, like C++, JS or C#, exception handling is sometimes used extensively.
You can polyfill exception handling currently, but the polyfill makes the code run really slowly. So the default when compiling to WebAssembly is currently to compile without exception handling.
However, since JavaScript has exceptions, even if you’ve compiled your code to not use them, JS may throw one into the works. If your WebAssembly function calls a JS function that throws, then the WebAssembly module won’t be able to correctly handle the exception. So languages like Rust choose to abort in this case. We need to make this work better.
Another thing that people working with JS and compile-to-JS languages are used to having is good debugging support. Devtools in all of the major browsers make it easy to step through JS. We need this same level of support for debugging WebAssembly in browsers.
And finally, for many functional languages, you need to have support for something called tail calls. I’m not going to get too into the details on this, but basically it lets you call a new function without adding a new stack frame to the stack. So for functional languages that support this, we want WebAssembly to support it too.
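As a rough illustration of the idea, in TypeScript rather than in wasm: in the function below, the recursive call is the very last thing the function does, so an engine with tail-call support can reuse the current stack frame instead of pushing a new one.

```ts
// `sum` calls itself in tail position: nothing is left to do after the call.
// With tail-call support this runs in constant stack space; without it,
// a large enough `n` will overflow the stack.
function sum(n: number, acc: number = 0): number {
  if (n === 0) return acc;
  return sum(n - 1, acc + n); // tail call
}

console.log(sum(100)); // 5050
```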
So where are we on this?
Two proposals will help here: the Typed Objects proposal for JS, and the GC proposal for WebAssembly. Typed Objects will make it possible to describe an object’s fixed structure. There is an explainer for this, and the proposal will be discussed at an upcoming TC39 meeting.
The WebAssembly GC proposal will make it possible to directly access that structure. This proposal is under active development.
With both of these in place, both JS and WebAssembly know what an object looks like and can share that object and efficiently access the data stored on it. Our team actually already has a prototype of this working. However, it still will take some time for these to go through standardization so we’re probably looking at sometime next year.
Once those are all in place, we’ll have unlocked JS frameworks and many compile-to-JS languages.
So, those are all achievements we can unlock inside the browser. But what about outside the browser?
Now, you may be confused when I talk about “outside the browser”. Because isn’t the browser what you use to view the web? And isn’t that right in the name—WebAssembly?
But the truth is the things you see in the browser—the HTML and CSS and JavaScript—are only part of what makes the web. They are the visible part—they are what you use to create a user interface—so they are the most obvious.
But there’s another really important part of the web which has properties that aren’t as visible.
That is the link. And it is a very special kind of link.
The innovation of this link is that I can link to your page without having to put it in a central registry, and without having to ask you or even know who you are. I can just put that link there.
It’s this ease of linking, without any oversight or approval bottlenecks, that enabled our web. That’s what enabled us to form these global communities with people we didn’t know.
But if all we have is the link, there are two problems here that we haven’t addressed.
The first one is… you go visit this site and it delivers some code to you. How does it know what kind of code it should deliver to you? Because if you’re running on a Mac, then you need different machine code than you do on Windows. That’s why you have different versions of programs for different operating systems.
Then should a web site have a different version of the code for every possible device? No.
Instead, the site has one version of the code—the source code. This is what’s delivered to the user. Then it gets translated to machine code on the user’s device.
The name for this concept is portability.
So that’s great, you can load code from people who don’t know you and don’t know what kind of device you’re running.
But that brings us to a second problem. If you don’t know these people whose web pages you’re loading, how do you know what kind of code they’re giving you? It could be malicious code. It could be trying to take over your system.
Doesn’t this vision of the web—running code from anybody whose link you follow—mean that you have to blindly trust anyone who’s on the web?
This is where the other key concept from the web comes in.
That’s the security model. I’m going to call it the sandbox.
Basically, the browser takes the page—that other person’s code—and instead of letting it run around willy-nilly in your system, it puts it in a sandbox. It puts a couple of toys that aren’t dangerous into that sandbox so that the code can do some things, but it leaves the dangerous things outside of the sandbox.
So the utility of the link is based on these two things: portability and the sandbox.
So why does this distinction matter? Why does it make a difference if we think of the web as something that the browser shows us using HTML, CSS, and JS, or if we think of the web in terms of portability and the sandbox?
Because it changes how you think about WebAssembly.
You can think about WebAssembly as just another tool in the browser’s toolbox… which it is.
It is another tool in the browser’s toolbox. But it’s not just that. It also gives us a way to take these other two capabilities of the web—the portability and the security model—and take them to other use cases that need them too.
We can expand the web past the boundaries of the browser. Now let’s look at where these attributes of the web would be useful.
How could WebAssembly help Node? It could bring full portability to Node.
Node gives you most of the portability of JavaScript on the web. But there are lots of cases where Node’s JS modules aren’t quite enough—where you need to improve performance or reuse existing code that’s not written in JS.
In these cases, you need Node’s native modules. These modules are written in languages like C, and they need to be compiled for the specific kind of machine that the user is running on.
Native modules are either compiled when the user installs, or precompiled into binaries for a wide matrix of different systems. One of these approaches is a pain for the user, the other is a pain for the package maintainer.
Now, if these native modules were written in WebAssembly instead, then they wouldn’t need to be compiled specifically for the target architecture. Instead, they’d just run like the JavaScript in Node runs. But they’d do it at nearly native performance.
So we get to full portability for the code running in Node. You could take the exact same Node app and run it across all different kinds of devices without having to compile anything.
But WebAssembly doesn’t have direct access to the system’s resources. Native modules in Node aren’t sandboxed—they have full access to all of the dangerous toys that the browser keeps out of the sandbox. In Node, JS modules also have access to these dangerous toys because Node makes them available. For example, Node provides methods for reading from and writing files to the system.
For Node’s use case, it makes a certain amount of sense for modules to have this kind of access to dangerous system APIs. So if WebAssembly modules don’t have that kind of access by default (like Node’s current modules do), how could we give WebAssembly modules the access they need? We’d need to pass in functions so that the WebAssembly module can work with the operating system, just as Node does with JS.
For Node, this will probably include a lot of the functionality that’s in things like the C standard library. It would also likely include things that are part of POSIX—the Portable Operating System Interface—which is an older standard that helps with compatibility. It provides one API for interacting with the system across a bunch of different Unix-like OSs. Modules would definitely need a bunch of POSIX-like functions.
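One way to picture this, with purely hypothetical names since no such standard interface existed at the time of writing: the host passes system functions in through the import object, and the wasm module calls them like any other import.

```ts
import * as fs from "fs";
import { TextDecoder } from "util";

// Hypothetical import object: the host (Node, in this sketch) hands a wasm
// module a few system functions. The names and signatures are made up; a real
// interface would be standardized rather than invented per host.
function makeSystemImports(memory: WebAssembly.Memory) {
  const decoder = new TextDecoder();
  return {
    sys: {
      // Read a path string out of linear memory and return that file's size.
      fileSize: (pathPtr: number, pathLen: number): number => {
        const path = decoder.decode(
          new Uint8Array(memory.buffer, pathPtr, pathLen)
        );
        return fs.statSync(path).size;
      },
      timeMs: (): number => Date.now(),
    },
  };
}
```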
What the Node core folks would need to do is figure out the set of functions to expose and the API to use.
But wouldn’t it be nice if that were actually something standard? Not something that was specific to just Node, but could be used across other runtimes and use cases too?
A POSIX for WebAssembly if you will. A PWSIX? A portable WebAssembly system interface.
And if that were done in the right way, you could even implement the same API for the web. These standard APIs could be polyfilled onto existing Web APIs.
These functions wouldn’t be part of the WebAssembly spec. And there would be WebAssembly hosts that wouldn’t have them available. But for those platforms that could make use of them, there would be a unified API for calling these functions, no matter which platform the code was running on. And this would make universal modules—ones that run across both the web and Node—so much easier.
So, is this something that could actually happen?
A few things are working in this idea’s favor. There’s a proposal called package name maps that will provide a mechanism for mapping a module name to a path to load the module from. And that will likely be supported by both browsers and Node, which can use it to provide different paths, and thus load entirely different modules, but with the same API. This way, the .wasm module itself can specify a single (module-name, function-name) import pair that Just Works on different environments, even the web.
With that mechanism in place, what’s left to do is actually figure out what functions make sense and what their interface should be.
There’s no active work on this at the moment. But a lot of discussions are heading in this direction right now. And it looks likely to happen, in one form or another.
Which is good, because unlocking this gets us halfway to unlocking some other use cases outside the browser. And with this in place, we can accelerate the pace.
So, what are some examples of these other use cases?
One example is things like CDNs, and Serverless, and Edge Computing. These are cases where you’re putting your code on someone else’s server, and they make sure that the server is maintained and that the code is close to all of your users.
Why would you want to use WebAssembly in these cases? There was a great talk explaining exactly this at a conference recently.
Fastly is a company that provides CDNs and edge computing. And their CTO, Tyler McMullen, explained it this way (and I’m paraphrasing here):
If you look at how a process works, code in that process doesn’t have boundaries. Functions have access to whatever memory in that process they want, and they can call whatever other functions they want.
When you’re running a bunch of different people’s services in the same process, this is an issue. Sandboxing could be a way to get around this. But then you get to a scale problem.
For example, if you use a JavaScript VM like Firefox’s SpiderMonkey or Chrome’s V8, you get a sandbox and you can put hundreds of instances into a process. But with the numbers of requests that Fastly is servicing, you don’t just need hundreds per process—you need tens of thousands.
Tyler does a better job of explaining all of it in his talk, so you should go watch that. But the point is that WebAssembly gives Fastly the safety, speed, and scale needed for this use case.
So what did they need to make this work?
They needed to create their own runtime. That means taking a WebAssembly compiler—something that can compile WebAssembly down to machine code—and combining it with the functions for interacting with the system that I mentioned before.
For the WebAssembly compiler, Fastly used Cranelift, the compiler that we’re also building into Firefox. It’s designed to be very fast and doesn’t use much memory.
Now, for the functions that interact with the rest of the system, they had to create their own, because we don’t have that portable interface available yet.
So it’s possible to create your own runtime today, but it takes some effort. And it’s effort that will have to be duplicated across different companies.
What if we didn’t just have the portable interface, but we also had a common runtime that could be used across all of these companies and other use cases? That would definitely speed up development.
Then other companies could just use that runtime—like they do Node today—instead of creating their own from scratch.
So what’s the status of this?
Even though there’s no standard runtime yet, there are a few runtime projects in flight right now. These include WAVM, which is built on top of LLVM, and wasmjit.
In addition, we’re planning a runtime that’s built on top of Cranelift, called wasmtime.
And once we have a common runtime, that speeds up development for a bunch of different use cases. For example…
WebAssembly can also be used in more traditional operating systems. Now to be clear, I’m not talking about in the kernel (although brave souls are trying that, too) but WebAssembly running in Ring 3—in user mode.
Then you could do things like have portable CLI tools that could be used across all different kinds of operating systems.
And this is pretty close to another use case…
The internet of things includes devices like wearable technology, and smart home appliances.
These devices are usually resource constrained—they don’t pack much computing power and they don’t have much memory. And this is exactly the kind of situation where a compiler like Cranelift and a runtime like wasmtime would shine, because they would be efficient and low-memory. And in the extremely-resource-constrained case, WebAssembly makes it possible to fully compile to machine code before loading the application on the device.
There’s also the fact that there are so many of these different devices, and they are all slightly different. WebAssembly’s portability would really help with that.
So that’s one more place where WebAssembly has a future.
Now let’s zoom back out and look at this skill tree.
I said at the beginning of this post that people have a misconception about WebAssembly—this idea that the WebAssembly that landed in the MVP was the final version of WebAssembly.
I think you can see now why this is a misconception.
Yes, the MVP opened up a lot of opportunities. It made it possible to bring a lot of desktop applications to the web. But we still have many use cases to unlock, from heavy-weight desktop applications, to small modules, to JS frameworks, to all the things outside the browser… Node.js, and serverless, and the blockchain, and portable CLI tools, and the internet of things.
So the WebAssembly that we have today is not the end of this story—it’s just the beginning.
https://hacks.mozilla.org/2018/10/webassemblys-post-mvp-future/
|
Daniel Stenberg: DNS-over-HTTPS is RFC 8484 |
The protocol we fondly know as DoH, DNS-over-HTTPS, is now officially RFC 8484 with the official title "DNS Queries over HTTPS (DoH)". It documents the protocol that is already in production and used by several client-side implementations, including Firefox, Chrome and curl. Put simply, DoH sends a regular RFC 1035 DNS packet over HTTPS instead of over plain UDP.
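As a hedged sketch of what that means in practice (the resolver URL below is only an example), here is a minimal DoH query built by hand and sent with fetch:

```ts
// Build a minimal RFC 1035 query for an A record, then POST it over HTTPS
// as described in RFC 8484. The resolver URL is just a placeholder.
function buildDnsQuery(name: string): Uint8Array {
  const header = [
    0x00, 0x00, // ID 0 (RFC 8484 recommends 0 for cache friendliness)
    0x01, 0x00, // flags: standard query, recursion desired
    0x00, 0x01, // QDCOUNT = 1
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // answer/authority/additional = 0
  ];
  const question: number[] = [];
  for (const label of name.split(".")) {
    question.push(label.length);
    for (const ch of label) question.push(ch.charCodeAt(0));
  }
  question.push(0x00);       // end of the name
  question.push(0x00, 0x01); // QTYPE = A
  question.push(0x00, 0x01); // QCLASS = IN
  return new Uint8Array([...header, ...question]);
}

async function dohQuery(name: string): Promise<ArrayBuffer> {
  const response = await fetch("https://doh.example/dns-query", {
    method: "POST",
    headers: { "content-type": "application/dns-message" },
    body: buildDnsQuery(name),
  });
  return response.arrayBuffer(); // a regular DNS response packet
}
```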
I'm happy to have contributed my little bits to this standard effort and I'm credited in the Acknowledgements section. I've also implemented DoH client-side several times now.
Firefox has done studies and tests in cooperation with a CDN provider (which has sometimes made people conflate Firefox's DoH support with those studies and that operator). These studies have shown and proven that DoH is a working way for many users to do secure name resolves at a reasonable penalty cost. At least when using a fallback to the native resolver for the tricky situations. In general DoH resolves are slower than the native ones but in the tail end, the absolutely slowest name resolves got a lot better with the DoH option.
To me, DoH is partly necessary because the "DNS world" has failed to ship and deploy secure and safe name lookups to the masses and this is the one way applications "one layer up" can still secure our users.
https://daniel.haxx.se/blog/2018/10/19/dns-over-https-is-rfc-8484/
|
Chris H-C: Three-Year Moziversary |
Another year at Mozilla. They certainly don’t slow down the more you have of them.
For once, a year of stability, organization-wise. The two biggest team changes were the addition of Jan-Erik back on March 1, and the loss of our traditional team name “Browser Measurement II” for a more punchy and descriptive “Firefox Telemetry Team.”
I will miss good ol’ BM2, though it is fun signing off notification emails with “Your Friendly Neighbourhood Firefox Telemetry Team (:gfritzsche, :janerik, :Dexter, :chutten)”
We’re actually in the market for a Mobile Telemetry Engineer, so if you or someone you know might be interested in hanging out with us and having their username added to the above, please take a look right here.
In blogging velocity I think I kept up my resolution to blog more. I’m up to 32 posts so far in 2018 (compared to year totals of 15, 26, and 27 in 2015-2017) and I have a few drafts kicking in the bin that ought to be published before the end of the year. Part of this is due to two new blogging efforts: So I’ve Finished (a series of posts about video games I’ve completed), and Ford The Lesser (a series summarizing the deeds and tone of the new Ontario Provincial Government). Neither are particularly mozilla-related, though, so I’m not sure if the count of work blogposts has changed.
Thinking back to work stuff, let’s go chronologically. Last November we released Firefox Quantum. It was and is a big deal. Then in December all hands went to Austin, Texas.
Electives happened again, so I did a reprise of Death-Defying Stats (where I stand up and solve data questions, Live On Stage). We saw Star Wars: The Last Jedi (I’m not sure why the internet didn’t like it. I thought it was grand. Though the theatre ruined the impact of That One Scene by letting us know that no, the sound didn’t actually cut out; it was deliberate. Yeesh). We partied at a pseudo-historical southwestern US town, drinking warm beverages out of gigantic branded mugs we got to take home afterwards.
Then we launched straight into 2018. Interviews increased to a crushing density for the role that was to become Jan-Erik’s and for two interns: one (Erin Comerford) working on redesigns for the venerable Telemetry Measurement Dashboards, and another (Agnieszka Ciepielewska) working on automatic change detection and explanation for Telemetry metrics.
In June we met again in San Francisco, but this time without Georg who was suffering a cold. Sunah and I gave a talk about Event Telemetry, Steak Club met again, and we got to mess around with science stuff at the Exploratorium.
This year’s Summer Big Project… y’know, there were a few of them. The first was the Event Telemetry “event” ping. Then there was the Measurement Dashboard redesign project where I ended up mentoring more than I expected due to PTO and timezones.
Also in the summer I was organizing and then going on a trip to celebrate a different anniversary (my tenth wedding anniversary) for nearly the entire month of July.
In August the team met in Berlin, and this time I was able to join in. That was a fun and productive time where we settled matters of team identity, ownership, process, and developed some delightful in-jokes to perplex anyone not in the in-group. I acted as an arm of Ontario Craft Beer Tourism by importing a few local cans (Waterloo Dark and Mad & Noisy Lagered Ale) while asking (well-intentioned but numerous and likely ignorant) questions about European life and politics and food and history and …
And that brings us more or less to now. September was busy. October is busy. I’m helping :frank put authentication on the old Measurement Dashboards so we can put release-channel data back up there without someone taking it and misinterpreting it. (As an org we’ve made the conscious decision to use our public data in a deliberate fashion to support truthful narratives about our products and our users. Like on the Firefox Public Data Report.) I’m looking into how we might take what we learned with Erin’s redesign prototype and productionize it with real data. I’m also improving documentation and consulting with a variety of teams on a variety of data things.
So, resolutions for the next twelve months? Keep on keeping on, I guess. I’m happy with the progress I have made this past year. I’m pleased with the direction our team and the broader org is headed. I’m interested to see where time and effort will take us.
:chutten
https://chuttenblog.wordpress.com/2018/10/19/three-year-moziversary/
|
Daniel Pocock: Debian GSoC 2018 report |
One of my major contributions to Debian in 2018 has been participation as a mentor and admin for Debian in Google Summer of Code (GSoC).
Here are a few observations about what happened this year, from my personal perspective in those roles.
Making a full report of everything that happens in GSoC is close to impossible. Here I consider issues that span multiple projects and the mentoring team. For details on individual projects completed by the students, please see their final reports posted in August on the mailing list.
Nicolas Dandrimont and Sylvestre Ledru retired from the admin role after GSoC 2016, and Tom Marble has retired from the Outreachy administration role. We should be enormously grateful for the effort they have put in, as these are very demanding roles.
When the last remaining member of the admin team, Molly, asked for people to step in for 2018, knowing the huge effort involved, I offered to help out on a very temporary basis. We drafted a new delegation but didn't seek to have it ratified until the team evolves. We started 2018 with Molly, Jaminy, Alex and myself. The role needs at least one new volunteer with strong mentoring experience for 2019.
Google encourages organizations to put project ideas up for discussion and also encourages students to spontaneously propose their own ideas. This latter concept is a significant difference between GSoC and Outreachy that has caused unintended confusion for some mentors in the past. I have frequently put teasers on my blog, without full specifications, to see how students would try to respond. Some mentors are much more precise, telling students exactly what needs to be delivered and how to go about it. Both approaches are valid early in the program.
Students start sending inquiries to some mentors well before GSoC starts. When Google publishes the list of organizations to participate (that was on 12 February this year), the number of inquiries increases dramatically, in the form of personal emails to the mentors, inquiries on the debian-outreach mailing list, the IRC channel and many project-specific mailing lists and IRC channels.
Over 300 students contacted me personally or through the mailing list during the application phase (between 12 February and 27 March). This is a huge number and makes it impossible to engage in a dialogue with every student. In the last years where I have mentored, 2016 and 2018, I've personally put a bigger effort into engaging other mentors during this phase and introducing them to some of the students who already made a good first impression.
As an example, Jacob Adams first inquired about my PKI/PGP Clean Room idea back in January. I was really excited about his proposals but I knew I simply didn't have the time to mentor him personally, so I added his blog to Planet Debian and suggested he put out a call for help. One mentor, Daniele Nicolodi replied to that and I also introduced him to Thomas Levine. They both generously volunteered and together with Jacob, ensured a successful project. While I originally started the clean room, they deserve all the credit for the enhancements in 2018 and this emphasizes the importance of those introductions made during the early stages of GSoC.
In fact, there were half a dozen similar cases this year where I have interacted with a really promising student and referred them to the mentor(s) who appeared optimal for their profile.
After my recent travels in the Balkans, a number of people from Albania and Kosovo expressed an interest in GSoC and Outreachy. The students from Kosovo found that their country was not listed in the application form but the Google team very promptly added it, allowing them to apply for GSoC for the first time. Kosovo still can't participate in the Olympics or the World Cup, but they can compete in GSoC now.
At this stage, I was still uncertain if I would mentor any project myself in 2018 or only help with the admin role, which I had only agreed to do on a very temporary basis until the team evolves. Nonetheless, the day before student applications formally opened (12 March) and after looking at the interest areas of students who had already made contact, I decided to go ahead mentoring a single project, the wizard for new students and contributors.
The application deadline closed on 27 March. At this time, Debian had 102 applications, an increase over the 75 applications from 2016. Five applicants were female, including three from Kosovo.
One challenge we've started to see is that since Google reduced the stipend for GSoC, Outreachy appears to pay more in many countries. Some women put more effort into an Outreachy application or don't apply for GSoC at all, even though there are far more places available in GSoC each year. GSoC typically takes over 1,000 interns in each round while Outreachy can only accept approximately 50.
Applicants are not evenly distributed across all projects. Some mentors/projects only receive one applicant and then mentors simply have to decide if they will accept the applicant or cancel the project. Other mentors receive ten or more complete applications and have to spend time studying them, comparing them and deciding on the best way to rank them and make a decision.
Given the large number of project ideas in Debian, we found that the Google portal didn't allow us to use enough category names to distinguish them all. We contacted the Google team about this and they very quickly increased the number of categories we could use, which made it much easier to tag the large number of applications so that each mentor could filter the list and only see their own applicants.
The project I mentored personally, a wizard for helping new students get started, attracted interest from 3 other co-mentors and 10 student applications. To help us compare the applications and share data we gathered from the students, we set up a shared spreadsheet using Debian's Sandstorm instance and Ethercalc. Thanks to Asheesh and Laura for setting up and maintaining this great service.
Switching from the mentor hat to the admin hat, we had to coordinate the requests from each mentor to calculate the total number of slots we wanted Google to fund for Debian's mentors.
Once again, Debian's Sandstorm instance, running Ethercalc, came to the rescue.
All mentors were granted access, reducing the effort for the admins and allowing a distributed, collective process of decision making. This ensured mentors could see that their slot requests were being counted correctly but it means far more than that too. Mentors put in a lot of effort to bring their projects to this stage and it is important for them to understand any contention for funding and make a group decision about which projects to prioritize if Google doesn't agree to fund all the slots.
Various topics were discussed by the team at the beginning of GSoC.
One discussion was about the definition of "team". Should the new delegation follow the existing pattern, reserving the word "team" for the admins, or should we move to the convention followed by the DebConf team, where the word "team" encompasses a broader group of the volunteers? A draft delegation text was prepared, but we haven't asked for it to be ratified; this is a pending task for the 2019 team (more on that later).
There was discussion about the choice of project management tools, keeping with Debian's philosophy of only using entirely free tools. We compared various options, including Redmine with the Agile (Kanban) plugin, Kanboard (as used by DebConf team), and more Sandstorm-hosted possibilities, such as Wekan and Scrumblr. Some people also suggested ideas for project management within their Git repository, for example, using Org-mode. There was discussion about whether it would be desirable for admins to run an instance of one of these tools to manage our own workflow and whether it would be useful to have all students use the same tool to ease admin supervision and reporting. Personally, I don't think all students need to use the same tool as long as they use tools that provide public read-only URLs, or even better, a machine-readable API allowing admins to aggregate data about progress.
Admins set up a Git repository for admin and mentor files on Debian's new GitLab instance, Salsa. We tried to put in place a process to synchronize the mentor list on the wiki, the list of users granted team access in Salsa and the list of mentors maintained in the GSoC portal. This could be taken further by asking mentors and students to put a Moin Category tag on the bottom of their personal pages on the wiki, allowing indexes to be built automatically.
On 23 April, the list of selected students was confirmed. Shortly afterward, a post welcoming the students appeared on the Debian blog.
I traveled to Tirana, Albania for OSCAL'18 where I was joined by two of the Kosovan students selected by Debian. They helped run the Debian booth, comprising a demonstration of software defined radio from Debian Hams.
Enkelena Haxhiu and I gave a talk together about communications technology. This was Enkelena's first talk. In the audience was Arjen Kamphuis; he was one of the last people to ask a question at the end. His recent disappearance is a disturbing mystery.
A GSoC session took place at DebConf18; the video is available here and includes talks from GSoC and Outreachy participants past and present.
Many of the students have already been added to Planet Debian where they have blogged about what they did and what they learned in GSoC. More will appear in the near future.
If you like their project, if you have ideas for an event where they could present it or if you simply live in the same region, please feel free to contact the students directly and help them continue their free software adventure with us.
Google's application form for organizations like Debian asks us what we do to stay in contact with students after GSoC. Crossing multiple passes in the Swiss and Italian alps to find Sergio Alberti at Capo di Lago is probably one of the more exotic answers to that question.
I first mentored students in GSoC 2013. Since then, I've been involved in mentoring a total of 12 students in GSoC and 3 interns in Outreachy as well as introducing many others to mentors and organizations. Several of them stay in touch and it's always interesting to hear about their successes as they progress in their careers and in their enjoyment of free software.
The Outreachy organizers have chosen a picture of two of my former interns, Urvika Gola (Outreachy 2016) and Pranav Jain (GSoC 2016) for the mentors page of their web site. This is quite fitting as both of them have remained engaged and become involved in the mentoring process.
One of the big challenges we faced this year is that as the new admin team was only coming together for the first time, we didn't have any policies in place before mentors and students started putting significant effort into their proposals.
Potential mentors start to put in significant effort from February, when the list of participating organizations is usually announced by Google. Therefore, it seems like a good idea to make any policies clear to potential mentors before the end of January.
We faced a similar challenge with selecting mentors to attend the GSoC mentor summit. While some ideas were discussed about the design of a selection process or algorithm, the admins fell back on the previous policy based on a random selection as mentors may have anticipated that policy was still in force when they signed up.
As I mentioned already, there are several areas where GSoC and Outreachy are diverging. This has already led to some unfortunate misunderstandings in both directions, for example when people familiar with Outreachy rules have been unaware of GSoC differences and vice versa, and I'll confess to being one of several people who has been confused at least once. Mentors often focus on the projects and candidates and don't always notice the annual rule changes. Unfortunately, this requires involvement and patience from both the organizers and admins to guide the mentors through any differences at each step.
One of the most contentious topics in Debian's GSoC 2018 program was the discussion of whether Debian can and should act as an umbrella organization for smaller projects who are unlikely to participate in GSoC in their own right.
As an example, in 2016, four students were mentored by Savoir Faire Linux (SFL), makers of the Ring project, under the Debian umbrella. In 2017, Ring joined the GNU Project and they mentored students under the GNU Project umbrella organization. DebConf17 coincidentally took place in Montreal, Canada, not far from the SFL headquarters and SFL participated as a platinum sponsor.
Google's Mentor Guide explicitly encourages organizations to consider this role, but does not oblige them to do so either:
Google’s program administrators actually look quite fondly on the umbrella organizations that participate each year.
For an organization like Debian, with our philosophy, independence from the cloud and distinct set of tools, such as the Salsa service mentioned earlier, being an umbrella organization gives us an opportunity to share the philosophy and working methods for mutual benefit while also giving encouragement to related projects that we use.
Some people expressed concern that this may cut into resources for Debian-centric projects, but it appears that Google has not limited the number of additional places in the program for this purpose. This is one of the significant differences with Outreachy, where the number of places is limited by funding constraints.
Therefore, if funding is not a constraint, I feel that the most important factor to evaluate when considering this issue is the size and capacity of the admin team. Google allows up to five people to be enrolled as admins; if enough experienced people volunteer, it can be easier for everybody, whereas with only two admins (the minimum) it may not be feasible to act as an umbrella organization.
Within the team, we observed various differences of opinion: for example some people were keen on the umbrella role while others preferred to restrict participation to Debian-centric projects. We have the same situation with Outreachy: some mentors and admins only want to do GSoC, while others only do Outreachy and there are others, like myself, who have supported both programs equally. In situations like this, nobody is right or wrong.
Once that fundamental constraint, the size of the admin team, is considered, I personally feel that any related projects engaged on this basis can be evaluated for a wide range of synergies with the Debian community, including the people, their philosophy, the tools used and the extent to which their project will benefit Debian's developers and users. In other words, this doesn't mean any random project can ask to participate under the Debian umbrella but those who make the right moves may have a chance of doing so.
Google pays each organization an allowance of USD 500 for each slot awarded to the organization, plus some additional funds related to travel. This generally corresponds to the number of quality candidates identified by the organization during the selection process, regardless of whether the candidate accepts an internship or not. Where more than one organization requests funding (a slot) for the same student, both organizations receive a bounty; we had at least one case like this in 2018.
For 2018, Debian has received USD 17,200 from Google.
Personally, as I indicated in January that I would only be able to do this on a temporary basis, I'm not going to participate as an admin in 2019, so it is a good time for other members of the community to think about the role. Each organization that wants to participate needs to propose a full list of admins to Google in January 2019; therefore, now is the time for potential admins to step forward, decide how they would like to work together as a team and work out how to recruit mentors and projects.
Thanks to all the other admins, mentors, the GSoC team at Google, the Outreachy organizers and members of the wider free software community who supported this initiative in 2018. I'd particularly like to thank all the students, though: it is really exciting to work with people who are so open-minded and patient, and who remain committed even when faced with unanticipated challenges and adversity.
|
Mike Taylor: Don't rely on the shape of (Native)Error.prototype.message |
Over in Bug 1359822, the fine folks on the JS team tried to make error messages for null or undefined property keys nicer in Firefox. So rather than something super lame (and sometimes confusing) like:
TypeError: window.pogs is undefined
You'd get something more helpful like:
TypeError: win.pogs is undefined, can't access property "slammer" of it
This was supposed to land in the Firefox 64 release cycle, but we ran into a few snags.
It turns out that sites (big ones like Flipkart) and libraries depend on the exact shape of an error message, and will fail in craptastic ways if their regular expression expectations are not met. Like, blank pages and non-stop loading loops.
Facebook's idx library is an example:
const nullPattern = /^null | null$|^[^(]* null /i;
const undefinedPattern = /^undefined | undefined$|^[^(]* undefined /i;
Bug 1359822 adds a comma to the error message, so undefinedPattern doesn't match anymore.
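To make the breakage concrete, here is a minimal sketch (the message strings are simply the examples from this post, and the variable names are made up for illustration) showing how the pattern reacts to the old and new wording:
// Minimal sketch: run the idx pattern against the old and new message shapes.
const undefinedPattern = /^undefined | undefined$|^[^(]* undefined /i;
const oldMessage = 'window.pogs is undefined';
const newMessage = 'win.pogs is undefined, can\'t access property "slammer" of it';
console.log(undefinedPattern.test(oldMessage)); // true: the " undefined$" alternative matches
console.log(undefinedPattern.test(newMessage)); // false: the comma after "undefined" defeats every alternative
With the pattern no longer matching, idx presumably rethrows the TypeError instead of treating it as a null/undefined access, which is where the blank pages and loading loops come from.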
And yeah, you can't really break the sixth biggest site in India and expect to have users, so we backed out the error improvements patches.
But what does the spec say?, you ask, because you're like, really smart and care about specs and stuff (obviously, because you're reading this blog for and by really smart people™).
ECMAScript® 3000, in section 19.5.6 says:
When an ECMAScript implementation detects a runtime error, it throws a new instance of one of the NativeError objects defined in 19.5.5. Each of these objects has the structure described below, differing only in the name used as the constructor name instead of NativeError, in the name property of the prototype object, and in the implementation-defined message property of the prototype object.
That's spec-talk for "these error messages can be different between browsers, and there's no guarantee they won't change, so attempting to rely on them is harmful to making things better in the future" (approximately).
I filed a bug, but getting upstream open source libraries fixed is way less hard than getting downstream sites to update.
(Almost as hard as pog math.)
The moral of the story is: don't write regular expressions against unstandardized browser-specific messages. Or maybe just never write regular expressions ever again. Seems easier?
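If you do need an idx-style guard, one message-free sketch (the function and property names here are made up for illustration) is to catch the error and check its type, which is standardized, rather than its text, which is not:
// Hypothetical sketch: guard a deep property access without inspecting error.message.
function getSlammer(win) {
  try {
    return win.pogs.slammer;
  } catch (err) {
    // Property access on null or undefined throws TypeError in every engine,
    // regardless of how that engine words the message.
    if (err instanceof TypeError) {
      return undefined;
    }
    throw err;
  }
}
The trade-off is precision: this swallows every TypeError raised inside the accessor, and distinguishing the null/undefined case more finely is exactly why idx reached for message-sniffing in the first place.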
https://miketaylr.com/posts/2018/10/(native)-error-prototype-message.html
|
The Rust Programming Language Blog: Update on the October 15, 2018 incident on crates.io |
On Monday, Oct 15, starting at approximately 20:00 UTC, crates.io sustained an operational incident. You can find the status page report here, and our tweets about it here.
A user called cratesio was created on crates.io and proceeded to upload packages using common, short names. These packages contained nothing beyond a Cargo.toml file and a README.md instructing users that if they wanted to use the name, they should open an issue on the crates.io issue tracker.
The rate at which this user uploaded packages eventually resulted in our servers being throttled by GitHub, causing a slowdown in all package uploads or yanks. Endpoints which did not involve updating the index were unaffected.
We decided to take action on this behavior because the user gave the impression of acting in an official capacity, both through the username cratesio and by directing people to the crates-io issue tracker in the crates’ README files.
files)The user’s IP address was banned immediately. We then backdated the users’ packages to remove
their packages from the homepage. We also redirected the cratesio
user’s page to a 404.
Finally, the cratesio
user, and all crates they uploaded were deleted.
The user was reported to GitHub, and has since been banned by them.
It should not have been possible for a single user or IP address to upload that many packages in a short period of time. We will be introducing rate limiting on this endpoint to limit the number of packages a script is able to upload in the future.
We are also looking into disallowing usernames that could be impersonating official Rust teams. We will be updating our policies to clearly state that this form of impersonation is not allowed. We will be deciding the exact wording of this policy in the coming weeks.
While it is impossible to tell a user’s intent, many, including the team, have speculated that this action was either associated with or directly related to the recent escalation in community frustration around crates.io policies, in particular, the squatting policy.
Regardless of whether this incident had this intent, the crates.io team would like to reiterate that taking actions such as the one we experienced on Tuesday is neither an appropriate nor an effective way to contribute to dialogue about crates.io policy. We will be adding a policy making it clear that attempting to disrupt crates.io in order to make or further a point is not appropriate and will be considered a malicious attack. We will be deciding on the exact wording of this policy in the coming weeks.
If you feel that a policy is problematic, the correct way to propose a change is by creating an RFC or messaging the team at help@crates.io.
We also have seen a lot of frustration that the crates.io team is not listening to the concerns that are being raised on both official and unofficial Rust forums. We agree that we should improve our communication with the community and intend to develop more processes for folks to communicate with us, as well as for the team to communicate to the general community.
There has been a growing amount of discussion in the community around our squatting policy, and our decision not to have namespacing.
The original squatting policy, published in 2014, contains a lot more information about the rationale behind the policy than what is currently on our website. The full text of the original policy is:
Nobody likes a “squatter”, but finding good rules that define squatting that can be applied mechanically is notoriously difficult. If we require that the package has at least some content in it, squatters will insert random content. If we require regular updates, squatters will make sure to update regularly, and that rule might apply over-zealously to packages that are relatively stable.
A more case-by-case policy would be very hard to get right, and would almost certainly result in bad mistakes and regular controversies.
Instead, we are going to stick to a first-come, first-served system. If someone wants to take over a package, and the previous owner agrees, the existing maintainer can add them as an owner, and the new maintainer can remove them. If necessary, the team may reach out to inactive maintainers and help mediate the process of ownership transfer. We know that this means, in practice, that certain desirable names will be taken early on, and that those early users may not be using them in the most optimal way (whether they are claimed by squatters or just low-quality packages). Other ecosystems have addressed this problem through the use of more colorful names, and we think that this is actually a feature, not a bug, of this system. We talk about this more below.
We will be discussing whether including some of this information in the policy published on our website would help more people to understand the rationale behind our policy, without requiring members of the team to reply to every forum thread wanting to re-litigate what has already been discussed at length.
We wanted to share the details of what happened, and why the crates.io team chose to take action as quickly as possible. The policy changes we’ve described will be discussed during the next several team meetings. Nothing is set in stone until the team has a chance to discuss them further, but we wanted to share the possible changes we’re discussing to limit speculation on what future actions we’re planning on taking.
As a reminder, if you would like to report an incident regarding crates.io you can message the team at help@crates.io. You can view the status of the service at https://crates-io.statuspage.io/ or by following @cratesiostatus on Twitter.
https://blog.rust-lang.org/2018/10/19/Update-on-crates.io-incident.html
|
Mozilla GFX: WebRender newsletter #26 |
Here comes the 26th issue of WebRender’s newsletter. Let’s see what we have this week:
https://mozillagfx.wordpress.com/2018/10/18/webrender-newsletter-26/
|
Mozilla Open Policy & Advocacy Blog: Getting serious about political ad transparency with Ad Analysis for Facebook |
Do you know who is trying to influence your vote online? The votes of your friends and neighbors? Would you even know how to find out? Despite all the talk of election security, the tech industry still falls short on political ad transparency. With the U.S. midterm elections mere weeks away, this is a big problem.
We can’t solve this problem alone, but we can help by making it more visible and easier to understand. Today we are announcing the release of our experimental extension, Ad Analysis for Facebook, to give you greater transparency into the online advertisements, including political ads, you see on Facebook.
Big tech companies have acknowledged this problem but haven’t done enough to address it. In May, Facebook released the Ad Archive, a database of political ads that have run on the platform. In August, Facebook announced a private beta release of its Ad Archive API. But these are baby steps at a time when we need more. The Ad Archive doesn’t provide the integrated, transparent experience that users really need, nor provide the kind of data journalists and researchers require for honest oversight. The Ad Archive API is only available to select organizations. Facebook’s tools aren’t very useful today, which means they won’t provide meaningful transparency before the midterm elections.
This is why we’re launching Ad Analysis for Facebook. It shows you why you were targeted, and how your targeting might differ from other users. You may be surprised! Facebook doesn’t just target you based on the information you’ve provided in your profile and posts. Facebook also infers your interests based on your activities, the news you read, and your relationships with others on Facebook.
Beyond giving you insight into how you were targeted, Ad Analysis for Facebook provides a view of the overall landscape to help you see outside your filter bubble. The extension also displays a high-level overview of the top political advertisers based on targeting by state, gender, and age. You can view ads for each of these targeting criteria — the kinds of ads you would never normally see.
Political ad transparency is just one of the many areas we need to improve to strengthen our electoral processes for the digital age. Transparency alone won’t solve misinformation problems or election hacking. But at Mozilla, we believe transparency is the most critical piece. Citizens need accurate information and powerful tools to make informed decisions. We encourage you to use our new Ad Analysis for Facebook experiment, as well as our other tools and resources to help you navigate the US midterm elections. It’s all part of learning more about who is trying to influence your vote.
The post Getting serious about political ad transparency with Ad Analysis for Facebook appeared first on Open Policy & Advocacy.
|
Mozilla VR Blog: Introducing Spoke: Make your own custom 3D social scenes |
Today we’re thrilled to announce the beta release of Spoke: the easiest way to create your own custom social 3D scenes you can use with Hubs.
Over the last year, our Social Mixed Reality team has been developing Hubs, a WebVR-based social experience that runs right in your browser. In Hubs, you can communicate naturally in VR or on your phone or PC by simply sharing a link.
Along the way, we’ve added features that enable social presence, self-expression, and content sharing. We’ve also offered a variety of scenes to choose from, like a castle space, an atrium, and even a wide open space high in the sky.
However, as we hinted at earlier in the year, we think creating virtual scenes should be easy for anyone, as easy as creating your first webpage.
Spoke lets you quickly take all the amazing 3D content from across the web from sites like Sketchfab and Google Poly and compose it into a custom scene with your own personal touch. You can also use your own 3D models, exported as glTF. The scenes you create can be published, shared, and used in Hubs in just a few clicks. It takes as little as 5 minutes to create a scene and meet up with others in VR. Don’t believe us? Check out our 5 minute tutorial to see how easy it is.
With Spoke, all of the freely-licensed 3D content by thousands of amazing and generous 3D artists can be composed into places you can visit together in VR. We’ve made it easy to import and arrange your own 3D content as well. In a few clicks, you can meet up in a custom 3D scene, in VR, all by just sharing a link. And since you’re in Hubs, you can draw, bring in content from the web, or even take selfies with one another!
We’re beyond excited to get Spoke into your hands, and we can’t wait to see the amazing scenes you create. We’ll be adding more capabilities to Spoke over the coming months which will open up even more possibilities. As always, please join us on our Discord server or file a GitHub issue if you have feedback.
You can download the Spoke beta now for Windows, MacOS, and Linux, or browse our Sketchfab collections for inspiration.
Low Poly Campfire by Mintzkraut
|
Mozilla Security Blog: Encrypted SNI Comes to Firefox Nightly |
TL;DR: Firefox Nightly now supports encrypting the TLS Server Name Indication (SNI) extension, which helps prevent attackers on your network from learning your browsing history. You can enable encrypted SNI today and it will automatically work with any site that supports it. Currently, that means any site hosted by Cloudflare, but we’re hoping other providers will add ESNI support soon.
Although an increasing fraction of Web traffic is encrypted with HTTPS, that encryption isn’t enough to prevent network attackers from learning which sites you are going to. It’s true that HTTPS conceals the exact page you’re going to, but there are a number of ways in which the site’s identity leaks. This can itself be sensitive information: do you want the person at the coffee shop next to you to know you’re visiting cancer.org?
There are four main ways in which browsing history information leaks to the network: the TLS certificate message, DNS name resolution, the IP address of the server, and the TLS Server Name Indication extension. Fortunately, we’ve made good progress shutting down the first two of these: The new TLS 1.3 standard encrypts the server certificate by default and over the past several months, we’ve been exploring the use of DNS over HTTPS to protect DNS traffic. This is looking good and we are hoping to roll it out to all Firefox users over the coming months. The IP address remains a problem, but in many cases, multiple sites share the same IP address, so that leaves SNI.
Ironically, the reason you need an SNI field is because multiple servers share the same IP address. When you connect to the server, it needs to give you the right certificate to prove that you’re connecting to a legitimate server and not an attacker. However, if there is more than one server on the same IP address, then which certificate should it choose? The SNI field tells the server which host name you are trying to connect to, allowing it to choose the right certificate. In other words, SNI helps make large-scale TLS hosting work.
We’ve known that SNI was a privacy problem from the beginning of TLS 1.3. The basic idea is easy: encrypt the SNI field (hence “encrypted SNI” or ESNI). Unfortunately every design we tried had drawbacks. The technical details are kind of complicated, but the basic story isn’t: every design we had for ESNI involved some sort of performance tradeoff and so it looked like only sites which were “sensitive” (i.e., you might want to conceal you went there) would be willing to enable ESNI. As you can imagine, that defeats the point, because if only sensitive sites use ESNI, then just using ESNI is itself a signal that your traffic demands a closer look. So, despite a lot of enthusiasm, we eventually decided to publish TLS 1.3 without ESNI.
However, at the beginning of this year, we realized that there was actually a pretty good 80-20 solution: big Content Distribution Networks (CDNs) host a lot of sites all on the same machines. If they’re willing to convert all their customers to ESNI at once, then suddenly ESNI no longer reveals a useful signal because the attacker can see what CDN you are going to anyway. This realization broke things open and enabled a design for how to make ESNI work in TLS 1.3 (see Alessandro Ghedini’s writeup of the technical details.) Of course, this only works if you can mass-configure all the sites on a given set of servers, but that’s a pretty common configuration.
This is brand-new technology and Firefox is the first browser to get it. At the moment we’re not ready to turn it on for all Firefox users. However, Nightly users can try out this privacy-enhancing feature now by performing the following steps: First, you need to make sure you have DNS over HTTPS enabled (see: https://blog.nightly.mozilla.org/2018/06/01/improving-dns-privacy-in-firefox/). Once you’ve done that, you also need to set the "network.security.esni.enabled" preference in about:config to "true". This should automatically enable ESNI for any site that supports it. Right now, that’s just Cloudflare, which has enabled ESNI for all its customers, but we’re hoping that other providers will follow them. You can go to https://www.cloudflare.com/ssl/encrypted-sni/ to check for yourself that it’s working.
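For reference, a user.js-style sketch of those two steps (the ESNI preference name is taken from this post; the TRR preference names and resolver URL are assumptions based on the DNS-over-HTTPS article linked above, so double-check them in your own Nightly):
// Assumed prefs sketch for Firefox Nightly; verify names and values in about:config.
user_pref("network.trr.mode", 2); // DNS over HTTPS, falling back to native DNS on failure (assumed)
user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query"); // assumed DoH resolver
user_pref("network.security.esni.enabled", true); // encrypt the SNI extension (from this post)
After restarting, the Cloudflare check page linked above should report whether ESNI is actually in use.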
During the development of TLS 1.3 we found a number of problems where network devices (typically firewalls and the like) would break when you tried to use TLS 1.3. We’ve been pretty careful about the design, but it’s possible that we’ll see similar problems with ESNI. In order to test this, we’ll be running a set of experiments over the next few months and measuring for breakage. We’d also love to hear from you: if you enable ESNI and it works or causes any problems, please let us know.
The post Encrypted SNI Comes to Firefox Nightly appeared first on Mozilla Security Blog.
https://blog.mozilla.org/security/2018/10/18/encrypted-sni-comes-to-firefox-nightly/
|
Hacks.Mozilla.Org: Introducing Opus 1.3 |
The Opus Audio Codec gets another major update with the release of version 1.3 (demo).
Opus is a totally open, royalty-free audio codec that can be used for all audio applications, from music streaming and storage to high-quality video-conferencing and VoIP. Six years after its standardization by the IETF, Opus is now included in all major browsers and mobile operating systems. It has been adopted for a wide range of applications, and is the default WebRTC codec.
This release brings quality improvements to both speech and music compression, while remaining fully compatible with RFC 6716. Here are a few of the upgrades that users and implementers will care about the most.
Opus 1.3 includes a brand new speech/music detector. It is based on a recurrent neural network and is both simpler and more reliable than the detector that has been used since version 1.1. The new detector should improve the Opus performance on mixed content encoding, especially at bitrates below 48 kb/s.
There are also many improvements for speech encoding at lower bitrates, both for mono and stereo. The demo contains many more details, as well as some audio samples. This new release also includes a cool new feature: ambisonics support. Ambisonics can be used to encode 3D audio soundtracks for VR and 360 videos.
You can read all the details of The Release of Opus 1.3.
The post Introducing Opus 1.3 appeared first on Mozilla Hacks - the Web developer blog.
|
Mozilla Open Innovation Team: If you build it (together), they will come… |
Mozilla and the Khronos Group collaborate to bring glTF capabilities to Blender
Mozilla is committed to the next wave of creativity in the open Web, in which people can access, create and share immersive VR and AR experiences across platforms and devices. What it takes though is an enthusiastic, skilled and growing community of creators, artists, and also businesses forming a healthy ecosystem, as well as tool support for web developers who build content for it. To overcome a fragmented environment and to allow for broad adoption, we need the leading content format to be open, and frameworks and toolsets to be efficient and interoperable. Ensuring that tools for creation, modification and viewing are open to the entire community and that there aren’t gatekeepers to creativity is one of the main working areas for Mozilla’s Mixed Reality (WebXR) Team. Building on its “Open by Design” strategy, Open Innovation partnered with the team around Lars Bergstrom to find neat yet impactful ways to stimulate external collaboration, co-development and co-funding of technology.
In this case: together with the Khronos Group and developers of existing open source Blender tools, we are providing a complete GL Transmission Format (glTF) import AND export add-on for Blender, offering a powerful and free round-trip workflow for WebXR creators. This effort builds on the work of existing open source tools, and adds support for the newest Blender and glTF features. The tool itself along with further technical details will be available in the coming weeks. The blueprint for the collaboration model that allowed us to get there already exists. This article will explain how we got there, how this model works, and why it can accelerate the ecosystem.
Where do we start? Understand the ecosystem!
The glTF format, the “JPEG of 3D”, is a cornerstone for interoperable 3D tools and services. It is royalty free and coordinated by the Khronos consortium. It not only finds uses in consumer products, such as games, or social VR tools like Mozilla Hubs, but also in industrial contexts. This means that there is a complex ecosystem with very diverse parties, different motivations and business models. If we wanted to align interests to maximise impact we needed to better understand the value creation and value capture across this ecosystem. In order to identify the best leverage points for our efforts we started an analysis focusing on the content creators, their motivations, their current pain points, and the expected impact of tools that could empower them.
The initial analysis was shared with the Khronos glTF working group. This working group brings together ecosystem stakeholders “to cooperate in the creation of open standards that deliver on the promise of cross-platform technology”. Several partners participated in the next iteration analysis of the tools for content creators; the contributions of Don McCurdy from Google, Patrick Cozzi from Cesium and our own Fernando Serrano García were particularly helpful. The result was a clear recommendation to support the development of Blender tools for glTF.
Leveraging Common Interests
Blender holds a unique place among 3D content creation tools — it is free, open source, popular, and available to anyone. We are especially excited by the timing, as the upcoming release of Blender 2.8 promises powerful physically-based rendering (PBR) and a greatly-improved user interface. Round-trip glTF import and export in Blender raises the baseline for WebXR developers, designers, and creators. Anyone, anywhere in the world, can create, edit, and remix glTF models without having to purchase specialized software.
Blender is not only used by professional filmmakers (think Wonder Woman!) but also by organizations like NASA and Airbus, which is also one of our partners in the development of the Blender glTF tools. They are using the glTF format internally to visualize their Blender-created mock-ups both in VR and AR. Since their internal solution has gotten more and more traction and is to be used group-wide on helicopters and aircraft for realistic renderings and light analysis, they commissioned the independent developer Julien Duroure to build some of the foundational pieces of the Blender tools.
After the joint ecosystem analysis Khronos, UX3D, and Mozilla decided to co-fund the development of the Blender importer and exporter tool. UX3D, a Munich based GPU software solution provider, has been doing most of the development work, building on top of the previous work for the Blender importer by Julien.
And because this is open source, he has kept contributing his time outside of his engagement with Airbus to further develop these tools. Adding to existing contributors, having a strong open source community that is able to maintain these tools makes the project sustainable in the long term.
A Model for Future Developments
This model of collaboration — first jointly identifying priorities and then co-funding the development of specific features of software for the benefit of all parties — enables the ecosystem to move faster while at the same time allowing partners to share the investment. With the Blender glTF tools we created the blueprint: the working group members agreed on the scope of work, Khronos collected the funds from the co-funding partners, and managed the engagement with the developers. We hope that other ecosystem partners will follow this model in the future and that we can together accelerate the advancement of the glTF standard and the WebXR ecosystem.
With Firefox Reality and soon with support for Blender we see how the potential of the open Web is made tangible in emerging technologies such as VR and AR. As these technologies start being used to create different types of content, such as art, educational, and professional resources, the tools to both create and access experiences should be open and accessible for everyone.
With the glTF Blender import and export tool being released and ready for beta testers around the time of the Blender conference in late October, we hope that the Blender tools will unleash the creativity of the global community. We’ll share a deep dive on the tool on Mozilla’s Hacks blog in the coming weeks.
If you build it (together), they will come… was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.
|
Kartikaya Gupta: Mozilla Productivity Tip: Managing try pushes |
I tend to do a lot of try pushes for testing changes to Gecko and other stuff, and by using one of TreeHerder's (apparently) lesser-known features, managing these pushes to see their results is really easy. If you have trouble managing your try pushes, consider this:
Open a tab with an author filter for yourself. You can do this by clicking on your email address on any of your try pushes (see highlighted area in screenshot below). Keep this tab open, forever. By default it shows you the last 10 try pushes you did, and if you leave it open, it will auto-update to show newer try pushes that you do.
With this tab open, you can easily keep an eye on your try pushes. Once the oldest try pushes are "done" (all jobs completed, you've checked the result, and you don't care about it anymore), you can quickly and easily drop it off the bottom by clicking on the "Set as bottom of range" menu item on the oldest push that you do want to keep. (Again, see screenshot below).
[photo]
This effectively turns this tab into a rotating buffer of the try pushes you care about, with the oldest ones moving down and eventually getting removed via use of "Set as bottom of range" and the newer ones automatically appearing on top.
Note: clicking on the "Set as bottom of range" link will also reload the TreeHerder page, which means errors that might otherwise accumulate (due to e.g. sleeping your laptop for a time, or a new TreeHerder version getting deployed) get cleared away, so it's even self-healing!
Bonus tip: Before you clear away old try pushes that you don't care about, quickly go through them to make sure they are all marked "Complete". If they still have jobs running that you don't care about, do everybody a favor and hit the push cancellation button (the "X" icon next to "View Tests") before resetting the bottom of range, as that will ensure we don't waste machine time running jobs nobody cares about.
Extra bonus tip: Since using this technique makes all those "Thank you for your try submission" Taskcluster emails redundant, set up an email filter to reroute those emails to the /dev/null of your choice. Less email results in a happier you!
Final bonus tip: If you need to copy a link to a specific try push (for pasting in a bug, for example), right-click on the timestamp for that try push (to the left of your email address), and copy the URL for that link. That link is for that specific push, and can be shared to get the desired results.
And there you have it, folks, a nice simple way to manage all your try pushes on a single page and not get overwhelmed.
|
Marco Castelluccio: Searchfox in Phabricator extension |
Being able to search code while reviewing can be really useful, but unfortunately it’s not so straightforward. Many people resort to loading the patch under review in an IDE in order to be able to search code.
Being able to do it directly in the browser can make the workflow much smoother.
To support this use case, I’ve built an extension for Phabricator that integrates Searchfox code search functionality directly in Phabricator differentials. This way reviewers can benefit from hovers, go-to-definition and find-references without having to resort to the IDE or without having to manually navigate to the code on searchfox.org or dxr.mozilla.org. Moreover, compared to searchfox.org or dxr.mozilla.org, the extension highlights both the pre-patch view and the post-patch view, so reviewers can see how pre-existing variables/functions are being used after the patch.
To summarize, the features of the extension currently are the ones described above: Searchfox-powered hovers, go-to-definition and find-references, with highlighting applied to both the pre-patch and post-patch views of a differential.
Here’s a screenshot from the extension in action:
I’m planning to add support for sticky highlighting and blame information (when hovering on the line number on the left side). Indeed, being able to look at the past history of a line is another feature sought after by reviewers.
You can find the extension on AMO, at https://addons.mozilla.org/addon/searchfox-phabricator/.
The source code, admittedly not great as it was written as an experiment, lives at https://github.com/marco-c/mozsearch-phabricator-addon.
Should you find any issues, please file them on https://github.com/marco-c/mozsearch-phabricator-addon/issues.
https://marco-c.github.io/2018/10/18/searchfox-in-phabricator.html
|