Mozilla GFX: WebRender newsletter #36 |
Hi everyone! This week’s highlight is Glenn’s picture caching work which almost landed about a week ago and landed again a few hours ago. Fingers crossed! If you don’t know what picture caching means and are interested, you can read about it in the introduction of this newsletter’s season 01 episode 28.
On a more general note, the team continues focusing on the remaining list of blocker bugs which grows and shrinks depending on when you look, but the overall trend is looking good.
Without further ado:
The team keeps going through the remaining blockers (14 P2 bugs and 29 P3 bugs at the time of writing).
In about:config, set the pref “gfx.webrender.all” to true and restart the browser.
The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.
https://mozillagfx.wordpress.com/2019/01/17/webrender-newsletter-36/
|
Mozilla Localization (L10N): L10n report: January edition |
New localizers
Are you a locale leader and want us to include new members in our upcoming reports? Contact us!
The localization cycle for Firefox 66 in Nightly is approaching its end, and Tuesday (Jan 15) was the last day to get changes into Firefox 65 before it moves to release (Jan 29). These are the key dates for the next cycle:
As of January, localization of the Pocket add-on has moved back into the Firefox main project. That’s a positive change for localization, since it gives us a clearer schedule for updates, while before they were complex and sparse. All existing translations from the stand-alone process were imported into Mercurial repositories (and Pontoon).
In terms of prioritization, there are a couple of features to keep an eye on:
As explained in this blog post, Test Pilot is reaching its end of life. The website localization has been updated in Pontoon to include messages around this change, while other experiments (Send, Monitor) will continue to exist as stand-alone projects. Screenshots is also going to see changes in the upcoming days, mostly on the server side of the project.
Just like for Firefox desktop, the last day to get in localizations for Fennec 65 was Tuesday, Jan 15. Please see the desktop section above for more details.
The Firefox iOS v15 localization deadline was Friday, January 11. The app should be released to everyone by Jan 29th, after a phased roll-out. This time around we’ve added seven new locales: Angika, Burmese, Corsican, Javanese, Nepali, Norwegian Bokmål and Sundanese. This means that we’re currently shipping 87 locales out of the 88 that are being localized – which is more than twice as many as when we first shipped the app. Congrats to all the volunteer localizers involved in this effort over the years!
And stay tuned for an update on the upcoming v16 l10n timeline soon.
We’re also still working with Lockbox Android team in order to get the project plugged in to Pontoon, and you can expect to see something come up in the next couple of weeks.
The Firefox Reality project is also going to be available and open for localization very soon. We’re working out the specifics right now, and the timeline will be shared once everything is ironed out.
Mozilla.org has a few updates.
Mozilla’s big end-of-year push for donations has passed, and thanks in no small part to your efforts, the Foundation’s finances are in much better shape to pick up the fight this year where it left off before the break. Thank you all for your help!
In these first days of 2019, the fundraising team is taking advantage of the quiet time to modernize the donation receipts, with a better email sent to donors, and to migrate the receipts to the same infrastructure used to send Mozilla & Firefox newsletters. Content for the new receipts should be exposed in the Fundraising project by the end of the month for the 10-15 locales with the most donations in 2018.
The Advocacy team is still working on the misinformation campaign in Europe, with a first survey coming up that will be sent to people subscribed to the Mozilla newsletter, to gauge their current attitudes toward misinformation. Next steps will include launching a campaign about political ads ahead of the EU elections and then promoting anti-disinformation tools. Let’s do this!
We re-launched the ability to delete translations. First you need to reject a translation, and then click on the trash can icon, which only appears next to rejected translations. The delete functionality had originally been replaced by the reject functionality, but over time it became obvious that there are valid use cases for both features to co-exist. See bug 1397377 for more details about why we first removed and then restored this feature.
Image by Elio Qoshi
Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!
Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.
https://blog.mozilla.org/l10n/2019/01/17/l10n-report-january-edition/
|
Nick Cameron: proc-macro-rules |
I'm announcing a new library for procedural macro authors: proc-macro-rules (and on crates.io). It allows you to do macro_rules-like pattern matching inside a procedural macro. The goal is to smooth the transition from declarative to procedural macros (this works pretty well when used with the quote crate).
(This is part of my Christmas yak mega-shave. That might someday get a blog post of its own, but I only managed to shave about 1/3 of my yaks, so it might take till next Christmas).
Here's an example,
rules!(tokens => {
    ($finish:ident ($($found:ident)*) # [ $($inner:tt)* ] $($rest:tt)*) => {
        for f in found {
            do_something(finish, f, inner, rest[0]);
        }
    }
    (foo $($bar:expr)?) => {
        match bar {
            Some(e) => foo_with_expr(e),
            None => foo_no_expr(),
        }
    }
});
The example is kind of nonsense. The interesting thing is that the syntax is very similar to macro_rules macros. The patterns which are matched are exactly the same as in macro_rules (modulo bugs, of course). Metavariables in the pattern (e.g., $finish or $found in the first arm) are bound to fresh variables in the arm's body (e.g., finish and found). The types reflect the type of the metavariable (for example, $finish has type syn::Ident). Because $found occurs inside a $(...)*, it is matched multiple times and so has type Vec<syn::Ident>.
The syntax is:
rules!( $tokens:expr => { $($arm)* })
where $tokens evaluates to a TokenStream and the syntax of an $arm is given by
($pattern) => { $body }
or
($pattern) => $body,
where $pattern is a valid macro_rules pattern (which is not yet verified by the library, but should be) and $body is Rust code (i.e., an expression or block).
The intent of this library is to make it easier to write the 'frontend' of a procedural macro, i.e., to make parsing the input a bit easier. In particular to make it easy to convert a macro_rules macro to a procedural macro and replace a small part with some procedural code, without having to roll off the 'procedural cliff' and rewrite the whole macro.
As an example of converting macros, here is a declarative macro which is sort-of like the vec macro (example usage: let v = vec![a, b, c]):
macro_rules! vec {
    () => {
        Vec::new()
    };
    ( $( $x:expr ),+ ) => {
        {
            let mut temp_vec = Vec::new();
            $(
                temp_vec.push($x);
            )*
            temp_vec
        }
    };
}
Converting to a procedural macro becomes a mechanical conversion:
use quote::quote;
use proc_macro::TokenStream;
use proc_macro_rules::rules;

#[proc_macro]
pub fn vec(input: TokenStream) -> TokenStream {
    rules!(input.into() => {
        () => { quote! {
            Vec::new()
        }}
        ( $( $x:expr ),+ ) => { quote! {
            {
                let mut temp_vec = Vec::new();
                #(
                    temp_vec.push(#x);
                )*
                temp_vec
            }
        }}
    }).into()
}
Note that we are using the quote crate to write the bodies of the match arms. That crate allows writing the output of a procedural macro in a similar way to a declarative macro by using quasi-quoting.
I'm going to dive in a little bit to the implementation because I think it is interesting. You don't need to know this to use proc-macro-rules, and if you only want to do that, then you can stop reading now.
rules is a procedural macro, using syn for parsing, and quote for code generation. The high-level flow is that we parse all code passed to the macro into an AST, then handle each rule in turn (generating a big if/else). For each rule, we make a pass over the rule to collect variables and compute their types, then lower the AST to a 'builder' AST (which duplicates some work at the moment), then emit code for the rule. That generated code includes Matches and MatchesBuilder structs to collect and store bindings for metavariables. We also generate code which uses syn to parse the supplied tokenstream into the Matches struct by pattern-matching the input.
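As a rough illustration of the kind of struct that might be generated for the first arm of the example near the top of this post (this is a hand-written sketch, not the library's actual output; the field types are guesses based on how each metavariable is matched):
struct Matches {
    finish: syn::Ident,                  // $finish:ident, matched exactly once
    found: Vec<syn::Ident>,              // $found:ident, inside a $(...)* repeat
    inner: Vec<proc_macro2::TokenTree>,  // $inner:tt, inside a $(...)* repeat
    rest: Vec<proc_macro2::TokenTree>,   // $rest:tt, inside a $(...)* repeat
}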
The pattern matching is a little bit interesting: because we are generating code (rather than interpreting the pattern) the implementation is very different from macro_rules. We generate a DFA, but the pattern is not reified in a data structure but in the generated code. We only execute the matching code once, so we must be at the same point in the pattern for all potential matches, but they can be at different points in the input. These matches are represented in the MatchSet. (I didn't look around for a nice way of doing this, so there may be something much better, or I might have made an obvious mistake).
The key functions on a MatchSet are expect and fork. Both operate by taking a function from the client which operates on the input. expect compares each in-progress match with the input and if the input can be matched we continue; if it cannot, then the match is deleted. fork iterates over the in-progress matches, forking each one. One match is matched against the next element in the pattern, and one is not. For example, if we have a pattern ab?c and a single match which has matched a in the input then we can fork and one match will attempt to match b then c, and one will just match c.
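To make that a bit more concrete, here is a minimal, hand-written sketch of the idea (not the library's actual generated code; InProgress, its fields and the string-based input are simplified stand-ins):
// Several in-progress matches advance through the same input; expect prunes
// the ones that fail, fork duplicates every match so that one copy tries an
// optional part of the pattern and the other skips it.
#[derive(Clone)]
struct InProgress {
    pos: usize,            // how far into the input this match has got
    bindings: Vec<String>, // metavariable values collected so far
}

struct MatchSet {
    input: Vec<String>,    // a stand-in for the real token stream
    matches: Vec<InProgress>,
}

impl MatchSet {
    // Apply the client's function to every in-progress match; drop those for
    // which the next piece of input cannot be matched.
    fn expect<F: FnMut(&mut InProgress, &[String]) -> bool>(&mut self, mut f: F) {
        let input = self.input.clone();
        self.matches.retain_mut(|m| f(m, &input));
    }

    // Duplicate every in-progress match: one copy goes on to match the next
    // element of the pattern, the duplicate does not.
    fn fork(&mut self) {
        let skipped = self.matches.clone();
        self.matches.extend(skipped);
    }
}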
One interesting aspect of matching is handling metavariable matching in repeated parts of a pattern, e.g., in $($n:ident: $e:expr),*. Here we would repeatedly try to match $n:ident: $e:expr and find values for n and e; we then need to push each value into a Vec<Ident> and a Vec<Expr>. We call this 'hoisting' the variables (since we are moving out of a scope while converting a T to a U<T>). We generate code for this which uses an implementation of hoist in the Fork trait for each MatchesBuilder, a MatchesHandler helper struct for the MatchSet, and generated code for each kind of repeat which can appear in a pattern.
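Again as a hand-written sketch rather than the code the library actually generates (and glossing over the Fork trait and MatchesHandler; InnerMatches is a made-up stand-in), hoisting for the pattern $($n:ident: $e:expr),* might look something like this:
// Simplified stand-ins for the generated types described above.
struct InnerMatches {
    n: Option<syn::Ident>,
    e: Option<syn::Expr>,
}

struct Matches {
    n: Vec<syn::Ident>,
    e: Vec<syn::Expr>,
}

impl Matches {
    // After each repetition of the inner pattern succeeds, its single values
    // are hoisted out of the inner scope into the outer Vecs.
    fn hoist(&mut self, inner: InnerMatches) {
        if let (Some(n), Some(e)) = (inner.n, inner.e) {
            self.n.push(n);
            self.e.push(e);
        }
    }
}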
|
Hacks.Mozilla.Org: Augmented Reality and the Browser — An App Experiment |
We all want to build the next (or perhaps the first) great Augmented Reality app. But there be dragons! The space is new and not well defined. There aren’t any AR apps that people use every day to serve as starting points or examples. Your new ideas have to compete against an already very high quality bar of traditional 2d apps. And building a new app can be expensive, especially for native app environments. This makes AR apps still somewhat uncharted territory, requiring a higher initial investment of time, talent and treasure.
But this also creates a sense of opportunity; a chance to participate early before the space is fully saturated.
From our point of view the questions are: What kinds of tools do artists, developers, designers, entrepreneurs and creatives of all flavors need to be able to easily make augmented reality experiences? What kinds of apps can people build with tools we provide?
For example: Can I watch Trevor Noah on the Daily Show this evening, and then release an app tomorrow that is a riff on a joke he made the previous night? A measure of success is being able to speak in rich media quickly and easily, to be a timely part of a global conversation.
With Blair MacIntyre‘s help I wrote an experiment to play-test a variety of ideas exploring these questions. In this comprehensive post-mortem I’ll review the app we made, what we learned and where we’re going next.
To answer some of the above questions, we started out surveying AR and VR developers, asking them their thoughts and observations. We had some rules of thumb. What we looked for were AR use cases that people value, that are meaningful enough, useful enough, make enough of a difference, that they might possibly become a part of people’s lives.
Existing AR apps also provided inspiration. One simple AR app I like for example is AirMeasure, which is part of a family of similar apps such as the Augmented Reality Measuring Tape. I use it once or twice a month and while not often, it’s incredibly handy. It’s an app with real utility and has 6500 reviews on the App Store – so there’s clearly some appetite already.
Sean White, Mozilla’s Chief R&D Officer, has a very specific definition for an MVP (minimum viable product). He asks: What would 100 people use every day?
When I hear this, I hear something like: What kind of experience is complete, compelling, and useful enough, that even in an earliest incarnation it captures a core essential quality that makes it actually useful for 100 real world people, with real world concerns, to use daily even with current limitations? Shipping can be hard, and finding those first users harder.
New Pixel phones, iPhones and other emerging devices such as the Magic Leap already support Augmented Reality. They report where the ground is, where walls are, and other kinds of environment sensing questions critical for AR. They support pass-through vision and 3d tracking and registration. Emerging standards, notably WebXR, will soon expose these powers to the browser in a standards-based way, much like the way other hardware features are built and made available in the browser.
Native app development toolchains are excellent but there is friction. It can be challenging to jump through the hoops required to release a product across several different app stores or platforms. Costs that are reasonable for an AAA title may not be reasonable for a smaller project. If you want to knock out an app tonight for a client tomorrow, or post an app as a response to an article in the press or a current event — it can take too long.
With AR support coming to the browser there’s an option now to focus on telling the story rather than worrying about the technology, costs and distribution. Browsers historically offer lower barriers to entry, and instant deployment to millions of users, unrestricted distribution and a sharing culture. Being able to distribute an app at the click of a link, with no install, lowers the activation costs and enables virality. This complements other development approaches, and can be used for rapid prototyping of ideas as well.
In our experiment we explored what it would be like to decorate the world with virtual post-it notes. These notes can be posted from within the app, and they stick around between play sessions. Players can in fact see each other, and can see each other moving the notes in real time. The notes are geographically pinned and persist forever.
Using our experiment, a company could decorate their office with hints about how the printers work, or show navigation breadcrumbs to route a bewildered new employee to a meeting. Alternatively, a vacationing couple could walk into an AirBNB, open an “ARBNB” app (pardon the pun) and view post-it notes illuminating where the extra blankets are or how to use the washer.
We had these kinds of aspirational use case goals for our experiment:
Taking these ideas we wrote a standalone app for the iPhone 6S or higher — which you can try at arpersist.glitch.me and play with the source code at https://github.com/anselm/arpersist.
Here’s a short video of the app running, which you might have seen some days ago in my tweet:
And more detail on how to use the app if you want to try it yourself:
Here’s an image of looking at the space through the iPhone display:
And an image of two players – each player can see the other player’s phone in 3d space and a heart placed on top of that in 3d:
You’ll need the WebXR Viewer for iOS, which you can get on the iTunes store. (WebXR standards are still maturing so this doesn’t yet run directly in most browsers.)
This work is open source, it’s intended to be re-used and intended to be played with, but also — because it works against non-standard browser extensions — it cannot be treated as something that somebody could build a commercial product with (yet).
The videos embedded above offer a good description: Basically, you open ARPersist (using the WebXR Viewer linked above on an iPhone 6s or higher) by going to the URL (arpersist.glitch.me). This drops you into a pass-through vision display. You’ll see a screen with four buttons on the right. The “seashell” button at the bottom takes you to a page where you can load and save maps. You’ll want to “create an anchor” and optionally “save your map”. At this point, from the main page, you can use the top icon to add new features to the world. Objects you place are going to stick to the nearest floor or wall. If you join somebody else’s map, or are at a nearby geographical location, you can see other players as well in real time.
This app features downloadable 3d models from Sketchfab. These are the assets I’m using:
Coming out of that initial phase of development I’ve had many surprising realizations, and even a few eureka moments. Here’s what went well, which I describe as essential attributes of the AR experience:
This gets a bit geeky — but the main principle is that if you use modern public key cryptography to self-sign your own documents, then a central service is not needed to validate your identity. Here I implemented a public/private keypair system similar to Metamask. The strategy is that the user provides a long phrase and then I use Ian Coleman’s Mnemonic Code Converter bip39 to turn that into a public/private keypair. (In this case, I am using bitcoin key-signing algorithms.)
In my example implementation, a given keypair can be associated with a given collection of objects, and it helps prune a core responsibility away from any centralized social network. Users self-sign everything they create.
We also identified many challenges. Here are some of the ones we faced:
This is ultimately a hardware problem. Apple/Google have done an unbelievable job with pure software but the hardware is not designed for the job. Probably the best short-term answer is to use a QRCode. A longer term answer is to just wait a year for better hardware. Apparently next-gen iPhones will have active depth sensors and this may be an entirely solved problem in a year or two. (The challenge is that we want to play with the future before it arrives — so we do need some kind of temporary solution for now.)
Here’s where I feel this work will go next:
This research wasn’t just focused on user experience but also explored internal architecture. As a general rule I believe that the architecture behind an MVP should reflect a mature partitioning of jobs that the fully-blown app will deliver. In nascent form, the MVP has to architecturally reflect a larger code base. The current implementation of this app consists of these parts (which I think reflect important parts of a more mature system):
One other technical point deserves a bit more elaboration. Before we started we had to answer the question of “how do we represent or store the location of virtual objects?”. Perhaps this isn’t a great conversation starter at the pub on a Saturday night, but it’s important nevertheless.
We take so many things for granted in the real world – signs, streetlights, buildings. We expect them to stick around even when you look away. But programming is like universe building, you have to do everything by hand.
The approach we took may seem obvious: to define object position with GPS coordinates. We give every object a latitude, longitude and elevation (as well as orientation).
But the gotcha is that phones today don’t have precise geolocation. We had to write a wrapper of our own. When users start our app we build up (or load) an augmented reality map of the area. That map can be saved back to a server with a precise geolocation. Once there is a map of a room, then everything in that map is also very precisely geo-located. This means everything you place or do in our app is in fact specified in earth global coordinates.
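As a rough, language-agnostic sketch of that idea (written here in Rust purely for illustration; this is not the app’s actual code, and GeoPose, LocalOffset and to_global are made-up names), one precisely geo-located anchor lets any object placed relative to it be expressed in global coordinates:
struct GeoPose {
    latitude: f64,   // degrees
    longitude: f64,  // degrees
    elevation: f64,  // meters
}

struct LocalOffset {
    east: f64,   // meters east of the anchor
    north: f64,  // meters north of the anchor
    up: f64,     // meters above the anchor
}

// Convert an object's offset from a geo-located anchor into approximate
// global coordinates, using a simple local-tangent-plane approximation
// (plenty accurate at room scale).
fn to_global(anchor: &GeoPose, offset: &LocalOffset) -> GeoPose {
    let meters_per_deg_lat = 111_320.0;
    let meters_per_deg_lon = 111_320.0 * anchor.latitude.to_radians().cos();
    GeoPose {
        latitude: anchor.latitude + offset.north / meters_per_deg_lat,
        longitude: anchor.longitude + offset.east / meters_per_deg_lon,
        elevation: anchor.elevation + offset.up,
    }
}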
Blair points out that although modern smartphones (or devices) today don’t have very accurate GPS, this is likely to change soon. We expect that in the next year or two GPS will become hyper-precise – augmented by 3d depth maps of the landscape – making our wrapper optional.
Our exploration has been taking place in conversation and code. Personally I enjoy this praxis — spending some time talking, and then implementing a working proof of concept. Nothing clarifies thinking like actually trying to build an example.
At the 10,000 foot view, at the idealistic end of the spectrum, it is becoming obvious that we all have different ideas of what AR is or will be. The AR view I crave is one of many different information objects from many different providers — personal reminders, city traffic overlays, weather bots, friend location notifiers, contrails of my previous trajectories through space, etc. It feels like a creative medium. I see users wanting to author objects, where different objects have different priorities, where different objects are “alive” — that they have their own will, mobility and their own interactions with each other. In this way an AR view echoes a natural view of the default world — with all kinds of entities competing for our attention.
Stepping back even further — at a 100,000 foot view — there are several fundamental communication patterns that humans use creatively. We use visual media (signage) and we use audio (speaking, voice chat). We have high-resolution, high-fidelity expressive capabilities that include our body language, our hand gestures, and especially a hugely rich facial expressiveness. We also have text-based media — and many other kinds of media. It feels like when anybody builds a communication medium that easily allows humans to channel some of their high-bandwidth needs over that pipeline, that medium can become very popular. Skype, messaging, wikis, even music — all of these things meet fundamental expressive human drives; they are channels for output and expressiveness.
In that light a question that’s emerging for me is “Is sharing 3D objects in space a fundamental communication medium?”. If so then the question becomes more “What are reasons to NOT build a minimal capability to express the persistent 3d placement of objects in space?”. Clearly work needs to make money and be sustainable for people who make the work. Are we tapping into something fundamental enough, valuable enough, even in early incarnations, that people will spend money (or energy) on it? I posit that if we help express fundamental human communication patterns — we all succeed.
What’s surprising is the power of persistence. When the experience works well I have the mental illusion that my room indeed has these virtual images and objects in it. Our minds seem deeply fooled by the illusion of persistence. Similar to using the Magic Leap there’s a sense of “magic” — the sense that there’s another world — that you can see if you squint just right. Even after you put down the device that feeling lingers. Augmented Reality is starting to feel real.
The post Augmented Reality and the Browser — An App Experiment appeared first on Mozilla Hacks - the Web developer blog.
|
The Mozilla Blog: Evolving Firefox’s Culture of Experimentation: A Thank You from the Test Pilot Program |
For the last three years Firefox has invested heavily in innovation, and our users have been an essential part of this journey. Through the Test Pilot Program, Firefox users have been able to help us test and evaluate a variety of potential Firefox features. Building on the success of this program, we’re proud to announce today that we’re evolving our approach to experimentation even further.
Test Pilot was designed to harness the energy of our most passionate users. We gave them early prototypes and product explorations that weren’t ready for wide release. In return, they gave us feedback and patience as these projects evolved into the highly polished features within our products today. Through this program we have been able to iterate quickly, try daring new things, and build products that our users have been excited to embrace.
Since the beginning of the Test Pilot program, we’ve built or helped build a number of popular Firefox features. Activity Stream, which now features prominently on the Firefox homepage, was in the first round of Test Pilot experiments. Activity Stream brought new life to an otherwise barren page and made it easier to recall and discover new content on the web. The Test Pilot team continued to draw the attention of the press and users alike with experiments like Containers that paved the way for our highly successful Facebook Container. Send made private, encrypted, file sharing as easy as clicking a button. Lockbox helped you take your Firefox passwords to iOS devices (and soon to Android). Page Shot started as a simple way to capture and share screenshots in Firefox. We shipped the feature now known as Screenshots and have since added our new approach to anti-tracking that first gained traction as a Test Pilot experiment.
Test Pilot performed better than we could have ever imagined. As a result of this program we’re now in a stronger position where we are using the knowledge that we gained from small groups, evangelizing the benefits of rapid iteration, taking bold (but safe) risks, and putting the user front and center.
We’re applying these valuable lessons not only to continued product innovation, but also to how we test and ideate across the Firefox organization. So today, we are announcing that we will be moving to a new structure that will demonstrate our ability to innovate in exciting ways and as a result we are closing the Test Pilot program as we’ve known it.
Migrating to a new model doesn’t mean we’re doing fewer experiments. In fact, we’ll be doing even more! The innovation processes that led to products like Firefox Monitor are no longer the responsibility of a handful of individuals but rather the entire organization. Everyone is responsible for maintaining the Culture of Experimentation Firefox has developed through this process. These techniques and tools have become a part of our very DNA and identity. That is something to celebrate. As such, we won’t be uninstalling any experiments you’re using today. In fact, many of the Test Pilot experiments and features will find their way to Addons.Mozilla.Org, while others like Send and Lockbox will continue to take in more input from you as they evolve into stand-alone products.
We want to thank Firefox users for their input and support of product features and functionality testing through the Test Pilot Program. We look forward to continuing to work closely with our users who are the reason we build Firefox in the first place. In the coming months look out for news on how you can get involved in the next stage of our experimentation.
In the meantime, the Firefox team will continue to focus on the next release and what we’ll be developing in the coming year, while other Mozillians chug away at developing equally exciting and user-centric product solutions and services. You can get a sneak peek at some of these innovations at Mozilla Labs, which touches everything from voice capability to IoT to AR/VR.
And so we say goodbye and thank you to Test Pilot for helping us usher in a bright future of innovation at Mozilla.
The post Evolving Firefox’s Culture of Experimentation: A Thank You from the Test Pilot Program appeared first on The Mozilla Blog.
|
Firefox Nightly: Moving to a Profile per Install Architecture |
With Firefox 67 you’ll be able to run different Firefox installs side by side by default.
Supporting profiles per installation is a feature that has been requested by pre-release users for a long time now and we’re pleased to announce that starting with Firefox 67 users will be able to run different installs of Firefox side by side without needing to manage profiles.
Firefox saves information such as bookmarks, passwords and user preferences in a set of files called your profile. This profile is stored in a location separate from the Firefox program files.
More details on profiles can be found here.
Previously, all Firefox versions shared a single profile by default. With Firefox 67, Firefox will begin using a dedicated profile for each Firefox version (including Nightly, Beta, Developer Edition, and ESR). This will make Firefox more stable when switching between versions on the same computer and will also allow you to run different Firefox installations at the same time:
If you do nothing, your profile data will be different on each version of Firefox.
If you would like the information you save to Firefox to be the same on all versions, you can use a Firefox Account to keep them in sync.
Sync is the easiest way to make your profiles consistent on all of your versions of Firefox. You also get additional benefits like sending tabs and secure password storage. Get started with Sync here.
You will not lose any personal data or customizations. Any previous profile data is safe and attached to the first Firefox installation that was opened after this change.
Users of only one Firefox install, or users of multiple Firefox installs who had already set up different profiles for different installations, will not notice the change.
We really hope that this change will make it simpler for Firefox users to start running Nightly. If you come across a bug or have any suggestions we really welcome your input through our support channels.
Users who have already manually created separate profiles for different installations will not notice the change (this has been the advised procedure on Nightly for a while).
https://blog.nightly.mozilla.org/2019/01/14/moving-to-a-profile-per-install-architecture/
|
Cameron Kaiser: TenFourFox FPR12b1 available |
Unfortunately, we continue to accumulate difficult-to-solve JavaScript bugs. The newest one is issue 541, which affects Github most severely and is hampering my ability to use the G5 to work in the interface. This one could be temporarily repaired with some ugly hacks and I'm planning to look into that for FPR13, but I don't have this proposed fix in FPR12 since it could cause parser regressions and more testing is definitely required. However, the definitive fix is the same one needed for the frustrating issue 533, i.e., the new frontend bindings introduced with Firefox 51. I don't know if I can do that backport (both with respect to the technical issues and the sheer amount of time required) but it's increasingly looking like it's necessary for full functionality and it may be more than I can personally manage.
Meanwhile, FPR12 is scheduled for parallel release with Firefox 60.5/65 on January 29. Report new issues in the comments (as always, please verify the issue doesn't also occur in FPR11 before reporting a new regression, since sites change more than our core does).
http://tenfourfox.blogspot.com/2019/01/tenfourfox-fpr12b1-available.html
|
The Servo Blog: This Week In Servo 123 |
In the past three weeks, we merged 72 PRs in the Servo organization’s repositories.
Congratulations to dlrobertson for their new reviewer status for the ipc-channel library!
Our roadmap is available online. Plans for 2019 will be published soon.
This week’s status updates are here.
Recent work includes the ChannelSplitterNode of the WebAudio API, the HTMLTrackElement API, and the source API for message events.
Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!
|
Hacks.Mozilla.Org: Designing the Flexbox Inspector |
The new Flexbox Inspector, created by Firefox DevTools, helps developers understand the sizing, positioning, and nesting of Flexbox elements. You can try it out now in Firefox DevEdition or join us for its official launch in Firefox 65 on January 29th.
The UX challenges of this tool have been both frustrating and a lot of fun for our team. Built on the basic concepts of the CSS Grid Inspector, we sought to expand on the possibilities of what a design tool could be. I’m excited to share a behind-the-scenes look at the UX patterns and processes that drove our design forward.
CSS Flexbox is an increasingly popular layout model that helps in building robust dynamic page layouts. However, it has a big learning curve—at the beginning of this project, our team wasn’t sure if we understood Flexbox ourselves, and we didn’t know what the main challenges were. So, we gathered data to help us design the basic feature set.
Our earliest research on design-focused tools included interviews with developer/designer friends and community members who told us they wanted to understand Flexbox better.
We also ran a survey to rank the Flexbox features folks most wanted to see. Min/max width and height constraints received the highest score. The ranking of shrink/grow features was also higher than we expected. This greatly influenced our plans, as we had originally assumed these more complicated features could wait for a version 2.0. It was clear however that these were the details developers needed most.
Most of the early design work took the form of spirited brainstorming sessions in video chat, text chat, and email. We also consulted the experts: Daniel Holbert, our Gecko engine developer who implemented the Flexbox spec for Firefox; Dave Geddes, CSS educator and creator of the Flexbox Zombies course; and Jen Simmons, web standards champion and designer of the Grid Inspector.
The discussions with friendly and passionate colleagues were among the best parts of working on this project. We were able to deep-dive into the meaty questions, the nitty-gritty details, and the far-flung ideas about what could be possible. As a designer, it is amazing to work with developers and product managers who care so much about the design process and have so many great UX ideas.
After our info-gathering, we worked to build our own mental models of Flexbox.
While trying to learn Flexbox myself, I drew diagrams that show its different features.
My colleague Gabriel created a working prototype of a Flexbox highlighter that greatly influenced our first launch version of the overlay. It’s a monochrome design similar to our Grid Inspector overlay, with a customizable highlight color to make it clearly visible on any website.
We use a dotted outline for the container, solid outlines for items, and diagonal shading between the items to represent the free space created by justify-content and margins.
We got more adventurous with the Flexbox pane inside DevTools. The flex item diagram (or “minimap” as we love to call it) shows a visualization of basis, shrink/grow, min/max clamping, and the final size—each attribute appearing only if it’s relevant to the layout engine’s sizing decisions.
Many other design ideas, such as these flex container diagrams, didn’t make it into the final MVP, but they helped us think through the options and may get incorporated later.
With help from our Gecko engineers, we were able to display a chart with step-by-step descriptions of how a flex item’s size is determined. Basic color-coding between the diagram and chart helps to connect the two UIs.
Flex badges in the markup view serve as indicators of flex containers as well as shortcuts for turning on the in-page overlay. Early data shows that this is the most common way to turn on the overlay; the toggle switch in the Layout panel and the button next to the display:flex declaration in Rules are two other commonly used methods. Having multiple entry points accommodates different workflows, which may focus on any one of the three Inspector panels.
Building new tools can be risky due to the presumption of modifying developers’ everyday workflows. One of my big fears was that we’d spend countless hours on a new feature only to hide it away somewhere inside the complicated megaplex that is Firefox Developer Tools. This could result in people never finding it or not bothering to navigate to it.
To invite usage, we automatically show Flexbox info in the Layout panel whenever a developer selects a flex container or item inside the markup view. The Layout panel will usually be visible by default in the third Inspector column which we added in Firefox 62. From there, the developer can choose to dig deeper into flex visualizations and relationships.
One new thing we’re trying is a page-style navigation in which the developer goes “forward a page” to traverse down the tree (to child elements), or “back a page” to go up the tree (to parent elements). We’re also making use of a select menu for jumping between sibling flex items. Inspired by mobile interfaces, the Firefox hamburger menu, and other page-style UIs, it’s a big experimental departure from the simpler navigation normally used in DevTools.
One of the trickier parts of the structure was coming up with a cohesive design for flex containers, items, and nested container-items. My colleague Patrick figured out that we should have two types of flex panes inside the Layout panel, showing whichever is relevant: an Item pane or a Container pane. Both panes show up when the element is both a container and an item.
When hovering over element names inside the Flexbox panes, we highlight the element in the page, strengthening the connection between the code and the output without including extra ‘inspect’ icons or other steps. I plan to introduce more of this type of intuitive hover behavior into other parts of DevTools.
After lots of iteration, I created a high-fidelity prototype to share with our community channels. We received lots of helpful comments that fed back into the design.
We had our first foray into formal user testing, which was helpful in revealing the confusing parts of our tool. We plan to continue improving our user research process for all new projects.
Later this month, developers from our team will be writing a more technical deep-dive about the Flexbox Inspector. Meanwhile, here are some fun tidbits from the dev process: Lots and lots of issues were created in Bugzilla to organize every implementation task of the project. Silly test pages, like this one, created by my colleague Mike, were made to test out every Flexbox situation. Our team regularly used the tool in Firefox Nightly with various sites to dog-food the tool and find bugs.
2018 was a big year for Firefox DevTools and the new Design Tools initiative. There were hard-earned lessons and times of doubt, but in the end, we came together as a team and we shipped!
We have more work to do in improving our UX processes, stepping up our research capabilities, and understanding the results of our decisions. We have more tools to build—better debugging tools for all types of CSS layouts and smoother workflows for CSS development. There’s a lot more we can do to improve the Flexbox Inspector, but it’s time for us to put it out into the world and see if we can validate what we’ve already built.
Now we need your help. It’s critical that the Flexbox Inspector gets feedback from real-world usage. Give it a spin in DevEdition, and let us know via Twitter or Discourse if you run into any bugs, ideas, or big wins.
Thanks to Martin Balfanz, Daniel Holbert, Patrick Brosset, and Jordan Witte for reviewing drafts of this article.
The post Designing the Flexbox Inspector appeared first on Mozilla Hacks - the Web developer blog.
https://hacks.mozilla.org/2019/01/designing-the-flexbox-inspector/
|
Mozilla GFX: WebRender newsletter #35 |
Bonsoir! Another week, another newsletter. I stealthily published WebRender on crates.io this week. This doesn’t mean anything in terms of API stability and whatnot, but it makes it easier for people to use WebRender in their own rust projects. Many asked for it so there it is. Everyone is welcome to use it, find bugs, report them, submit fixes and improvements even!
In other news we are initiating a notable workflow change: WebRender patches will land directly in Firefox’s mozilla-central repository and a bot will automatically mirror them on github. This change mostly affects the gfx team. What it means for us is that testing webrender changes becomes a lot easier as we don’t have to manually import every single work in progress commit to test it against Firefox’s CI anymore. Also Kats won’t have to spend a considerable amount of his time porting WebRender changes to mozilla-central anymore.
We know that interacting with mozilla-central can be intimidating for external contributors so we’ll still accept pull requests on the github repository although instead of merging them from there, someone in the gfx team will import them in mozilla-central manually (which we already had to do for non-trivial patches to run them against CI before merging). So for anyone who doesn’t work everyday on WebRender this workflow change is pretty much cosmetic. You are still welcome to keep following and interacting with the github repository.
The team keeps going through the remaining blockers (19 P2 bugs and 34 P3 bugs at the time of writing).
In about:config, set the pref “gfx.webrender.all” to true and restart the browser.
The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.
https://mozillagfx.wordpress.com/2019/01/10/webrender-newsletter-35/
|
The Mozilla Blog: Eric Rescorla Wins the Levchin Prize at the 2019 Real-World Crypto Conference |
The Levchin Prize awards two entrepreneurs every year for significant contributions to solving global, real-world cryptography issues that make the internet safer at scale. This year, we’re proud to announce that our very own Firefox CTO, Eric Rescorla, was awarded one of these prizes for his involvement in spearheading the latest version of Transport Layer Security (TLS). TLS 1.3 incorporates significant improvements in both security and speed, and was completed in August and already secures 10% of sites.
Eric has contributed extensively to many of the core security protocols used in the internet, including TLS, DTLS, WebRTC, ACME, and the in development IETF QUIC protocol. Most recently, he was editor of TLS 1.3, which already secures 10% of websites despite having been finished for less than six months. He also co-founded Let’s Encrypt, a free and automated certificate authority that now issues more than a million certificates a day, in order to remove barriers to online encryption and helped HTTPS grow from around 30% of the web to around 75%. Previously, he served on the California Secretary of State’s Top To Bottom Review where he was part of a team that found severe vulnerabilities in multiple electronic voting devices.
The 2019 winners were selected by the Real-World Cryptography conference steering committee, which includes professors from Stanford University, University of Edinburgh, Microsoft Research, Royal Holloway University of London, Cornell Tech, University of Florida, University of Bristol, and NEC Research.
This prize was announced on January 9th at the 2019 Real-World Crypto Conference in San Jose, California. The conference brings together cryptography researchers and developers who are implementing cryptography on the internet, the cloud and embedded devices from around the world. The conference is organized by the International Association of Cryptologic Research (IACR) to strengthen and advance the conversation between these two communities.
For more information about the Levchin Prize visit www.levchinprize.com.
The post Eric Rescorla Wins the Levchin Prize at the 2019 Real-World Crypto Conference appeared first on The Mozilla Blog.
|
Mozilla Open Policy & Advocacy Blog: Our Letter to Congress About Facebook Data Sharing |
Last week Mozilla sent a letter to the House Energy and Commerce Committee concerning its investigation into Facebook’s privacy practices. We believe Facebook’s representations to the Committee — and more recently — concerning Mozilla are inaccurate and wanted to set the record straight about any past and current work with Facebook. You can read the full letter here.
The post Our Letter to Congress About Facebook Data Sharing appeared first on Open Policy & Advocacy.
https://blog.mozilla.org/netpolicy/2019/01/08/our-letter-to-congress-about-facebook/
|
The Mozilla Blog: Mozilla Announces Deal to Bring Firefox Reality to HTC VIVE Devices |
Last year, Mozilla set out to build a best-in-class browser that was made specifically for immersive browsing. The result was Firefox Reality, a browser designed from the ground up to work on virtual reality headsets. To kick off 2019, we are happy to announce that we are partnering with HTC VIVE to power immersive web experiences across Vive’s portfolio of devices.
What does this mean? It means that Vive users will enjoy all of the benefits of Firefox Reality (such as its speed, power, and privacy features) every time they open the Vive internet browser. We are also excited to bring our feed of immersive web experiences to every Vive user. There are so many amazing creators out there, and we are continually impressed by what they are building.
“This year, Vive has set out to bring everyday computing tasks into VR for the first time,” said Michael Almeraris, Vice President, HTC Vive. “Through our exciting and innovative collaboration with Mozilla, we’re closing the gap in XR computing, empowering Vive users to get more content in their headset, while enabling developers to quickly create content for consumers.”
Virtual reality is one example of how web browsing is evolving beyond our desktop and mobile screens. Here at Mozilla, we are working hard to ensure these new platforms can deliver browsing experiences that provide users with the level of privacy, ease-of-use, and control that they have come to expect from Firefox.
In the few months since we released Firefox Reality, we have already released several new features and improvements based on the feedback we’ve received from our users and content creators. In 2019, you will see us continue to prove our commitment to this product and our users with every update we provide.
Stay tuned to our mixed reality blog and twitter account for more details. In the meantime, you can check out all of the announcements from HTC Vive here.
If you have an all-in-one VR device running Vive Wave, you can search for “Firefox Reality” in the Viveport store to try it out right now.
The post Mozilla Announces Deal to Bring Firefox Reality to HTC VIVE Devices appeared first on The Mozilla Blog.
|
Mozilla VR Blog: Navigation Study for 3DoF Devices |
Over the past few months I’ve been building VR demos and writing tutorial blogs. Navigation on a device with only three degrees of freedom (3dof) is tricky, so I decided to do a survey of many native apps and games for the Oculus Go to see how each of them handled it. Below are my results.
For this study I looked only at navigation, meaning how the user moves around in the space, either by directly moving or by jumping to semantically different spaces (ex: click a door to go to the next room). I don't cover other interactions like how buttons or sliders work. Just navigation.
Don’t touch the camera. The camera is part of the user’s head. Don’t try to move it. All apps which move the camera induce some form of motion sickness. Instead use one of a few different forms of teleportation, always under user control.
The ideal control for me was teleportation to semantically meaningful locations, not just 'forward ten steps'. Furthermore, when presenting the user with a full 360 environment it is helpful to have a way to recenter the view, such as by using left/right buttons on the controller. Without a recentering option the user will have to physically turn themselves around, which is cumbersome unless you are in a swivel chair.
To help complete the illusion I suggest subtle sound effects for movement, selection, and recentering. Just make sure they aren't too noticeable.
This is a roller coaster simulator, except it lets you do things that a real rollercoaster can’t, such as jumping between tracks and being chased by dinosaurs. To start you have pointer interaction across three panels: left, center, right. Everything has hover/rollover effects with sound. During the actual rollercoaster ride you are literally a camera on rails. Press the trigger to start and then the camera moves at constant speed. All you can do is look around. Speed and angle changes made me dizzy and I had to take it off after about five minutes, but my 7 year old loves Epic Roller Coaster.
A PBS app that teaches you about black holes, the speed of light, and other physics concepts. You use pointer interaction to click buttons then watch non-interactive 3D scenes/info, though they are in full 360 3D, rather than plain movies.
A collection of many 360 and 3D movies. Pointer interaction to pick videos, some scrolling w/ touch gestures. Then non-interactive videos except for the video controls.
Explore famous monuments and locations like Mount Rushmore. You can navigate around 360 videos by clicking on hotspots with the pointer. Some trigger photos or audio. Others are teleportation spots. There is no free navigation or free teleportation, only to the hotspots. You can adjust the camera with left and right swipes, though.
An intense driving and tilting music game. It uses pointer control for menus. In the game you run at a constant speed. The track itself turns but you are always stable in the middle. Particle effects stream at you, reinforcing the illusion of the tube you are in.
A fishing simulator. You use pointer clicks for navigation in the menus. The main interaction is a fishing pole. Hold then release button at the right time while flicking the pole forward to cast, then click to reel it back in.
Basically like the rollercoaster game, but you learn about various dinosaurs by driving on a constant speed rails car to different scenes. It felt better to me than Epic Roller Coaster because the velocity is constant, rather than changing.
Text overlays with audio and 360 background image. You can navigate through full 3D space by jumping to hard coded teleport spots. You can click certain spots to get to these points, hear audio, and lightly interact with artifacts. If you look up at an angle you see a flip-180 button to change the view. This avoids the problem of having to be in a 360 chair to navigate around. You cannot camera adjust with left/right swipes.
In every scene you float over a static mini-landscape, sort of like you are above a game board. You cannot adjust the angle or camera, just move your head to see stuff. Everything laid around you for easy viewing from the fixed camera point. Individual mini games may use different mechanics for playing, but they are all using the same camera. Essentially the camera and world never move. You can navigate your player character around the board by clicking on spots, similar to an RTS like Starcraft.
Menus are a static camera view with mouse interaction. Once inside of a star field you are at the center and can look in any direction of the virtual night sky. Swipe left / right to move the camera 45 degrees, which happens instantly rather than with an animation, though there are sound effects.
Click on a star or other object in the sky to get more info. The info appears attached to your controller. Rotate your thumb in a circle on the touch area to get different info on the object. The info has a model of the object, either a full 3d model of a star / planet, or a 2d image of a galaxy, etc.
Mouse menu interaction. In-game the level is a maze mapped onto a cylinder surrounding you. You click to guide Lila through the maze, sometimes she must slide from one side across the center to the other side. You guide her and the spotlight with your gaze. You activate things by connecting dots with the pointer. There is no way I can see to adjust the camera. This is a bit annoying in the levels which require you to navigate a full 360 degrees. I really wish it had recentering.
A mini RTS / tower defense game. The camera is at a fixed position above the board. The boards are designed to lay around the camera, so you turn left or right to see the whole board. Control your units by clicking on them then clicking a destination.
A puzzle game where you lightly move things around to complete a sun light reflecting mechanism. The camera is fixed and the board is always an object in front of you. You rotate the board with left / right swipes on the touchpad. You move the sun by holding the trigger and moving the pointer around. Menus use mouse cursor navigation. The board is always in front of you but extra info like the level you are on and your score are to the left or right of you. Interestingly these are positioned far enough to the sides that you won't see the extra info while solving a puzzle. Very nice. You are surrounded by a pretty sky box that changes colors as the sun moves.
Weaver is a 360 photo viewer that uses a gaze cursor to navigate. Within the photos you cannot move around, just rotate your head. If you look down a floating menu appears to go to the next photo or main menu.
This is a nice underwater simulation of a coral reef. Use the pointer for menus and navigate around undersea with controller. The camera moves fairly slowly but does have some acceleration which made me a little sick. No camera rotation or recentering, just turn your head.
Rhythm game. Lights on the game board make it look like you are moving forward w/ your bowling ball, or that the board is moving backward. Either way it’s at a constant speed. Use touchpad interactions to control your ball to the beat.
In End Space you fly a space fighter into battle. There is a static cockpit around you and it uses head direction to move the camera around. The controller is used to aim the weapon. I could only handle this for about 60 seconds before I started to feel sick. Basically everything is moving around you constantly in all directions, so I immediately started to feel floaty.
You navigate by jumping to teleportation spots using a gaze cursor. When you teleport the camera moves to the new spot at a constant velocity. Because movement is slow and the horizon is level I didn’t get too queasy, but still not as great as instant teleport. On the other hand, instant teleport might make it hard to know where you just moved to. Losing spatial context would be bad in this game. You can rotate your view using left and right swipes.
This is a high resolution 360 movie following a dinosaur from the newest Jurassic Park movie. The camera is generally fixed though sometimes moves very slowly along a line to take you towards the action. I never experienced any dizziness from the movement.
Spooky short films in full 360. In the one I watched, The Office, the camera did not move at all, though things did sometimes come from angles away from where they knew the viewer would be looking. This is a very clever way to do jump scares without controlling the camera.
You wander around a maze trying to find the exit. I found the control scheme awkward. You walk forward in whatever direction you gaze in for more than a moment, or when you press a button on the controller. The direction is always controlled by your gaze, however. The movement speed goes from stationary to full speed over a second or so. I would have preferred to not have the ramp up time. Also, you can’t click left or right on the controller trackpad to shift the view. I’m guessing this was originally for cardboard or similar devices.
A Zombie shooter. The Oculus Store has several of these. Dead Shot has both a comfort mode and regular mode. In regular mode the camera does move, but at a slow and steady pace that didn’t give me any sickness. In comfort mode the camera never moves. Instead it teleports to the new location, including a little eyeblink animation for the transition. Nicely done! To make sure you don’t get lost it only teleports to near by locations you can see.
A creature creation game. While there are several rooms all interaction happens from fixed positions where you look around you. You travel to various rooms by pointing and clicking on the door that you want to go to.
Shoot ghosts in an old west town. You don’t move at all in this game. You always shoot from fixed camera locations, similar to a carnival game.
This is actually a side scroller with a set that looks like little dollhouses that you view from the side. I’d say it was cute except that there are monsters everywhere. In any case, you don’t move the camera at all except to look from one end of a level to another.
A game where you walk around a haunted house. The control scheme is interesting. You use the trigger on the controller to move forward; however, the direction is controlled by where your head is looking. The direction of the controller is used for your flashlight. I think it would be better if this were reversed: use the controller direction for movement and your head for the flashlight. Perhaps there’s a point later in the game where their decision matters. I did notice that the speed is constant: you are either moving or you are not. I didn’t experience any discomfort.
This is a little puzzle adventure game that takes you to the movie’s trailer. For being essentially advertising, it was surprisingly good. You navigate by pointing at and clicking on glowing lights that are trigger points. These then move you toward that spot. The movement is at a constant speed, but there is a slight slowdown when you reach the trigger point instead of an immediate stop. I felt a slight tinge of sickness but not much. In other parts of the game you climb by pointing and clicking on handholds on a wall. I like how they used the same mechanic in different ways.
A literal dungeon crawler where you walk through dark halls looking for clues to find the way out. This game uses the trigger to collect things and the forward button on the touchpad to move forward. It uses the direction of the controller for movement rather than head direction. This means you can move sideways. It also means you can smoothly move around twisty halls if you are sitting in a swivel chair. I like it more than the way Affected does it.
This game lets you visit various ancient wonders and wander around as citizens. You navigate by teleporting to wherever you point. You can swipe left or right on the touchpad to rotate your view, though I found it a bit twitchy. Judging from the in-game tutorial, World of Wonders was designed originally for the Gear VR, so perhaps it’s not calibrated for the Oculus Go.
Short distance teleporting is fine for when you are walking around a scene, but to get between scenes you click on the sky to bring up a map which then lets you jump to the other scenes. Within a scene you can also click on various items to learn more about them.
One interesting interaction is that sometimes characters in the scenes will talk to you and ask you questions. You can respond yes or no by nodding or shaking your head. I don’t think I’ve ever seen that in a game before. Interestingly, nods and shakes are not universal. Different cultures use these gestures differently.
A fighting game where you slash at enemies. It doesn’t appear that you move at all, just that enemies attack you and you attack back with melee weapons.
Spaceship piloting game. This seems to be primarily a multi-player game, but I did the single-player training levels to learn how to navigate. All action takes place in the cockpit of a spaceship. You navigate by targeting where you want to go and tilting your head. Once you have picked a target you press turbo to go there quickly. Oddly the star field is fixed while the cockpit floats around you. I think this means that if I wanted to go backwards I’d have to completely rotate myself around. Better have a swivel chair!
A game where you smash glass things. The camera is on rails moving quickly straight forward. I was slightly dizzy at first because of the speed but quickly got used to it. You press the trigger to fire at the direction your head is pointing. It doesn’t use the controller orientation. I’m guessing this is a game originally designed for Cardboard? The smashing of objects is satisfying and there are interesting challenges as further levels have more and more stuff to smash. There is no actual navigation because you are on rails.
A simulation of the moon landing with additional information about the Apollo missions. Mostly this consists of watching video clips or cinematics, in which the camera is moved around a scene, such as going up the elevator of the Saturn V rocket. In a few places you can control something, such as docking the spaceship to the LEM. The cinematics are good, especially for a device as graphically limited as the Go, but I did get a twinge of dizziness whenever the camera accelerated or decelerated. Largely you are in a fixed position with zero interaction.
|
QMO: Firefox 65 Beta 10 Testday, January 11th |
Hello Mozillians,
We are happy to let you know that on Friday, January 11th, we are organizing Firefox 65 Beta 10 Testday. We’ll be focusing our testing on: Firefox Monitor, Content Blocking and Find Toolbar.
Check out the detailed instructions via this etherpad.
No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.
Join us and help us make Firefox better!
See you on Friday!
https://quality.mozilla.org/2019/01/firefox-65-beta-10-testday-january-11th/
|
Gregory Szorc: Seeking Employment |
After almost seven and a half years as an employee of Mozilla Corporation, I'm moving on. I have already worked my final day as an employee.
This post is the first time that I've publicly acknowledged my departure. To any Mozillians reading this, I regret that I did not send out a farewell email before I left. But the circumstances of my departure weren't conducive to doing so. I've been drafting a proper farewell blog post. But it has been very challenging to compose. Furthermore, each passing day brings with it new insights into my time at Mozilla and a new wrinkle to integrate into the reflective story I want to tell in that post. I vow to eventually publish a proper goodbye that serves as the bookend to my employment at Mozilla. Until then, just let me say that I'm already missing working with many of you. I've connected with several people since I left and still owe responses or messages to many more. If you want to get in touch, my contact info is in my résumé.
I left Mozilla without new employment lined up. That leads me to the subject line of this post: I'm seeking employment. The remainder of this post is thus tailored to potential employers.
My résumé has been updated. But that two-page summary only scratches the surface of my experience and set of skills. The Body of Work page of my website is a more detailed record of the work I've done. But even it is not complete!
Perusing my posts on this blog will reveal even more about the work I've done and how I go about it. My résumé links to a few posts that I think are great examples of the level of understanding and detail that I'm capable of harnessing.
As far as the kind of work I want to do or the type of company I want to work for, I'm trying to keep an open mind. But I do have some biases.
I prefer established companies to early start-ups for various reasons. Dan Luu's Big companies v. startups is aligned pretty well with my thinking.
One of the reasons I worked for Mozilla was because of my personal alignment with the Mozilla Manifesto. So I gravitate towards employers that share those principles and am somewhat turned off by those that counteract them. But I recognize that the world is complex and that competing perspectives aren't intrinsically evil. In other words, I try to maintain an open mind.
I'm attracted to employers that align their business with improving the well-being of the planet, especially the people on it. The link between the business and well-being can be tenuous: a B2B business, for example, is presumably selling something that helps people, and that helping is what matters to me. The tighter the link between the business and improving the world, the more attracted I am to the employer.
I started my university education as a biomedical engineer because I liked the idea of being at the intersection of technology and medicine. And part of me really wants to return to this space because there are few things more noble than helping a fellow human being in need.
As for the kind of role or technical work I want to do, I could go in any number of directions. I still enjoy doing individual contributor type work and believe I could be an asset to an employer doing that work. But I also crave working on a team, performing technical mentorship, and being a leader of technical components. I enjoy participating in high-level planning as well as implementing the low-level aspects. I recognize that while my individual output can be substantial (I can provide data showing that I was one of the most prolific technical contributors at Mozilla during my time there) I can be more valuable to an employer when I bestow skills and knowledge unto others through teaching, mentorship, setting an example, etc.
I have what I would consider expertise in a few domains that may be attractive to employers.
I was a technical maintainer of Firefox's build system and initiated a transition away from an architecture that had been in place since the Netscape days. I definitely geek out way too much on build systems.
I am a contributor to the Mercurial version control tool. I know way too much about the internals of Mercurial, Git, and other version control tools. I am intimately aware of scaling problems with these tools. Some of the scaling work I did for Mercurial saved Mozilla tens of thousands of dollars in direct operational costs and probably hundreds of thousands of dollars in saved people time due to fewer service disruptions and faster operations.
I have exposure to both client and server side work and the problems encountered within each domain. I've dabbled in lots of technologies, operating systems, and tools. I'm not afraid to learn something new. Although as my experience increases, so does my skepticism of shiny new things (I've been burned by technical fads too many times).
I have a keen fascination with optimization and scaling, whether it be on a technical level or in terms of workflows and human behavior. I like to ask "and then what?" so I'm thinking a few steps out and am prepared for the next problem or consequence of an immediate action.
I seem to have a knack for caring about user experience and interfaces. (Although my own visual design skills aren't the greatest - see my website design for proof.) I'm pretty passionate that tools that people use should be simple and usable. Cognitive dissonance, latency, and distractions are real and as an industry we don't do a great job minimizing these disruptions so focus and productivity can be maximized. I'm not saying I would be a good product manager or UI designer. But it's something I've thought about because not many engineers seem to exhibit the passion for good user experience that I do and that intersection of skills could be valuable.
My favorite time at Mozilla was when I was working on a unified engineering productivity team. The team controlled most of the tools and infrastructure that Firefox developers interacted with in order to do their jobs. I absolutely loved taking a whole-world view of that problem space and identifying the high-level problems - and low-hanging fruit - to improve the overall Firefox development experience. I derived a lot of satisfaction from identifying pain points, equating them to a dollar cost by extrapolating people time wasted due to them, justifying working on them, and finally celebrating - along with the overall engineering team - when improvements were made. I think I would be a tremendous asset to a company working in this space. And if my experience at Mozilla is any indicator, I would more than offset my total employment cost by doing this kind of work.
I've been entertaining the idea of contracting for a while before I resume full-time employment with a single employer. However, I've never contracted before and need to do some homework before I commit to that. (Please leave a comment or email me if you have recommendations on reading material.)
My dream contract gig would likely be to finish the Mercurial wire protocol and storage work I started last year. I would need to type up a formal proposal, but the gist of it is the work I started has the potential to leapfrog Git in terms of both client-side and server-side performance and scalability. Mercurial would be able to open Git repositories on the local filesystem as well as consume them via the Git wire protocol. Transparent Git interoperability would enable Mercurial to be used as a drop-in replacement for Git, which would benefit users who don't have control over the server (such as projects that live on GitHub). Mercurial's new wire protocol is designed with global scalability and distribution in mind. The goal is to enable server operators to deploy scalable VCS servers in a turn-key manner by relying on scalable key-value stores and content distribution networks as much as possible (Mercurial and Git today require servers to perform way too much work and aren't designed with modern distributed systems best practices, which is why scaling them is hard). The new protocol is being designed such that a Mercurial server could expose Git data. It would then be possible to teach a Git client to speak the Mercurial wire protocol, which would result in Mercurial being a more scalable Git server than Git is today. If my vision is achieved, this would make server-side VCS scaling problems go away and would eliminate the religious debate between Git and Mercurial (the answer would be deploy a Mercurial server, allow data to be exposed to Git, and let consumers choose). I conservatively estimate that the benefits to industry would be in the millions of dollars. How I would structure a contract to deliver aspects of this, I'm not sure. But if you are willing to invest six figures towards this bet, let's talk. A good foundation of this work is already implemented in Mercurial and the Mercurial core development team is already on-board with many aspects of the vision, so I'm not spewing vapor.
Another potential contract opportunity would be funding PyOxidizer. I started the project a few months back as a side-project as an excuse to learn Rust while solving a fun problem that I thought needed solving. I was hoping for the project to be useful for Mercurial and Mozilla one day. But if social media activity is any indication, there seems to be somewhat widespread interest in this project. I have no doubt that once complete, companies will be using PyOxidizer to ship products that generate revenue and that PyOxidizer will save them engineering resources. I'd very much like to recapture some of that value into my pockets, if possible. Again, I'm somewhat naive about how to write contracts since I've never contracted, but I imagine "deliver a tool that allows me to ship product X as a standalone binary to platforms Y and Z" is definitely something that could be structured as a contract.
As for the timeline, I was at Mozilla for what feels like an eternity in Silicon Valley. And Mozilla's way of working is substantially different from many companies. I need some time to decompress and unlearn some Mozilla habits. My future employer will inherit a happier and more productive employee by allowing me to take some much-needed time off.
I'm looking to resume full-time work no sooner than March 1. I'd like to figure out what the next step in my career is by the end of January. Then I can sign some papers, pack up my skis, and become a ski bum for the month of February: if I don't use this unemployment opportunity to have at least 20 days on the slopes this season and visit some new mountains, I will be very disappointed in myself!
If you want to get in touch, my contact info is in my résumé. I tend not to answer incoming calls from unknown numbers, so email is preferred. But if you leave a voicemail, I'll try to get back to you.
I look forward to working for a great engineering organization in the near future!
|
Will Kahn-Greene: Everett v1.0.0 released! |
Everett is a configuration library for Python apps.
Goals of Everett:
From that, Everett has the following features:
This release fixes many sharp edges, adds a YAML configuration environment, and fixes Everett so that it has no dependencies unless you want to use YAML or INI.
It also drops support for Python 2.7--Everett no longer supports Python 2.
At Mozilla, I'm using Everett for Antenna which is the edge collector for the crash ingestion pipeline for Mozilla products including Firefox and Fennec. It's been in production for a little under a year now and doing super. Using Everett makes it much easier to:
It's also used in a few other places and I plan to use it for the rest of the components in the crash ingestion pipeline.
First-class docs. First-class configuration error help. First-class testing. This is why I created Everett.
If this sounds useful to you, take it for a spin. It's almost a drop-in replacement for python-decouple and os.environ.get('CONFIGVAR', 'default_value') style of configuration.
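To make the comparison concrete, here is a minimal sketch of the two styles side by side. The Everett names it uses (ConfigManager.basic_config and parse_bool) are my assumptions based on the project documentation linked below rather than anything spelled out in this post, so treat it as illustrative only:

    import os

    # Plain os.environ.get style: every value comes back as a string and
    # parsing/validation is up to you.
    debug = os.environ.get("DEBUG", "false").lower() in ("1", "true", "yes")

    # Roughly equivalent Everett style (assumed API; see the docs below).
    # The manager looks the key up in the environment, falls back to the
    # default, and runs the value through the parser for you.
    from everett.manager import ConfigManager, parse_bool

    config = ConfigManager.basic_config()
    debug = config("debug", default="false", parser=parse_bool)

Keeping the lookup, the default, and the parsing in one place is what enables the first-class docs and configuration error help mentioned above.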
Enjoy!
Thank you to Paul Jimenez who helped fixing issues and provided thoughtful insight on API ergonomics!
For more specifics on this release, see here: https://everett.readthedocs.io/en/latest/history.html#january-7th-2019
Documentation and quickstart here: https://everett.readthedocs.io/en/latest/
Source code and issue tracker here: https://github.com/willkg/everett
|
Niko Matsakis: Rust in 2019: Focus on sustainability |
To me, 2018 felt like a big turning point for Rust, and it wasn’t just the edition. Suddenly, it has become “normal” for me to meet people using Rust at their jobs. Rust conferences are growing and starting to have a large number of sponsors. Heck, I even met some professional Rust developers amongst the parents at a kid’s birthday party recently. Something has shifted, and I like it.
At the same time, I’ve also noticed a lot of exhaustion. I know I feel it – and a lot of people I talk to seem to feel the same way. It’s great that so much is going on in the Rust world, but we need to get better at scaling our processes up so that we can handle all of that activity effectively.
When I think about a “theme” for 2019, the word that keeps coming to mind for me is sustainability. I think Rust has been moving at a breakneck pace since 1.0, and that’s been great: it’s what Rust needed. But as Rust gains more solid footing out there, it’s a good idea for us to start looking for how we can go back and tend to the structures we’ve built.
There has been a lot of great constructive criticism of our current processes: most recently, boat’s post on organizational debt, along with Florian’s series of posts, did a great job of crystallizing a lot of the challenges we face. I am pretty confident that we can adjust our processes here and make things a lot better, though obviously some of these problems have no easy solution.
Obviously, I don’t know exactly what we should do here. But I think I see some of the pieces of the puzzle. Here is a variety of bullet points that have been kicking around in my head.
Working groups. In general, I would like to see us adopting the idea of working groups as a core “organizational unit” for Rust, and in particular as the core place where work gets done. A working group is an ad-hoc set of people that includes both members of the relevant Rust team and interested volunteers. Among other benefits, they can be a great vehicle for mentoring, since they give people a particular area to focus on, versus trying to participate in the Rust project as a whole, which can be very overwhelming.
Explicit stages. Right now, Rust features go through a number of official and semi-official stages before they become “stable”. As I have argued before, I think we would benefit from making these stages a more explicit part of the process (much as e.g. the TC39 and WebAssembly groups already do).
Finishing what we start. Right now, we have no mechanism to expose the “capacity” of our teams – we tend to, for example, accept RFCs without any idea who will implement them, or even mentor an implementation. In fact, there isn’t really a defined set of people to try and ensure that it happens. The result is that a lot of things linger in limbo, either unimplemented, undocumented, or unstabilized. I think working groups can help to solve this, by having a core leadership team that is committed to seeing the feature through.
Expose capacity. Continuing the previous point, I think we should integrate a notion of capacity into the staging process: so that we avoid moving too far in the design until we have some idea who is going to be implementing (or mentoring an implementation). If that is hard to do, then it indicates we may not have the capacity to do this idea right now – if that seems unacceptable, then we need to find something else to stop doing.
Don’t fly solo. One of the things that we discussed in a recent compiler team steering meeting is that being the leader of a working group is super stressful – it’s a lot to manage! However, being a co-leader of a working group is very different. Having someone else (or multiple someones) that you can share work with, bounce ideas off of, and so forth makes all the difference. It’s also a great mentoring opportunity, as the leaders of working groups don’t necessarily have to be full members of the team (yet). Part of exposing capacity, then, is trying to ensure that we don’t just have one person doing any one thing – we have multiple. This is scary: we will get less done. But we will all be happier doing it.
Evaluate priorities regularly. In my ideal world, we would make it very easy to find out what each person on a team is working on, but we would also have regular points where we evaluate whether those are the right things. Are they advancing our roadmap goals? Did something else more promising arise in the meantime? Part of the goal here is to leave room for serendipity: maybe some random person came in from the blue with an interesting language idea that seems really cool. We want to ensure we aren’t too “locked in” to pursue that idea. Incidentally, this is another benefit to not “flying solo” – if there are multiple leaders, then we can shift some of them around without necessarily losing context.
Keeping everyone in sync. Finally, I think we need to think hard about how to help keep people in sync. The narrow focus of working groups is great, but it can be a liability. We need to develop regular points where we issue “public-facing” updates, to help keep people outside the working group abreast of the latest developments. I envision, for example, meetings where people give an update on what’s been happening, the key decision and/or controversies, and seek feedback on interesting points. We should probably tie these to the stages, so that ideas cannot progress forward unless they are also being communicated.
TL;DR. The points above aren’t really a coherent proposal yet, though there are pieces of proposals in them. Essentially I am calling for a bit more structure and process, so that it is clearer what we are doing now and it’s more obvious when we are making decisions about what we should do next. I am also calling for more redundancy. I think that both of these things will initially mean that we do fewer things, but we will do them more carefully, and with less stress. And ultimately I think they’ll pay off in the form of a larger Rust team, which means we’ll have more capacity.
So what about the technical side of things? I think the “sustainable” theme fits here, too. I’ve been working on rustc for 7 years now (wow), and in all of that time we’ve mostly been focused on “getting the next goal done”. This is not to say that nobody ever cleans things up; there have been some pretty epic refactoring PRs. But we’ve also accumulated a fair amount of technical debt. We’ve got plenty of examples where a new system was added to replace the old – but only 90%, meaning that now we have two systems in use. This makes it harder to learn how rustc works, and it makes us spend more time fixing bugs and ICEs.
I would like to see us put a lot of effort into making rustc more approachable and maintainable. This means writing documentation, both of the rustdoc and rustc-guide variety. It also means finishing up things we started but never quite finished, like replacing the remaining uses of NodeId with the newer HirId. In some cases, it might mean rewriting whole subsystems, such as with the trait system and chalk.
None of this means we can’t get new toys. Cleaning up the trait system implementation, for example, makes things like Generic Associated Types (GATs) and specialization much easier. Finishing the transition into the on-demand query system should enable better incremental compilation as well as more complete parallel compilation (and better IDE support). And so forth.
Finally, it seems clear that we need to continue our focus on reducing compilation time. I think we have a lot of good avenues to pursue here, and frankly a lot of them are blocked on needing to improve the compiler’s internal structure.
When one talks about sustainability, that naturally brings to mind the question of financial sustainability as well. Mozilla has been the primary corporate sponsor of Rust for some time, but we’re starting to see more and more sponsorship from other companies, which is great. This comes in many forms: both Google and Buoyant have been sponsoring people to work on the async-await and Futures proposals, for example (and perhaps others I am unaware of); other companies have used contracting to help get work done that they need; and of course many companies have been sponsoring Rust conferences for years.
Going into 2019, I think we need to open up new avenues for supporting the Rust project financially. As a simple example, having more money to help with running CI could enable us to parallelize the bors queue more, which would help with reducing the time to land PRs, which in turn would help everything move faster (not to mention improving the experience of contributing to Rust).
I do think this is an area where we have to tread carefully. I’ve definitely heard horror stories of “foundations gone wrong”, for example, where decisions came to be dominated more by politics and money than technical criteria. There’s no reason to rush into things. We should take it a step at a time.
From a personal perspective, I would love to see more people paid to work part- or full-time on rustc. I’m not sure how best to make that happen, but I think it is definitely important. It has happened more than once that great rustc contributors wind up taking a job elsewhere that leaves them no time or energy to continue contributing. These losses can be pretty taxing on the project.
I already mentioned that I think the compiler needs to put more emphasis on documentation as a means for better sustainability. I think the same also applies to the language: I’d like to see the lang team getting more involved with the Rust Reference and really trying to fill in the gaps. I’d also like to see the Unsafe Code Guidelines work continue. I think it’s quite likely that these should be roadmap items in their own right.
http://smallcultfollowing.com/babysteps/blog/2019/01/07/rust-in-2019-focus-on-sustainability/
|
The Servo Blog: This Week In Servo 122 |
In the past three weeks, we merged 130 PRs in the Servo organization’s repositories.
Congratulations to Ygg01 for their new reviewer status for the html5ever repository!
Our roadmap is available online. Plans for 2019 will be published soon.
This week’s status updates are here.
Notable changes include: the once argument for addEventListener; splitting the simpleservo embedding crate into crates for Rust, Java, and C; the playbackRate media API; and making Rc values not be reported as unrooted by the rooting static analysis.
Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!
|