Cameron Kaiser: Mozilla's future footgun add-on policy (or, how MoFo leadership is getting it totally wrong) |
First, Electrolysis. As mentioned, we won't support it in TenFourFox; for starters, we would need to write a userland spawn implementation for 10.4 from scratch, and I suspect the required overhead will perform substantially worse on old Macs, to say nothing of the OS bugs it will undoubtedly uncover. Currently Mozilla is predicting Electrolysis will reach the release channel by Fx43, which I find incredibly optimistic given that Australis slipped deadline after deadline, but it's clear Electrolysis' public unveiling in the relatively near future is inevitable. Once it is no longer possible to launch the browser in single-process mode, likely one or two versions after that, source parity ends. My suspicion is that it will actually reach release by Fx45, which is the next ESR anyway, and there should be an emergency fallback to single-process mode that we can exploit to keep us running at ESR parity for the last time.
To facilitate addons in the new e10s world, Mozilla is effectively announcing that XPCOM/XUL-based addons are now deprecated because of their highly synchronous nature. (Technically, they'll be deprecated six months after Electrolysis goes golden master, and completely unsupported and/or incompatible within six months after that, but as far as I'm concerned announcing a future deprecation is the same as deprecating it now.) This sucks because the use of XPCOM and XUL in the Mozilla Suite and later Firefox and SeaMonkey meant easy cross-platform add-ons that could do powerful things like implementing a completely new protocol within the browser. Although jetpack addons will still work, sort of, any jetpack addon that requires chrome features is subject to this policy also. Mozilla will be enforcing this brave new XUL-free world by refusing to sign addons that rely on XPCOM or XUL in this later timeframe, which dovetails nicely with not allowing unsigned addons starting with Firefox 42. (Parenthetically I don't agree with the mandatory signing policy, and if there is a TenFourFox 45 it will disable this feature. I don't port Gecko code for the walled garden, guys, thanks.)
Calling this a footgun and the future death of Firefox is not mere hyperbole. I suspect (and I suspect Mozilla is ignoring) that many Firefox users choose it because of the presence of powerful addons that just can't be replicated in other browsers. Chrome, for example, doesn't have nearly the functionality, because it doesn't expose it, and its addons are much less useful in general. But Mozilla is not content to merely shoot themselves in the foot here; they've emptied the whole magazine into their leg by basing the new add-on world on the almost completely different WebExtensions API. WebExtensions is compatible with Blink, the engine powering Chrome. That means an author can easily create a much less functional addon that runs not only on Firefox but also on Chrome. Yup, you read that right: at this rate, the only functional difference between Firefox and Chrome will soon be the name and the source tree. More to the point, many great classic addons won't work in the new API, and some addons will probably never be made to work with WebExtensions.
Riddle me this, Batman Mozilla leadership: if the addons are the same, the features are the same, the supported standards are the same, the interface is converging and Mozilla's marketshare is shrinking ... why bother using Firefox? I mean, I guess I could start porting SeaMonkey, although this announcement probably kicks the last leg out from under that too, but does Firefox itself, MoCo/MoFo's premier browser brand, serve any useful purpose now? Don't say "because it makes the web free" -- people can just go download and compile WebKit, which is open source, well understood and widely available, and they can even usefully embed it, another opportunity Mozilla leadership shortsightedly threw away. They can fork it like Google did. They can throw a shell around it. Where's the Gecko value now?
Maybe this is a sign that the great Mozilla experiment has finally outlived its usefulness. And frankly there won't be much value in a Gecko-based browser, or even a Servo-based one, that works exactly the same way as everything else; notice the absolute lack of impact Firefox OS is having on mobile, although I use and prefer Firefox for Android personally just because I don't trust Chrome. Maybe that trust will be the only reason to keep using Firefox on any platform, because I certainly can't think of anything else.
Meanwhile, this weekend I'm rewriting the TenFourFox wiki documentation on Github ahead of the impending read-only status of Google Code. Since this is true Markdown, I'm using Nathan Hill's SimpleMarkPPC, which works pretty well for simple documents of this type and runs on 10.4. I won't be copying over all the old release notes, but starting with 38.3 all future ones will be on Github as well. After that we'll work on finalizing the MP3 support, and I've got a secret project to share hopefully next week.
http://tenfourfox.blogspot.com/2015/08/mozillas-future-footgun-add-on-policy.html
|
Michael Kaply: My Take on WebExtensions |
Let me start out by saying that I understand the need for something like WebExtensions. A cross-browser extension API will be a great thing for the future of browsers. I understand why Mozilla is doing it. What I take issue with is the belief that existing Firefox-only add-on developers will jump at the opportunity to use this new API. As far as I’m concerned, the only add-on developers that will benefit from this new API are Chrome developers, who will find it much easier to port their extensions to Firefox.
Most Firefox extension developers do it as a hobby. Typically they have an itch about something in Firefox, and they write an extension to scratch it. Then they make that extension available to everyone. Over time we all build up a set of extensions that make Firefox behave the way we (and clearly other people) want it to. (Chris Finke is a great example of this.) Every so often something changes in Firefox that breaks one of our extensions. At that point we have to make a decision: is it worth the time and energy to keep this extension going? Sometimes we keep it going, sometimes we give up (hence the ton of dead extensions on AMO). Luckily most Firefox changes don’t break all our extensions, so we usually can keep going. With e10s coming up, though, lots of developers have had to decide whether or not it is worth it to rewrite, and some developers have gone through that pain (and it is pain - a lot of pain).
Now developers are being told in the next one to two years they will have to completely rewrite ALL of their add-ons. What are the odds that these hobby add-on developers are going to do that?
Let’s be honest. Availability of APIs isn’t the difficult part of the discussion. Availability of time and energy to even attempt to rewrite all of our add-ons is the problem. And when you add in the fact that Mozilla hasn’t given add-on developers the marketplace we’ve been promised for years (which Chrome has had since day one), you’ll end up with a lot of developers deciding that it’s simply not worth it.
But let's talk availability of APIs. I'll use two of my extensions as examples. Keyword Search accesses the wrappedJSObject of search submissions in order to manipulate the submission. Will there really be an API for that? Or what about the CCK2? Will there really be APIs that allow me to modify the built-in preferences pages, including removing pages or controls? Or what about disabling private browsing? Or removing sync? Or removing access to about:config? I doubt it. There are just too many things that extensions do (most of them pretty obscure) to be able to provide a complete API.
I'll watch what goes on and hope that I'm wrong, but I'm not very optimistic.
I will say this, though. It's a great day to be a Chrome developer.
|
Ahmed Nefzaoui: It’s not following chrome, it’s called making the web and the web’s content more compatible |
From DownThemAll:
It is safe to say, that Firefox will not be Firefox anymore as far as extensions go, but instead will become yet another Chrome-clone.
A quote from a blog post I read declaring that Firefox as we know it is going to die, so I wanted to quickly echo an opinion I have off the top of my head without being too technical:
I personally don’t see how implementing a common set of APIs or a spec that the rest of the browser vendors agreed on and implemented is turning Firefox into a chrome-clone.
The concept behind WebExtensions is by now implemented everywhere else except in Firefox. So implementing it here is just as beneficial as when the W3C publishes a spec about Flexbox or WebRTC or CSS Logical Properties (wink wink) and Chrome implements that, and then Firefox implements it too: that. is. not. following. chrome. It’s called making the web and the web’s content more compatible, and if we are to sit in a corner and implement our own exclusive stuff, we will only become another Microsoft of the year 2000 with its ActiveX technology, where the only way to have the luxury of accessing its features was for developers to build their websites mainly for IE.
|
Air Mozilla: Webdev Beer and Tell: August 2015 |
Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...
|
Bill McCloskey: Firefox Add-on Changes |
This post is related to “The Future of Developing Firefox Add-ons” on the add-ons blog. Please read that first for context. A couple of concerns from that post have come up that I would like to address here.
One concern people have is that their favorite add-on is no longer going to be supported, especially add-ons for power users. Some of the ones being mentioned are:
We have a lot of ideas about how to make these sorts of extensions work using better APIs than we have now.
We’re hoping people will have a lot of other ideas for the extensions that they care about. If you’d like to propose or vote on ideas, please visit webextensions.uservoice.com to express your opinion.
There are also concerns that restricting people to the WebExtensions API will limit innovation: we can make APIs to support the XUL extensions people have already made, but how will we know what other ones we’re missing out on?
It’s likely that we’ll still allow some access to XUL in the future. We want people to be able to experiment with new ideas, and they shouldn’t have to wait for us to design, implement, and finalize a new API. However, we don’t want this to become another feature like require('chrome') in Jetpack, which is used by virtually every add-on. We’re still trying to figure out how to avoid that fate. We know that we need to be more proactive about providing APIs that add-ons need. But is that enough?
Our big fear is that, once we provide a WebExtensions API, there won’t be anything to motivate people to switch over to it. We can try to deprecate access to the parts of XPCOM used to implement the functionality, but often there won’t be a clear mapping between the old and the new APIs.
Again, we’re open to ideas about how to do this. Moving away from XUL will be a long process. We’re announcing all of this early so that we can begin to gather feedback. APIs that are created in a vacuum probably aren’t going to be very useful to people.
https://billmccloskey.wordpress.com/2015/08/21/firefox-add-on-changes/
|
Chris Finke: My Future of Developing Firefox Add-ons |
Mozilla announced today that add-ons that depend on XUL, XPCOM, or XBL will be deprecated and subsequently incompatible with future versions of Firefox:
Consequently, we have decided to deprecate add-ons that depend on XUL, XPCOM, and XBL. We don’t have a specific timeline for deprecation, but most likely it will take place within 12 to 18 months from now. We are announcing the change now so that developers can prepare and offer feedback.
In response to this announcement, I’ve taken the step of discontinuing all of my Firefox add-ons. They all depend on XUL or XPCOM, so there’s no sense in developing them for the next year only to see them become non-functional. AutoAuth, Comment Snob, Feed Sidebar, Links Like This, OPML Support, RSS Ticker, and Tab History Redux should be considered unsupported as of now. (If, for any reason, you’d like to take over development of any of them, e-mail me.)
While I don’t like Mozilla’s decision (and I don’t think it’s the best thing for the future of Firefox), I understand it; there’s a lot of innovation that could happen in Web browser technology that is stifled because of a decade-old add-on model. I only hope that the strides a lighter-weight Firefox can make will outweigh the loss of the thousands of add-ons that made it as popular as it is today.
http://www.chrisfinke.com/2015/08/21/my-future-of-developing-firefox-add-ons/
|
Air Mozilla: Edgar Chen: TaskCluster Interactive Sessions |
Come learn about TaskCluster Interactive Sessions in a quick presentation by Edgar Chen!
https://air.mozilla.org/edgar-chen-taskcluster-interactive-sessions/
|
Support.Mozilla.Org: What’s up with SUMO – 21st August |
Hello, SUMO Nation! How have you been? We skipped last week as I was away from the keyboard, discovering the wonders of the offline world but… we’re back and ready to share news and updates with you.
That’s it for today, looking forward to seeing you all on Monday – take care, take it easy, and safe travels!
https://blog.mozilla.org/sumo/2015/08/21/whats-up-with-sumo-21st-august/
|
Mozilla Addons Blog: The Future of Developing Firefox Add-ons |
Today we are announcing some major upcoming changes to Firefox add-ons. Our add-on ecosystem has evolved through incremental, organic growth over the years, but there are some modernizations to Firefox that require some foundational changes to support:
To help the add-on development community understand how we will enable these improvements, we are making four related announcements today:
For our add-on development community, these changes will bring benefits, like greater cross-browser add-on compatibility, but will also require redevelopment of a number of existing add-ons. We’re making a big investment by expanding the team of engineers, add-on reviewers, and evangelists who work on add-ons and support the community that develops them. They will work with the community to improve and finalize the WebExtensions API, and will help developers of unsupported add-ons make the transition to newer APIs and multi-process support.
We’re announcing all of the changes today to make developers aware of our plans and to give everyone an opportunity to offer feedback. We are committed to doing what we can to make this transition as easy as possible. Together with our Mozilla community, we will create the future of Firefox add-ons.
For some time we’ve heard from add-on developers that our APIs could be better documented and easier to use. In addition, we’ve noticed that many Firefox add-on developers also maintain a Chrome, Safari, or Opera extension with similar functionality. We would like add-on development to be more like Web development: the same code should run in multiple browsers according to behavior set by standards, with comprehensive documentation available from multiple vendors.
To this end, we are implementing a new, Blink-compatible API in Firefox called WebExtensions. Extension code written for Chrome, Opera, or, possibly in the future, Microsoft Edge will run in Firefox with few changes as a WebExtension. This modern and JavaScript-centric API has a number of advantages, including supporting multi-process browsers by default and mitigating the risk of misbehaving add-ons and malware.
WebExtensions will behave like other Firefox add-ons; they will be signed by Mozilla, and discoverable through addons.mozilla.org (AMO) or through the developer’s website. With this API, extension developers should be able to make the same extension available on Firefox and Chrome with a minimal number of changes to repackage for each platform.
A preview release of WebExtensions is available in Firefox 42, which is currently on Developer Edition, and information on how to start testing WebExtensions is available in the Mozilla wiki. We have started discussions with other browser vendors to begin an effort to standardize at least some of this API, and will continue to post additional information and more details about WebExtensions in the wiki.
Phase one of our Electrolysis project, which uses a separate operating system process to run Web content, has been moving towards our release channel. Subsequent phases will bring multiple content processes and improved sandboxing capabilities. Using a separate rendering process lays the foundation enabling us to bring significant performance and security improvements to Firefox, but it also breaks some add-ons, especially those that modify content. However, there are a number of mitigations in place to keep add-ons functional; for example, add-ons based on the Jetpack SDK will continue to work as long as they don't use require('chrome') or some of the low-level APIs to touch objects in the content process.

Starting now, add-on developers need to think about their strategy to work with a multi-process Firefox:
The final release schedule for Electrolysis will be determined over the next several months as we test with more users. We would like developers to understand that, although there is a chance that the Electrolysis release schedule will be delayed or modified in the coming months, they should plan to update their add-ons to meet our current release plan as follows:
The Electrolysis team has posted a list of popular add-ons for compatibility testing at http://arewee10syet.com. In addition to the steps above, developers are encouraged to review the list and follow the instructions to submit information about whether their add-ons are Electrolysis-compatible or not.
We currently use a blocklisting mechanism to defend against malicious add-ons, but additional measures are needed to better protect our users as some add-on developers have adapted to work around blocklisting. Blocklisting is also reactive: users can be harmed by dangerous add-ons that are installed until they are identified and blocked. Starting in Firefox 42, add-on developers will be required to submit extensions for review and signing by Mozilla prior to deployment, and unsigned add-ons cannot be installed or used with Firefox. You can read more about the rationale for signing in a separate blog post.
We realize that the add-on review process can sometimes be inconvenient for developers. Reviewing is a mostly manual, human process today, and moving an extension from the initial submission to passing a full review that meets our guidelines can be a time-consuming process that can take weeks or months. A major advantage of WebExtensions is that they can be reviewed more quickly. In general, it’s easier to develop a correct WebExtension, and the permissions system makes it easier to recognize malicious add-ons.
Our goal is to increase automation of the review process so that the wait time for reviews of new WebExtensions listed on addons.mozilla.org can be reduced to five days, and that the wait time for updates to existing WebExtensions can be reduced to one to two days. Current wait times for unlisted add-ons submitted for signing are less than three days. We are also expanding the team of paid and volunteer add-on reviewers and continue to make improvements to the automatic validator, both of which will reduce existing review queue wait times for all extensions in the immediate future.
While extension signing will not be enforced until Firefox 42, the code has shipped with Firefox 40, allowing users to see if installed extensions have been validated by Mozilla. Users of Firefox Developer Edition will have noticed that unsigned add-ons were blocked beginning on August 14.
The full schedule for add-on signing is currently as follows:

Firefox 40: Firefox warns about add-on signatures but does not enforce them.
Firefox 41: unsigned add-ons are disabled by default, but there is a preference (xpinstall.signatures.required) that allows signature enforcement to be turned off.
Firefox 42 and later: unsigned add-ons cannot be installed, and the override preference is removed.
A permissive add-on model means that we have limited flexibility in changing the foundations of Firefox. The add-on breakage caused by Electrolysis is an important example of this problem. Technologies like CPOWs help us to work around add-on problems; however, CPOWs have been a huge investment in effort and they are still slow and somewhat unreliable.
Without a fundamental shift to the way Firefox add-ons work, we will be unable to use new technologies like Electrolysis, Servo or browser.html as part of Firefox.
The tight coupling between the browser and its add-ons also creates shorter-term problems for Firefox development. It’s not uncommon for Firefox development to be delayed because of broken add-ons. In the most extreme cases, changes to the formatting of a method in Firefox can trigger problems caused by add-ons that modify our code via regular expressions. Add-ons can also cause Firefox to crash when they use APIs in unexpected ways.
Consequently, we have decided to deprecate add-ons that depend on XUL, XPCOM, and XBL. We don’t have a specific timeline for deprecation, but most likely it will take place within 12 to 18 months from now. We are announcing the change now so that developers can prepare and offer feedback. Add-ons that are built using the new WebExtension API will continue to work. We will also continue supporting SDK add-ons as long as they don’t use require('chrome') or some of the low-level APIs that provide access to XUL elements.
A major challenge we face is that many Firefox add-ons cannot possibly be built using either WebExtensions or the SDK as they currently exist. Over the coming year, we will seek feedback from the development community, and will continue to develop and extend the WebExtension API to support as much of the functionality needed by the most popular Firefox extensions as possible.
The strategy announced here necessarily involves a lot of trade-offs. Developers who already support Chrome extensions will benefit since they will have one codebase to support instead of two. Developers of Firefox-only add-ons will have to make changes. Those changes may require considerable development effort up-front, but we feel the end result will be worth that effort for both Firefox’s users and developers.
We want to reiterate our commitment to our add-on development community, and will work with you in porting extensions, designing new APIs, and creating innovative new add-ons that make Firefox great.
We will continue to post additional resources in the coming weeks and months to outline each of these changes in more detail, as well as provide support through our traditional channels via the Mozilla Developer Network, IRC (in #extdev), and the extension developer group.
Update: A lot of people have been asking what WebExtensions will deliver, and how. Bill McCloskey has posted an update on where we want to take them, and how you can contribute ideas and be part of the process. It’s a must-read for people who are concerned about how the addons they develop, use, and love will continue to be part of Firefox.
https://blog.mozilla.org/addons/2015/08/21/the-future-of-developing-firefox-add-ons/
|
Nathan Froyd: explicit is better than implicit: c++ implicitly defined member functions |
In the tradition of The Zen of Python, I’ve been thinking about pushing for explicit declarations of otherwise implicitly-defined member functions in C++, both in code that I write and in code that I review:
// Instances of this class should not be copied.
MyClass(const MyClass&) = delete;
MyClass& operator=(const MyClass&) = delete;

// We are OK with the default semantics.
OtherClass(const OtherClass&) = default;
OtherClass& operator=(const OtherClass&) = default;
OtherClass(OtherClass&&) = default;
OtherClass& operator=(OtherClass&&) = default;
[Background: C++ specifies several member functions that the compiler will implicitly define for you in any class: the default constructor, the copy/move constructor(s), and the copy/move assignment operator(s). I say “implicitly define”, as though that always happens, but there are a number of constraints on when the compiler will do this. For the purposes of the discussion below, I’ll ignore the default constructor bit and focus on the copy/move constructor and assignment operator. (I will also happily ignore all the different variants thereof that can occur, e.g. when the compiler defines MyClass(MyClass&) for you.) I think the arguments apply equally well to the default constructor case, but most classes I deal with tend to either declare their own default constructor or have several user-defined constructors anyway, which prohibit the compiler from implicitly declaring the default constructor.]
I think the argument for = delete is more obvious and less controversial, so I’ll start there. = delete‘ing functions you don’t want used is part of the API contract of the class. Functions that shouldn’t be used shouldn’t be exposed to the user, and = delete ensures that the compiler won’t implicitly define part of your API surface (and users thereby unknowingly violate API guarantees). The copy constructor/assignment operator are the obvious candidates for = delete, but using = delete for the move constructor/assignment operator makes sense in some cases (e.g. RAII classes). Using = delete gives you pleasant compiler error messages, and it’s clearer than:
private:
  MyClass(const MyClass&);
  MyClass& operator=(const MyClass&);
If you’re lucky, there might be a comment to the effect of // Deliberately not defined. I know which code I’d prefer to read. (Using = delete also ensures you don’t accidentally use the not-defined members inside the class itself, then spend a while waiting for the linker errors to tell you about your screw-up.)
= default appears to be a little harder to argue for. “Experienced” programmers always know which functions are provided by the compiler, right?
Understanding whether the compiler implicitly defines something requires looking at the entire class definition (including superclasses) and running a non-trivial decision algorithm. I sure don’t want each reader of the code to do that for two or four different member functions (times superclasses, too), all of which are reasonably important in understanding how a class is intended to be used.
Explicitly declaring what you intend can also avoid performance pitfalls. In reading through the C++ specification to understand when things were implicitly declared, I discovered that the same functions can also be implicitly deleted, including this great note: “When the move constructor is not implicitly declared or explicitly supplied, expressions that otherwise would have invoked the move constructor may instead invoke a copy constructor.” So, if the move constructor was implicitly declared at some point, but then was implicitly deleted through some change, expressions that were previously efficient due to moving would become somewhat less so due to copying. Isn’t C++ great?
Being explicit also avoids the possibility of meaning to define something, but getting tripped up by the finer points of the language:
template<typename T>
class MyClass
{
public:
  // This does not define a copy constructor for MyClass<T>.
  template<typename U>
  MyClass(const MyClass<U>& aOther)
    : ...
  {
    ...
  }
  ...
};
Comments could serve to notify the reader that we’re OK with the default definition, but if I could choose between encoding something in a place solely intended for humans, or a place both humans and the compiler will understand, I know which one I’d pick.
|
Niko Matsakis: Virtual Structs Part 3: Bringing Enums and Structs Together |
So, in previous posts, I discussed the pros and cons of two different approaches to modeling variants: Rust-style enums and C++-style classes. In those posts, I explained why I see Rust enums and OO-style class hierarchies as more alike than different (I personally credit Scala for opening my eyes to this, though I’m sure it’s been understood by others for much longer). Among the key points: with both, you want to be able to refine them to narrow the set of variants that a particular value can have.
What I want to talk about in this post is a proposal (or proto-proposal) for bridging those two worlds in Rust. I’m going to focus on data layout in this post. I’ll defer virtual methods for another post (or perhaps an RFC). Spoiler alert: they can be viewed as a special case of specialization.
I had originally intended to publish this post a few days after the others. Obviously, I got delayed. Sorry about that! Things have been very busy! In any case, better late than never, as some great relative or other always (no doubt) said. Truth is, I really miss blogging regularly, so I’m going to make an effort to write up more in-progress and half-baked ideas (yeah yeah, promises to blog more are a dime a dozen, I know).
Note: I want to be clear that the designs in this blog post are not my work per se. Some of the ideas originated with me, but others have arisen in the course of conversations with others, as well as earlier proposals from nrc, which in turn were heavily based on community feedback. And of course it’s not like we Rust folk invented OO or algebraic data types or anything in the first place. :)
The key idea is to generalize enums and structs into a single concept. This is often called an algebraic data type, but algebra brings back memories of balancing equations in middle school (not altogether unpleasant ones, admittedly), so I’m going to use the term type hierarchy instead. Anyway, to see what I mean, let’s look at my favorite enum ever, Option:
enum Option<T> {
    Some(T),
    None
}
The idea is to reinterpret this enum as three types arranged into a tree or hierarchy. An important point is that every node in the tree is now a type: so there is a type representing the Some variant, and a type representing the None variant:
Option<T>
|
+- Some<T>
+- None<T>
As you can see, the leaves of the tree are called structs. They represent a particular variant. The inner nodes are called enums, and they represent a set of variants. Every existing struct definition can also be reinterpreted as a hierarchy, but just a hierarchy of size 1.
These generalized type hierarchies can be any depth. This means you can do nested enums, like:
enum Mode {
    enum ByRef {
        Mutable,
        Immutable
    },
    ByValue
}
This creates a nested hierarchy:
Mode
|
+- ByRef
|  |
|  +- Mutable
|  +- Immutable
|
+- ByValue
Since all the nodes in a hierarchy are types, we get refinement types for free. This means that I can use Mode as a type to mean any mode at all, or Mode::ByRef for the times when I know something is one of the ByRef modes, or even Mode::ByRef::Mutable (which is a singleton struct).
As part of this change, it should be possible to declare the variants out of line. For example, we could change the Option enum to look as follows:
enum Option<T>;

struct Some<T>: Option<T> {
    value: T
}

struct None<T>: Option<T>;
This definitely is not exactly equivalent to the older one, of course. The names Some and None live alongside Option, rather than within it, and I’ve used a field (value) rather than a tuple struct.
Enum declarations are extended with the ability to have fields as well as variants. These fields are inherited by all variants of that enum. In the syntax, fields must appear before the variants, and it is also not possible to combine tuple-like structs with inherited fields.
Let’s revisit an example from the previous post. In the compiler, we currently represent types with an enum. However, there are certain fields that every type carries. These are handled via a separate struct, so that we wind up with something like this:
[code listing lost in extraction: the current arrangement, a wrapper struct carrying the common fields alongside an embedded TypeData enum]
Under this newer design, we could simply include the common fields in the enum definition:
[code listing lost in extraction: TypeData redefined with the common fields declared directly on the enum]
Naturally, when I create a TypeData I should supply all the fields, including the inherited ones (though in a later section I’ll present ways to extract the initialization of common fields into a reusable fn):
[code listing lost in extraction: constructing a TypeData value, supplying the common fields along with the variant’s own]
And, of course, given a reference &TypeData<'tcx>, we can access these common fields:
[code listing lost in extraction: reading the common fields through a &TypeData<'tcx> reference]
Convenient!
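To make this concrete, here is a rough sketch in the proposal’s strawman syntax (the common field names id and flags come from later in the post; the variant names, the Signature type, and the helper fn are my own inventions):

enum TypeData<'tcx> {
    // Common fields, inherited by every variant:
    id: u32,
    flags: u32,

    // Variants:
    Bool,
    Int,
    FnPointer { sig: Signature<'tcx> },
}

// Construction supplies the common fields along with the variant's own...
let ty = TypeData::FnPointer { id: id, flags: flags, sig: sig };

// ...and the common fields are readable through any &TypeData<'tcx>:
fn flags_of<'tcx>(ty: &TypeData<'tcx>) -> u32 {
    ty.flags
}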
As today, the size of an enum type, by default, is equal to the largest of its variants. However, as I’ve outlined in the last two posts, it is often useful to have each value be sized to a particular variant. In the previous posts I identified some criteria for when this is the case:
One interesting question is whether we can concisely state conditions in which one would prefer to have “precise variant sizes” (class-like) vs “largest variant” (enum). I think the “precise sizes” approach is better when the following apply:
Therefore, it is possible to declare the root enum in a type hierarchy as either sized (the default) or unsized; this choice is inherited by all enums in the hierarchy. If the hierarchy is declared as unsized, it means that each struct type will be sized just as big as it needs to be. This means in turn that the enum types in the hierarchy are unsized types, since the space required will vary depending on what variant an instance happens to be at runtime.
To continue with our example of types in rustc, we currently go through some contortions so as to introduce indirection for uncommon cases, which keeps the size of the enum under control:
[code listing lost in extraction: today’s TypeData, with the fn pointer payload boxed behind a reference to a separate FnPointerData struct]
As discussed in a comment in the code, the current scheme also serves as a poor man’s refinement type: if at some point in the code we know we have a fn pointer, we can write a function that takes a FnPointerData argument to express that:
[code listing lost in extraction: a function taking the ty plus a FnPointerData argument]
This pattern works OK in practice, but it is not perfect. For one thing, it’s tedious to construct, and it’s also a little inefficient. It introduces unnecessary indirection and a second memory arena. Moreover, the refinement type scheme isn’t great, because you often have to pass both the ty (for the common fields) and the internal data.
Using a type hierarchy, we can do much better. We simply remove the FnPointerData struct and inline its fields directly into TypeData:
[code listing lost in extraction: TypeData with the FnPointerData fields inlined into the FnPointer variant]
Now we can write functions that process specific categories of types very naturally:
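The listing itself did not survive, but based on the surrounding text it presumably looked something like this sketch (strawman refinement-type syntax; the sig field is my invention):

fn fn_pointer_sig<'tcx>(ty: &TypeData::FnPointer<'tcx>) -> &Signature<'tcx> {
    // ty is statically known to be a fn pointer here, and the common
    // fields (ty.id, ty.flags) remain accessible as well.
    &ty.sig
}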
As the previous example showed, one can continue to use match to select the variant from an enum (sized or not). Matching also gives us an elegant downcasting mechanism. Instead of writing (Type) value, as in Java, or dynamic_cast<Type*>(value), as in C++, one writes match value and handles the resulting cases. Just as with enums today, if let can be used if you just want to handle a single case.
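For instance (a sketch, reusing the invented TypeData names from above):

fn describe<'tcx>(ty: &TypeData<'tcx>) -> &'static str {
    // The match doubles as a checked, exhaustive downcast:
    match *ty {
        TypeData::FnPointer { .. } => "a fn pointer",
        _ => "some other type",
    }
}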
An important part of the design is that the entire type hierarchy must be declared within a single crate. This is of course trivially true today: all variants of an enum are declared in one item, and structs correspond to singleton hierarchies.
Limiting the hierarchy to a single crate has a lot of advantages.
Without it, you simply can’t support today’s sized enums, for one thing. It allows us to continue doing exhaustive checks for matches and to generate more efficient code. It is interesting to compare to dynamic_cast, the C++ equivalent to a match.

For one, dynamic_cast is often viewed as a kind of code smell, versus a virtual method. I’m inclined to agree, as dynamic_cast only checks for a particular variant, rather than specifying handling for the full range of variants; this makes it fragile in the face of edits to the code. In contrast, the exhaustive nature of a Rust match ensures that you handle every case (of course, one must still be judicious in your use of _ patterns, which, while convenient, can be a refactoring hazard).

dynamic_cast is also somewhat inefficient, since it must handle the fully general case of classes that spread across compilation units; in fact, it is very uncommon to have a class hierarchy that is truly extensible – and in such cases, using dynamic_cast is particularly hard to justify. This leads to projects like LLVM reimplementing RTTI (the C++ name for matching) from scratch.

Another advantage of confining the hierarchy to a single crate is that it allows us to continue doing variance inference across the entire hierarchy at once. This means, for example, that in the out-of-line version of Option shown earlier we can infer a variance for the parameter T declared on Option, in the same way we do today (otherwise, the declaration of enum Option would require some form of phantom data, and that would be binding on the types declared in other crates).
I also find that confining the hierarchy to a single crate helps to clarify the role of type hierarchies versus traits and, in turn, avoid some of the pitfalls so beloved by OO haters. Basically, it means that if you want to define an open-ended extension point, you must use a trait, which also offers the most flexibility; a type hierarchy, like an enum today, can only be used to offer a choice between a fixed number of crate-local types. An analogous situation in Java would be deciding between an abstract base class and an interface; under this design, you would have to use an interface (note that the problem of code reuse can be tackled separately, [via specialization]).
Finally, confining extension to a trait is relevant to the construction of vtables and handling of specialization, but we’ll dive into that another time.
Even though I think that limiting type hierarchies to a single crate is very helpful, it’s worth pointing out that it IS possible to lift this restriction if we so choose. This can’t be done in all cases, though, due to some of the inherent limitations involved.
In the previous section, I mentioned that enums and traits (both today and in this proposed design) both form a kind of interface. Whereas traits define a list of methods, enums indicate something about the memory layout of the value: for example, they can tell you about a common set of fields (though not the complete set), and they clearly narrow down the universe of types to be just the relevant variants. Therefore, it makes sense to be able to use an enum type as a bound on a type parameter. Let’s dive into an example to see what I mean and why you might want this.
Imagine we’re using a type hierarchy to represent the HTML DOM. It might look something like this (browser people: forgive my radical oversimplification):
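The hierarchy listing was lost, but from the names used later in the post it presumably resembled this sketch (strawman syntax; the bounds field and Box2D type are inventions used by the next example):

unsized enum Node {
    bounds: Box2D,    // a common field every node carries

    struct TextElement { /* ... */ },

    enum Element {
        enum HTMLElement {
            struct HTMLTextAreaElement { /* ... */ },
            struct HTMLInputElement { /* ... */ },
        },
    },
}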
Now imagine that I have a helper function that selects nodes based on whether they intersect a particular box on the screen:
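The helper was likewise lost; a plausible sketch, assuming the invented bounds field and Box2D type from above:

fn intersecting(nodes: &[Rc<Node>], area: &Box2D) -> Vec<Rc<Node>> {
    nodes.iter()
         .filter(|node| node.bounds.intersects(area))
         .cloned()
         .collect()
}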
OK, great! But now imagine that I have a slice of text elements (&[Rc<TextElement>]), and I would like to use this function. I will get back a Vec<Rc<Node>> – I’ve lost track of the fact that my input contained only text elements.
Using generics and bounds, I can rewrite the function:
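The rewritten version was lost as well; presumably something like this sketch, with the enum type Node now used as a bound on the type parameter:

fn intersecting<N: Node>(nodes: &[Rc<N>], area: &Box2D) -> Vec<Rc<N>> {
    nodes.iter()
         .filter(|node| node.bounds.intersects(area))
         .cloned()
         .collect()
}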
Nothing in the body had to change, only the signature.
Permitting enum types to appear as bounds also means that they can be referenced by traits as supertraits. This allows you to define interfaces that cut across the primary inheritance hierarchy. So, for example, in the DOM both the HTMLTextAreaElement and the HTMLInputElement can carry a block of text, which implies that they have a certain set of text-related methods and properties in common. And of course they are both elements. This can be modeled using a trait like so:
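The trait itself was lost; a sketch consistent with the prose (maxLength is named in the text below; the other method is my invention):

trait TextApis: HTMLElement {
    fn text(&self) -> &str;
    fn maxLength(&self) -> u32;
}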
This means that if you have an &TextApis object, you can access the fields from HTMLElement with no overhead, because they are stored in the same place for both cases. But if you want to access other things, such as maxLength, that implies virtual dispatch, since the address is dynamically computed and will vary.
The notion of enums as bounds raises questions about potential overlap in purpose between enums and traits. I would argue that this overlap already exists: both enums and traits today are ways to let you write a single function that operates over values of more than one type. However, in practice, it’s rarely hard to know which one you want to use. This I think is because they come at the problem from two different angles: an enum defines a single cohesive type by enumerating its variants (TypeData covers all the different variants of types in the Rust language; Option, the choice between some and none), whereas a trait is an interface that arbitrary, heterogeneous types can choose to implement.
If we extend enums in the way described here, then they will become more capable and convenient, and so you might find that they overlap a bit more with plausible use cases for traits. However, I think that in practice there are still clear guidelines for which to choose when:
Because enums are tied to a fixed set of cases, they allow us to generate tighter code, particularly when you are not monomorphizing to a particular variant. That is, if you have a value of type &TypeData, where TypeData is the enum we mentioned before, you can access common fields at no overhead, even though we don’t know what variant it is. Moreover, the pointer is thin and thus takes only a single word.
In contrast, if you had made TypeData a trait and hence &TypeData was a trait object, accessing common fields would require some overhead. (This is true even if we were to add virtual fields to traits, as eddyb and kimundi proposed in RFC #250.) Also, because traits are added on to other values, your pointer would be a fat pointer, and hence take two words.
(As an aside, I still like the idea of adding virtual fields to traits. The idea is that these fields could be remapped in an implementation to varying offsets. Accessing such a field implies dynamically loading the offset, which is slower than a regular field but faster than a virtual call. If we additionally added the restriction that those fields must access content that is orthogonal from one another, we might be able to make the borrow checker more permissive in the field case as well. But that is kind of an orthogonal extension to what I’m talking about here – and one that fits well with my framing of “traits are for open-ended extension across heterogeneous types, enums are for a single cohesive type hierarchy”.)
One of the distinctive features of OO-style classes is that they feature constructors. Constructors allow you to layer initialization code, so that you can build up a function that initializes (say) the fields for Node, and that function is used as a building block by one that initializes the Element fields, and so on down the hierarchy. This is good for code reuse, but constructors have an Achilles heel: while we are initializing the Node fields, what value do the Element fields have? In C++, the answer is who knows – the fields are simply uninitialized, and accessing them is undefined behavior. In Java, they are null. But Rust has no such convenient answer. And there is an even weirder question: what happens when you downcast or match on a value while it is being constructed?
Rust has always sidestepped these questions by using the functional language approach, where you construct an aggregate value (like a struct) by supplying all its data at once. This works well for small structs, but it doesn’t scale up to supporting refinement types and common fields. Consider the example of types in the compiler:
[code listing lost in extraction: the TypeData enum with common fields, repeated from above]
I would like to be able to write some initialization routines that compute the id, flags, and whatever else, and then reuse those across different variants. But it’s hard to know what such a function should return:
[code listing lost in extraction: an initialization routine whose return type is the mystery type XXX]
What is this type XXX? What I want is basically a struct with just the common fields (though of course I don’t want to have to define such a struct myself, too repetitive):
[code listing lost in extraction: a struct containing just the common fields]
And of course I also want to be able to use an instance of this struct in an initializer as part of a .. expression, like so:
[code listing lost in extraction: an initializer that fills in the variant’s own fields and uses .. to pull in the common ones]
If we had a type like this, it strikes a reasonably nice balance between the functional and OO styles. We can layer constructors and build constructor abstractions, but we also don’t have a value of type TypeData until all the fields are initialized. In the interim, we just have a value of this type XXX, which only has the shared fields that are common to all variants.
All we need now is a reasonable name for this type XXX. The proposal is that every enum has an associated struct type called struct (i.e., the keyword). So instead of XXX, I could write TypeData::struct, and it means “a struct with all the fields common to any TypeData variant”. Note that a TypeData::struct value is not a TypeData variant; it just has the same data as a variant.
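Putting the pieces together, a sketch of how this might look in use (the fn names and field values are invented, reusing the TypeData example from above):

// A reusable initialization routine for the common fields:
fn init_common<'tcx>(id: u32) -> TypeData::struct<'tcx> {
    TypeData::struct { id: id, flags: 0 }
}

// Building a full variant with functional record update:
let ty = TypeData::FnPointer {
    sig: sig,
    ..init_common(next_id())
};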
There is one final wrinkle worth covering in the proposal. And unfortunately, it’s a tricky one. I’ve been sort of tacitly assuming that an enum and its variants have some sort of typing relationship, but I haven’t said explicitly what it is. This part is going to take some experimentation to find the right mix. But let me share some intermediate thoughts.
Unsized enums. For unsized enums, we are always dealing with an indirection. So e.g. we have to be able to smoothly convert from a reference to a specific struct like &TextElement to a reference to a base enum like &Node. We’ve traditionally viewed this as a special case of DST coercions. Basically, coercing to &Node is more-or-less exactly like coercion to a trait object, except that we don’t in fact need to attach a vtable – that is, the extra data on the &Node fat pointer is just (). But in fact we don’t necessarily HAVE to view upcasting like this as a coercion – after all, there is no runtime change happening here.
This gets at an interesting point. Subtyping between OO classes is normally actually subtyping between references. That is, in Java we say that String <: Object, but that is because everything in Java is in fact a reference. In C++, not everything is a reference, so if you aren’t careful this in fact gives rise to creepy hazards like object slicing. The problem here is that in C++ the superclass type is really just the superclass fields; so if you do superclass = subclass, then you are just going to drop the extra fields from the subclass on the floor (usually). This probably isn’t what you meant to do.
Because of unsized types, though, Rust can safely say that a struct type is a subtype of its containing enum(s). So, in the DOM example, we could say that TextElement <: Node. We don’t have to fear slicing because the type TextElement is unsized, and hence the user could only ever make use of it by ref. In other words, object slicing arises in C++ precisely because it doesn’t have a notion of unsized types.
Sized enums. To be honest, unsized enums are not the scary case, because they are basically a new feature to the language. The harder and more interesting case is sized enums. The problem here is that we are introducing new types into existing code, and we want to be sure not to break things. So consider this example:
let mut x = None;
x = Some(3);
In today’s world, the first assignment gives x a type of Option<_>, where the _ represents something to be inferred later. This is because the expression None has type Option<_>. But under this RFC, the type of None is None<_> – and hence we have to be smart enough to infer that the type of x should not be None<_> but rather Option<_> (because it is later assigned a Some<_> value).
This kind of inference, where the type of a variable changes based on the full set of values assigned to it, is traditionally what we have called subtyping in the Rust compiler. (In contrast, coercion is an instantaneous decision that the compiler makes based on the types it knows thus far.) This is sort of technical minutia in how the compiler works, but of course it impacts the places in Rust where you need type annotations.
Now, to some extent, we already have this problem. There are known cases today where coercions don’t work as well as we would like. The proposed box syntax, for example, suffers from this a bit, as do other patterns. We’re investigating ways to make the compiler smarter, and it may be that we can combine all of this into a more intelligent inference infrastructure.
Variance and mutable references. It’s worth pointing out that we’ll always need some sort of coercion support, because subtyping alone doesn’t allow one to convert between mutable references. In other words, &mut TextElement is not a subtype of &mut Node, but we do need to be able to coerce from the former to the latter. This is safe because the type Node is unsized (basically, it is safe for the same reason that &mut [i32; 3] -> &mut [i32] is safe). The fact that &mut None<T> -> &mut Option<T> is not safe is an example of why sized enums can in fact be more challenging here. (If it’s not clear why that should be unsafe, the Nomicon’s section on variance may help clear things up.)
If, in fact, we can’t solve the subtyping inference problems, there is another option. Rather than unifying enums and structs, we could add struct inheritance and leave enums as they are. Things would work more-or-less the same as in this proposal, but base structs would play the role of unsized enums, and sized enums would stay as they are. This can be justified on the basis that enums are used in different stylistic ways (like Option etc) where e.g. refinement types and common fields are less important; however, I do find the setup described in this blog post appealing.
One other detail I want to note. At least to start, I anticipate a requirement that every type in the hierarchy has the same set of type parameters (just like an enum today). If you use the inline syntax, this is implicit, but you’ll have to write it explicitly with the out-of-line syntax (we could permit reordering, but there should be a 1-to-1 correspondence). This simplifies the type-checker and ensures that this is more of an incremental step in complexity when compared to today’s enums, versus the giant leap we could have otherwise – loosening this rule also interacts with monomorphization and specialization, but I’ll dig into that more another time.
This post describes a proposal for unifying structs and enums to make each of them more powerful. It builds on prior work but adds a few new twists that close important gaps:
Among them: an associated struct for enums, allowing for constructors.
One of the big goals of this design is to find something that fits well within Rust’s orthogonal design. Today, data types like enums and structs are focused on describing data layout and letting you declare natural relationships that mesh well with the semantics of your program. Traits, in contrast, are used to write generic code that works across a heterogeneous range of types. This proposal retains that character, while alleviating some of the pain points in Rust today:
|
Air Mozilla: Quality Team (QA) Public Meeting |
This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...
https://air.mozilla.org/quality-team-qa-public-meeting-20150819/
|
Mozilla Community Ops Team: Weekly Status Update 2015-08-19 |
Since this was only our second attempt at a weekly update, we left a lot of the same updates in if we didn’t have anything new for a sub-project in case people missed them the first time around. In the future we’ll only be posting what’s new!
There are some changes to Discourse that should be made to make it more suitable to Mozillians’ needs
To improve the login experience for people using Discourse within Mozilla, bridge the gap in various ways between our different instances (e.g. single username across instances), and integrate better with Mozilla more widely (with Mozillians integration, etc.)
To make Discourse more user friendly for Mozillians, we need some good documentation on how to use it
Putting all Discourse instances on one infrastructure, automated with Ansible and CloudFormation
This will help us keep the many Discourse instances we have secure, up to date, and running common plugins easily. At scale. AT SCALE. It also saves us $$$ while allowing all of our instances to be HA.
Migrating the Webmaker, Science and Hive Discourse instances to MECHADISCOURSE
This provides the teams with more stable Infra for their Discourse instances.
#ConfigManagement
Tested out a few services like DataDog, but they’re unreasonably expensive for where we are right now.
Not using Icinga because it’s no longer a fork of Nagios, more or less its own thing – Nagios isn’t exactly great, but it’s an (the?) industry standard so generally well-supported and well-documented.
No changes from last week
We need to understand which sites are being actively used and which no longer need hosting, or need different hosting than they currently have
Status [Backlog]: Need to define minimum viable product (MVP) for community website to measure against. We’ll be reaching out to relevant communities and teams to start working on this. We could use help from people who’d like to help drive this.
We will be moving away from OVH to simplify community hosting and save money
Status [In Progress]: Mesosphere is in progress, awaiting some approvals on Participation Infrastructure side
No changes from last week
(see above)
Our wiki pages are out of date, and shouldn’t be under IT anymore
Links to JIRA; we will use it to help with project management and decision tracking.
Communication protocol which attempts to bind various different ones together – could possibly be used by us as a Telegram-esque IRC bouncer
Recap of contribution opportunities from status updates:
http://ops.mozilla-community.org/weekly-status-update-2015-08-19/
|
Laura de Reynal: 110 things to learn |
When interviewing people in Chicago, from teenagers, to parents, educators and bloggers, we asked them to think about what they wanted to learn, what skills mattered the most to them as they were using the Web, and what they would teach us if we were completely new to the internet.
The result is a list of 110 things to learn. A serious, candid and sometimes surprising list, which highlights the skills that appear most important to these 69 participants when speaking of digital literacy.
While we are getting ready to publish the full report, I wanted to share this list here, for the happiness of all.
|
Air Mozilla: Product Coordination Meeting |
Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...
https://air.mozilla.org/product-coordination-meeting-20150819/
|
Air Mozilla: The Joy of Coding (mconley livehacks on Firefox) - Episode 26 |
Watch mconley livehack on Firefox Desktop bugs!
https://air.mozilla.org/the-joy-of-coding-mconley-livehacks-on-firefox-episode-26/
|
QMO: Firefox 41 Beta 6 Test Day, September 1st |
I’m writing to let you know that Tuesday, September 1st, we’ll be hosting the Firefox 41.0 Beta 6 Test Day. The main focus of this event is going to be on verifying Windows 10 bugs and on trying to make Firefox crash on Windows 10. Detailed participation instructions are available in this etherpad.
No previous testing experience is required so feel free to join us on the #qa IRC channel and our moderators will make sure you’ve got everything you need to get started.
Hope to see you all on Tuesday! Let’s make Firefox better together!
https://quality.mozilla.org/2015/08/firefox-41-beta-6-test-day-september-1st/
|
Hannah Kane: Pledge to Teach Survey Results (first month) |
At the very end of June, we added a “Pledge to Teach” action to teach.mozilla.org. After people complete the pledge, they are able to take a survey that allows us to find out more about their particular context for teaching the Web, and their needs.
I’d like to share results from the first month, during which 77 people completed the survey, out of 263 users who took the pledge (29% response rate).
First, we asked what people are interested in, in terms of teaching the Web, and provided some options for people to choose from (people could choose as many as they liked).
The results from this question align with our strategic plans to develop more curriculum, provide more professional development offerings, and build tools to help people connect with one another.
We asked about the age range of learners:
These findings align with the age-range that most of our existing curriculum is optimized for (14-24). That said, we know our audience is broader, and that content can be adapted for different audiences. Certainly we have community members that work in the K-12 space and higher ed. Given the numbers for learners over 24, we may consider more intermediate/advanced web literacy content, and/or address this audience with more in-depth Teach Like Mozilla and MDN content.
We asked how many learners people expect to reach this year:
This data speaks to the fact that survey participants are more likely to be individuals who have direct interactions with learners, rather than larger partners with wider networks. The survey was intended to reach individual educators/mentors, but we might consider a similar survey directed to partners, too.
Note: we’ve since added a question to the survey that will allow us to know how many learners people reach at any one time. This will inform our curricular design process.
We asked about the contexts in which people teach (again, respondents were able to choose multiple answers):
Some of these results surprised us. For example, the responses for teaching at home and with friends were higher than we’d expected, as was the number of people teaching in professional meetups. If these trends continue, they will inform our curricular and professional development content offerings. We are also having a Web Literacy Training Fellow join us later in the year, and she will specifically address these contexts.
These findings also show that people are teaching across various contexts, which may speak to some leadership pathways (e.g. classroom teachers also hosting standalone events to reach more people).
Finally, we asked people about their motivations for teaching the Web. Here is a sample of those responses:
—
This is a very small sample so far, but we’ll look at results for the second month soon and see if trends continue.
In the meantime, the results of the survey will inform several of our next steps, including:
We are also starting a new research effort with support from the Webmaker product team, to complement the survey. The project hasn’t been fully designed yet, but will likely help us dig deeper into questions about our community’s assets, needs, and contexts.
http://hannahgrams.com/2015/08/19/pledge-to-teach-survey-results-first-month/
|
Gijs Kruitbosch: Why you might be asked to file a new bug/issue (instead of commenting on old ones) |
Sometimes, after we close a bug because we fixed it, or because it is a duplicate of another bug, or because the symptoms have gone away — invalid and wontfix bugs are a little different — people come along who have a problem that they believe is identical to the original bugreport. Quite often, they end up commenting on the “old” bugreport and say something along the lines of “hey, this is not fixed yet” or “this broke again” or “why did you close this bug, I’m still seeing this”!
In 99% of cases, I (and many other people) ask people in this situation to file a new bug.
The reasons why we do this vary a little, but on the whole they tend to be pretty similar, and so I figured it would be worth documenting them. In no particular order, we prefer new bugs over reopening closed ones because:
It might be interesting if we had an easy way to split a comment into a new bug, though that would still defeat some of the points raised earlier. In the meantime though, think twice before commenting on older, closed bugs with a “this is still broken” comment!
First off, this strategy can be detrimental in many cases (see e.g. this or this – or just consider how much “email to all the people…to get attention” sounds like “spam”).
Second, we get a lot of bugreports. We’re working on ensuring they get triaged effectively. This should already be a lot better than it was a few months ago (see this post by Benjamin Smedberg), and will continue to improve.
Finally… if you file a bug that is extremely similar to an old bug, it seems fair to me to leave a comment in the old bug, mark the new bug as blocking the old bug, and/or set the needinfo flag for the assignee (if available) of the fixed bug, to draw their attention to this new bug.
|