
Planet Mozilla





Planet Mozilla - https://planet.mozilla.org/



Source: http://planet.mozilla.org/.
This feed is generated from the public RSS source at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The mirror was created automatically at the request of readers of this RSS feed.
For any questions about this service, please use the contact information page.


Cameron Kaiser: Mozilla's future footgun add-on policy (or, how MoFo leadership is getting it totally wrong)

Saturday, August 22, 2015, 07:02
So long, Firefox. It was nice to know you.

First, Electrolysis. As mentioned, we won't support it in TenFourFox; we would need to implement a userland spawn implementation for 10.4 from scratch for starters, and I suspect that the overhead required will end up performing substantially worse on old Macs plus the inevitable OS bugs it will undoubtedly uncover. Currently Mozilla is predicting Electrolysis will reach the release channel by Fx43, which I find incredibly optimistic given predictions for Australis which slipped deadline after deadline, but it's clear Electrolysis' public unveiling in the relative near future is inevitable. Once it becomes no longer possible to launch the browser in single-process mode, likely one or two versions after, that's the end of source parity. My suspicion is that it will actually reach release by Fx45, which is the next ESR anyway, and there should be an emergency fallback to single-process that we can exploit to keep us running at ESR parity for the last time.

To facilitate addons in the new e10s world, Mozilla is effectively announcing that XPCOM/XUL-based addons are now deprecated because of their highly synchronous nature. (Technically, they'll be deprecated six months after Electrolysis goes golden master, and completely unsupported and/or incompatible within six months after that, but as far as I'm concerned announcing a future deprecation is the same as deprecating it now.) This sucks because the use of XPCOM and XUL in the Mozilla Suite and later Firefox and SeaMonkey meant easy cross-platform add-ons that could do powerful things like implementing a completely new protocol within the browser. Although jetpack addons will still work, sort of, any jetpack addon that requires chrome features is subject to this policy also. Mozilla will be enforcing this brave new XUL-free world by refusing to sign addons that rely on XPCOM or XUL in this later timeframe, which dovetails nicely with not allowing unsigned addons starting with Firefox 42. (Parenthetically I don't agree with the mandatory signing policy, and if there is a TenFourFox 45 it will disable this feature. I don't port Gecko code for the walled garden, guys, thanks.)

Calling this a footgun and the future death of Firefox is not merely hyperbole. I suspect, and I suspect Mozilla is ignoring the fact, that many Firefox users use it because of the presence of such powerful addons that just can't be replicated in other browsers. Chrome, for example, doesn't have nearly the functionality because it doesn't expose it, and its addons are much less useful in general. But Mozilla is not content to merely shoot themselves in the foot here; they've emptied the whole magazine into their leg by making the new add-on world based on the almost completely different WebExtensions API. WebExtensions is compatible with Blink, the engine powering Chrome. That means an author can easily create a much less functional addon that runs not only on Firefox but also on Chrome. Yup, you read that right: soon the only functional difference between Firefox and Chrome at this rate will be the name and the source tree. More to the point, many great classic addons won't work in the new API, and some addons will probably never be made to work with WebExtensions.

Riddle me this, Batman Mozilla leadership: if the addons are the same, the features are the same, the supported standards are the same, the interface is converging and Mozilla's marketshare is shrinking ... why bother using Firefox? I mean, I guess I could start porting SeaMonkey, although this announcement probably kicks the last leg out from under that too, but does Firefox itself, MoCo/MoFo's premier browser brand, serve any useful purpose now? Don't say "because it makes the web free" -- people can just go download and compile WebKit, which is open source, well understood and widely available, and they can even usefully embed it, another opportunity Mozilla leadership shortsightedly threw away. They can fork it like Google did. They can throw a shell around it. Where's the Gecko value now?

Maybe this is a sign that the great Mozilla experiment has finally outlived its usefulness. And frankly there won't be much value in a Gecko-based browser or even a Servo-based one that works exactly the same way as everything else; notice the absolute lack of impact Firefox OS is having on mobile, although I use and prefer Firefox Android personally just because I don't trust Chrome. Maybe that trust will be the only reason to keep using Firefox on any platform, because I certainly can't think of anything else.

Meanwhile, this weekend I'm rewriting TenFourFox wiki documentation on Github ahead of the impending read-only status of Google Code. Since this is true Markdown, I'm using Nathan Hill's SimpleMarkPPC, which works pretty well for simple documents of this type and runs on 10.4. I won't be copying over all the old release notes, but starting with 38.3 all future ones will be on Github as well. After that we'll work on the MP3 support to finalize it, and I've got a secret project to share hopefully next week.

http://tenfourfox.blogspot.com/2015/08/mozillas-future-footgun-add-on-policy.html


Michael Kaply: My Take on WebExtensions

Saturday, August 22, 2015, 02:16

Let me start out by saying that I understand the need for something like WebExtensions. A cross browser extension API will be a great thing for the future of browsers. I understand why Mozilla is doing it. What I take issue with is the belief that existing Firefox only add-on developers will jump at the opportunity to use this new API. As far as I’m concerned, the only add-on developers that will benefit from this new API are Chrome developers who will find it much easier to port their extensions to Firefox.

Most Firefox extension developers do it as a hobby. Typically they have an itch about something in Firefox and they write an extension to scratch it. Then they make that extension available to everyone. Over time we all build up a set of extensions that make Firefox behave the way we (and clearly other people) want it to. (Chris Finke is a great example of this.) Every so often something changes in Firefox that breaks one of our extensions. At that point we have to make a decision: is it worth the time and energy to keep this extension going? Sometimes we keep it going, sometimes we give up (hence the ton of dead extensions on AMO). Luckily most of the time Firefox changes don’t break all our extensions, so we usually can keep going. With e10s coming up though, lots of developers have had to make decisions as to whether or not it is worth it to rewrite, and some developers have gone through that pain (and it is pain - a lot of pain).

Now developers are being told in the next one to two years they will have to completely rewrite ALL of their add-ons. What are the odds that these hobby add-on developers are going to do that?

Let’s be honest. Availability of APIs isn’t the difficult part of the discussion. Availability of time and energy to even attempt to rewrite all of our add-ons is the problem. And when you add in the fact that Mozilla hasn’t given add-on developers the marketplace we’ve been promised for years (which Chrome has had since day one), you’ll end up with a lot of developers deciding that it’s simply not worth it.

But let's talk availability of APIs. I'll use two of my extensions as examples. Keyword Search accesses the wrappedJSObject of search submissions in order to manipulate the submission. Will there really be an API for that? Or what about the CCK2? Will there really be APIs that allow me to modify the built-in preferences pages, including removing pages or controls? Or what about disabling private browsing? Or removing sync? Or removing access to about:config? I doubt it. There are just too many things that extensions do (most of them pretty obscure) to be able to provide a complete API.

I'll watch what goes on and hope that I'm wrong, but I'm not very optimistic.

I will say this, though. It's a great day to be a Chrome developer.

https://mike.kaply.com/2015/08/21/my-take-on-webextensions/


Ahmed Nefzaoui: It’s not following chrome, it’s called making the web and the web’s content more compatible

Saturday, August 22, 2015, 00:50

From DownThemAll:

It is safe to say, that Firefox will not be Firefox anymore as far as extensions go, but instead will become yet another Chrome-clone.

A quote from a blog post I read that treats Firefox as a friend who is about to die, so I wanted to quickly echo an opinion I have off the top of my head without being too technical:
I personally don’t see how implementing a common set of APIs or a spec that the rest of the browser vendors agreed on and implemented is turning Firefox into a chrome-clone.
The WebExtensions concept is already implemented everywhere else except in Firefox. So implementing it here is just as beneficial as when the W3C publishes a spec about Flexbox or WebRTC or CSS Logical Properties (wink wink) and Chrome implements that, and then Firefox implements it too: that. is. not. following. chrome, it’s called making the web and the web’s content more compatible. And if we are to sit in a corner and implement our own exclusive stuff, we will only become another Microsoft of the year 2000 with its ActiveX technology, where the only way to have the luxury of accessing its features was for developers to build their websites mainly for IE.

read more

http://nefzaoui.tn/blog/2015/08/its-not-following-chrome-its-called-making-the-web-and-the-webs-content-more-compatible/


Air Mozilla: Webdev Beer and Tell: August 2015

Saturday, August 22, 2015, 00:00

Webdev Beer and Tell: August 2015 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

https://air.mozilla.org/webdev-beer-and-tell-august-2015/


Bill McCloskey: Firefox Add-on Changes

Friday, August 21, 2015, 23:03

This post is related to “The Future of Developing Firefox Add-ons” on the add-ons blog. Please read that first for context. A couple of concerns from that post have come up that I would like to address here.

One concern people have is that their favorite add-on is no longer going to be supported, especially add-ons for power users. Some of the ones being mentioned are:

  • Tree Style Tab
  • NoScript
  • Vimperator/Pentadactyl
  • Tab Mix Plus
  • FireGestures
  • Classic theme restorer

We have a lot of ideas about how to make these sorts of extensions work using better APIs than we have now.

  • Opera has a sidebar API. Combined with a way to hide the tab strip, we think this could be used to implement an extension like Tree Style Tab.
  • We’re working with Giorgio Maone, the developer of NoScript, to design the APIs he needs to implement NoScript as a WebExtension.

We’re hoping people will have a lot of other ideas for the extensions that they care about. If you’d like to propose or vote on ideas, please visit webextensions.uservoice.com to express your opinion.

There are also concerns that restricting people to the WebExtensions API will limit innovation: we can make APIs to support the XUL extensions people have already made, but how will we know what other ones we’re missing out on?

It’s likely that we’ll still allow some access to XUL in the future. We want people to be able to experiment with new ideas, and they shouldn’t have to wait for us to design, implement, and finalize a new API. However, we don’t want this to become another feature like require('chrome') in Jetpack, which is used by virtually every add-on. We’re still trying to figure out how to avoid that fate. We know that we need to be more proactive about providing APIs that add-ons need. But is that enough?

Our big fear is that, once we provide a WebExtensions API, there won’t be anything to motivate people to switch over to it. We can try to deprecate access to the parts of XPCOM used to implement the functionality, but often there won’t be a clear mapping between the old and the new APIs.

Again, we’re open to ideas about how to do this. Moving away from XUL will be a long process. We’re announcing all of this early so that we can begin to gather feedback. APIs that are created in a vacuum probably aren’t going to be very useful to people.


https://billmccloskey.wordpress.com/2015/08/21/firefox-add-on-changes/


Chris Finke: My Future of Developing Firefox Add-ons

Friday, August 21, 2015, 21:53

Mozilla announced today that add-ons that depend on XUL, XPCOM, or XBL will be deprecated and subsequently incompatible with future versions of Firefox:

Consequently, we have decided to deprecate add-ons that depend on XUL, XPCOM, and XBL. We don’t have a specific timeline for deprecation, but most likely it will take place within 12 to 18 months from now. We are announcing the change now so that developers can prepare and offer feedback.

In response to this announcement, I’ve taken the step of discontinuing all of my Firefox add-ons. They all depend on XUL or XPCOM, so there’s no sense in developing them for the next year only to see them become non-functional. AutoAuth, Comment Snob, Feed Sidebar, Links Like This, OPML Support, RSS Ticker, and Tab History Redux should be considered unsupported as of now. (If for any reason, you’d like to take over development of any of them, e-mail me.)

While I don’t like Mozilla’s decision (and I don’t think it’s the best thing for the future of Firefox), I understand it; there’s a lot of innovation that could happen in Web browser technology that is stifled because of a decade-old add-on model. I only hope that the strides a lighter-weight Firefox can make will outweigh the loss of the thousands of add-ons that made it as popular as it is today.

http://www.chrisfinke.com/2015/08/21/my-future-of-developing-firefox-add-ons/


Air Mozilla: Edgar Chen: TaskCluster Interactive Sessions

Friday, August 21, 2015, 21:00

Edgar Chen: TaskCluster Interactive Sessions Come learn about TaskCluster Interactive Sessions in a quick presentation by Edgar Chen!

https://air.mozilla.org/edgar-chen-taskcluster-interactive-sessions/


Support.Mozilla.Org: What’s up with SUMO – 21st August

Friday, August 21, 2015, 18:30

Hello, SUMO Nation! How have you been? We skipped last week as I was away from the keyboard, discovering the wonders of the offline world but… we’re back and ready to share news and updates with you.

A warm welcome to those who joined us recently!

If you joined us recently, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

  • Vanja – for his motivation (with a level of over 9000!) to localize the KB for Firefox into Serbian.
We salute you!

Monday SUMO Community meetings

  • The previous one was unfortunately cancelled, which means no notes to share – sorry!
  • Remember that you can watch the archived meeting videos on our YouTube channel.
  • The next one is happening on Monday, 24th of August. Join us!
  • If you want to add a discussion topic to the upcoming live meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).

Help needed – thank you!

Developers

Community

Support Forum

  • Reminder: the One and Done SUMO Contributor Support Training is live. Start here!

L10n

Firefox (for Desktop, for Android, for iOS)

That’s it for today, looking forward to seeing you all on Monday – take care, take it easy, and safe travels!

https://blog.mozilla.org/sumo/2015/08/21/whats-up-with-sumo-21st-august/


Mozilla Addons Blog: The Future of Developing Firefox Add-ons

Friday, August 21, 2015, 16:00

Today we are announcing some major upcoming changes to Firefox add-ons. Our add-on ecosystem has evolved through incremental, organic growth over the years, but there are some modernizations to Firefox that require some foundational changes to support:

  • Taking advantage of new technologies like Electrolysis and Servo
  • Protecting users from spyware and adware
  • Shortening the time it takes to review add-ons

To help the add-on development community understand how we will enable these improvements, we are making four related announcements today:

  • We are implementing a new extension API, called WebExtensions—largely compatible with the model used by Chrome and Opera—to make it easier to develop extensions across multiple browsers.
  • A safer, faster, multi-process version of Firefox is coming soon with Electrolysis; we need developers to ensure their Firefox add-ons will be compatible with it.
  • To ensure third-party extensions provide customization without sacrificing security, performance or exposing users to malware, we will require all extensions to be validated and signed by Mozilla starting in Firefox 41, which will be released on September 22nd 2015.
  • We have decided on an approximate timeline for the deprecation of XPCOM- and XUL-based add-ons.

For our add-on development community, these changes will bring benefits, like greater cross-browser add-on compatibility, but will also require redevelopment of a number of existing add-ons. We’re making a big investment by expanding the team of engineers, add-on reviewers, and evangelists who work on add-ons and support the community that develops them. They will work with the community to improve and finalize the WebExtensions API, and will help developers of unsupported add-ons make the transition to newer APIs and multi-process support.

We’re announcing all of the changes today to make developers aware of our plans and to give everyone an opportunity to offer feedback. We are committed to doing what we can to make this transition as easy as possible. Together with our Mozilla community, we will create the future of Firefox add-ons.

Introducing the WebExtensions API

For some time we’ve heard from add-on developers that our APIs could be better documented and easier to use. In addition, we’ve noticed that many Firefox add-on developers also maintain a Chrome, Safari, or Opera extension with similar functionality. We would like add-on development to be more like Web development: the same code should run in multiple browsers according to behavior set by standards, with comprehensive documentation available from multiple vendors.

To this end, we are implementing a new, Blink-compatible API in Firefox called WebExtensions. Extension code written for Chrome, Opera, or, possibly in the future, Microsoft Edge will run in Firefox with few changes as a WebExtension. This modern and JavaScript-centric API has a number of advantages, including supporting multi-process browsers by default and mitigating the risk of misbehaving add-ons and malware.

WebExtensions will behave like other Firefox add-ons; they will be signed by Mozilla, and discoverable through addons.mozilla.org (AMO) or through the developer’s website. With this API, extension developers should be able to make the same extension available on Firefox and Chrome with a minimal number of changes to repackage for each platform.

A preview release of WebExtensions is available in Firefox 42, which is currently on Developer Edition, and information on how to start testing WebExtensions is available in the Mozilla wiki. We have started discussions with other browser vendors to begin an effort to standardize at least some of this API, and will continue to post additional information and more details about WebExtensions in the wiki.

Multi-process Firefox and Add-ons

Phase one of our Electrolysis project, which uses a separate operating system process to run Web content, has been moving towards our release channel. Subsequent phases will bring multiple content processes and improved sandboxing capabilities. Using a separate rendering process lays the foundation enabling us to bring significant performance and security improvements to Firefox, but it also breaks some add-ons, especially those that modify content. However, there are a number of mitigations in place to keep add-ons functional:

  • WebExtensions are fully compatible with Electrolysis. As the API matures and Electrolysis is enabled by default, this will be the way to port or develop extensions for Firefox.
  • Add-ons based on the Jetpack SDK will work well as long as they don’t use require(‘chrome’) or some of the low-level APIs to touch objects in the content process.
  • Add-ons that haven’t been upgraded to work with Electrolysis will run in a special compatibility environment that resembles single-process Firefox as much as possible. If an add-on touches content, the access will happen via cross-process object wrappers (CPOWs). However, CPOWs are much slower than the equivalent DOM operations in single-process Firefox, and can affect the user experience negatively. Also, some accesses aren’t supported by the compatibility layer and will throw exceptions.

Starting now, add-on developers need to think about their strategy to work with a multi-process Firefox:

The final release schedule for Electrolysis will be determined over the next several months as we test with more users. We would like developers to understand that, although there is a chance that the Electrolysis release schedule will be delayed or modified in the coming months, they should plan to update their add-ons to meet our current release plan as follows:

  • August 11th (Firefox 42 merges to Developer Edition). Electrolysis has been enabled by default on Developer Edition (it is already the default on Nightly).
  • September 22nd (Firefox 42 merges to Beta). Electrolysis will be available to users as an “opt-in” on the beta channel.
  • November 3rd (Firefox 43 merges to Beta). The earliest release Electrolysis will be enabled by default on Beta. When Electrolysis is enabled by default we will begin blocklisting Electrolysis-incompatible add-ons that cause major performance and/or stability problems.
  • December 15th (Firefox 43 merges to release). The earliest release Electrolysis will be enabled on the release channel, and our current planned release.
  • Six months past enabling Electrolysis on Release. The deprecation of CPOWs and compatibility shims will begin. We will release further scheduling information as appropriate, but developers should be aware that any add-ons that depend on them will stop working within six to twelve months of the general availability of Electrolysis.

The Electrolysis team has posted a list of popular add-ons for compatibility testing at http://arewee10syet.com. In addition to the steps above, developers are encouraged to review the list and follow the instructions to submit information about whether their add-ons are Electrolysis-compatible or not.

Signing

We currently use a blocklisting mechanism to defend against malicious add-ons, but additional measures are needed to better protect our users as some add-on developers have adapted to work around blocklisting. Blocklisting is also reactive: users can be harmed by dangerous add-ons that are installed until they are identified and blocked. Starting in Firefox 42, add-on developers will be required to submit extensions for review and signing by Mozilla prior to deployment, and unsigned add-ons cannot be installed or used with Firefox. You can read more about the rationale for signing in a separate blog post.

We realize that the add-on review process can sometimes be inconvenient for developers. Reviewing is a mostly manual, human process today, and moving an extension from the initial submission to passing a full review that meets our guidelines can be a time-consuming process that can take weeks or months. A major advantage of WebExtensions is that they can be reviewed more quickly. In general, it’s easier to develop a correct WebExtension, and the permissions system makes it easier to recognize malicious add-ons.

Our goal is to increase automation of the review process so that the wait time for reviews of new WebExtensions listed on addons.mozilla.org can be reduced to five days, and that the wait time for updates to existing WebExtensions can be reduced to one to two days. Current wait times for unlisted add-ons submitted for signing are less than three days. We are also expanding the team of paid and volunteer add-on reviewers and continue to make improvements to the automatic validator, both of which will reduce existing review queue wait times for all extensions in the immediate future.

While extension signing will not be enforced until Firefox 42, the code has shipped with Firefox 40, allowing users to see if installed extensions have been validated by Mozilla. Users of Firefox Developer Edition will have noticed that unsigned add-ons were blocked beginning on August 14.

The full schedule for add-on signing is currently as follows:

  • Firefox 40: Users will see a warning in the add-ons manager about unsigned extensions, but all extensions will continue to work.
  • Firefox 41: Unsigned extensions will be disabled by default, and Firefox will have a preference (xpinstall.signatures.required) that allows signature enforcement to be turned off.
  • Firefox 42 and beyond:
    • The Beta and Release versions of Firefox based on 42 and above (Beta 42 will be released at the same time as Firefox 41) will remove the preference that allows unsigned extensions to be installed, and will disable and/or prevent the installation of unsigned extensions.
    • The Nightly and Developer Editions of Firefox based on 42 and above will retain the preference to disable signing enforcement, allowing the development and/or use of unsigned add-ons in those versions. Unbranded versions of Firefox based on releases will also be made available for developers, and are expected to be in place for Firefox 42 for release (and potentially beta).

Deprecation of XUL, XPCOM, and the permissive add-on model

XPCOM and XUL are two of the most fundamental technologies to Firefox. The ability to write much of the browser in JavaScript has been a huge advantage for Mozilla. It also makes Firefox far more customizable than other browsers. However, the add-on model that arose naturally from these technologies is extremely permissive. Add-ons have complete access to Firefox’s internal implementation. This lack of modularity leads to many problems.

A permissive add-on model means that we have limited flexibility in changing the foundations of Firefox. The add-on breakage caused by Electrolysis is an important example of this problem. Technologies like CPOWs help us to work around add-on problems; however, CPOWs have been a huge investment in effort and they are still slow and somewhat unreliable.

Without a fundamental shift to the way Firefox add-ons work, we will be unable to use new technologies like Electrolysis, Servo or browser.html as part of Firefox.

The tight coupling between the browser and its add-ons also creates shorter-term problems for Firefox development. It’s not uncommon for Firefox development to be delayed because of broken add-ons. In the most extreme cases, changes to the formatting of a method in Firefox can trigger problems caused by add-ons that modify our code via regular expressions. Add-ons can also cause Firefox to crash when they use APIs in unexpected ways.

Consequently, we have decided to deprecate add-ons that depend on XUL, XPCOM, and XBL. We don’t have a specific timeline for deprecation, but most likely it will take place within 12 to 18 months from now. We are announcing the change now so that developers can prepare and offer feedback. Add-ons that are built using the new WebExtension API will continue to work. We will also continue supporting SDK add-ons as long as they don’t use require(‘chrome’) or some of the low-level APIs that provide access to XUL elements.

A major challenge we face is that many Firefox add-ons cannot possibly be built using either WebExtensions or the SDK as they currently exist. Over the coming year, we will seek feedback from the development community, and will continue to develop and extend the WebExtension API to support as much of the functionality needed by the most popular Firefox extensions as possible.

Moving forward

The strategy announced here necessarily involves a lot of trade-offs. Developers who already support Chrome extensions will benefit since they will have one codebase to support instead of two. Developers of Firefox-only add-ons will have to make changes. Those changes may require considerable development effort up-front, but we feel the end result will be worth that effort for both Firefox’s users and developers.

We want to reiterate our commitment to our add-on development community, and will work with you in porting extensions, designing new APIs, and creating innovative new add-ons that make Firefox great.

We will continue to post additional resources in the coming weeks and months to outline each of these changes in more detail, as well as provide support through our traditional channels via the Mozilla Developer Network, IRC (in #extdev), and the extension developer group.

Update: A lot of people have been asking what WebExtensions will deliver, and how. Bill McCloskey has posted an update on where we want to take them, and how you can contribute ideas and be part of the process. It’s a must-read for people who are concerned about how the addons they develop, use, and love will continue to be part of Firefox.

https://blog.mozilla.org/addons/2015/08/21/the-future-of-developing-firefox-add-ons/


Nathan Froyd: explicit is better than implicit: c++ implicitly defined member functions

Thursday, August 20, 2015, 19:14

In the tradition of The Zen of Python, I’ve been thinking about pushing for explicit declarations of otherwise implicitly-defined member functions in C++, both in code that I write and in code that I review:

// Instances of this class should not be copied.
MyClass(const MyClass&) = delete;
MyClass& operator=(const MyClass&) = delete;

// We are OK with the default semantics.
OtherClass(const OtherClass&) = default;
OtherClass& operator=(const OtherClass&) = default;
OtherClass(OtherClass&&) = default;
OtherClass& operator=(OtherClass&&) = default;

[Background: C++ specifies several member functions that the compiler will implicitly define for you in any class: the default constructor, the copy/move constructor(s), and the copy/move assignment operator(s). I say “implicitly define”, as though that always happens, but there are a number of constraints on when the compiler will do this. For the purposes of the discussion below, I’ll ignore the default constructor bit and focus on the copy/move constructor and assignment operator. (I will also happily ignore all the different variants thereof that can occur, e.g. when the compiler defines MyClass(MyClass&) for you.) I think the arguments apply equally well to the default constructor case, but most classes I deal with tend to either declare their own default constructor or have several user-defined constructors anyway, which prohibit the compiler from implicitly declaring the default constructor.]

I think the argument for = delete is more obvious and less controversial, so I’ll start there.  = delete‘ing functions you don’t want used is part of the API contract of the class.  Functions that shouldn’t be used shouldn’t be exposed to the user, and = delete ensures that the compiler won’t implicitly define part of your API surface (and users thereby unknowingly violate API guarantees).  The copy constructor/assignment operator are the obvious candidates for = delete, but using = delete for the move constructor/assignment operator makes sense in some cases (e.g. RAII classes). Using = delete gives you pleasant compiler error messages, and it’s clearer than:

private:
  MyClass(const MyClass&);
  MyClass& operator=(const MyClass&);

If you’re lucky, there might be a comment to the effect of // Deliberately not defined.  I know which code I’d prefer to read. (Using = delete also ensures you don’t accidentally use the not-defined members inside the class itself, then spend a while waiting for the linker errors to tell you about your screw-up.)

= default appears to be a little harder to argue for.  “Experienced” programmers always know which functions are provided by the compiler, right?

Understanding whether the compiler implicitly defines something requires looking at the entire class definition (including superclasses) and running a non-trivial decision algorithm. I sure don’t want each reader of the code to do that for two or four different member functions (times superclasses, too), all of which are reasonably important in understanding how a class is intended to be used.

Explicitly declaring what you intend can also avoid performance pitfalls. In reading through the C++ specification to understand when things were implicitly declared, I discovered that the same functions can also be implicitly deleted, including this great note: “When the move constructor is not implicitly declared or explicitly supplied, expressions that otherwise would have invoked the move constructor may instead invoke a copy constructor.” So, if the move constructor was implicitly declared at some point, but then was implicitly deleted through some change, expressions that were previously efficient due to moving would become somewhat less so due to copying. Isn’t C++ great?

Being explicit also avoids the possibility of meaning to define something, but getting tripped up by the finer points of the language:

template<typename T>
class MyClass
{
public:
  // This does not define a copy constructor for MyClass<T>.
  template<typename U>
  MyClass(const MyClass<U>& aOther) : ... { ... }
  ...
};

Comments could serve to notify the reader that we’re OK with the default definition, but if I could choose between encoding something in a place solely intended for humans, or a place both humans and the compiler will understand, I know which one I’d pick.

https://blog.mozilla.org/nfroyd/2015/08/20/explicit-is-better-than-implicit-c-implicitly-defined-member-functions/


Air Mozilla: Reps weekly

Thursday, August 20, 2015, 18:00

Niko Matsakis: Virtual Structs Part 3: Bringing Enums and Structs Together

Thursday, August 20, 2015, 16:29

So, in previous posts, I discussed the pros and cons of two different approaches to modeling variants: Rust-style enums and C++-style classes. In those posts, I explained why I see Rust enums and OO-style class hierarchies as more alike than different (I personally credit Scala for opening my eyes to this, though I’m sure it’s been understood by others for much longer). The key points were as follows:

  • Both Rust-style enums and C++-style classes can be used to model the idea of a value that can be one of many variants, but there are differences in how they work at runtime. These differences mean that Rust-style enums are more convenient for some tasks, and C++-style classes for others. In particular:
    • A Rust-style enum is sized as large as the largest variant. This is great because you can lay them out flat in another data structure without requiring any allocation. You can also easily change from one variant to another. One downside of Rust enums is that you cannot refine them to narrow the set of variants that a particular value can have. (A small runnable sketch of these layout trade-offs follows this list.)
    • A C++-style class is sized to be exactly as big as one variant. This is great because it can be much more memory efficient. However, if you don’t know what variant you have, you must manipulate the value by pointer, so it tends to require more allocation. It is also impossible to change from one variant to another. Class hierarchies also give you a simple, easily understood kind of refinement, and the ability to have common fields that are shared between variants.
  • C++-style classes offer constructors, which allows for more abstraction and code reuse when initially creating an instance, but raise thorny questions about the type of a value under construction; Rust structs and enums are always built in a single-shot today, which is simpler and safer but doesn’t compose as well.
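To make the layout point above concrete, here is a small, runnable sketch in today's Rust (the Shape type is invented purely for illustration): every value of an enum occupies the space of its largest variant, values can be stored flat and switched between variants in place, and per-variant sizing is only available behind an indirection such as Box.

use std::mem::size_of;

// Today's Rust: an enum is laid out as large as its largest variant
// (plus discriminant and padding).
#[allow(dead_code)]
enum Shape {
    Point,                   // no payload
    Circle { radius: f64 },  // 8 bytes of payload
    Rect { w: f64, h: f64 }, // 16 bytes of payload
}

fn main() {
    // Every Shape takes the same amount of space, dictated by Rect.
    println!("size_of::<Shape>()      = {}", size_of::<Shape>());
    // Per-variant sizing requires boxing, giving up the flat layout.
    println!("size_of::<Box<Shape>>() = {}", size_of::<Box<Shape>>());

    // Values can live inline and change variant in place.
    let mut s = Shape::Point;
    if let Shape::Point = s {
        println!("started as a point");
    }
    s = Shape::Rect { w: 1.0, h: 2.0 };
    if let Shape::Rect { w, h } = s {
        println!("rect {} x {}", w, h);
    }
}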

What I want to talk about in this post is a proposal (or proto-proposal) for bridging those two worlds in Rust. I’m going to focus on data layout in this post. I’ll defer virtual methods for another post (or perhaps an RFC). Spoiler alert: they can be viewed as a special case of specialization.

I had originally intended to publish this post a few days after the others. Obviously, I got delayed. Sorry about that! Things have been very busy! In any case, better late than never, as some-great-relative-or-other always (no doubt) said. Truth is, I really miss blogging regularly, so I’m going to make an effort to write up more in progress and half-baked ideas (yeah yeah, promises to blog more are a dime a dozen, I know).

Note: I want to be clear that the designs in this blog post are not my work per se. Some of the ideas originated with me, but others have arisen in the course of conversations with others, as well as earlier proposals from nrc, which in turn were heavily based on community feedback. And of course it’s not like we Rust folk invented OO or algebraic data types or anything in the first place. :)

Unifying structs and enums into type hierarchies

The key idea is to generalize enums and structs into a single concept. This is often called an algebraic data type, but algebra brings back memories of balancing equations in middle school (not altogether unpleasant ones, admittedly), so I’m going to use the term type hierarchy instead. Anyway, to see what I mean, let’s look at my favorite enum ever, Option:

enum Option<T> {
    Some(T), None
}

The idea is to reinterpret this enum as three types arranged into a tree or hierarchy. An important point is that every node in the tree is now a type: so there is a type representing the Some variant, and a type representing the None variant:

enum Option<T>
|
+- struct None<T>
+- struct Some<T>

As you can see, the leaves of the tree are called structs. They represent a particular variant. The inner nodes are called enums, and they represent a set of variants. Every existing struct definition can also be reinterpreted as a hierarchy, but just a hierarchy of size 1.

These generalized type hierarchies can be any depth. This means you can do nested enums, like:

enum Mode {
    enum ByRef {
        Mutable,
        Immutable
    }
    ByValue
}

This creates a nested hierarchy:

enum Mode
|
+- enum ByRef
|  |
|  +- struct Mutable
|  +- struct Immutable
+- ByValue

Since all the nodes in a hierarchy are types, we get refinement types for free. This means that I can use Mode as a type to mean any mode at all, or Mode::ByRef for the times when I know something is one of the ByRef modes, or even Mode::ByRef::Mutable (which is a singleton struct).

As part of this change, it should be possible to declare the variants out of line. For example, we could change enum to look as follows:

enum Option<T> {
}
struct Some<T>: Option<T> {
    value: T
}
struct None<T>: Option<T> {
}

This definitely is not exactly equivalent to the older one, of course. The names Some and None live alongside Option, rather than within it, and I’ve used a field (value) rather than a tuple struct.

Common fields

Enum declarations are extended with the ability to have fields as well as variants. These fields are inherited by all variants of that enum. In the syntax, fields must appear before the variants, and it is also not possible to combine tuple-like structs with inherited fields.

Let’s revisit an example from the previous post. In the compiler, we currently represent types with an enum. However, there are certain fields that every type carries. These are handled via a separate struct, so that we wind up with something like this:

type Ty<'tcx> = &'tcx TypeData<'tcx>;

struct TypeData<'tcx> {
    id: u32,
    flags: u32,
    ...,
    structure: TypeStructure<'tcx>,
}

enum TypeStructure<'tcx> {
    Int,
    Uint,
    Ref(Ty<'tcx>),
    ...
}

Under this newer design, we could simply include the common fields in the enum definition:

type Ty<'tcx> = &'tcx TypeData<'tcx>;

enum TypeData<'tcx> {
    // Common fields:
    id: u32,
    flags: u32,
    ...,

    // Variants:
    Int { },
    Uint { },
    Ref { referent_ty: Ty<'tcx> },
    ...
}

Naturally, when I create a TypeData I should supply all the fields, including the inherited ones (though in a later section I’ll present ways to extract the initialization of common fields into a reusable fn):

let ref_ty =
    TypeData::Ref {
        id: id,
        flags: flags,
        referent_ty: some_ty
    };

And, of course, given a reference &TypeData<'tcx>, we can access these common fields:

fn print_id<'tcx>(t: &TypeData<'tcx>) {
    println!("The id of `{:?}` is `{:?}`", t, t.id);
}

Convenient!

Unsized enums

As today, the size of an enum type, by default, is equal to the largest of its variants. However, as I’ve outlined in the last two posts, it is often useful to have each value be sized to a particular variant. In the previous posts I identified some criteria for when this is the case:

One interesting question is whether we can concisely state conditions in which one would prefer to have “precise variant sizes” (class-like) vs “largest variant” (enum). I think the “precise sizes” approach is better when the following apply:

  • A recursive type (like a tree), which tends to force boxing anyhow. Examples: the AST or types in the compiler, DOM in servo, a GUI.
  • Instances never change what variant they are.
  • Potentially wide variance in the sizes of the variants.

Therefore, it is possible to declare the root enum in a type hierarchy as either sized (the default) or unsized; this choice is inherited by all enums in the hierarchy. If the hierarchy is declared as unsized, it means that each struct type will be sized just as big as it needs to be. This means in turn that the enum types in the hierarchy are unsized types, since the space required will vary depending on what variant an instance happens to be at runtime.

To continue with our example of types in rustc, we currently go through some contortions so as to introduce indirection for uncommon cases, which keeps the size of the enum under control:

type Ty<'tcx> = &'tcx TypeData<'tcx>;

enum TypeData<'tcx> {
    ...,

    // The data for a fn type is stored in a different struct
    // which is cached in a special arena. This is helpful
    // because (a) the size of this variant is only a single word
    // and (b) if we have a type that we know is a fn pointer,
    // we can pass the `BareFnTy` struct around instead of the
    // `TypeData`.
    FnPointer { data: &'tcx FnPointerData<'tcx> },
}

struct FnPointerData<'tcx> {
    unsafety: Unsafety,
    abi: Abi,
    signature: Signature,
}

As discussed in a comment in the code, the current scheme also serves as a poor man’s refinement type: if at some point in the code we know we have a fn pointer, we can write a function that takes a FnPointerData argument to express that:

fn process_ty<'tcx>(ty: Ty<'tcx>) {
    match ty {
        &TypeData::FnPointer { data, .. } => {
            process_fn_ty(ty, data)
        }
        ...
    }
}

// This function expects that `ty` is a fn pointer type. The `FnPointerData`
// contains the fn pointer information for `ty`.
fn process_fn_ty<'tcx>(ty: Ty<'tcx>, data: &FnPointerData<'tcx>) {
}

This pattern works OK in practice, but it is not perfect. For one thing, it’s tedious to construct, and it’s also a little inefficient. It introduces unnecessary indirection and a second memory arena. Moreover, the refinement type scheme isn’t great, because you often have to pass both the ty (for the common fields) and the internal data.

Using a type hierarchy, we can do much better. We simply remove the FnPointerData struct and inline its fields directly into TypeData:

type Ty<'tcx> = &'tcx TypeData<'tcx>;

unsized enum TypeData<'tcx> {
    ...,

    // No indirection anymore. What's more, the type `FnPointer`
    // serves as a refinement type automatically.
    FnPointer {
        unsafety: Unsafety,
        abi: Abi,
        signature: Signature,
    }
}

Now we can write functions that process specific categories of types very naturally:

fn process_ty<'tcx>(ty: Ty<'tcx>) {
    match ty {
        fn_ty @ &TypeData::FnPointer { .. } => {
            process_fn_ty(fn_ty)
        }
        ...
    }
}

// Don't even need a comment: it's obvious that `ty` should be a fn type
// (and enforced by the type system).
fn process_fn_ty<'tcx>(ty: &TypeData::FnPointer<'tcx>) {
}

Matching as downcasting

As the previous example showed, one can continue to use match to select the variant from an enum (sized or not). Matching also gives us an elegant downcasting mechanism. Instead of writing (Type) value, as in Java, or dynamic_cast<Type>(value), one writes match value and handles the resulting cases. Just as with enums today, if let can be used if you just want to handle a single case.
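A runnable illustration of this in today's Rust (the Node type below is a stand-in invented for this sketch, not part of the proposal): if let plays the role of a checked downcast for a single case, while match forces every variant to be handled.

#[allow(dead_code)]
enum Node {
    Text { contents: String },
    Element { tag: String, children: Vec<Node> },
}

// `if let` handles one case, much like a checked downcast.
fn text_len(node: &Node) -> Option<usize> {
    if let Node::Text { contents } = node {
        Some(contents.len())
    } else {
        None
    }
}

// `match` is exhaustive: the compiler insists that every variant is covered,
// unlike dynamic_cast, which checks for one type only.
fn describe(node: &Node) -> String {
    match node {
        Node::Text { contents } => format!("text ({} bytes)", contents.len()),
        Node::Element { tag, children } => format!("<{}> with {} children", tag, children.len()),
    }
}

fn main() {
    let n = Node::Text { contents: "hello".to_string() };
    println!("{:?}, {}", text_len(&n), describe(&n));
}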

Crate locality

An important part of the design is that the entire type hierarchy must be declared within a single crate. This is of course trivially true today: all variants of an enum are declared in one item, and structs correspond to singleton hierarchies.

Limiting the hierarchy to a single crate has a lot of advantages. Without it, you simply can’t support today’s sized enums, for one thing. It allows us to continue doing exhaustive checks for matches and to generate more efficient code. It is interesting to compare to dynamic_cast, the C++ equivalent to a match:

  • dynamic_cast is often viewed as a kind of code smell, versus a virtual method. I’m inclined to agree, as dynamic_cast only checks for a particular variant, rather than specifying handling for the full range of variants; this makes it fragile in the face of edits to the code. In contrast, the exhaustive nature of a Rust match ensures that you handle every case (of course, one must still be judicious in your use of _ patterns, which, while convenient, can be a refactoring hazard).
  • dynamic_cast is somewhat inefficient, since it must handle the fully general case of classes that spread across compilation units; in fact, it is very uncommon to have a class hierarchy that is truly extensible – and in such cases, using dynamic_cast is particularly hard to justify. This leads to projects like LLVM reimplementing RTTI (the C++ name for matching) from scratch.

Another advantage of confining the hierarchy to a single crate is that it allows us to continue doing variance inference across the entire hierarchy at once. This means, for example, that in the out-of-line version of Option (above) we can infer a variance for the parameter T declared on Option, in the same way we do today (otherwise, the declaration of enum Option would require some form of phantom data, and that would be binding on the types declared in other crates).

I also find that confining the hierarchy to a single crate helps to clarify the role of type hierarchies versus traits and, in turn, avoid some of the pitfalls so beloved by OO haters. Basically, it means that if you want to define an open-ended extension point, you must use a trait, which also offers the most flexibility; a type hierarchy, like an enum today, can only be used to offer a choice between a fixed number of crate-local types. An analogous situation in Java would be deciding between an abstract base class and an interface; under this design, you would have to use an interface (note that the problem of code reuse can be tackled separately, via specialization).

Finally, confining extension to a trait is relevant to the construction of vtables and handling of specialization, but we’ll dive into that another time.

Even though I think that limiting type hierarchies to a single crate is very helpful, it’s worth pointing out that it IS possible to lift this restriction if we so choose. This can’t be done in all cases, though, due to some of the inherent limitations involved.

Enum types as bounds

In the previous section, I mentioned that enums and traits (both today and in this proposed design) both form a kind of interface. Whereas traits define a list of methods, enums indicate something about the memory layout of the value: for example, they can tell you about a common set of fields (though not the complete set), and they clearly narrow down the universe of types to be just the relevant variants. Therefore, it makes sense to be able to use an enum type as a bound on a type parameter. Let’s dive into an example to see what I mean and why you might want this.

Imagine we’re using a type hierarchy to represent the HTML DOM. It might look something like this (browser people: forgive my radical oversimplification):

unsized enum Node {
  // where this node is positioned after layout
  position: Rectangle,
  ...
}

enum Element: Node {
  ...
}

struct TextElement: Element {
  ...
}

struct ParagraphElement: Element {
  ...
}

...

Now imagine that I have a helper function that selects nodes based on whether they intersect a particular box on the screen:

fn intersects(box: Rectangle, elements: &[Rc<Node>]) -> Vec<Rc<Node>> {
    let mut result = vec![];
    for element in elements {
        if element.position.intersects(box) {
            result.push(element.clone());
        }
    }
    result
}

OK, great! But now imagine that I have a slice of text elements (&[Rc<TextElement>]), and I would like to use this function. I will get back a Vec<Rc<Node>> – I’ve lost track of the fact that my input contained only text elements.

Using generics and bounds, I can rewrite the function:

fn intersects<T:Node>(box: Rectangle, elements: &[Rc<T>]) -> Vec<Rc<T>> {
    // identical to before
}

Nothing in the body had to change, only the signature.

Permitting enum types to appear as bounds also means that they can be referenced by traits as supertraits. This allows you to define interfaces that cut across the primary inheritance hierarchy. So, for example, in the DOM both the HTMLTextAreaElement and the HTMLInputElement can carry a block of text, which implies that they have a certain set of text-related methods and properties in common. And of course they are both elements. This can be modeled using a trait like so:

trait TextAPIs: HTMLElement {
    fn maxLength(&self) -> usize;
    ..
}

This means that if you have an &TextAPIs object, you can access the fields from HTMLElement with no overhead, because they are stored in the same place for both cases. But if you want to access other things, such as maxLength, that implies virtual dispatch, since the address is dynamically computed and will vary.
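The dynamic-dispatch half of that claim can already be seen in today's Rust; here is a minimal sketch (the names are invented, and the fixed-offset shared fields from the proposal have no direct equivalent yet):

trait TextApi {
    fn max_length(&self) -> usize;
}

struct TextArea { max: usize }
struct TextInput { max: usize }

impl TextApi for TextArea {
    fn max_length(&self) -> usize { self.max }
}

impl TextApi for TextInput {
    fn max_length(&self) -> usize { self.max }
}

// `el` is a fat pointer; max_length is resolved dynamically through the vtable.
fn print_limit(el: &dyn TextApi) {
    println!("max length: {}", el.max_length());
}

fn main() {
    print_limit(&TextArea { max: 500 });
    print_limit(&TextInput { max: 64 });
}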

Enums vs traits

The notion of enums as bounds raises questions about potential overlap in purpose between enums and traits. I would argue that this overlap already exists: both enums and traits today are ways to let you write a single function that operates over values of more than one type. However, in practice, it’s rarely hard to know which one you want to use. This I think is because they come at the problem from two different angles:

  • Traits start with the assumption that you want to work with any type, and let you narrow that. Basically, you get code that is as general as possible.
  • In contrast, enums assume you want to work with a fixed set of types. This means you can write code that is as specific as possible. Enums also work best when the types you are choosing between are related into a kind of family, like all the different variants of types in the Rust language or some and none.

If we extend enums in the way described here, then they will become more capable and convenient, and so you might find that they overlap a bit more with plausible use cases for traits. However, I think that in practice there are still clear guidelines for which to choose when:

  • If you have a fixed set of related types, use an enum. Having an enumerated set of cases is advantageous in a lot of ways: we can generate faster code, you can write matches, etc.
  • If you want open-ended extension, use a trait (and/or trait object). This will ensure that your code makes as few assumptions as possible, which in turn means that you can handle as many clients as possible.

Because enums are tied to a fixed set of cases, they allow us to generate tighter code, particularly when you are not monomorphizing to a particular variant. That is, if you have a value of type &TypeData, where TypeData is the enum we mentioned before, you can access common fields at no overhead, even though we don’t know what variant it is. Moreover, the pointer is thin and thus takes only a single word.

In contrast, if you had made TypeData a trait and hence &TypeData was a trait object, accessing common fields would require some overhead. (This is true even if we were to add virtual fields to traits, as eddyb and kimundi proposed in RFC #250.) Also, because traits are added on to other values, your pointer would be a fat pointer, and hence take two words.
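The pointer-width difference is observable in today's Rust. A quick sketch (the enum and trait here are stand-ins for illustration, not the proposal's types):

use std::mem::size_of;

#[allow(dead_code)]
enum TypeData {
    Int { id: u32 },
    Ref { id: u32, referent: Box<TypeData> },
}

trait TypeLike {
    fn id(&self) -> u32;
}

impl TypeLike for TypeData {
    fn id(&self) -> u32 {
        match self {
            TypeData::Int { id } | TypeData::Ref { id, .. } => *id,
        }
    }
}

fn main() {
    // A reference to the enum is a thin pointer: one word.
    println!("&TypeData:     {} bytes", size_of::<&TypeData>());
    // A trait object reference also carries a vtable pointer: two words.
    println!("&dyn TypeLike: {} bytes", size_of::<&dyn TypeLike>());
}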

(As an aside, I still like the idea of adding virtual fields to traits. The idea is that these fields could be remapped in an implementation to varying offsets. Accessing such a field implies dynamically loading the offset, which is slower than a regular field but faster than a virtual call. If we additionally added the restriction that those fields must access content that is orthogonal from one another, we might be able to make the borrow checker more permissive in the field case as well. But that is kind of an orthogonal extension to what I’m talking about here – and one that fits well with my framing of traits are for open-ended extension across heterogeneous types, enums are for a single cohesive type hierarchy.)

Associated structs (constructors)

One of the distinctive features of OO-style classes is that they feature constructors. Constructors allow you to layer initialization code, so that you can build up a function that initializes (say) the fields for Node, and that function is used as a building block by one that initializes the Element fields, and so on down the hierarchy. This is good for code reuse, but constructors have an Achilles heel: while we are initializing the Node fields, what value do the Element fields have? In C++, the answer is who knows – the fields are simply uninitialized, and accessing them is undefined behavior. In Java, they are null. But Rust has no such convenient answer. And there is an even weirder question: what happens when you downcast or match on a value while it is being constructed?

Rust has always sidestepped these questions by using the functional language approach, where you construct an aggregate value (like a struct) by supplying all its data at once. This works well for small structs, but it doesn’t scale up to supporting refinement types and common fields. Consider the example of types in the compiler:

enum TypeData<'tcx> {
    // Common fields:
    id: u32,
    flags: u32,
    counter: usize, // ok, I'm making this field up :P

    ...,
    FnPointer {
        unsafety: Unsafety,
        abi: Abi,
        signature: Signature,
    }
    ..., // other variants here
}

I would like to be able to write some initialization routines that compute the id, flags, and whatever else and then reuse those across different variants. But it’s hard to know what such a function should return:

fn init_type_data(cx: &mut Context) -> XXX {
    XXX { id: cx.next_id(), flags: DEFAULT_FLAGS, counter: 0 }
}

What is this type XXX? What I want is basically a struct with just the common fields (though of course I don’t want to have to define such a struct myself; that would be too repetitive):

struct XXX {
    id: u32,
    flags: u32,
    counter: usize,
}

And of course I also want to be able to use an instance of this struct in an initializer as part of a .. expression, like so:

fn make_fn_type(cx: &mut Context, unsafety: Unsafety, abi: Abi, signature: Signature) {
    TypeData::FnPointer {
        unsafety: unsafety,
        abi: abi,
        signature: signature,
        ..init_type_data(cx)   // <-- initializes the common fields 
    }
}

If we had a type like this, it would strike a reasonably nice balance between the functional and OO styles. We can layer constructors and build constructor abstractions, but we also don’t have a value of type TypeData until all the fields are initialized. In the interim, we just have a value of this type XXX, which only has the shared fields that are common to all variants.

All we need now is a reasonable name for this type XXX. The proposal is that every enum has an associated struct type called struct (i.e., the keyword). So instead of XXX, I could write TypeData::struct, and it means a struct with all the fields common to any TypeData variant. Note that a TypeData::struct value is not a TypeData variant; it just has the same data as a variant.
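
Until something like TypeData::struct exists, one way to approximate the pattern in today's Rust is to factor the shared fields into an explicit struct and embed it in every variant. This is only a hedged sketch of that workaround, with made-up names and simplified field types; it is not part of the proposal itself:

// Plays the role the proposed TypeData::struct would play.
#[allow(dead_code)]
struct TypeDataCommon {
    id: u32,
    flags: u32,
    counter: usize,
}

#[allow(dead_code)]
enum TypeData {
    FnPointer {
        common: TypeDataCommon,
        unsafety: bool,   // stand-ins for the real Unsafety/Abi/Signature types
        abi: String,
    },
    // ... other variants, each carrying a `common` field
}

struct Context { last_id: u32 }

impl Context {
    fn next_id(&mut self) -> u32 {
        self.last_id += 1;
        self.last_id
    }
}

const DEFAULT_FLAGS: u32 = 0;

// The analogue of init_type_data: it builds just the shared part.
fn init_type_data(cx: &mut Context) -> TypeDataCommon {
    TypeDataCommon { id: cx.next_id(), flags: DEFAULT_FLAGS, counter: 0 }
}

fn make_fn_type(cx: &mut Context, unsafety: bool, abi: String) -> TypeData {
    TypeData::FnPointer {
        common: init_type_data(cx), // shared fields initialized in one place
        unsafety,
        abi,
    }
}

fn main() {
    let mut cx = Context { last_id: 0 };
    let _fn_ty = make_fn_type(&mut cx, false, "Rust".to_string());
}

The cost relative to the proposal is the extra field access (value.common.id instead of value.id) and the fact that the compiler cannot see the shared fields as belonging to the enum as a whole.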

Subtyping and coercion

There is one final wrinkle worth covering in the proposal. And unfortunately, it’s a tricky one. I’ve been sort of tacitly assuming that an enum and its variants have some sort of typing relationship, but I haven’t said explicitly what it is. This part is going to take some experimentation to find the right mix. But let me share some intermediate thoughts.

Unsized enums. For unsized enums, we are always dealing with an indirection. So e.g. we have to be able to smoothly convert from a reference to a specific struct like &TextElement to a reference to a base enum like &Node. We’ve traditionally viewed this as a special case of DST coercions. Basically, coercing to &Node is more-or-less exactly like coercion to a trait object, except that we don’t in fact need to attach a vtable – that is, the extra data on the &Node fat pointer is just (). But in fact we don’t necessarily HAVE to view upcasting like this as a coercion – after all, there is no runtime change happening here.

This gets at an interesting point. Subtyping between OO classes is normally actually subtyping between references. That is, in Java we say that String <: Object, but that is because everything in Java is in fact a reference. In C++, not everything is a reference, so if you aren’t careful this in fact gives rise to creepy hazards like object slicing. The problem here is that in C++ the superclass type is really just the superclass fields; so if you do superclass = subclass, then you are just going to drop the extra fields from the subclass on the floor (usually). This probably isn’t what you meant to do.

Because of unsized types, though, Rust can safely say that a struct type is a subtype of its containing enum(s). So, in the DOM example, we could say that TextElement <: Node. We don’t have to fear slicing because the type TextElement is unsized, and hence the user could only ever make use of it by reference. In other words, object slicing arises in C++ precisely because it doesn’t have a notion of unsized types.
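
Today's Rust already has a small-scale version of the "unsized, so it can only be used by reference, so nothing can be sliced off" argument, namely structs whose last field is unsized. The sketch below uses made-up names and a slice field purely as an analogy for the proposed &TextElement -> &Node upcast:

// Record<[u8]> is a dynamically sized type (DST); it can only live behind a pointer.
struct Record<T: ?Sized> {
    id: u32,
    data: T,
}

fn describe(record: &Record<[u8]>) -> (u32, usize) {
    // The sized "common" field stays accessible through the reference,
    // and the unsized tail can never be moved out, so no slicing is possible.
    (record.id, record.data.len())
}

fn main() {
    let sized: Record<[u8; 3]> = Record { id: 7, data: [1, 2, 3] };
    // Unsizing coercion from the more specific type to the less specific one:
    // &Record<[u8; 3]> -> &Record<[u8]>.
    let as_unsized: &Record<[u8]> = &sized;
    println!("{:?}", describe(as_unsized));
}

Because Record<[u8]> has no statically known size, there is simply no way to write the by-value assignment that performs slicing in the C++ sense.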

Sized enums. To be honest, unsized enums are not the scary case, because they are basically a new feature to the language. The harder and more interesting case is sized enums. The problem here is that we are introducing new types into existing code, and we want to be sure not to break things. So consider this example:

let mut x = None;
x = Some(3);

In today’s world, the first assignment gives x a type of Option<_>, where the _ represents something to be inferred later. This is because the expression None has type Option<_>. But under this RFC, the type of None is None<_> – and hence we have to be smart enough to infer that the type of x should not be None<_> but rather Option<_> (because it is later assigned a Some<_> value).

This kind of inference, where the type of a variable changes based on the full set of values assigned to it, is traditionally what we have called subtyping in the Rust compiler. (In contrast, coercion is an instantaneous decision that the compiler makes based on the types it knows thus far.) This is something of a technical detail of how the compiler works, but of course it affects the places in Rust where you need type annotations.

Now, to some extent, we already have this problem. There are known cases today where coercions don’t work as well as we would like. The proposed box syntax, for example, suffers from this a bit, as do other patterns. We’re investigating ways to make the compiler smarter, and it may be that we can combine all of this into a more intelligent inference infrastructure.

Variance and mutable references. It’s worth pointing out that we’ll always need some sort of coercion support, because subtyping alone doesn’t allow one to convert between mutable references. In other words, &mut TextElement is not a subtype of &mut Node, but we do need to be able to coerce from the former to the latter. This is safe because the type Node is unsized (basically, it is safe for the same reason that &mut [i32; 3] -> &mut [i32] is safe). The fact that &mut None -> &mut Option is not safe is an example of why sized enums can in fact be more challenging here. (If it’s not clear why that should be unsafe, the Nomicon’s section on variance may help clear things up.)
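
For reference, the mutable-reference coercion called safe above already compiles in today's Rust. A minimal sketch; the comment about the sized-enum case describes the proposal's hazard rather than anything expressible today:

fn main() {
    let mut arr: [i32; 3] = [1, 2, 3];

    // Coercing &mut [i32; 3] to &mut [i32] is fine: through the broader
    // type you can change elements, but never the length or layout of
    // the referent.
    let slice: &mut [i32] = &mut arr;
    slice[0] = 10;

    // A hypothetical &mut None<T> -> &mut Option<T> coercion would be
    // different: the broader type would let you store a Some value into
    // memory that was typed as (and sized for) None, which is exactly
    // the hazard with sized enums.
    println!("{:?}", arr);
}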

An alternative variation

If, in fact, we can’t solve the subtyping inference problems, there is another option. Rather than unifying enums and structs, we could add struct inheritance and leave enums as they are. Things would work more-or-less the same as in this proposal, but base structs would play the role of unsized enums, and sized enums would stay how they are. This can be justified on the basis that enums are used in different stylistic ways (like Option etc) where e.g. refinement types and common fields are less important; however, I do find the setup described in this blog post appealing.

Type parameters, GADTs, etc

One other detail I want to note. At least to start, I anticipate a requirement that every type in the hierarchy has the same set of type parameters (just like an enum today). If you use the inline syntax, this is implicit, but you’ll have to write it explicitly with the out of line syntax (we could permit reordering, but there should be a 1-to-1 correspondence). This simplifies the type-checker and ensures that this is more of an incremental step in complexity when compared to today’s enums, versus the giant leap we could have otherwise – loosening this rule also interacts with monomorphization and specialization, but I’ll dig into that more another time.

Conclusion

This post describes a proposal for unifying structs and enums to make each of them more powerful. It builds on prior work but adds a few new twists that close important gaps:

  • Enum bounds for type parameters, allowing for smoother interaction with generic code.
  • The associated struct for enums, allowing for constructors.

One of the big goals of this design is to find something that fits well within Rust’s orthogonal design. Today, data types like enums and structs are focused on describing data layout and letting you declare natural relationships that mesh well with the semantics of your program. Traits, in contrast, are used to write generic code that works across a heterogeneous range of types. This proposal retains that character, while alleviating some of the pain points in Rust today:

  • Support for refinement types and nested enum hierarchies;
  • Support for common fields shared across variants;
  • Unsized enums that allow for more efficient memory layout.

http://smallcultfollowing.com/babysteps/blog/2015/08/20/virtual-structs-part-3-bringing-enums-and-structs-together/


Air Mozilla: Quality Team (QA) Public Meeting

Wednesday, August 19, 2015, 23:30

Quality Team (QA) Public Meeting. This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...

https://air.mozilla.org/quality-team-qa-public-meeting-20150819/


Mozilla Community Ops Team: Weekly Status Update 2015-08-19

Wednesday, August 19, 2015, 22:02

Since this was only our second attempt at a weekly update, we left a lot of the same updates in if we didn’t have anything new for a sub-project in case people missed them the first time around. In the future we’ll only be posting what’s new!

Discourse

Discourse UX improvements (@Leo, @yousef)

There are some changes to Discourse that should be made to make it more suitable to Mozillians’ needs.

  • Status [In Progress]: See SSO update below. (No other changes from last week) We can use help researching if any of our improvements are already in the works by the Discourse team or other plugin authors. We can use help building the plugins that we need.

SSO (@Leo)

To improve the login experience for people using Discourse within Mozilla, bridge the gap in various ways between our different instances (e.g. a single username across instances), and integrate better with Mozilla more widely (with Mozillians integration, etc.).

  • Status [In Progress]: Working on initial version of SSO server which will have the same features as our current authentication system

Discourse Documentation (@Kensie)

To make Discourse more user friendly for Mozillians, we need some good documentation on how to use it

  • Status [In Progress]: (No changes from last week) We’ve created a sub-category to make documentation easy to find. We can use help writing the basic how-to documentation. It will help us if people using Discourse ask us questions so we know what to include in the documentation. Need to set a timeline to get this done ASAP

MECHADISCOURSE (@Yousef)

Putting all Discourse instances on one infrastructure, automated with Ansible and CloudFormation

This will help us keep the many Discourse instances we have secure, up to date, and running common plugins easily. At scale. AT SCALE. It also saves us $$$ while allowing all of our instances to be HA (highly available).

  • Status [In Progress]: Pretty much everything is ready, just need to implement a few things like SSL and we can start getting instances on it.

MoFo Discourse migrations

Migrating the Webmaker, Science and Hive Discourse instances to MECHADISCOURSE

This provides the teams with more stable Infra for their Discourse instances.

  • Status [In Progress]: Production instance of Webmaker Discourse is up. Waiting on the last few fixes to MECHADISCOURSE and we can spin up new instances in minutes.

Ansible (@Tanner)

#ConfigManagement

  • Makes it 100x easier to set up servers
  • https://jira.mozilla-community.org/secure/RapidBoard.jspa?projectKey=ANS&rapidView=15
  • https://github.com/Mozilla-cIT/ansible-playbooks
  • Initializes servers, will be used with MECHADISCOURSE as its first “big” project
  • http://cerasis.com/wp-content/uploads/2015/02/cloud-computing-buzzwords-explained.jpg

 

  • Status [In progress]: Production-ready, Jenkins has been set up with Crowd so we can trigger Ansible tasks easily.

Monitoring (@Tanner)

Tested out a few services like DataDog, but they’re unreasonably expensive for where we are right now.

Not using Icinga because it’s no longer a fork of Nagios, more or less its own thing – Nagios isn’t exactly great, but it’s an (the?) industry standard so generally well-supported and well-documented.

  • Status [In progress]: Trying to decide between using Nagios or trying to get funding for a paid solution. Discussion here.

Community Hosting (@Tanner, @yousef)

No changes from last week

Audit

We need to understand which sites are being actively used and which no longer need hosting, or need different hosting than they currently have

Status [Backlog]: Need to define minimum viable product (MVP) for community website to measure against. We’ll be reaching out to relevant communities and teams to start working on this. We could use help from people who’d like to help drive this.

Migration

We will be moving away from OVH to simplify community hosting and save money

Status [In Progress]: Mesosphere is in progress, awaiting some approvals on Participation Infrastructure side

Documentation (@Kensie)

No changes from last week

Discourse documentation

(see above)

Wiki update

Our wiki pages are out of date, and shouldn’t be under IT anymore

  • Status [stalled]: Discourse documentation is a higher priority. In the fall will schedule another sprint. If some dedicated soul wanted to take this on for us, we’d be happy to help provide information and review the content.

Confluence (@Kensie)

Links to JIRA, will use it to help with project management, decision tracking.

  • Status [In Progress]: Help from Atlassian experts would be very welcome!

Matrix (@Leo)

Communication protocol which attempts to bind various different ones together – could possibly be used by us as a Telegram-esque IRC bouncer

Miscellaneous

  • Set up Crowd for authentication for Jenkins, Jira, Confluence and for whatever monitoring we end up going with.
  • Discussing proposals for MozFest sessions

Contribution Opportunities

Recap of contribution opportunities from status updates:

  • Discourse
    • Research/coding customizations
    • Documenting how to use Discourse/need questions to answer
    • Ansible expertise welcome
  • Monitoring
    • Nagios experts/mentors welcome
  • Community Hosting
    • Research MVP for community sites
  • Documentation
    • Discourse (see above)
    • Need writers to help drive wiki update
    • Atlassian experts welcome to help with Confluence/JIRA organization

http://ops.mozilla-community.org/weekly-status-update-2015-08-19/


Laura de Reynal: 110 things to learn

Wednesday, August 19, 2015, 21:52

When interviewing people in Chicago, from teenagers, to parents, educators and bloggers, we asked them to think about what they wanted to learn, what skills mattered the most to them as they were using the Web, and what they would teach us if we were completely new to the internet.

The result is a list of 110 things to learn. A serious, candid and sometimes surprising list, which highlights the skills that appear most important to these 69 participants when speaking of digital literacy.

While we are getting ready to publish the full report, I wanted to share this list here, for the happiness of all.

  1. Writing skills
  2. How to work together
  3. How to talk to each others
  4. Critical thinking
  5. Empower students to share issues that concerns them
  6. I wanted to teach them that the internet is not a school paper
  7. Be empowered to make a statement 
  8. Learning coding to create
  9. How to effectively utilise the internet and find content.
  10. Internet is so much more than a 6 seconds video on Vine
  11. How to formulate your own opinion from research
  12. What to share on the web, and how to share things
  13. Being responsible for their own post
  14. Understanding the consequences of sharing online
  15. Security and web design
  16. Manage their online identity
  17. Make them think critically about their web presence online
  18. We put things inside of a frame, and so people think they are beautiful. Just because something is on Youtube does not mean it’s good to watch. Kids should learn that.
  19. My understanding of digital literacy is not through coding
  20. Critical literacy
  21. Communicating ideas
  22. Asking for help
  23. Typing is a great skill to have
  24. Paying attention to details
  25. Powerpoint, Word, Excel
  26. Find what they are looking for
  27. Create their own thing on the internet
  28. Teamwork
  29. Trying to solve a problem before you complain
  30. Ability to read technical information
  31. Identifying gaps, computers are stupid.
  32. Don’t be influenced
  33. How to discover new opportunities from people around you
  34. Using youtube for science projects
  35. Maths
  36. Problem solving
  37. Self-esteem
  38. Digital journalism
  39. Tech is not the answer
  40. Humanistic Qualities
  41. Being aware of the resources out there
  42. How to make a website to impact other people – the civic engagement aspect
  43. How to be able to collaborate on a team
  44. Understanding HTML and CSS
  45. Storytelling
  46. Being able to work in group
  47. Coding would be nice, but its unrealistic to think everyone would be interested
  48. Being able to evaluate good information from bad
  49. Using Google advanced search options
  50. How to fully use a tool, like google map. They don’t know they can make their own maps
  51. Not necessarily to learn programming, but more to use the tools they have to create or do what they really want.
  52. To me, digital literacy is being able to navigate the web
  53. I teach Microsoft Office because it’s important skill to get a job
  54. Time management
  55. Being responsible
  56. We need to teach Facebook literacy
  57. Icons and jargons
  58. Improve my writing
  59. Use the web for good
  60. Finding content
  61. Have an open mind
  62. Accept opinions from others
  63. Learn japanese, for the anime
  64. Learn to hack – but white hat kind of hacking
  65. I would teach you search: it can be dangerous and helpful
  66. Open chrome but don’t click on everything, they will make other tabs and it’s confusing. Click on what you need.
  67. I would teach you security: watch out for who is out there.
  68. I would teach you privacy and stalkers.
  69. If I could learn one thing, i would learn to use the internet wisely, I dont use the internet much now.
  70. I would teach you: never click on random sites, it can give you virus, i accidentally clicked on random ads and it locked my whole phone
  71. I would teach you: If you make social media – don’t feed them negativity just positive
  72. I would teach you:  just enjoy have fun and just be you
  73. I would teach you : privacy. It’s important not to invade privacy
  74. Security and passwords: important for social media accounts, you dont want people to mess with it
  75. I would teach you to search for positive and good stuff
  76. I would teach you Microsoft Word and typing
  77. I would teach you being smart on the internet
  78. I would teach you a good browser so its easy to use
  79. To learn: Community participation- giving back and being part of something
  80. Credibility, cat fishing
  81. Collaborating and trying with others feedback
  82. To search: don’t believe everything you see
  83. Skills: design that is attractive.
  84. Privacy: choose what you are sharing.
  85. I would teach you: never give personal information
  86. Never respond to people you dont know
  87. Be polite and don’t talk about people online, no cyberbullying
  88. I would  teach you how to make a private conversation on Kik (Messaging app)
  89. I would teach you: have fun
  90. I want to learn how to hack. I would hack into websites and social media, or bad persons computers, and tell the police.
  91. How to organize my thoughts and make something look good
  92. How to create tutorials
  93. How to survive in a zombie apocalypse – Hunting would be a useful skill
  94. How to focus
  95. How to think better
  96. If I was to teach you, first I would ask you what you like to do.
  97. I would want to learn to design a game
  98. I want to learn how to make swords
  99. I want to learn hacking like a swift hacker. Because you can have anything at your hands like money and things.
  100. I want to learn to repair any device
  101. Community leading
  102. I’ll teach them what to press, what to press, don’t bring the juice – juice will mess up the computer.
  103. How to search – thats it. Well thats basically all I do with web: search.
  104. I would teach you 1- never give your address 2- Be careful of the creeps 3- Keep scrolling
  105. Keep Scrolling, means don’t get involved in the discussions, you know, they will put you in trouble
  106. Internet lingo
  107. I would teach you Youtube: how to watch videos
  108. I would teach you all the social media. “If you don’t learn the social media, you don’t learn the Web.”
  109. I would show you how to be really good at it, typing like a pro, knowing everything.
  110. Someone who is really good on the internet means someone who knows all the buttons. They know the computer.


Filed under: Mozilla, Research Tagged: learning, Mozilla, research, skills, teach

http://lauradereynal.com/2015/08/19/110-things-to-learn/


Air Mozilla: Product Coordination Meeting

Wednesday, August 19, 2015, 21:00

Product Coordination Meeting. Duration: 10 minutes. This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

https://air.mozilla.org/product-coordination-meeting-20150819/


Air Mozilla: The Joy of Coding (mconley livehacks on Firefox) - Episode 26

Wednesday, August 19, 2015, 20:00

QMO: Firefox 41 Beta 6 Test Day, September 1st

Wednesday, August 19, 2015, 17:55

I’m writing to let you know that on Tuesday, September 1st, we’ll be hosting the Firefox 41.0 Beta 6 Test Day. The main focus of this event will be on verifying Windows 10 bugs and trying to make Firefox crash on Windows 10. Detailed participation instructions are available in this etherpad.

No previous testing experience is required so feel free to join us on the #qa IRC channel and our moderators will make sure you’ve got everything you need to get started.

Hope to see you all on Tuesday! Let’s make Firefox better together!

https://quality.mozilla.org/2015/08/firefox-41-beta-6-test-day-september-1st/


Hannah Kane: Pledge to Teach Survey Results (first month)

Wednesday, August 19, 2015, 17:51

At the very end of June, we added a “Pledge to Teach” action to teach.mozilla.org. After people complete the pledge, they are able to take a survey that allows us to find out more about their particular context for teaching the Web, and their needs.

I’d like to share results from the first month, during which 77 people completed the survey, out of 263 users who took the pledge (29% response rate).

First, we asked what people are interested in, in terms of teaching the Web, and provided some options for people to choose from (people could choose as many as they liked).

  • Starting a Mozilla Club: 57%
  • Getting access to Digital Literacy curriculum: 79%
  • Running learning events (Maker Party, Hive Pop-Ups, etc.): 61%
  • Professional Development to help me get better at teaching digital skills: 81%
  • Connecting with a network of peers: 77%
  • Exploring tools to make the Web: 66%

The results from this question align with our strategic plans to develop more curriculum, provide more professional development offerings, and build tools to help people connect with one another.

We asked about the age range of learners:

  • 6-13: 40%
  • 14-24: 78%
  • 25-34: 44%
  • 35-44: 26%
  • 45-54: 17%
  • 55+: 19%

These findings align with the age-range that most of our existing curriculum is optimized for (14-24). That said, we know our audience is broader, and that content can be adapted for different audiences. Certainly we have community members that work in the K-12 space and higher ed. Given the numbers for learners over 24, we may consider more intermediate/advanced web literacy content, and/or address this audience with more in-depth Teach Like Mozilla and MDN content.

We asked how many learners people expect to reach this year:

  • 0-50: 32%
  • 51-100: 22%
  • 101-200: 26%
  • 201-500: 5%
  • More than 500: 14%

This data speaks to the fact that survey participants are more likely to be individuals who have direct interactions with learners, rather than larger partners with wider networks. The survey was intended to reach individual educators/mentors, but we might consider a similar survey directed to partners, too.

Note: we’ve since added a question to the survey that will allow us to know how many learners people reach at any one time. This will inform our curricular design process.

We asked about the contexts in which people teach (again, respondents were able to choose multiple answers):

  • At standalone events (for example, a one-day workshop, hackathon or Maker Party event):  51%
  • In a classroom during the school day: 52%
  • As part of an afterschool or summer program: 31%
  • With my friends and family at home: 56%
  • At professional meet-ups or training events with other adults/mentors/educators: 51%
  • At a college or university: 27%
  • At a library or other community space: 26%

Some of these results surprised us. For example, the responses for teaching at home and with friends were higher than we’d expected, as was the number of people teaching at professional meetups. If these trends continue, they will inform our curricular and professional development content offerings. We are also having a Web Literacy Training Fellow join us later in the year, and she will specifically address these contexts.

These findings also show that people are teaching across various contexts, which may speak to some leadership pathways (e.g. classroom teachers also hosting standalone events to reach more people).

Finally, we asked people about their motivations for teaching the Web. Here is a sample of those responses:

  • The Internet is a place where any information is available, and people ought to know how to access the best of the information they seek, and know how to protect themselves beyond anti-virus programs. I want the opportunity to teach and engage with learners and peers outside of the classroom. (Canada)
  • I always believe that I should never wait for opportunities to help other people rather I should let myself open doors to help others. I want to share a part of what I know to people who wants to learn more about the digital world. (Philippines)
  • I think technology especially the Web could be a wonderful facility to awaken and support children being creative and using free thought as a positive means of fully participating in communities, society and the world. (UK)
  • Am driven by the passion to make the world a better place. I want students from my school to have extra skills apart from the normal curriculum taught in school. (Kenya)

This is a very small sample so far, but we’ll look at results for the second month soon and see if trends continue.

In the meantime, the results of the survey will inform several of our next steps, including:

  • Consider iterations on site information hierarchy and calls-to-action
  • Create content strategy that reflects community needs—this includes everything from site copy to blog content to curricular content to social media
  • Advance partner strategy given these insights

We are also starting a new research effort with support from the Webmaker product team, to complement the survey. The project hasn’t been fully designed yet, but will likely help us dig deeper into questions about our community’s assets, needs, and contexts.


http://hannahgrams.com/2015/08/19/pledge-to-teach-survey-results-first-month/


Gijs Kruitbosch: Why you might be asked to file a new bug/issue (instead of commenting on old ones)

Wednesday, August 19, 2015, 13:31

Sometimes, after we close a bug because we fix it, or because it is a duplicate of another bug, or because the symptoms have gone away — invalid and wontfix bugs are a little different — people come along that have a problem that they believe is identical to the original bugreport. Quite often, they end up commenting on the “old” bugreport and say something along the lines of “hey, this is not fixed yet” or “this broke again” or “why did you close this bug, I’m still seeing this”!

In 99% of cases, I (and many other people) ask people in this situation to file a new bug.

The reasons why we do this vary a little, but on the whole they tend to be pretty similar, and so I figured it would be worth documenting them. In no particular order, we prefer new bugs over reopening closed ones because:

  • More often than not, issues with similar symptoms can be caused by different factors. In Firefox’s case, these can be the version of Firefox, add-ons, different preferences/options that people have selected, third-party software, network setup, hardware and drivers, the OS people are using (Mac, Linux, Windows (what version?), …), changes in public web pages involved with the bug, … it’s a pretty long list. Different causes will require different fixes, and tracking them in the same bug will lead to confusion very quickly.
  • We close bugs as fixed when we land patches. We track the uplift of the patches that landed for a bug in that bug. Reopening a bug that has already had patches landed in the tree, especially once they’ve been uplifted and released, confuses tracking the state of those patches, and any new patches that we write to fix the same issue.
  • The old bug will have investigation and discussion of the problem as originally reported. If we now start investigating the new issue in the same place, with the old summary and the old comments, testcases, attachments, steps to reproduce, reporter, etc., we will eventually get confused. This relates to the first point: quite often there are subtle differences. Keeping track of those once we have two reports in a single bug, and ensuring we address the issue fully is difficult, often impossible.
  • Bugzilla sends a lot of email, and is geared towards communication with flags and email. If you report an issue by commenting in another one, the “reporter” role of that bug is still the old reporter, and the CC list includes all the reporters of the duplicate bugs that were filed, the assignee will be the person who landed the last few patches (but they might not be able to fix the newly-reported thing, or might even have left Mozilla altogether), and so on and so forth. This confuses the “needinfo” flag, spams all those people about an issue they might no longer be seeing, and generally leads to still more confusion.
  • People track open bugs they are assigned to, and new bugs in certain components. New comments (on closed bugs) are much, much more likely to get lost in the daily bugmail avalanche – which means your issue won’t get fixed.
  • Comments are almost always low on details. Comments are often “this still doesn’t work”, with no indication of any of the environmental factors that impact where and when people see bugs (see the earlier point). When people file new bugs, we immediately get more information, normally at least the operating system and version of Firefox involved, and if people are careful about how they file the bug and describe its symptoms or steps to reproduce, we might learn about differences in the steps they’re using, compared to the old bug.
  • New bugs are cheap! Bugzilla has more than 1.1 million bugreports in it. Another one won’t hurt. Worst case scenario, we can mark it as a duplicate of the old one…

It might be interesting if we had an easy way to split a comment into a new bug, though that would still run up against some of the points raised earlier. In the meantime, though, think twice before commenting on older, closed bugs with a “this is still broken” comment!

How do I get attention for my newly filed bug? Email to all the people on this old bug seems much more likely to get attention!

First off, this strategy can be detrimental in many cases (see e.g. this or this – or just consider how much “email to all the people…to get attention” sounds like “spam”).

Second, we get a lot of bugreports. We’re working on ensuring they get triaged effectively. This should already be a lot better than it was a few months ago (see this post by Benjamin Smedberg), and will continue to improve.

Finally… if you file a bug that is extremely similar to an old bug, it seems fair to me to leave a comment in the old bug, mark the new bug as blocking the old bug, and/or set the needinfo flag for the assignee (if available) of the fixed bug, to draw their attention to this new bug.

http://www.gijsk.com/blog/2015/08/why-you-might-be-asked-to-file-a-new-bugissue-instead-of-commenting-on-old-ones/


