Botond Ballo: Trip Report: C++ Standards Meeting in Lenexa, May 2015

Friday, June 5, 2015, 17:00

Summary / TL;DR

 

Project | What's in it? | Status
C++14 | C++14 | Published!
C++17 | Various minor features. More substantial features under consideration include default comparisons and operator dot (operator.). | On track for 2017
Networking TS | Sockets library based on Boost.ASIO | In late stages of design review
Filesystems TS | Standard filesystem interface | Published!
Library Fundamentals TS I | optional, any, string_view, and more | Voted out for publication!
Library Fundamentals TS II | source code information capture, array_view, and more | Expected 2016 or 2017
Array Extensions TS | Old proposals (arrays of runtime bound (ARBs) and dynarray) abandoned. | New direction being explored: a magic type that acts like an ARB and can be used as a local variable only.
Parallelism TS | Parallel versions of STL algorithms | Voted out for publication!
Concurrency TS | Improvements to future, latches and barriers, atomic smart pointers | Voted out for balloting by national standards bodies
Transactional Memory TS | Transaction support | Voted out for publication!
Concepts ("Lite") TS | Constrained templates | In process of addressing comments from national standards bodies; publication expected late 2015
Reflection | Code introspection and (later) reification mechanisms | Still in the design stage, no ETA
Graphics | 2D drawing API | Standard wording expected to be ready for review at the next meeting
Modules | A component system to supersede the textual header file inclusion model | Microsoft and Clang continuing to iterate on their implementations and converge on a design; the feature will target a TS, not C++17

Introduction

Last week I attended a meeting of the ISO C++ Standards Committee in Lenexa, Kansas. This was the first committee meeting in 2015; you can find my reports on 2014’s meetings here (February, Issaquah), here (June 2014, Rapperswil), and here (November, Urbana-Champaign). These reports, particularly the Urbana one, provide useful context for this post.

The focus of this meeting was iterating on the various ambitious proposals in progress, and beginning to form an idea of which of them will be ready for C++17. In addition, several of the Technical Specifications (TS) in flight have reached their final stages in the committee, while work continues on others.

C++14

C++14 was officially published as an International Standard by ISO in December 2014. Its timely publication is evidence that the committee’s plan for adopting a three-year publication cycle post-C++11 has, thus far, been successful.

C++17

What Will Make It In?

When the committee, shortly after C++11 was published, originally announced the schedule for the next two standard revisions, C++14 was described as a “minor” revision and C++17 as a “major” one.

A few things have happened since then:

  • C++14 ended up being not so minor, with fairly substantial features like generic lambdas and variable templates making it in.
  • For features not going through the TS process, there’s no intrinsic reason the three years between C++14 and C++17 would be any more or less productive for the committee than the three years between C++11 and C++14.
  • For features going through the TS process, that process has in some cases been taking somewhat longer than initially envisioned, and thus it’s not clear whether they will be published in time – and particularly, if there will be enough time after their publication to get more implementation and use experience with them – to merge them into C++17. I would say the following TS’s stand a chance of being merged into C++17:
    • Filesystems, which has already been published
    • Library Fundamentals I, Parallelism, and Transactional Memory, which were voted for publication at this meeting
    • Concepts, which is expected to be voted for publication in a post-meeting telecon, or at the next meeting (in October in Kona, Hawaii) at the latest

    However, I don’t think any of these mergers is a given. For example, there is some opposition to merging the Filesystems TS into the Standard in its current form on the basis that its design isn’t a good fit for certain less common varieties of filesystems. Concepts also has some pending design issues that may be revisited prior to a merger. In all cases, feedback from implementors and users will be key and will likely decide the TS’s fate.

As a result, I think it’s likely that C++17 will end up being approximately as “major” or “minor” of a revision as C++14 was.

Notable features that are not looking like they’ll make C++17 include:

  • Modules. Due to the significance of the change that modules bring to the language, it’s increasingly likely that the modules proposal will initially be pursued as a Technical Specification. Given the state of the proposal (fairly advanced in the design stage, but no serious draft wording yet), it’s much too late for there to be hope of merging it into C++17. That said, it is hoped that the feature will still become available to users in the 2017 timeframe, just in the form of a TS rather than being part of the Standard.
  • Reflection. This is still very much in a design stage, and also targeting a TS as a first ship vehicle. C++17 is practically out of the question.

(I talk about modules and reflection in more detail below, for those interested.)

That said, there are features not going through the TS process which are expected to be in C++17.

What Has Made It In?

My Urbana report lists the language and library features that have made it into C++17 as of the last meeting.

No new language features were voted into C++17 at this meeting, in the sense of standard wording for them being merged into the C++17 working draft (a small, language-lawyerish change making exception specifications part of the type system came close, but was deferred to Kona due to some concerns brought up during the closing plenary session). However, there are numerous language features in the design or wording review stages that are expected to be voted into C++17 at upcoming meetings; I talk about these in the “Evolution Working Group” section below.

There were, however, some small library features voted into C++17 at this meeting.

Technical Specifications

Procedural changes

There has been a change to ISO’s procedure for publishing a Technical Specification since the last meeting. The process used to involve two rounds of balloting by national standards bodies, called PDTS (Preliminary Draft TS) and DTS (Draft TS). Recently, the DTS round has been dropped, leaving just the PDTS round, and making for a more agile TS publication process.

Crossed the finish line!

As a result of this procedural change, some TS’s which had successfully finished their PDTS ballots became eligible to be voted out for publication at this meeting, and after addressing PDTS ballot comments, vote them out we did! Library Fundamentals I, Transactional Memory, and Parallelism I have all been sent to ISO for official publication, which should happen within a few months.

A couple of other TS’s haven’t quite crossed the finish line yet, but are very close.

Concepts

The Concepts TS garnered a rich and opinionated collection of PDTS ballot comments. Among them were your usual editorial and wording-level technical comments, but also some design-level comments which were brought before the Evolution Working Group (EWG) for consideration.

It’s rare for the committee to make design changes to a TS or Standard at such a late stage in the publication cycle, and indeed most design-level comments were deferred (meaning, they will not be addressed in this version of the TS, but they could be revisited in a future version, or if the TS comes up for merging into the Standard). One comment, however, which was essentially a request for a (small) feature, was approved. The feature will allow using a concept name as a type name in a variable declaration:

ConceptName var = expression;

The semantics is that the type of var is deduced from the type of expression (much like with auto), but the code is ill-formed if the deduced type does not satisfy the concept.

I was mildly surprised that EWG was willing to approve this addition at this late stage, but pleasantly so: I think this feature is very useful. To top off the good news, Andrew Sutton (the primary author of the Concepts TS), who couldn’t make it to the meeting itself, reported only two days later that he had added support for this feature in his GCC-based Concepts TS implementation! (Implementation experience is immensely valuable for guiding the committee’s decisions, because issues with a proposed feature often come up during implementation.)

As a result of this new addition, and a multitude of wording-level comments, the Core Working Group (CWG) didn’t have time to prepare final wording for the Concepts TS by the end of the meeting, so it couldn’t be voted out for publication just yet. Rather, CWG plans to hold a post-meeting teleconference to (hopefully) complete the final wording, after which the options are to hold a committee-wide teleconference to vote it out for publication, or to wait until Kona to vote on it.

Either way, the Concepts TS is practically at the brink of completion! Very exciting.

There’s also good news on the implementation side: GCC front-end developer Jason Merrill says Andrew’s Concepts implementation is expected to merge into mainline GCC within a month or so. Meanwhile, IBM, who have switched to using clang as the front-end for their newer products, announced their intention to kick off a clang-based implementation.

Concurrency I

Good progress here, too: the first Concurrency TS was sent out for its PDTS ballot! Assuming a successful ballot, it should be ready to be voted for publication in Kona.

Evolution Working Group

As usual, I spent most of the meeting in the Evolution Working Group, which does design review for proposed language features. EWG once again had a full plate of proposals to look at.

Recapping from previous posts, the outcome of an EWG design review is one of the following:

  • Approved. The proposal is approved without design changes. It is sent on to the Core Working Group (CWG), which revises it at the wording level, and then puts it in front of the committee at large to be voted into whatever IS or TS it is targeting.
  • Further Work. The proposal’s direction is promising, but it is either not fleshed out well enough, or there are specific concerns with one or more design points. The author is encouraged to come back with a modified proposal that is more fleshed out and/or addresses the stated concerns.
  • Rejected. The proposal is unlikely to be accepted even with design changes.

Here’s how this meeting’s proposals fared:

Accepted:

  • A proposal to make emplace_back() and similar play nice with aggregate types. Note that this is a library proposal for which EWG input was solicited, so it was sent to the Library Working Group (LWG) rather than CWG.
  • Microsoft’s resumable functions (a.k.a. coroutines) proposal. More about this in the “Coroutines” section below.
  • A proposal to make exception specifications part of the type system. This resolves a long-standing issue where exception specifications sort-of contribute to a function’s type but not quite, and as such their handling in various contexts (passing template arguments, conversions between function pointer types, and others) requires special rules.
  • A minor change to the semantics of inheriting constructors, which makes the semantics more intuitive and consistent with inheritance of other members.
  • A proposal for inline variables, which are basically static storage duration variables (either at namespace scope, or static data members) whose definition can appear inline, and can be defined in a header. This can already be accomplished using a static variable defined locally in an inline function, this proposal just exposes that semantics under a more straightforward syntax. The proposal had some minority opposition (on the basis that it reuses the keyword inline for a purpose somewhat unrelated to its existing use, and that it encourages the use of static storage duration variables to begin with), but otherwise had fairly strong support and was accepted.
  • A proposal to remove the keyword register, but reserve it for future use.

Further work:

  • A tweak to the folding expressions feature added last meeting, which would restrict the set of operators for which a unary fold with an empty parameter pack is valid. Support for this hinges on defining an identity element for the affected operators, but it’s not clear that such a notion is sensible in the presence of operator overloading. For example, consider the following function:
        template <typename... Strings>
        auto concatenate(Strings... strings)
        {
          return (strings + ...);
        }
    

    With the current rules, when this function is called with 1 or more strings, it returns the concatenation of its arguments (because strings overload operator + to do concatenation), but when called with no arguments, it returns the integer 0, because that’s defined as the identity element for the addition operator.

    The proposal in question would make it ill-formed to call this function with no arguments; if the author wants that to work, their recourse is to change the unary fold to the binary fold strings + ... + "".

    There was consensus that addition, multiplication, and bitwise operators should be treated this way, but others were more contentious. For example, it was argued that for the logical operators && and ||, you shouldn’t be overloading them to return things that aren’t bool anyways, so the identities true and false remain appropriate.

    A particularly interesting case is the comma operator, for which the specified identity is void(). Comma folds are expected to be common as a technique to emulate statement folding, as in the following example:

        template <typename... Functions>
        auto call_all_functions(Functions... functions)
        {
          (functions() , ...);  // expands to function1() , function2() , ... , functionN()
        }
    

    On the one hand, it would be a shame to make people write (functions() , ... , void()) every time they want to do this, and the comma operator generally shouldn’t be overloaded anyways, so keeping the void() identity should be reasonable. On the other hand, if people want to do statement folding, perhaps the language should allow them to do that directly, rather than relying on the comma operator to emulate it with expression folding.

    As there was no consensus on the precise course of action, the matter was referred for further work.

  • The latest modules proposal from the Microsoft folks. More about this in the “Modules” section below.
  • The alternative coroutines proposal that I presented (the author being absent). More about this in the “Coroutines” section below.
  • Contract programming. Two main proposals were discussed, with different flavours. One of them provided an assert-like facility, to be used inside function bodies, primarily for the purpose of runtime checking. The other (there was also a third very similar to it) proposed a syntax for declaring preconditions, postconditions, and invariants for a function in its interface (i.e. in its declaration), primarily for the purpose of static analysis and enabling compiler optimizations. There was consensus that both sets of goals, and both places for writing contracts (interfaces and implementations) are desirable, but there was some debate about whether the proposals should be considered as orthogonal and allowed to proceed independently, or whether the authors should collaborate and come up with a unified proposal that satisfies both use cases. In the end, the direction was to strive for a unified proposal.
  • Default comparisons. Bjarne presented the latest version of his proposal for automatically generating comparison operators for class types. The main features of the proposal are (1) that it’s opt-out, meaning you get the operators by default but you can declare them as = delete if you don’t want them; and (2) it’s carefully designed to avoid breaking existing code as follows: for any comparison operator call site, if name lookup under current rules finds a user-declared operator, it will continue finding that operator under the new rules, rather than using any auto-generated one. The proposal had strong consensus, which was a (welcome) surprise after the lack of consensus on earlier versions (and other proposals in the area) at the previous two meetings. It came close to being approved and sent to CWG, but some details of the semantics remained to be hashed out, so Bjarne was asked to return with an updated proposal in Kona.
    There was another proposal related to comparisons, which pointed out that we are currently waving our hands about matters such as different types of equality and different types of order (total, weak, partial, etc.). To rectify this, it proposed using named functions (e.g. one for each kind of order) instead of operators for comparisons. The feedback was that such machinery is useful to have, but we also want reasonable defaults which are spelt == and <, and as such, the proposed set of named functions can be developed independently of Bjarne’s proposal.
  • A proposal to extend aggregate initialization to be allowed for types with base classes in cases where the base classes are default-constructible. EWG’s feedback was to revise the proposal to also address the use case of providing values to initialize the base classes with in cases where they are not default-constructible.
  • Unified call syntax. This proposal, by Bjarne, seeks to unify the member (x.f(y)) and non-member (f(x, y)) call syntaxes by allowing functions of either kind to be invoked by syntax of either kind. The approach is to have the x.f(y) syntax look for member functions first, and fall back to looking for non-member functions only if the member lookup yields no results; conversely, f(x, y) would look for non-member functions first, and fall back to a member lookup. The resulting semantics are asymmetric (they don’t make x.f(y) and f(x, y) completely interchangeable), but fully backwards compatible. (This design was one of several alternatives Bjarne presented at the previous meeting, and it seemed to have the greatest chance for gaining consensus, primarily due to its backwards compatibility.)

    Beyond aesthetics (“I prefer my function calls to look this way”) and tooling reasons (“member call syntax gives me IntelliSense”), the primary motivation for this feature is facilitating generic programming, which is expected to become more popular than ever with Concepts. When defining requirements on a template parameter type, either informally in today’s C++ or explicitly with Concepts, you currently have to choose whether the operations on the type are expressed as member or non-member functions. Either choice constrains users of your template: if you choose member functions, they can’t adapt third-party types that they can’t modify to model your concept; if you choose non-member functions, they will likely have to provide a lot of non-member adapters for types that would otherwise automatically model your concept. You could choose to allow either one (this is what C++11 does with the “range” concept used in the range-based for loop: the required operation of getting an iterator to the first element of the range can be spelt either begin(range) or range.begin()), but then your call sites become very unreadable because you need a lot of template/SFINAE magic to express “call X if it exists, otherwise call Y”. A unified call syntax would allow template implementers to use whichever call syntax they like, while users of the template can use either member functions or non-member functions to model the concepts, as they desire / are able to. (C++0x Concepts had a feature called “concept maps” which solved this problem by acting as a bridge between the operations in a concept definition (which is what generic code would use) and the actual operations on a type modelling the concept. However, concept maps were removed from later Concepts designs because they proved very tricky to specify and implement.)

    Unfortunately, this is a very risky change to make to the language. While the proposal itself doesn’t break any existing code, new code that takes advantage of the proposal (that is, code that invokes a non-member function via a member call syntax, or vice versa) is considerably more prone to breakage. For example, adding a new member function to a class can easily break user code which was calling a non-member function of that name via the member function syntax; this breakage can manifest as a compiler error, or as a silent behaviour change, depending on the situation.

    A lot of the finer points of the proposed semantics remain to be nailed down as well. How does the fallback mechanism work – is it activated only if the initial lookup doesn’t find any results, or also if it finds results but they’re all, say, SFINAE’d out? What is the interaction with two-phase name lookup? What happens when the member call syntax is used on an incomplete type?

    EWG was very divided on this proposal; consensus seemed fairly far off. Some people suggested changes to the proposal that would allay some of their concerns with it; one of them was to have classes opt-in to unified call syntax, another to restrict the set of non-member functions that can be found via a member call syntax to those found by ADL. Bjarne said that he intends to continue iterating on the idea.

  • A proposal for overloading operator dot. This would allow creating “smart references”, much as the ability to overload operator -> gives us smart pointers, as well as enable many other patterns that take advantage of interface composition. The proposal was generally very well-received; one feature that was somewhat controversial was the ability to declare multiple “overloads” of operator dot that return different types, with the effect of bringing the interfaces of both types into the scope of the declaring type (much as multiple inheritance from the two types would). The author (also Bjarne) was asked to come back with standard wording.
  • A proposal to allow template argument deduction for constructors. The idea here is to avoid having to define factory functions for class templates, such as make_pair(), for the sole purpose of not having to explicitly write out the template argument types in a constructor call of the form pair<T, U>(t, u); the proposal would allow simply pair(t, u). This proposal has been on the table for a while, but it’s been plagued by the problem that for a lot of classes, deduction based on existing constructors wouldn’t work. For example, if a class template container<T> has a constructor that takes arguments of type container<T>::iterator, that type is a non-deduced context, so T could not be deduced from a constructor call of the form container(begin, end). The latest version addresses this by allowing class authors to optionally define “canonical factory functions” that define how the class’ template parameters are deduced from constructor arguments. Here’s what one might look like (the syntax is hypothetical):
    template <typename Iter>
    container(Iter a, Iter b)
        -> container<typename std::iterator_traits<Iter>::value_type>;

    This basically says “if container is constructed from two iterators, the class’ template parameter is the value type of those iterators”. The question of where to place such a declaration came up; EWG favoured placing it at namespace scope, so as to allow third parties to provide them if desired.

    Another point that was brought up was that a constructor call of the form classname(arguments), where classname is a class template, already has a meaning inside the scope of classname: there, classname without any template arguments means “the current instantiation” (this is called the injected-class-name in standard terminology). The proposal needs to specify whether such a constructor would change meaning (i.e. deduction would be performed instead) or not. The consensus was to try to perform deduction, and fall back to the current instantiation if that fails; this would technically be a breaking change, but the hope is that the scope of any breakage would be minor.

    Overall, the proposal had strong support and is expected to move forward.

  • A proposal to allow a template to have a non-type template parameter whose type is deduced. EWG expressed a preference for the syntax template <auto x> and encouraged the author to continue iterating on the idea.
  • A restricted form of static_if; the restrictions are that (1) it can only be used at local scope, (2) each branch of it forms a scope of its own, and (3) non-dependent constructs need to be well-formed in both branches. The proposal was well-received, and the author will continue working on it. It was noted that the Concepts TS doesn’t currently allow evaluating a concept outside a requires-expression, so something like static_if (ConceptName<T>) wouldn’t necessarily work, but hopefully that restriction will be lifted in the near future.
  • Extending static_assert to allow taking for the error message not just a string literal, but any constant expression that can be converted to a string literal. The idea is to allow performing compile-time string formatting to obtain the error message.
  • noexcept(auto), which basically means “deduce the noexcept-ness of this function from the noexcept-ness of the functions it calls”. Like return type deduction, this requires the body of the function to be available in each translation unit that uses the function. It was brought up that, together with the proposal for making exception specifications part of the type system, this would mean that modifying the function’s body could change the function’s type (again similarly to return type deduction), but people weren’t overly concerned about that.

Rejected:

  • A proposal for allowing return type deduction for explicitly-defaulted and -deleted special member functions. This was rejected because the group realized that it would introduce a footgun: a copy or move assignment operator with an auto return type would return by value!
  • No-op constructors and destructors, which are basically a language hack that would allow certain library optimizations; a previous attempt at enabling said optimizations, destructive move, was presented at the previous meeting. EWG’s feedback was much the same as last time: though it’s dressed differently, the proposal is still an attempt to mess with the language’s lifetime rules, which people are extremely wary of doing. The proposal as written will not move forward, but Chandler Carruth (Google’s lead clang developer) had some ideas about how to allow the desired optimizations by other means, and will discuss them with the author.
  • A proposal for relaxing the rules for forming an array type declarator to allow omitting a dimension anywhere; this would allow forming types such as int[][3][][7], though not instantiating them. The author was seeking to write a multi-dimensional array class where each dimension could be determined statically or dynamically, and use a type of this form as a template parameter and interpret it as a description for which dimensions were static. EWG didn’t find this motivation compelling (the information can be conveyed by other means, such as a template parameter of the form Array<…>) and was generally wary of adding to the set of types that can be formed but not instantiated (an existing example of such a type is a function type whose return type is a function type).
  • A proposal for generalized dynamic assumptions. EWG liked the use cases, but felt it would make more sense as part of a unified contracts proposal than a stand-alone feature, as contracts also need a syntax to express assumptions.
  • Allowing goto in constexpr functions. The intent here was to plug gratuitous holes between what can be done in a constant expression, and what can be done in regular code. EWG liked the motivation, but preferred to see it together with proposals that plug other holes, such as using lambdas and virtual functions in constant expressions. At least one of those (lambdas) is expected to be proposed in Kona. (Bjarne jokingly wondered whether some day people would be proposing launching threads in a constexpr function.)
  • Delayed evaluation parameters, which is a proposal for evaluating function arguments in a lazy rather than eager fashion (i.e. only evaluating them when their value is needed inside the function, not before calling the function). EWG was intrigued by the idea, but the proposal wasn’t nearly fleshed out enough to be considered as a concrete proposal. Interested people are encouraged to continue exploring the design space.
  • A revised proposal to allow arrays of runtime bound as data members wasn’t looked at specifically, but its presence on the agenda prompted a broader discussion about the Arrays TS, which I talk about in the “Arrays TS” section below.

Modules

There are two groups currently working on modules: Microsoft, in their compiler, and Google, in clang. Microsoft has a draft proposal based on their design; Google hasn’t submitted a proposal based on their design yet.

The two designs differ slightly in philosophy. Microsoft’s design feels like what modules might have looked like if they were part of C++ from the beginning. It’s clean, and promises to be a good fit for new code written in a modular fashion. Google’s design, on the other hand, is geared towards making it possible to incrementally modularize large existing codebases without requiring a significant refactoring or other major changes (at least in the absence of egregiously non-modular design patterns). In other words, Microsoft’s design is more idealistic and pure, and Google’s is more practical.

Most notably, Microsoft’s design essentially requires modularizing a codebase from the bottom-up. For example, if a component of your program uses the C++ standard library, then modularizing that component requires first modularizing the C++ standard library; if the C++ standard library in turn uses the C standard library, then that too must be modularized (which is particularly unfortunate, for two reasons: (1) C standard library headers tend to be notoriously difficult to modularize due to their use of the preprocessor, and (2) they need to remain consumable by non-C++ code). Google’s design, on the other hand, specifically allows modular code to include non-modular code, so you could modularize your program component without having to modularize the C++ standard library.

To be sure, this feature of Google’s design introduces significant implementation complexity. (In my Rapperswil report, I reported that Google claimed their modules implementation was complete. I now understand that what they meant was that their implementation of a subset of the design, a subset not including this feature, was complete.) I don’t have a deep understanding of the technical issues involved, but from what I’ve gathered, the difficulty is in taking multiple copies of entities defined by the same non-modular code included in different modules and “merging” them so they can be viewed as a single entity.

There are other differences between the two designs, too. For example, Google’s allows exporting macros from a module, while Microsoft’s does not. Google’s design also supports cyclic dependencies between module interfaces, resolved by forward-declaring an entity from one module in the interface of the other; Microsoft’s proposal has no such support.

EWG spent half a day (and folks from the two groups additional time offline) discussing and trying to reconcile these design differences. The outcome was relatively hopeful about reaching convergence. The Microsoft folks conceded that some abilities, such as forward declaring entities from another module, are necessary. The Google folks conceded that some abilities geared towards making modules work with existing codebases, such as allowing the export of macros, don’t have to be supported directly by the language (they could be handled by compiler flags and such). The two groups agreed to produce a combined design paper for Kona.

In terms of a ship vehicle, the Google folks characterized modules as “the feature with the single greatest implementation impact so far”, and expressed a strong preference for going through a Technical Specification. This route effectively rules out modules being in C++17, though as a TS the feature is still likely to be available to users in the 2017 timeframe.

Coroutines

You may recall from my Urbana report that the outcome of the coroutines discussion there was that two flavours of coroutines, stackful and stackless (see that report for an explanation of the distinction), were sufficiently different and both sufficiently motivated by use cases that they deserved to be developed as independent proposals, with a small minority favouring trying to unify them.

Since Urbana there has been progress in all of these directions, with four papers coming back for consideration at this meeting: an iteration on the stackless proposal, an iteration on the stackful proposal, and two different attempts to unify the two approaches. EWG looked at two of these.

The stackless proposal, called “resumable functions” and championed by Microsoft, is the most mature one. It has already gone through numerous rounds of review in SG 1 (the Concurrency Study Group), and is close to the stage where standard wording for it can be written. Its discussion in EWG mostly concerned details such as what to call the traits and functions involved in the proposal (there was no consensus to change from the current coroutine_ prefix), whether deducing that a function is resumable from the presence of await expressions in its body, without annotating the declaration with a keyword like resumable, is implementable (implementers agreed that it was, as long as return statements in such a function were spelt differently), and whether yield is a reasonable keyword to standardize (consensus was that it was not, and so we’re going to get keywords prefixed with co- such as coyield and coreturn instead). Ultimately, the proposal author was given the go-ahead to write standard wording and go to CWG.

The other proposal EWG looked at was one of the attempts to unify stackful and stackless coroutines, called resumable expressions. I presented this paper because the author, Chris Kohlhoff, couldn’t make it to the meeting and I was somewhat familiar with the topic as a result of corresponding with him. Unlike resumable functions, this proposal was in the early design stage. The premise was that you could “have your cake and eat it too” by leveraging the compiler’s ability to analyze your code, avoiding the need to annotate calls to resumable functions at every level the way you have to do with await (the weakness of stackless coroutines compared to stackful), while still only requiring the allocation of as much memory as you need (the advantage of stackless over stackful). The problem was that the compiler analysis can’t see past translation unit boundaries, thus still requiring annotations there. There were also concerns about the performance of cross-translation-unit calls compared to resumable functions; Chris was convinced that they were no slower than resumable functions, but unfortunately I didn’t have a sufficiently good comparative understanding of the implementation models to successfully argue this point. The final opinion on the proposal was divided: some people found it imaginative and wanted to see it developed further; others didn’t appreciate the fact that a competing proposal to resumable functions was brought up at such a late stage, risking the standardization of the latter.

You might ask how it even makes sense for resumable functions to be sent to CWG without resumable expressions being categorically rejected. The answer is twofold: first, it’s looking like resumable functions will target a Technical Specification rather than C++17, which means there’s room for competing proposals to be developed in parallel. Second, even if it were targeting the standard, it’s conceivable that multiple kinds of coroutines could co-exist in the language (certainly in Urbana the consensus was that stackful and stackless coroutines should coexist). In any case, Chris plans to attend the Kona meeting and presumably present an updated version of the resumable expressions proposal.

The other two papers (the stackful one and a different unification attempt) were only looked at briefly by SG 1, as the author (same person for both) wasn’t present.

Arrays TS

The Arrays TS, which contains a language feature called “arrays of runtime bound” (ARBs) that’s essentially a toned-down version of C’s variable-length arrays (VLAs), and a library class dynarray for wrapping such an array in a container interface, has been in limbo for the past year, as attempts to implement dynarray ran into difficulties, and proposals trying to replace it with something implementable were shot down one after the other.

At this meeting, EWG reviewed the status quo and addressed the question of what will happen to the Arrays TS going forward.

The status quo is this:

  • Many people want simple stack arrays. Emphasis on simple (no “making it a member of a class”) and stack (no “it might be on the stack or it might be on the heap” business.)
  • Some people want to be able to wrap such things into a class interface, so it knows its size, and doesn’t automatically decay to a pointer.
  • Some people additionally want to be able to copy this class and make it a member of other classes.
    • Implementation attempts have essentially demonstrated that this latter thing is impossible.

Given this state of affairs, the features currently in the Arrays TS are not going to be accepted in their current form; EWG recommended stripping the TS of its current contents, and waiting for a workable proposal to come along.

A promising direction for such a workable proposal is to have a “magic type” that acts like an ARB but knows its size and does not decay to a pointer (the implementable features that people wanted from a class wrapper). The type in question could only be used for a local variable, and the underlying ARB itself wouldn’t be exposed. Several people expressed an interest in collaborating on a proposal in this direction.

Library / Library Evolution Working Groups

With all the exciting action in EWG, I didn’t have much of a chance to follow progress on the library side in any detail, but here’s what I’ve gathered during the plenary sessions.

Note that I listed the library features accepted into C++17 at this meeting in the “C++17” section above.

The following proposals were accepted into the second Library Fundamentals TS:

The following proposals failed to gain consensus:

  • A proposal to include certain special math functions which have been standardized independently, into C++17. The primary objection was the cost to implementors for what was perceived by some as a relatively niche user base.
  • Multidimensional bounds, offset and array_view was proposed for acceptance into Library Fundamentals II, but was voted down over issues that still remain to be addressed.

The list of proposals still under review is very long, but here are some highlights:

  • Eric Niebler’s suggested design for customization points was reviewed favourably; Eric was encouraged to experiment more with the idea and come back.
  • A proposal for nothrow-swappable traits was reviewed favourably, and the author was given guidance to put forward all of the traits mentioned in the paper.
  • A lot of time was spent reviewing a proposal for a variant class; as one of the few “vocabulary types” still missing from the standard library, this is considered very important. A lot of the discussion centred around whether variant should have an empty state, and if not, how to deal with the scenario where during assignment, the copy constructor of the right-hand side object throws. boost::variant deals with this by incurring heap allocation, which is widely considered undesirable. I believe the prevailing consensus was to have an empty state, but only allow it to arise in this exceptional situation (pun intended), and make accessing the variant in its empty state (other than assigning a new value to it) undefined behaviour; this way, ordinary code isn’t burdened with having to worry about or check for the empty state.
  • LEWG is continuing to review the Networking TS based on Boost.ASIO.

Ranges

I split out ranges into its own section because I believe it deserves special mention.

As I described in my Urbana report, Eric Niebler came to that meeting with a detailed and well fleshed-out design for ranges in the standard library. It was reviewed very favourably, and Eric was asked to “conceptify” it – meaning express the concepts defined by the proposal using the features of the Concepts TS – and develop the result into a TS. This TS would form part of an “STLv2” refresh of the standard library which wouldn’t be subject to strict backwards-compatibility constraints with the current standard library.

Eric did not delay in doing so: he came back in Lenexa with a conceptified proposal written up as a draft TS. LEWG began a design review of this proposal, and made good progress on it; they hope to complete the review during a post-meeting teleconference and then forward the proposal to LWG.

Study Groups

Note that Study Groups whose proposals have progressed to the main working groups, and which don’t plan on looking at new proposals, are no longer considered active – for example, SG 2 (Modules) is in this category, as the modules proposal is now in front of EWG. I talk a bit about the ones still active below.

SG 1 (Concurrency)

With the Parallelism TS published and the Concurrency TS sent out for its PDTS ballot, SG 1 has been working on the second iterations of both TS’s.

Here are some of the proposals being considered for Parallelism II:

And for Concurrency II:

There are numerous other proposals in front of the SG as well which don’t have a target ship vehicle yet.

SG 5 (Transactional Memory)

SG 5 has accomplished commendable work by publishing the Transactional Memory TS, but they aren’t quite calling it quits! They plan to meet again in Kona to discuss proposals for possible future standardization in the area.

SG 6 (Numerics)

SG 6 met and looked at three specific proposals:

The first, as I described in the “Library” section above, was unsuccessfully proposed for acceptance into C++17.

The other two are being considered for a Numerics TS, along with a number of other proposals not specifically discussed at this meeting, such as unbounded-precision integer types, rational numbers, fixed-point arithmetic, decimal floating-point support, various utility functions (GCD/LCM, constexpr bitwise operations), random numbers (a couple of different proposals), and other topics for which there aren’t concrete proposals yet. (Obviously, not everything in this list will be in a single Numerics TS.)

Said Numerics TS is still in a relatively early stage; a working draft is not yet expected in Kona.

SG 7 (Reflection)

SG 7 had a busy evening session where they looked at a number of reflection-related proposals:

  • A language feature for argument stringization (misleadingly titled “parameter stringization”), attempting to obsolete one of the few remaining uses for the preprocessor. The proposal was well-received; of the various syntactic options presented, SG 7 preferred the approach of adding an annotation to a function declaration that makes the stringized arguments corresponding to all parameters available in the function body under a syntax such as function_name.param()
  • Potential improvements to source-code information capture. Two specific improvements were discussed: the ability to query the offset relative to the start of the file (in addition to a line number + offset relative to the start of the line), which met with approval, and fine-grained intrinsics (e.g. querying the line number and the function name separately), for which SG 7 recommended waiting until the original proposal has implementation experience.
  • A proposal for language features to facilitate writing test code; consensus was that this topic is not sufficiently relevant to reflection, and should be pursued elsewhere (such as in EWG).
  • An updated version of a detailed, comprehensive proposal for static reflection (see also a slightly shorter paper outlining use cases). When the original version was presented in Urbana, it was given very short shrift, mostly because it had no presenter, and no one had had time to study it in any amount of detail. This time around, participants seemed to be more informed about it, and ended up being rather favourable to the overall approach. Most notably, the use of a “magic operator” (spelled mirrored(entity) in the proposal) rather than templates (such as reflect as the previously-discussed type property queries proposal did) opens the doors to reflecting more kinds of entities, such as typedefs (as distinct from their underlying types), uninstantiated templates, and namespaces, which SG 7 viewed as valuable. Interest in further development of the proposal was expressed.
  • Another reflection proposal, type reflection via variadic template expansion. Due to time constraints, this could only be presented very briefly. SG 7 expressed interest in a comparative analysis of the expressive power of this proposal compared to the “type property queries” and “static reflection” proposals.

Notably absent from the agenda was the latest version of the “type property queries” proposal, which had appeared to be the leading comprehensive reflection proposal in the past few meetings. The main reason it wasn’t presented was that the author couldn’t make it to the meeting, though one could also argue that SG 7 was already familiar with the overall design (the changes since the last version having been only minor), so time was better spent looking at proposals with alternative designs that still needed analysis.

SG 10 (Feature Test)

SG 10 is continuing to maintain the list of feature testing macros and keep it up to date with new things like C++17 features.

They also have a proposal for some new preprocessor features to aid feature detection: a __has_include() predicate for testing for the presence of an include, and a __has_cpp_attribute() predicate for testing for support for a particular attribute.

SG 12 (Undefined Behaviour)

SG 12 is still active, but did not meet this week as its members were busy advancing other proposals.

SG 13 (I/O)

SG 13 did not meet this week; the author of the 2D graphics proposal plans to complete standard wording for it for Kona.

*NEW* SG 14 (Game Development & Low-Latency Applications)

C++ aims to be a “zero-overhead abstraction” language, where you don’t pay for what you don’t use. It does a formidable job at this, but for some communities of C++ users, it could do an even better job.

The big sticking points are exceptions and RTTI, two language features for which you pay a bit even if you don’t use them. Projects concerned about this overhead commonly use compiler flags like -fno-exceptions and -fno-rtti, but the committee views these as nonconforming and doesn’t give them a lot of consideration. As a result, for example, a lot of standard library features require exceptions, and are therefore unusable in these projects.

There is also desire for more guarantees from the standard library, such as “empty containers never allocate” or “strings employ the small-string optimization”.

EWG looked at a wishlist containing the above and more, and recommended creating a new Study Group to explore the area further.

The new Study Group, SG 14, will tentatively be called “Game Development & Low-Latency Applications”, because these are the most impacted communities, and the communities from which the committee hopes to get significant input.

There is a tentative plan for SG 14 to meet independently of the committee at CppCon and GDC, the idea being that members of the game development community are more likely to be able to make it to those events than to committee meetings.

Next Meeting

The next meeting of the Committee will be in Kona, Hawaii, the week of October 19th, 2015.

Conclusion

On the whole, this was a very exciting meeting! My highlights:

  • The Concepts TS being very close to publication. Concepts has been the feature I’ve been waiting for the most; I think it will revolutionize generic programming and allow C++ users to unleash the power of templates in unprecedented ways.
  • C++17 starting to take shape. While some larger pieces like modules and coroutines are unlikely to make it in, I think it will still have a good complement of features. Among other things, the success of the most recent default comparisons proposal, after the lack of consensus that plagued the ones that came before, is very encouraging – a good success story for the standards process.
  • Modules being on track to be available in the 2017 timeframe, even if it’s in the form of a Technical Specification rather than part of C++17 itself.

Things are only going to get more interesting as C++17 starts to take more concrete shape, and more Technical Specifications cross the finish line. Stay tuned!


https://botondballo.wordpress.com/2015/06/05/trip-report-c-standards-meeting-in-lenexa-may-2015/


Daniel Stenberg: I lead the curl project and this is how it works

Friday, June 5, 2015, 13:40

I did this 50 minute talk on May 21 2015 for a Swedish company. With tongue in cheek subtitled “from hobby to world domination”. I think it turned out pretty decent and covers what the project is, how we work on it and what I do to make it run. Some of the questions are not easy to hear but in general it works out fine. Enjoy!

http://daniel.haxx.se/blog/2015/06/05/i-lead-the-curl-project-and-this-is-how-it-works/


David Rajchenbach Teller: Re-dreaming Firefox (3): Identities

Friday, June 5, 2015, 13:14

Gerv’s recent post on the Jeeves Test got me thinking of the Firefox of my dreams. So I decided to write down a few ideas on how I would like to experience the web. Today: Identities. Let me emphasise that the features described in this blog post do not exist.

Sacha has a Facebook account, plus two Gmail accounts and one Microsoft Live identity. Sacha is also present on Twitter, both with a personal account, and as the current owner of his company’s account. Sacha also has an account on his bank, another one on Paypal, and one on Amazon. With any browser other than Firefox, Sacha’s online life would be a bit complicated.

For one thing, Sacha is logged in to several of these accounts most of the time. Sacha has been told that this makes him easy to track, not just when he’s on Facebook, but also when he visits blogs, or news sites, or even shopping sites, but really, who has time to log off from any account? With any other browser, or with an older version of Firefox, Sacha would have no online privacy. Fortunately, Sacha is using Firefox, which has grown pretty good at handling identities.

Indeed, Firefox knows the difference between Facebook’s (and Google’s, etc.) main sites, to which Sacha may need to be logged in, and the tracking devices installed on other sites through ads, or through the Like button (and Google +1, etc.), which are pure nuisances. So, even when Sacha is logged in to Facebook, his identity remains hidden from the tracking devices. To put it differently, Sacha is logged in to Facebook only on Facebook tabs, and only while he’s using Facebook in these tabs. And since Sacha has two GMail accounts, logging in to one account doesn’t interact with the other account. This feature is good not only for privacy, but also for security, as it considerably mitigates the danger of Cross-Site Scripting attacks. Conversely, if a third-party website uses Facebook as an identity provider, Firefox can detect this automatically, and handle the log-in.

Privacy doesn’t stop there. Firefox has a database of Terms of Service for most websites. Whenever Firefox detects that Sacha is entering his e-mail address, or his phone number, or his physical address, Firefox can tell Sacha if he’s signing up for spam or telemarketing – and take measures to avoid it. If Sacha is signing up for spam, Firefox can automatically create an e-mail alias specific to this website, valid either for a few days, or forever. If Sacha has a provider of phone aliases, Firefox can similarly create a phone alias specific to the website, valid either for a few days, or forever. Similarly, if Sacha’s bank offers temporary credit card numbers, Firefox can automatically create a single-transaction credit card number.

Firefox offers an Identity Panel (if we release this feature, it will, of course, be called Persona) that lets Sacha find out exactly which site is linked to which identity, and grant or revoke authorizations to log in automatically when visiting such sites, as well as log in or out from a single place. In effect, this behaves as an Internet-wide Single Sign On across identities. With a little help, Firefox can even be taught about lesser known identity providers, such as Sacha’s company’s Single Sign On, and handle them from the same panel. That Identity Panel also keeps track of e-mail aliases, and can be used to revoke spam- and telemarketing-inducing aliases in just two clicks.

Also, security has improved a lot. Firefox can automatically generate strong passwords – it even has a database of sites which accept passphrases, or are restricted to 8 characters, etc. Firefox can also detect when Sacha uses the same password on two unrelated sites, and explain to him why this is a bad idea. Since Firefox can safely and securely share passwords with other devices and back them up into the cloud, or to encrypted QR Codes that Sacha can safely keep in his wallet, Sacha doesn’t even need to see passwords. Since Firefox handles the passwords, it can download every day a list of websites that are known to have been hacked, and use it to change passwords semi-automatically if necessary.

Security doesn’t stop there. The Identity Panel knows not only about passwords and identity providers, but also about the kind of information that Sacha has provided to each website. This includes Sacha’s e-mail address and physical address, Sacha’s phone number, and also Sacha’s credit card number. So when Firefox finds out that a website to which Sacha subscribes has been hacked, Sacha is informed immediately of the risks. This extends to less material information, such as Sacha’s personal blog of vacation pictures, which Sacha needs to check immediately to find out whether they have been defaced.

What now?

I would like to browse with this Firefox. Would you?


https://dutherenverseauborddelatable.wordpress.com/2015/06/05/re-dreaming-firefox-3-identities/


Hannah Kane: Medium-term roadmap

Friday, June 5, 2015, 02:32

Earlier this week, I wrote about the short-term roadmap for teach.mozilla.org. Now I’d like to share a few details about what we envision a little farther out (Q3 and into Q4).

Badges

The bulk of the work here will be improving the user experience for both badge applicants and badge reviewers. We’ll also be rolling out some new badges that are aligned with our programmatic plans, and will recognize the key volunteer roles we’ve identified (i.e. Regional Coordinators and Club Captains).

Directory

I’m really excited about this project because it will transform the site from simply being a place to find resources into a community, and because we’ll be able to offer more customized experiences for users once we know more about them. The Mozilla Learning Network Directory will include rich mentor profiles and group pages (where “groups” include Clubs, Hives, and organizations), as well as the ability to search and browse. The initial build will also include a full integration of Discourse. (We’re drawing heavily on the Hive Directory for inspiration.)

Curriculum functionality

It’s been a long time coming, but soon we’ll begin designing a more permanent solution for making our curriculum content dynamic. This will include adding basic user interactions (“Likes,” ratings, comments), as well as dynamically facilitating the creation and display of remixes and translations. We’ll likely also have a tool for users to create and share their own playlists, and to submit curriculum for consideration.

Ongoing iteration on the engagement flow

We’ll continue to learn what works in terms of connecting people quickly to what they need, and we’ll likely continue to make changes as a result of those learnings. Our engagement strategy will get some serious power behind it as we move forward with the email tooling project that’s happening in parallel.

Thimble!

Finally, our team at CDOT is actively working on making improvements to Thimble, our open source code editor for teachers and learners. We wrote about those improvements a few weeks ago.


http://hannahgrams.com/2015/06/04/medium-term-roadmap/


Air Mozilla: German speaking community bi-weekly meeting

Thursday, June 4, 2015, 22:00

German speaking community bi-weekly meeting. Bi-weekly meeting of the German-speaking community.

https://air.mozilla.org/german-speaking-community-bi-weekly-meeting-20150604/


Ben Hearsum: Buildbot <-> Taskcluster Bridge Now in Production

Thursday, June 4, 2015, 18:11

A few weeks ago I gave a brief overview of the Buildbot <-> Taskcluster Bridge that we've been developing, and Selena provided some additional details about it yesterday. Today I'm happy to announce that it is ready to take on production work. As more and more jobs from our CI infrastructure move to Taskcluster, the Bridge will coordinate between them and jobs that must remain in Buildbot for the time being.

What's next?

The Bridge itself is feature complete until our requirements change (though there's a couple of minor bugs that would be nice to fix), but most of the Buildbot Schedulers still need to be replaced with Task Graphs. Some of this work will be done at the same time as porting specific build or test jobs to run natively in Taskcluster, but it doesn't have to be. I made a proof of concept on how to integrate selected Buildbot builds into the existing "taskcluster-graph" command and disable the Buildbot schedulers that it replaces. With a bit more work this could be extended to schedule all of the Buildbot builds for a branch, which would make porting specific jobs simpler. If you'd like to help out with this, let me know!

http://hearsum.ca/blog/buildbot-taskcluster-bridge-now-in-production.html


Air Mozilla: Reps weekly

Thursday, June 4, 2015, 18:00

Gregory Szorc: Changeset Metadata on hg.mozilla.org

Thursday, June 4, 2015, 16:55

Just a few minutes ago, I deployed some updates to hg.mozilla.org to display more metadata on changeset pages. See 4b69a62d1905, dc4023d54436, and b617a57d6bf1 for examples of what's shown.

We currently display:

  • More detailed pushlog info. (Before you had to load another page to see things like the date of the push.)
  • The list of reviewers, each being a link that searches for other changesets they've reviewed.
  • A concise list of bugs referenced in the commit message.
  • Links to changesets that were backed out by this changeset.
  • On changesets that were backed out, we may also display a message that the changeset was backed out.
  • For Firefox repos, we also display the application milestone. This is the Gecko/app version recorded in the config/milestone.txt file in that changeset. The value can be used to quickly answer the question: “What versions of Firefox have this changeset?”

If you notice any issues or have requests for new features, please file a bug.

This work is built on top of a feature I added to Mercurial 3.4 to make it easier to inject extra data into Mercurial's web templates. We just deployed Mercurial 3.4.1 to hg.mozilla.org yesterday. It's worth noting that this deployment happened in the middle of the day with no user-perceived downtime. This is a far cry from where we were a year ago, when any server change required a maintenance window. We've invested a lot of work into a test suite for this service so we can continuously deploy without fear of breaking things. Moving fast feels so good.

http://gregoryszorc.com/blog/2015/06/04/changeset-metadata-on-hg.mozilla.org


David Rajchenbach Teller: Re-dreaming Firefox (2): Beyond Bookmarks

Thursday, June 4, 2015, 02:48

Gerv’s recent post on the Jeeves Test got me thinking of the Firefox of my dreams. So I decided to write down a few ideas on how I would like to experience the web. Today: Beyond Bookmarks. Let me emphasize that the features described in this blog post do not exist.

« Look, here is an interesting website. I want to read that content (or watch that video, or play that game), just not immediately. » So, what am I going to do to remember that I wish to read it later:

  1. Bookmark it?
  2. Save it to disk?
  3. Pocket it?
  4. Remember that I saw it and find it in my history later?
  5. Remember that I saw it and find it in my Awesome Bar later?
  6. Hope that it shows up in the New Tab page?
  7. Open a tab?
  8. Install the Open Web App for that website?
  9. Open a tab and put that tab in a tab group?

Wow, that’s 9 ways of fulfilling the same task. Having so many ways of doing the same thing is not a very good sign, so let’s see if we can find a way to unify a few of these abstractions into something more generic and powerful.

Bookmarking is saving is reading later

What are the differences between Bookmarking and Saving?

  1. Bookmarking keeps a URL, while Saving keeps a snapshot.
  2. Bookmarks can be used only from within the browser, while Saved files can be used only from without.

Merging these two features is actually quite easy. Let’s introduce a new button, the Awesome Bookmark, which will serve as a replacement for both the Bookmark button and Save As.

  • Clicking on the Awesome Bookmarks icon saves both the URL to the internal database and a snapshot to the Downloads directory (also accessible through the Downloads menu).
  • Opening an Awesome Bookmark, whether from the browser or from the OS, leads the user to (by default) the live version of the page, or (if the computer is not connected) to the snapshot.
  • Whenever visiting a page that has an Awesome Bookmark, the Awesome Bookmark icon changes color to offer the user the ability to switch between the live version or the snapshot.
  • The same page can be Awesome Bookmarked several times, offering the ability to switch between several snapshots.

By switching to Awesome Bookmarks, we have merged Saving, Bookmarking and the Read it Later list of Pocket. Actually, since Firefox already offers Sync and Social Sharing, we have just merged all the features of Pocket.

So we have collapsed several items from our list into one.

Bookmarks are history are tiles

What are the differences between Bookmarks and History?

  1. History is recorded automatically, while Bookmarks need to be recorded manually.
  2. History is eventually forgotten, while Bookmarks are not.
  3. Bookmarks can be put in folders, History cannot.

Let’s keep doing almost that, but without segregating the views. Let us introduce a new view, the Awesome Pages, which will serve as a replacement for both Bookmarks Menu and the History Menu.

This view shows a grid of thumbnails of visited pages, iOS/Android/Firefox OS style.

  • first the pages visited most often during the past few hours (with the option of scrolling for all the pages visited during the past few hours);
  • then the Awesome Bookmarks (because, after all, the user has decided to mark these pages)/Awesome Bookmarks folders (with the option of scrolling for more favourites);
  • then, if the user has opted in for suggestions, a set of Awesome Suggested Tiles (with the option of scrolling for more suggestions);
  • then the pages visited the most often today (with the option of scrolling for the other pages visited today);
  • then the pages visited most often this week (with the option of scrolling for the other pages visited this week);

By default, clicking on an Awesome Bookmark (or history entry, or suggested page, etc.) for a page that is already opened switches to that page. Non-bookmarked pages can be turned into Awesome Bookmarks trivially, by starring them or putting them into folders.

An Awesome Bar at the top of this Awesome Pages lets users quickly search for pages and folders. This is the same Awesome Bar that is already at the top of tabs in today’s Firefox, just with the full-screen Awesome Pages replacing the current drop-down menu.

Oh, and by the way, this Awesome Pages is actually our new New Tab page.

By switching to the Awesome Pages, we have merged:

  • the history menu;
  • the bookmarks menu;
  • the new tab page;
  • the awesome bar.

Bookmarks are tabs are apps

What are the differences between Bookmarks and Tabs?

  1. Clicking on a bookmark opens the page by loading it, while clicking on a tab opens the page by switching to it.

That’s not much of a difference, is it?

So let’s make a few more changes to our UX:

  • Awesome Bookmarks record the state of the page, in the style of Session Restore, so clicking on an Awesome Bookmark actually restores that page, whenever possible, instead of reloading it;
  • The ribbon on top of the browser, which traditionally contains tabs, is actually a simplified display of the Awesome Pages, which shows, by default, the pages most often visited during the past few hours;
  • Whether clicking on a ribbon item switches to a page or restores it is an implementation detail, which depends on whether the browser has decided that unloading a page was a good idea for memory/CPU/battery usage;
  • Replace Panorama with the Awesome Page, without further change.
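The unload-or-keep decision mentioned above can be pictured as a memory budget applied in most-recently-used order. This is purely illustrative; a real heuristic would weigh CPU and battery too:

```python
def pages_to_unload(pages, budget_mb):
    """Keep the most recently used pages within the memory budget;
    everything else gets unloaded (to be session-restored on click).
    `pages` is a list of (url, last_used, mem_mb) tuples."""
    unload, used = [], 0.0
    for url, last_used, mem in sorted(pages, key=lambda p: p[1], reverse=True):
        if used + mem <= budget_mb:
            used += mem          # page stays loaded
        else:
            unload.append(url)   # page is unloaded, click restores it
    return unload
```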

So, with a little imagination (and, I’ll admit, a little hand-waving), we have merged tabs and bookmarks. Interestingly, we have done that by moving to an Apps-like model, in which whether an application is loaded or not is for the OS to decide, rather than the user.

By the way, what are the differences between Tabs and Open Web Apps?

  1. Apps can be killed by the OS, while Tabs cannot.
  2. Apps are visible to the OS, while Tabs appear in the browser only.

Well, if we decide that Apps are just Bookmarks, we have our Apps model: Bookmarks were made visible to the OS in section 1, and Bookmarks have just been merged with Tabs, which in turn have just been made killable by the browser.

We have just removed three more items from our list.

What’s left?

We are down to one higher-level abstraction (the Awesome Bookmark) and one view of it (the Awesome Page). Of course, if this is eventually released, we are certainly going to call both of them Persona.

This new Firefox is quite different from today’s Firefox. Actually, it looks much more like Firefox OS, which may be a good thing. While I realize that many of the details are handwavy (e.g. how do you open the same page twice simultaneously?), I believe that someone smarter than me can do great things with this preliminary exploration.

I would like to try that Firefox. Would you?


https://dutherenverseauborddelatable.wordpress.com/2015/06/03/re-dreaming-firefox/


Mark Surman: The essence of web literacy

Wednesday, June 3, 2015, 23:04

Read. Write. Participate. These words are at the heart of our emerging vision for Mozilla Learning (aka Academy). Whether you’re a first time smartphone user, a budding tech educator or an experienced programmer, the degree to which you can read, write and participate in the digital world shapes what you can imagine — and what you can do. These three capabilities are the essence of Mozilla’s definition of web literacy.


As we began thinking more about Mozilla Learning over the past month, we started to conclude that this read | write | participate combination should be the first principle behind our work. If a project doesn’t tie back to these capabilities, it should not be part of our plan. Or, put positively, everything we do should get people sharing know-how and building things on the web in a way that helps them hone their read | write | participate mojo.

Many existing Mozilla projects already fit these criteria. Our SmartOn series helps people get basic knowledge on topics like privacy. Mozilla Clubs brings together people who want to teach and learn core web literacy skills. And projects like OpenNews bring together skilled developers who are honing their craft in open source and collaboration while building the next wave of news on the web. These projects may seem disparate at first, but they all help people learn, hone and wield the ability to read, write and participate on the web.

If we want to insert this minimalist version of web literacy into the heart of our work, we’ll need to define our terms and pressure test our thinking. My working definition of these concepts is:

  • Read: use and understand the web with a critical mind. Includes everything from knowing what a link is to bullshit detection.
  • Write: create content and express yourself on the web. Includes everything from posting to a blog to remixing found content to coding.
  • Participate: interact with others to make your own experience and the web richer. Includes everything from basic collaboration to working in the open.

On the idea of pressure testing our framework: the main question we’ve asked so far is ‘are these concepts helpful if we’re talking about people across a wide variety of skill levels?’ Does a first time smartphone user really need to know how to read, write and participate? Does a master coder still have skills to hone in these areas? And skills to share? Also, how does our existing basic web literacy grid hold up to these questions?

Laura de Reynal and I have been running different versions of this pressure test with people we work with over the last month or so. Laura has been talking to young people and first time smartphone users. I’ve been talking to people like Shuttleworth Fellows and participants at eLearning Africa who are emerging leaders in various flavours of ‘open + tech’. Roughly, we asked each of them to list a thing they know how to do or want to know how to do in each of the read | write | participate areas. In most cases, people understood our question with little explanation and got excited about what they knew and what they could learn. Many also expressed a pride and willingness to share what they know. By this quick and dirty measure, read | write | participate passed the test of being applicable to people with a wide variety of skills and sophistication.

One notable result from the groups I talked to: they all encouraged Mozilla to be incredibly opinionated about ‘what kind of reading, writing and participating’ matters most. In particular, a number of them stressed that we could do a lot of good in the world by helping people learn and hone the sort of ‘working in the open’ participation skills that we practice every day. Backing this up, evaluation research we’ve done recently shows that the educators in the Hive and fellows in Open News really value this aspect of being part of the Mozilla community. It could be that we want to formalize our work on this and make it a focal point within our Mozilla Learning strategy.

Building on our work from the last few years, there is a lot more to dig into on web literacy and how it fits into our plans. However, I wanted to get this post up to put a stake in the ground early to establish read | write | participate as the north star to which all Mozilla Learning efforts must align. Being clear about that makes it easier to have discussions about what we should and shouldn’t be doing going forward.

As a next step to dig deeper, Chris Lawrence has agreed to form a web literacy working group. This group will go back into the deeper work we’ve done on the web literacy map, tying that map into read | write | participate and also looking at other frameworks for things like 21st century skills. It should form in the next couple of weeks. Once it has, you’ll be able to track it and participate from the Mozilla Learning planning wiki.


Filed under: mozilla

https://commonspace.wordpress.com/2015/06/03/the-essence-of-web-literacy/


Benjamin Kerensa: Don’t Celebrate USA Freedom Act Passage

Wednesday, June 3, 2015, 23:00
This Phone is Tapped, Tony Webster (CC BY 2.0)

Mozilla recently announced its support for the USA Freedom Act alongside allies like the EFF, but the EFF ended up withdrawing its support because of deficiencies in the legislation and a recent opinion from an appeals court.

I think Mozilla should have withdrawn its support for this still-flawed bill: while it did push forward some important reforms, it also extended flawed sections of the law that infringe on individuals’ civil liberties, such as the Section 206 “Roving Wiretap” authority. That program essentially allows the FBI access to any phone line, mobile communications or even internet connections a suspect may be using, without ever having to provide a name to anyone. This is clearly not good legislation because it allows overreach and lacks a requirement that the communications or accounts being tapped are tied to the suspect. While this is just one example, there are many other provisions that allow intelligence and law enforcement agencies to continue their spying, just not as broadly as before.

What we need is smarter legislation that allows law enforcement and intelligence agencies to do their work without infringing on the privacy or civil liberties of everyday Americans, you know, like back when domestic spying was entirely illegal.

Wikipedia does a great job of documenting the USA Freedom Act, and I would encourage folks to check out the article and research this piece of legislation further. Remember, this bill only passed because it contained concessions for pro-intelligence legislators, the same folks who created the Patriot Act and opened up spying on Americans in the first place.

I think Mozilla could have done better by withdrawing support and it is good to see that while the EFF is celebrating some parts of the USA Freedom Act it is also mourning some of the concessions.

http://feedproxy.google.com/~r/BenjaminKerensaDotComMozilla/~3/msbIExGAyK4/dont-celebrate-usa-freedom-act-passage


Armen Zambrano: mozci 0.7.2 - Support b2g jobs that still run on Buildbot

Wednesday, June 3, 2015, 22:23
There are a lot of b2g (aka Firefox OS) jobs that still run on Buildbot.
Interestingly enough we had not tried before to trigger one with mozci.
This release adds support for it.
This should have been a minor release (0.8.0) rather than a security release (0.7.2). My apologies!
All jobs that start with "b2g_" in all_builders.txt are b2g jobs that still run on Buildbot instead of TaskCluster (docs - TC jobs on treeherder).


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

http://feedproxy.google.com/~r/armenzg_mozilla/~3/MwJv83AXJAI/mozci-072-support-b2g-jobs-that-still.html


Armen Zambrano: mozci 0.7.1 - regression fix - do not look for files for running jobs

Wednesday, June 3, 2015, 22:14
This release mainly fixes a regression we introduced in the release 0.7.0.
The change (#220) we introduced checked completed and running jobs for files that have been uploaded in order to trigger tests.
The problem is that running jobs do not have any metadata until they actually complete.
We fixed this on #234.

Contributions

Thanks to @adusca and @glandium for their contributions on this release.

How to update

Run "pip install -U mozci" to update

Major highlights

  • #234 - (bug fix) - Do not try to find files for running jobs
  • #228 - For try, only trigger talos jobs on existing build jobs, rather than triggering builds for platforms that were not requested
  • #238 - Read credentials through environment variables
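Reading credentials from the environment (#238) presumably looks something like the sketch below; the variable names are illustrative, not necessarily the ones mozci actually uses:

```python
import os

def get_credentials():
    """Read credentials from environment variables (hypothetical
    names), returning None when unset so callers can fall back to
    prompting or a credentials file."""
    user = os.environ.get("MOZCI_USER")
    password = os.environ.get("MOZCI_PASSWORD")
    if user and password:
        return user, password
    return None
```

This pattern keeps secrets out of the source tree and makes the tool usable from CI, where interactive prompting is not an option.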

Minor improvements

  • #226 - (bug fix) Properly cache downloaded files
  • #228 - (refactor) Move SCHEDULING_MANAGER
  • #231 - Doc fixes

All changes

You can see all changes in here:
0.7.0...0.7.1


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

http://feedproxy.google.com/~r/armenzg_mozilla/~3/WDabSl1VxzI/mozci-071-regression-fix-do-not-look.html


Air Mozilla: Product Coordination Meeting

Wednesday, June 3, 2015, 21:00

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

https://air.mozilla.org/product-coordination-meeting-20150603/


Air Mozilla: The Joy of Coding (mconley livehacks on Firefox) - Episode 17

Wednesday, June 3, 2015, 20:00

Selena Deckelmann: TaskCluster migration: about the Buildbot Bridge

Wednesday, June 3, 2015, 19:59

Back on May 7, Ben Hearsum gave a short talk about an important piece of technology supporting our transition to TaskCluster, the Buildbot Bridge. A recording is available.

I took some detailed notes to spread the word about how this work is enabling a great deal of important Q3 work like the Release Promotion project. Basically, the bridge allows us to separate out work that Buildbot currently runs in a somewhat monolithic way into TaskGraphs and Tasks that can be scheduled separately and independently. This decoupling is a powerful enabler for future work.

Of course, you might argue that we could perform this decoupling in Buildbot.

However, moving to TaskCluster means adopting a modern, distributed queue-based approach to managing incoming jobs. We will be freed of the performance tradeoffs and careful attention required when using relational databases for queue management (Buildbot uses MySQL for its queues; TaskCluster uses RabbitMQ and Azure). We will also be moving “decision tasks” in-tree, meaning that they will be closer to developer environments, which should make it easier to keep developer and build-system environments in sync.

Here are my notes:

Why have the bridge?

  • Allows a graceful transition
  • We’re in an annoying state where we can’t have dependencies between buildbot builds and taskcluster tasks. For example: we can’t move firefox linux builds into taskcluster without moving everything downstream of those also into taskcluster
  • It’s not practical and sometimes just not possible to move everything at the same time. This lets us reimplement Buildbot schedulers as task graphs. Buildbot builds are tasks on the task graphs, enabling us to change each task to be implemented by a Docker worker, a generic worker or anything we want or need at that point.
  • One of the driving forces is the build promotion project – the funsize and anti-virus scanning and binary moving – this is going to be implemented in taskcluster tasks but the rest will be in Buildbot. We need to be able to bounce between the two.

What is the Buildbot Bridge (BBB)

BBB acts as a TC worker and provisioner and delegates all those things to BuildBot. As far as TC is concerned, BBB is doing all this work, not Buildbot itself. TC knows nothing about Buildbot.

There are three services:

  • TC Listener: responds to things happening in TC
  • BuildBot Listener: responds to BB events
  • Reflector: takes care of things that can’t be done in response to events — it reclaims tasks periodically, for example. TC expects Tasks to reclaim tasks. If a Task stops reclaiming, TC considers that Task dead.

BBB has a small database that associates build requests with TC taskids and runids.
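Such a mapping could look like this in SQLite (an illustrative schema, not the actual BBB one):

```python
import sqlite3

def make_bridge_db(path=":memory:"):
    """Sketch of the small mapping database the talk describes: one row
    per Buildbot build request, linked to its TaskCluster task and run.
    Schema and column names are made up for illustration."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE request_map (
        buildrequest_id INTEGER PRIMARY KEY,
        task_id         TEXT    NOT NULL,
        run_id          INTEGER NOT NULL)""")
    return db
```

Either side of the bridge can then translate identifiers: given a Buildbot build request it can find the Task to claim or resolve, and given a cancelled Task it can find the build request to cancel.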

BBB is designed to be multihomed. It is currently deployed but not running on three Buildbot masters. We can lose an AWS region and the bridge will still function. It consumes from Pulse.

The system is dependent on Pulse, SchedulerDB and Self-serve (in addition to a Buildbot master and Taskcluster).

Taskcluster Listener

Reacts to events coming from TC Pulse exchanges.

Creates build requests in response to tasks becoming “pending”. When someone pushes to mozilla-central, BBB inserts BuildRequests into BB SchedulerDB. Pending jobs appear in BB. BBB cancels BuildRequests as well — can happen from timeouts, someone explicitly cancelling in TC.
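A toy dispatch for these two event kinds might look like the following; all names are stand-ins, and the real listener consumes Pulse messages rather than plain dicts:

```python
class FakeScheduler:
    """Stand-in for Buildbot's SchedulerDB + Self-serve."""
    def __init__(self):
        self.pending, self.next_id = {}, 1
    def create_build_request(self, buildername):
        rid = self.next_id
        self.next_id += 1
        self.pending[rid] = buildername
        return rid
    def cancel_build_request(self, rid):
        self.pending.pop(rid, None)

def on_tc_event(event, db, scheduler):
    """Illustrative TC Listener dispatch: create a BuildRequest when a
    task goes pending, cancel it when the task is cancelled.  `db` is
    the bridge mapping of buildrequest_id -> (taskId, runId)."""
    kind = event["kind"]
    if kind == "task-pending":
        rid = scheduler.create_build_request(event["buildername"])
        db[rid] = (event["taskId"], event["runId"])
    elif kind == "task-exception":
        for rid, (tid, _run) in list(db.items()):
            if tid == event["taskId"]:
                scheduler.cancel_build_request(rid)
                del db[rid]
```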

Buildbot Listener

Responds to events coming from the BB Pulse exchanges.

Claims a Task when builds start. Attaches BuildBot Properties to Tasks as artifacts. Has a buildslave name, information/metadata. It resolves those Tasks.

Buildbot and TC don’t have a 1:1 mapping of BB statuses and TC resolution. Also needs to coordinate with Treeherder color. A short discussion happened about implementing these colors in an artifact rather than inferring them from return codes or statuses inherent to BB or TC.

Reflector

  • Runs on a timer – every 60 seconds
  • Reclaims tasks: need to do this every 30-60 minutes
  • Cancels Tasks when a BuildRequest is cancelled on the BB side (have to troll through BB DB to detect this state if it is cancelled on the buildbot side)
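One tick of such a reclaim loop could be sketched as follows (the 30-minute lease comes from the notes above; the function names are made up):

```python
import time

def reflect_once(claims, reclaim, now=None, lease=30 * 60):
    """One Reflector tick: re-claim any task whose last claim is older
    than the lease, so TaskCluster keeps considering it alive.
    `claims` maps task_id -> last_claim_timestamp; `reclaim` is the
    callable that talks to the TaskCluster queue."""
    now = time.time() if now is None else now
    for task_id, claimed_at in claims.items():
        if now - claimed_at >= lease:
            reclaim(task_id)
            claims[task_id] = now   # reset the lease clock
```

The real service runs this every 60 seconds; a task that stops being reclaimed is exactly what TaskCluster treats as dead.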

Scenarios

  • A successful build!

Task is created. The Task in TC is pending; nothing exists in BB yet. TCListener picks up the event and creates a BuildRequest (pending).

BB creates a Build. BBListener receives buildstarted event, claims the Task.

Reflector reclaims the Task while the Build is running.

Build completes successfully. BBListener receives log uploaded event (build finished), reports success in TaskCluster.

  • Build fails initially, succeeds upon retry

(500 from hg – common reason to retry)

Same through Reflector.

BB fails and is marked as RETRY. BBListener receives the log-uploaded event, reports an exception to TaskCluster and calls rerun on the Task.

BB has already started a new Build. TCListener receives the task-pending event and updates the runid, but does not create a new BuildRequest.

Build completes successfully. BuildBot Listener receives the log-uploaded event and reports success to TaskCluster.

  • Task exceeds deadline before Build starts

Task is created. TCListener receives the task-pending event and creates a BuildRequest. Nothing happens until the Task goes past its deadline and TaskCluster cancels it. TCListener then receives the task-exception event and cancels the BuildRequest through Self-serve.

QUESTIONS:

  • TC deadline, what is it? Queue: a task past a deadline is marked as timeout/deadline exceeded

  • On TH, if someone requests a rebuild twice, what happens? There is no retry/rerun; we duplicate the subgraph. Wherever we retrigger, you get everything below it, so you’d end up with duplicates. Retries and rebuilds are separate: rebuilds are triggered by humans, retries are internal to BB. TC doesn’t have a concept of retries.
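The "duplicate the subgraph" behavior is just a downstream closure over the task graph. A sketch (not mozci or TaskCluster code):

```python
def retrigger_subgraph(deps, node):
    """Collect `node` and everything downstream of it, i.e. the set of
    tasks a retrigger would duplicate.  `deps` maps each task to its
    list of direct downstream tasks."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(deps.get(n, []))
    return seen
```

Retriggering a build therefore duplicates all of its tests, while retriggering a leaf test duplicates only itself.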

  • How do we avoid duplicate reporting? TC will be considered source of truth in the future. Unsure about interim. Maybe TH can ignore duplicates since the builder names will be the same.

  • Replacing the scheduler what does that mean exactly?

    • Mostly moving decision tasks in-tree — practical impact: YAML files get moved into the tree
    • Remove all scheduling from BuildBot and Hg polling

Roll-out plan

  • Connected to the Alder branch currently
  • Replacing some of the Alder schedulers with TaskGraphs
  • All the BB Alder schedulers are disabled, and we were able to get a push to generate a TaskGraph!

Next steps might be release scheduling tasks, rather than merging into central. Someone else might be able to work on other CI tasks in parallel.

http://www.chesnok.com/daily/2015/06/03/taskcluster-migration-about-the-buildbot-bridge/


Daniel Glazman: In praise of Rick Boykin and his Bulldozer editor

Wednesday, June 3, 2015, 18:23

Twenty years ago this year, Rick Boykin started a side project while working at NASA. That project, presented a few months later as a poster session at the 4th International Web Conference in Boston (look in section II. Infrastructure), was Bulldozer, one of the first WYSIWYG editors natively made for the Web. I still remember his poster session at the conference as the most surprising and amazing short demo of the conference. His work on Bulldozer was a masterpiece, and I sincerely regretted that he stopped working on it, or so it seemed, when he left NASA the next year.

I thanked you twenty years ago, Rick, and let me thank you again today. Happy 20th birthday, Bulldozer. You paved the way and I remember you.

http://www.glazman.org/weblog/dotclear/index.php?post/2015/06/03/In-praise-of-Rick-Boykin-and-its-Bulldozer-editor


Daniel Stenberg: daniel weekly

Wednesday, June 3, 2015, 15:38

daniel weekly screenshot

My series of weekly videos, for lack of a better name called daniel weekly, reached episode 35 today. I’m celebrating this fact by also adding an RSS feed for those of you who prefer to listen to me in an audio-only version.

As an avid podcast listener myself, I can certainly see how this will be a better fit to some. Most of these videos are just me talking anyway so losing the visual shouldn’t be much of a problem.

A typical episode

I talk about what I work on in my open source projects, which means a lot of curl stuff and occasional stuff from my work on Firefox for Mozilla. I also tend to mention events I attend and HTTP/networking developments that grab my attention. Lots of HTTP/2 talk, for example. I only ever express my own personal opinions.

It is generally an extremely geeky and technical video series.

Every week I mention a (curl) “bug of the week” that allows me to joke or rant about the bug in question or just mention what it is about. In episode 31 I started my “command line options of the week” series in which I explain one or a few curl command line options with some amount of detail. There are over 170 options so the series is bound to continue for a while. I’ve explained ten options so far.

I’ve set a limit for myself and I make an effort to keep the episodes shorter than 20 minutes. I’ve not succeeded every time.

Analytics

The 35 episodes have been viewed over 17,000 times in total. Episode two is the most watched individual one with almost 1,500 views.

Right now, my channel has 190 subscribers.

The top-3 countries that watch my videos: USA, Sweden and UK.

Share of viewers that are female: 3.7%

http://daniel.haxx.se/blog/2015/06/03/daniel-weekly/


Joel Maher: re-triggering for a [root] cause- version 2.0 – a single bug!

Wednesday, June 3, 2015, 13:46

Today the orange factor email came out- the top 10 bugs with stars on them :)  Last week we had no bugs that we could take action on, and the week before we had a few bugs to take action on.

This week I looked at each bug again and annotated them with some notes as to what I did or why I didn’t do anything. Here are the bugs:

  • Bug 1160008 Intermittent testVideoDiscovery
    • too old!  (from last week)
  • Bug 1073442 Intermittent command timed out
    • too old; infra issue (from week before last)
  • Bug 1121145 Intermittent browser_panel_toggle.js
    • too old!  problem got worse on April 24th (from last week)
  • Bug 1157948 DMError: Non-zero return code for command
    • too old!  most likely a harness/infra issue (from last week)
  • Bug 1161817 Intermittent browser_timeline-waterfall-sidebar.js
    • fixed already
  • Bug 1168747 Intermittent 336736-1a.html
    • resolved (via skip-if in manifest)
  • Bug 1149955 Intermittent Win8-PGO test_shared_all.py
    • too old (from last week – someone looking into it now though!)
  • Bug 1158887 Intermittent test_async_transactions.js
    • reopened – fix landed recently- re-triggering doesn’t seem helpful :(
  • Bug 1090203 Intermittent style-src-3_2.html
    • to do work
  • Bug 1081925 Intermittent browser_popup_blocker.js
    • test is disabled now (from last week)

It is nice to find a bug that we can take action on.  What is interesting is that the bug has been around for a while, but we noticed about May 21 that the rate of failures went up from a couple a day to >5/day.  Details:

  • started out re-triggering on m-c.  We could see a pattern on a specific merge.
  • did re-triggers on m-i.  The first 20 were inconclusive, so I triggered 20 more for each job; the results were still inconclusive.  There is no increasing pattern based on a specific changeset.

I might try a full experiment soon blindly looking at bugs instead of using orange factor.


https://elvis314.wordpress.com/2015/06/03/re-triggering-for-a-root-cause-version-2-0-a-single-bug/


Mozilla Reps Community: Reps Weekly Call – May 28th 2015

Wednesday, June 3, 2015, 13:37

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.


Summary

  • Shape of the Web project.
  • Suggested Tiles for Firefox update.
  • Featured Events.
  • Help me with my project.
  • Whistler WorkWeek – Reimbursements
  • Mozilla Reps SEA (SouthEast Asia) Online Meetup

AirMozilla video

Detailed notes

Shoutouts to Dorothee Danedjo Fouba, Mrz, Yofie, Ioana and all the Balkans communities for being awesome!

Shape of the web

Greg joined the call to talk about an important project called Shape of the Web, which is an online platform to share and assess the attributes we believe are necessary for the open web. It represents Mozilla’s view of the world and aims to help people understand what’s at stake.

The project’s main goals are:

  • Create a place where users can get education about the current state
    of the web
  • Give people an opportunity to get involved on an easy level and
    motivating to find more information about this and share the
    information they learned
  • Build awareness for Mozilla since a lot of people know Mozilla but
    don’t know what we do

Feel free to try the project at http://shapeoftheweb.mozilla.org/ and use it in your upcoming maker party.

Featured Events

Marketplace Day – event report

Ram reported on the first ever regional Marketplace community-building event, which was organized in Hyderabad. It was an overnight event with around 25 participants and volunteers working on 15 bugs; they submitted 11 pull requests and on-boarded 15 new long-term contributors.

You can read more in Mozilla India’s blog post and in gurumukhi’s blog post.

Help me with my project!

Boris is looking for help for porting CyanogenMod, TWRP and ClockworkMod to the Flame reference device, please contact him if you are interested.

Full raw notes.

Don’t forget to comment about this call on Discourse and we hope to see you next week!

https://blog.mozilla.org/mozillareps/2015/06/03/reps-weekly-call-may-28th-2015/


