Botond Ballo: Trip Report: C++ Standards Meeting in Lenexa, May 2015 |
Project | What’s in it? | Status
C++14 | C++14 | Published!
C++17 | Various minor features. More substantial features under consideration include default comparisons and operator . | On track for 2017
Networking TS | Sockets library based on Boost.ASIO | In late stages of design review
Filesystems TS | Standard filesystem interface | Published!
Library Fundamentals TS I | optional, any, string_view, and more | Voted out for publication!
Library Fundamentals TS II | source code information capture, array_view, and more | Expected 2016 or 2017
Array Extensions TS | Old proposals (arrays of runtime bound (ARBs) and dynarray) abandoned. New direction being explored: a magic type that acts like an ARB and can be used as a local variable only. |
Parallelism TS | Parallel versions of STL algorithms | Voted out for publication!
Concurrency TS | improvements to future, latches and barriers, atomic smart pointers | Voted out for balloting by national standards bodies
Transactional Memory TS | transaction support | Voted out for publication!
Concepts (“Lite”) TS | constrained templates | In process of addressing comments from national standards bodies. Publication expected late 2015.
Reflection | Code introspection and (later) reification mechanisms | Still in the design stage, no ETA
Graphics | 2D drawing API | Standard wording expected to be ready for review at the next meeting.
Modules | A component system to supersede the textual header file inclusion model | Microsoft and Clang continuing to iterate on their implementations and converge on a design. The feature will target a TS, not C++17.
Last week I attended a meeting of the ISO C++ Standards Committee in Lenexa, Kansas. This was the first committee meeting in 2015; you can find my reports on 2014’s meetings here (February, Issaquah), here (June 2014, Rapperswil), and here (November, Urbana-Champaign). These reports, particularly the Urbana one, provide useful context for this post.
The focus of this meeting was iterating on the various ambitious proposals in progress, and beginning to form an idea of which of them will be ready for C++17. In addition, several of the Technical Specifications (TS) in flight have reached their final stages in the committee, while work continues on others.
C++14 was officially published as an International Standard by ISO in December 2014. Its timely publication is evidence that the committee’s plan for adopting a three-year publication cycle post-C++11 has, thus far, been successful.
When the committee, shortly after C++11 was published, originally announced the schedule for the next two standard revisions, C++14 was described as a “minor” revision and C++17 as a “major” one.
A few things have happened since then:
However, I don’t think any of these mergers is a given. For example, there is some opposition to merging the Filesystems TS into the Standard in its current form on the basis that its design isn’t a good fit for certain less common varieties of filesystems. Concepts also has some pending design issues that may be revisited prior to a merger. In all cases, feedback from implementors and users will be key and will likely decide the TS’s fate.
As a result, I think it’s likely that C++17 will end up being approximately as “major” or “minor” of a revision as C++14 was.
Notable features that are not looking like they’ll make C++17 include:
(I talk about modules and reflection in more detail below, for those interested.)
That said, there are features not going through the TS process which are expected to be in C++17.
My Urbana report lists the language and library features that have made it into C++17 as of the last meeting.
No new language features were voted into C++17 this meeting, in the sense of standard wording for them being merged into the C++17 working draft (a small, language lawyer-ish change, making exception specifications part of the type system came close, but was deferred to Kona due to some concerns brought up during the closing plenary session). However, there are numerous language features in the design or wording review stages that are expected to be voted into C++17 in upcoming meetings; I talk about these in the “Evolution Working Group” section below.
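For context, here is what making exception specifications part of the type system means in practice; the change did eventually land in C++17, and the sketch below uses that later rule:

void f() noexcept;
void g();

void (*p)() noexcept = f;      // OK: the types match exactly
// void (*q)() noexcept = g;   // ill-formed under the new rule: g's type
                               //   does not carry the noexcept guarantee
void (*r)() = f;               // still OK: adding "may throw" is safe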
There were some small library features voted into C++17 at this meeting:

- improvements to pair and tuple
- bool_constant
- shared_mutex
There has been a change to ISO’s procedure for publishing a Technical Specification since the last meeting. The process used to involve two rounds of balloting by national standards bodies, called PDTS (Preliminary Draft TS) and DTS (Draft TS). Recently, the DTS round has been dropped, leaving just the PDTS round and making for a more agile TS publication process.
As a result of this procedural change, some TS’s which had successfully finished their PDTS ballots became eligible to be voted out for publication at this meeting, and after addressing PDTS ballot comments, vote them out we did! Library Fundamentals I, Transactional Memory, and Parallelism I have all been sent to ISO for official publication, which should happen within a few months.
A couple of other TS’s haven’t quite crossed the finish line yet, but are very close.
The Concepts TS garnered a rich and opinionated collection of PDTS ballot comments. Among them were your usual editorial and wording-level technical comments, but also some design-level comments which were brought before the Evolution Working Group (EWG) for consideration.
It’s rare for the committee to make design changes to a TS or Standard at such a late stage in the publication cycle, and indeed most design-level comments were deferred (meaning they will not be addressed in this version of the TS, but they could be revisited in a future version, or if the TS comes up for merging into the Standard). One comment, however, which was essentially a request for a (small) feature, was approved. The feature allows using a concept name as a type name in a variable declaration:
ConceptName var = expression;

The semantics is that the type of var is deduced from the type of expression (much like with auto), but the code is ill-formed if the deduced type does not satisfy the concept.
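A sketch of what this enables, written in Concepts TS syntax (Integral is an illustrative concept of my own, not part of the TS):

#include <type_traits>

template <typename T>
concept bool Integral = std::is_integral<T>::value;

Integral i = 42;        // OK: deduced type int satisfies Integral
// Integral j = 3.14;   // ill-formed: deduced type double is not Integral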
I was mildly surprised that EWG was willing to approve this addition at this late stage, but pleasantly so: I think this feature is very useful. To top off the good news, Andrew Sutton (the primary author of the Concepts TS), who couldn’t make it to the meeting itself, reported only two days later that he had added support for this feature in his GCC-based Concepts TS implementation! (Implementation experience is immensely valuable for guiding the committee’s decisions, because issues with a proposed feature often come up during implementation.)
As a result of this new addition, and a multitude of wording-level comments, the Core Working Group (CWG) didn’t have time to prepare final wording for the Concepts TS by the end of the meeting, so it couldn’t be voted out for publication just yet. Rather, CWG plans to hold a post-meeting teleconference to (hopefully) complete the final wording, after which the options are to hold a committee-wide teleconference to vote it out for publication, or to wait until Kona to vote on it.
Either way, the Concepts TS is practically at the brink of completion! Very exciting.
There’s also good news on the implementation side: GCC front-end developer Jason Merrill says Andrew’s Concepts implementation is expected to merge into mainline GCC within a month or so. Meanwhile, IBM, who have switched to using clang as the front-end for their newer products, announced their intention to kick off a clang-based implementation.
Good progress here, too: the first Concurrency TS was sent out for its PDTS ballot! Assuming a successful ballot, it should be ready to be voted for publication in Kona.
As usual, I spent most of the meeting in the Evolution Working Group, which does design review for proposed language features. EWG once again had a full plate of proposals to look at.
Recapping from previous posts, the outcome of an EWG design review is one of the following: accepted (and forwarded to the Core Working Group for wording review), sent back for further work, or rejected.
Here’s how this meeting’s proposals fared:
Accepted:

- A proposal to make emplace_back() and similar play nice with aggregate types. Note that this is a library proposal for which EWG input was solicited, so it was sent to the Library Working Group (LWG) rather than CWG.
- A proposal for inline variables. Some concerns were raised (that it uses inline for a purpose somewhat unrelated to its existing use, and that it encourages the use of static storage duration variables to begin with), but otherwise it had fairly strong support and was accepted.
- A proposal to remove the keyword register, but reserve it for future use.

Further work:
template <typename... Strings>
auto concatenate(Strings... strings)
{
    return (strings + ...);
}
With the current rules, when this function is called with 1 or more strings, it returns the concatenation of its arguments (because strings overload operator + to do concatenation), but when called with no arguments, it returns the integer 0, because that’s defined as the identity element for the addition operator.

The proposal in question would make it ill-formed to call this function with no arguments; if the author wants that to work, their recourse is to change the unary fold to the binary fold strings + ... + "".
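A minimal sketch of that workaround, assuming the arguments are std::strings:

#include <string>

template <typename... Strings>
auto concatenate(Strings... strings)
{
    // The trailing std::string() supplies an explicit identity element,
    // so calling with an empty pack yields "" instead of being ill-formed.
    return (strings + ... + std::string());
}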
There was consensus that addition, multiplication, and bitwise operators should be treated this way, but others were more contentious. For example, it was argued that for the logical operators && and ||, you shouldn’t be overloading them to return things that aren’t bool anyways, so the identities true and false remain appropriate.
A particularly interesting case is the comma operator, for which the specified identity is void(). Comma folds are expected to be common as a technique to emulate statement folding, as in the following example:

template <typename... Functions>
auto call_all_functions(Functions... functions)
{
    (functions() , ...);  // expands to function1() , function2() , ... , functionN();
}

On the one hand, it would be a shame to make people write functions() , ... , void() every time they want to do this, and the comma operator generally shouldn’t be overloaded anyways, so keeping the void() identity should be reasonable. On the other hand, if people want to do statement folding, perhaps the language should allow them to do that directly, rather than relying on the comma operator to emulate it with expression folding.
As there was no consensus on the precise course of action, the matter was referred for further work.
- Contracts. Two main proposals were considered. One proposed an assert-like facility, to be used inside function bodies, primarily for the purpose of runtime checking. The other (there was also a third very similar to it) proposed a syntax for declaring preconditions, postconditions, and invariants for a function in its interface (i.e. in its declaration), primarily for the purpose of static analysis and enabling compiler optimizations. There was consensus that both sets of goals, and both places for writing contracts (interfaces and implementations), are desirable, but there was some debate about whether the proposals should be considered as orthogonal and allowed to proceed independently, or whether the authors should collaborate and come up with a unified proposal that satisfies both use cases. In the end, the direction was to strive for a unified proposal.
- Default comparisons. Bjarne Stroustrup presented an updated version of his proposal to auto-generate comparison operators for classes. Notably, (1) the generated operators can be suppressed with = delete if you don’t want them; and (2) it’s carefully designed to avoid breaking existing code as follows: for any comparison operator call site, if name lookup under current rules finds a user-declared operator, it will continue finding that operator under the new rules, rather than using any auto-generated one. The proposal had strong consensus, which was a (welcome) surprise after the lack of consensus on earlier versions (and other proposals in the area) at the previous two meetings. It came close to being approved and sent to CWG, but some details of the semantics remained to be hashed out, so Bjarne was asked to return with an updated proposal in Kona.
- A related proposal for a set of named comparison functions. These overlap with auto-generated comparisons only for == and <, and as such, the proposed set of named functions can be developed independently of Bjarne’s proposal.
- Unified call syntax. This proposal (also Bjarne’s) aims to unify the member (x.f(y)) and non-member (f(x, y)) call syntaxes by allowing functions of either kind to be invoked by syntax of either kind. The approach is to have the x.f(y) syntax look for member functions first, and fall back to looking for non-member functions only if the member lookup yields no results; conversely, f(x, y) would look for non-member functions first, and fall back to a member lookup. The resulting semantics are asymmetric (they don’t make x.f(y) and f(x, y) completely interchangeable), but fully backwards compatible. (This design was one of several alternatives Bjarne presented at the previous meeting, and it seemed to have the greatest chance of gaining consensus, primarily due to its backwards compatibility.)
Beyond aesthetics (“I prefer my function calls to look this way”) and tooling reasons (“member call syntax gives me IntelliSense”), the primary motivation for this feature is facilitating generic programming, which is expected to become more popular than ever with Concepts. When defining requirements on a template parameter type, either informally in today’s C++ or explicitly with Concepts, you currently have to choose whether the operations on the type are expressed as member or non-member functions. Either choice constrains users of your template: if you choose member functions, they can’t adapt third-party types that they can’t modify to model your concept; if you choose non-member functions, they will likely have to provide a lot of non-member adapters for types that would otherwise automatically model your concept. You could choose to allow either one (this is what C++11 does with the “range” concept used in the range-based for loop: the required operation of getting an iterator to the first element of the range can be spelt either begin(range) or range.begin()), but then your call sites become very unreadable, because you need a lot of template/SFINAE magic to express “call X if it exists, otherwise call Y”. A unified call syntax would allow template implementers to use whichever call syntax they like, while users of the template can use either member functions or non-member functions to model the concepts, as they desire / are able to. (C++0x Concepts had a feature called “concept maps” which solved this problem by acting as a bridge between the operations in a concept definition (which is what generic code would use) and the actual operations on a type modelling the concept. However, concept maps were removed from later Concepts designs because they proved very tricky to specify and implement.)
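To make the dilemma concrete, here is a small sketch (the names are illustrative):

#include <vector>

// Today a generic algorithm must commit to one spelling:
template <typename Range>
auto first_element(Range& r)
{
    return *r.begin();    // compiles only for types with a member begin()
    // return *begin(r);  // the alternative compiles only when a suitable
                          //   non-member begin() can be found (e.g. via ADL)
}

int main()
{
    std::vector<int> v = {1, 2, 3};
    return first_element(v);  // returns 1
}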
Unfortunately, this is a very risky change to make to the language. While the proposal itself doesn’t break any existing code, new code that takes advantage of the proposal (that is, code that invokes a non-member function via a member call syntax, or vice versa) is considerably more prone to breakage. For example, adding a new member function to a class can easily break user code which was calling a non-member function of that name via the member function syntax; this breakage can manifest as a compiler error, or as a silent behaviour change, depending on the situation.
A lot of the finer points of the proposed semantics remain to be nailed down as well. How does the fallback mechanism work – is it activated only if the initial lookup doesn’t find any results, or also if it finds results but they’re all, say, SFINAE’d out? What is the interaction with two-phase name lookup? What happens when the member call syntax is used on an incomplete type?
EWG was very divided on this proposal; consensus seemed fairly far off. Some people suggested changes to the proposal that would allay some of their concerns with it; one of them was to have classes opt-in to unified call syntax, another to restrict the set of non-member functions that can be found via a member call syntax to those found by ADL. Bjarne said that he intends to continue iterating on the idea.
- Overloading operator dot. Allowing operator . to be overloaded would give us “smart references” in much the same way that overloading operator -> gives us smart pointers, as well as enabling many other patterns that take advantage of interface composition. The proposal was generally very well-received; one feature that was somewhat controversial was the ability to declare multiple “overloads” of operator dot that return different types, with the effect of bringing the interfaces of both types into the scope of the declaring type (much as multiple inheritance from the two types would). The author (also Bjarne) was asked to come back with standard wording.
- Template parameter deduction for constructors. This would obviate helper functions such as make_pair(), which exist for the sole purpose of not having to explicitly write out the template argument types in a constructor call of the form pair<T, U>(t, u); the proposal would allow simply pair(t, u).

This proposal has been on the table for a while, but it’s been plagued by the problem that for a lot of classes, deduction based on existing constructors wouldn’t work. For example, if a class template container<T> has a constructor that takes arguments of type container<T>::iterator, that type is a non-deduced context, so T could not be deduced from a constructor call of the form container(begin, end). The latest version addresses this by allowing class authors to optionally define “canonical factory functions” that define how the class’ template parameters are deduced from constructor arguments. Here’s how one might look (the syntax is hypothetical):
template <typename Iter>
container(Iter a, Iter b) -> container<typename std::iterator_traits<Iter>::value_type>;
This basically says “if container is constructed from two iterators, the class’ template parameter is the value type of those iterators”. The question of where to place such a declaration came up; EWG favoured placing it at namespace scope, so as to allow third parties to provide them if desired.
Another point that was brought up was that a constructor call of the form classname(arguments), where classname is a class template, already has a meaning inside the scope of classname: there, classname without any template arguments means “the current instantiation” (this is called the injected-class-name in standard terminology). The proposal needs to specify whether such a constructor call would change meaning (i.e. deduction would be performed instead) or not. The consensus was to try to perform deduction, and fall back to the current instantiation if that fails; this would technically be a breaking change, but the hope is that the scope of any breakage would be minor.
Overall, the proposal had strong support and is expected to move forward.
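As a usage sketch (container is the hypothetical class from the example above, so this is illustrative rather than compilable as-is):

std::vector<int> v = {1, 2, 3};

// With the canonical factory function above in scope, the template
// argument is deduced from the constructor arguments:
container c(v.begin(), v.end());  // deduced as container<int>

// Without it, one would write container<int>(v.begin(), v.end()) or
// rely on a make_container() helper (hypothetical name).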
- Another template-related proposal was received favourably; EWG encouraged the author to continue iterating on the idea.
- A static_if-like construct. The restrictions are that (1) it can only be used at local scope, (2) each branch of it forms a scope of its own, and (3) non-dependent constructs need to be well-formed in both branches. The proposal was well-received, and the author will continue working on it. It was noted that the Concepts TS doesn’t currently allow evaluating a concept outside a requires-expression, so something like static_if (ConceptName<T>) wouldn’t necessarily work, but hopefully that restriction will be lifted in the near future.
- An extension to static_assert that would allow the error message to be not just a string literal, but any constant expression that can be converted to a string literal. The idea is to allow performing compile-time string formatting to obtain the error message.
- noexcept(auto), which basically means “deduce the noexcept-ness of this function from the noexcept-ness of the functions it calls”. Like return type deduction, this requires the body of the function to be available in each translation unit that uses the function. It was brought up that, together with the proposal for making exception specifications part of the type system, this would mean that modifying the function’s body could change the function’s type (again similarly to return type deduction), but people weren’t overly concerned about that.
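For comparison, here is the boilerplate noexcept(auto) would eliminate; today the body’s expression has to be repeated inside the exception specification (a minimal sketch):

template <typename T>
void assign(T& a, const T& b) noexcept(noexcept(a = b))
{
    a = b;  // the same expression appears twice: once in the noexcept
            // specification, once in the body
}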
Rejected:

- A proposal involving auto return types; among the objections, it was pointed out that a function with an auto return type would return by value!
- Allowing types of the form int[][3][][7] (multi-dimensional array types where some bounds are unknown) to be formed, though not instantiated. The author was seeking to write a multi-dimensional array class where each dimension could be determined statically or dynamically, and to use a type of this form as a template parameter, interpreting it as a description of which dimensions were static. EWG didn’t find this motivation compelling (the information can be conveyed by other means, such as encoding the dimensions in the template arguments of an Array<...> class) and was generally wary of adding to the set of types that can be formed but not instantiated (an existing example of such a type is a function type whose return type is a function type).
- Allowing goto in constexpr functions. The intent here was to plug gratuitous holes between what can be done in a constant expression and what can be done in regular code. EWG liked the motivation, but preferred to see it together with proposals that plug other holes, such as using lambdas and virtual functions in constant expressions. At least one of those (lambdas) is expected to be proposed in Kona. (Bjarne jokingly wondered whether some day people would be proposing launching threads in a constexpr function.)

There are two groups currently working on modules: Microsoft, in their compiler, and Google, in clang. Microsoft has a draft proposal based on their design; Google hasn’t submitted a proposal based on their design yet.
The two designs differ slightly in philosophy. Microsoft’s design feels like what modules might have looked like if they were part of C++ from the beginning. It’s clean, and promises to be a good fit for new code written in a modular fashion. Google’s design, on the other hand, is geared towards making it possible to incrementally modularize large existing codebases without requiring a significant refactoring or other major changes (at least in the absence of egregiously non-modular design patterns). In other words, Microsoft’s design is more idealistic and pure, and Google’s is more practical.
Most notably, Microsoft’s design essentially requires modularizing a codebase from the bottom-up. For example, if a component of your program uses the C++ standard library, then modularizing that component requires first modularizing the C++ standard library; if the C++ standard library in turn uses the C standard library, then that too must be modularized (which is particularly unfortunate, for two reasons: (1) C standard library headers tend to be notoriously difficult to modularize due to their use of the preprocessor, and (2) they need to remain consumable by non-C++ code). Google’s design, on the other hand, specifically allows modular code to include non-modular code, so you could modularize your program component without having to modularize the C++ standard library.
To be sure, this feature of Google’s design introduces significant implementation complexity. (In my Rapperswil report, I reported that Google claimed their modules implementation was complete. I now understand what they meant was their implementation of a subset of the design that did not include this feature was complete.) I don’t have a deep understanding of the technical issues involved, but from what I’ve gathered, the difficulty is taking multiple copies of entities defined by the same non-modular code included in different modules and “merging” them to view them as a single entity.
There are other differences between the two designs, too. For example, Google’s allows exporting macros from a module, while Microsoft’s does not. Google’s design also supports cyclic dependencies between module interfaces, resolved by forward-declaring an entity from one module in the interface of the other; Microsoft’s proposal has no such support.
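For readers who haven’t seen modular C++, here is a minimal sketch; the syntax shown is the one that eventually shipped in C++20, and the 2015-era proposals differed in details (such as whether macros can be exported):

// math.cppm -- a module interface unit
export module math;

import <vector>;   // a header unit import, replacing #include <vector>

export int sum(const std::vector<int>& v)
{
    int total = 0;
    for (int x : v) total += x;
    return total;
}

// main.cpp -- a consumer:
//   import math;
//   int main() { return sum({1, 2, 3}); }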
EWG spent half a day (and folks from the two groups additional time offline) discussing and trying to reconcile these design differences. The discussions left people relatively hopeful about reaching convergence. The Microsoft folks conceded that some abilities, such as forward declaring entities from another module, are necessary. The Google folks conceded that some abilities geared towards making modules work with existing codebases, such as allowing the export of macros, don’t have to be supported directly by the language (they could be handled by compiler flags and such). The two groups agreed to produce a combined design paper for Kona.
In terms of a ship vehicle, the Google folks characterized modules as “the feature with the single greatest implementation impact so far”, and expressed a strong preference for going through a Technical Specification. This route effectively rules out modules being in C++17, though as a TS the feature is still likely to be available to users in the 2017 timeframe.
You may recall from my Urbana report that the outcome of the coroutines discussion there was that two flavours of coroutines, stackful and stackless (see the report for an explanation of the distinction) were sufficiently different and both sufficiently motivated by use cases that they deserved to be developed as independent proposals, with a small minority favouring trying to unify them.
Since Urbana there has been progress in all of these directions, with four papers coming back for consideration at this meeting: an iteration on the stackless proposal, an iteration on the stackful proposal, and two different attempts to unify the two approaches. EWG looked at two of these.
The stackless proposal, called “resumable functions” and championed by Microsoft, is the most mature one. It has already gone through numerous rounds of review in SG 1 (the Concurrency Study Group), and is close to the stage where standard wording for it can be written. Its discussion in EWG mostly concerned details such as what to call the traits and functions involved in the proposal (there was no consensus to change from the current coroutine_ prefix), whether deducing that a function is resumable from the presence of await expressions in its body, without annotating the declaration with a keyword like resumable, is implementable (implementers agreed that it was, as long as return statements in such a function were spelt differently), and whether yield is a reasonable keyword to standardize (consensus was that it was not, and so we’re going to get keywords prefixed with co-, such as coyield and coreturn, instead). Ultimately, the proposal author was given the go-ahead to write standard wording and go to CWG.
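For a taste of the model, here is a minimal sketch using the C++20 descendant of this proposal (the Lenexa-era papers spelt the keywords await and coreturn; C++20 settled on co_await and co_return, and the task type below is a bare-bones illustration):

#include <coroutine>
#include <iostream>

// A trivial coroutine return type: just enough promise machinery to compile.
struct task {
    struct promise_type {
        task get_return_object() { return {}; }
        std::suspend_never initial_suspend() { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
};

task resumable_hello()
{
    co_await std::suspend_never{};  // a trivial awaitable that never suspends
    std::cout << "resumed\n";
    co_return;                      // the "spelt differently" return
}

int main() { resumable_hello(); }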
The other proposal EWG looked at was one of the attempts to unify stackful and stackless coroutines, called resumable expressions. I presented this paper because the author, Chris Kohlhoff, couldn’t make it to the meeting and I was somewhat familiar with the topic as a result of corresponding with him. Unlike resumable functions, this proposal was in the early design stage. The premise was that you could “have your cake and eat it too” by leveraging the compiler’s ability to analyze your code to avoid annotating calls to resumable functions at every level the way you have to do with await (the weakness of stackless coroutines compared to stackful), while still only requiring the allocation of as much memory as you need (the advantage of stackless over stackful). The problem was that the compiler analysis can’t see past translation unit boundaries, thus still requiring annotations there. There were also concerns about the performance of cross-translation-unit calls compared to resumable functions; Chris was convinced that it was no slower than resumable functions, but unfortunately I didn’t have a sufficiently good comparative understanding of the implementation models to successfully argue this point. The final opinion on the proposal was divided: some people saw promise in it and wanted to see it developed further; others didn’t appreciate the fact that a competing proposal to resumable functions was brought up at such a late stage, risking the standardization of the latter.
You might ask how it even makes sense for resumable functions to be sent to CWG without resumable expressions being categorically rejected. The answer is twofold: first, it’s looking like resumable functions will target a Technical Specification rather than C++17, which means there’s room for competing proposals to be developed in parallel. Second, even if it were targeting the standard, it’s conceivable that multiple kinds of coroutines could co-exist in the language (certainly in Urbana the consensus was that stackful and stackless coroutines should coexist). In any case, Chris plans to attend the Kona meeting and presumably present an updated version of the resumable expressions proposal.
The other two papers (the stackful one and a different unification attempt) were only looked at briefly by SG 1, as the author (same person for both) wasn’t present.
The Arrays TS, which contains a language feature called “arrays of runtime bound” (ARBs) that’s essentially a toned-down version of C’s variable-length arrays (VLAs), and a library class dynarray for wrapping such a thing into a container interface, has been in limbo for the past year, as attempts to implement dynarray ran into difficulties, and proposals trying to replace it with something implementable got shot down one after the other.
At this meeting, EWG reviewed the status quo and addressed the question of what will happen to the Arrays TS going forward.
The status quo is this:
Given this state of affairs, the features currently in the Arrays TS are not going to be accepted in their current form; EWG recommended stripping the TS of its current contents, and waiting for a workable proposal to come along.
A promising direction for such a workable proposal is to have a “magic type” that acts like an ARB but knows its size and does not decay to a pointer (the implementable features that people wanted from a class wrapper). The type in question could only be used for a local variable, and the underlying ARB itself wouldn’t be exposed. Several people expressed an interest in collaborating on a proposal in this direction.
With all the exciting action in EWG, I didn’t have much of a chance to follow progress on the library side in any detail, but here’s what I’ve gathered during the plenary sessions.
Note that I listed the library features accepted into C++17 at this meeting in the “C++17” section above.
The following proposals were accepted into the second Library Fundamentals TS:

- make_array
The following proposals failed to gain consensus:
The list of proposals still under review is very long, but here are some highlights:

- A variant class; as one of the few “vocabulary types” still missing from the standard library, this is considered very important. A lot of the discussion centred around whether variant should have an empty state, and if not, how to deal with the scenario where, during assignment, the copy constructor of the right-hand-side object throws. boost::variant deals with this by incurring heap allocation, which is widely considered undesirable. I believe the prevailing consensus was to have an empty state, but only allow it to arise in this exceptional situation (pun intended), and to make accessing the variant in its empty state (other than assigning a new value to it) undefined behaviour; this way, ordinary code isn’t burdened with having to worry about or check for the empty state.

I split out ranges into its own section because I believe it deserves special mention.
As I described in my Urbana report, Eric Niebler came to that meeting with a detailed and well fleshed-out design for ranges in the standard library. It was reviewed very favourably, and Eric was asked to “conceptify” it – meaning express the concepts defined by the proposal using the features of the Concepts TS – and develop the result into a TS. This TS would form part of an “STLv2” refresh of the standard library which wouldn’t be subject to strict backwards-compatibility constraints with the current standard library.
Eric did not delay in doing so: he came back in Lenexa with a conceptified proposal written up as a draft TS. LEWG began a design review of this proposal, and made good progress on it; they hope to complete the review during a post-meeting teleconference and then forward the proposal to LWG.
Note that some Study Groups whose proposals have progressed to the main working groups and which don’t plan on looking at new proposals, are no longer considered active – for example, SG 2 (Modules) is in this category, as the modules proposal is now in front of EWG. I talk a bit about the ones still active below.
With the Parallelism TS published and the Concurrency TS sent out for its PDTS ballot, SG 1 has been working on the second iterations of both TS’s.
Here are some of the proposals being considered for Parallelism II:
And for Concurrency II:
There are numerous other proposals in front of the SG as well which don’t have a target ship vehicle yet.
SG 5 has accomplished commendable work by publishing the Transactional Memory TS, but they aren’t quite calling it quits! They plan to meet again in Kona to discuss proposals for possible future standardization in the area.
SG 6 met and looked at three specific proposals:
The first, as I described in the “Library” section above, was unsuccessfully proposed for acceptance into C++17.
The other two are being considered for a Numerics TS, along with a number of other proposals not specifically discussed at this meeting, such as unbounded-precision integer types, rational numbers, fixed-point arithmetic, decimal floating-point support, various utility functions (GCD/LCM, constexpr bitwise operations), random numbers (a couple of different proposals), and other topics for which there aren’t concrete proposals yet. (Obviously, not everything in this list will be in a single Numerics TS.)
Said Numerics TS is still in a relatively early stage; a working draft is not yet expected in Kona.
SG 7 had a busy evening session where they looked at a number of reflection-related proposals:

- One proposal included, among other things, syntax for reflecting function parameters along the lines of function_name.param().
- Another proposal’s use of operators (such as mirrored(entity) in the proposal) rather than templates (such as reflect<entity>, as the previously-discussed type property queries proposal did) opens the doors to reflecting more kinds of entities, such as typedefs (as distinct from their underlying types), uninstantiated templates, and namespaces, which SG 7 viewed as valuable. Interest in further development of the proposal was expressed.

Notably absent from the agenda was the latest version of the “type property queries” proposal, which had appeared to be the leading comprehensive reflection proposal in the past few meetings. The main reason it wasn’t presented was that the author couldn’t make it to the meeting, though one could also argue that SG 7 was already familiar with the overall design (the changes since the last version having been only minor), so time was better spent looking at proposals with alternative designs that still needed analysis.
SG 10 is continuing to maintain the list of feature testing macros and keep it up to date with new things like C++17 features.
They also have a proposal for some new preprocessor features to aid feature detection: a __has_include() predicate for testing for the presence of an include, and a __has_cpp_attribute() predicate for testing for support for a particular attribute.
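A sketch of how these predicates are used in practice (both were eventually standardized; guarding with #ifdef keeps the code portable to compilers that lack them, and MY_DEPRECATED is an illustrative name):

// Include a header only when the predicate and the header are available:
#ifdef __has_include
#  if __has_include(<optional>)
#    include <optional>
#  endif
#endif

// Apply an attribute only when the compiler supports it:
#ifdef __has_cpp_attribute
#  if __has_cpp_attribute(deprecated)
#    define MY_DEPRECATED [[deprecated]]
#  endif
#endif
#ifndef MY_DEPRECATED
#  define MY_DEPRECATED
#endif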
SG 12 is still active, but did not meet this week as its members were busy advancing other proposals.
SG 13 did not meet this week; the author of the 2D graphics proposal plans to complete standard wording for it for Kona.
C++ aims to be a “zero-overhead abstraction” language, where you don’t pay for what you don’t use. It does a formidable job at this, but for some communities of C++ users, it could do an even better job.
The big sticking points are exceptions and RTTI, two language features for which you pay a bit even if you don’t use them. Projects concerned about this overhead commonly use compiler flags like -fno-exceptions and -fno-rtti, but the committee views these as nonconforming and doesn’t give them a lot of consideration. As a result, for example, a lot of standard library features require exceptions, and are therefore unusable in these projects.
There is also desire for more guarantees from the standard library, such as “empty containers never allocate” or “strings employ the small-string optimization”.
EWG looked at a wishlist containing the above and more, and recommended creating a new Study Group to explore the area further.
The new Study Group, SG 14, will tentatively be called “Game Development & Low-Latency Applications”, because these are the most impacted communities, and the communities from which the committee hopes to get significant input.
There is a tentative plan for SG 14 to meet independently of the committee at CppCon and GDC, the idea being that members of the game development community are more likely to be able to make it to those events than to committee meetings.
The next meeting of the Committee will be in Kona, Hawaii, the week of October 19th, 2015.
On the whole, this was a very exciting meeting! My highlights:
Things are only going to get more interesting as C++17 starts to take more concrete shape, and more Technical Specifications cross the finish line. Stay tuned!
https://botondballo.wordpress.com/2015/06/05/trip-report-c-standards-meeting-in-lenexa-may-2015/
|
Daniel Stenberg: I lead the curl project and this is how it works |
I did this 50 minute talk on May 21 2015 for a Swedish company. With tongue in cheek subtitled “from hobby to world domination”. I think it turned out pretty decent and covers what the project is, how we work on it and what I do to make it run. Some of the questions are not easy to hear but in general it works out fine. Enjoy!
http://daniel.haxx.se/blog/2015/06/05/i-lead-the-curl-project-and-this-is-how-it-works/
|
David Rajchenbach Teller: Re-dreaming Firefox (3): Identities |
Gerv’s recent post on the Jeeves Test got me thinking of the Firefox of my dreams. So I decided to write down a few ideas on how I would like to experience the web. Today: Identities. Let me emphasise that the features described in this blog post do not exist.
Sacha has a Facebook account, plus two Gmail accounts and one Microsoft Live identity. Sacha is also present on Twitter, both with a personal account, and as the current owner of his company’s account. Sacha also has an account on his bank, another one on Paypal, and one on Amazon. With any browser other than Firefox, Sacha’s online life would be a bit complicated.
For one thing, Sacha is logged to several of these accounts most of the time. Sacha has been told that this makes him easy to track, not just when he’s on Facebook, but also when he visits blogs, or news sites, or even shopping sites, but really, who has time to log off from any account? With any other browser, or with an older version of Firefox, Sacha would have no online privacy. Fortunately, Sacha is using Firefox, which has grown pretty good at handling identities.
Indeed, Firefox knows the difference between Facebook’s (and Google’s, etc.) main sites, for which Sacha may need to be logged, and the tracking devices installed on other sites through ads, or through the Like button (and Google +1, etc.), which are pure nuisances. So, even when Sacha is logged on Facebook, his identity remains hidden from the tracking devices. To put it differently, Sacha is logged to Facebook only on Facebook tabs, and only while he’s using Facebook in these tabs. And since Sacha has two GMail accounts, his logging on each account doesn’t interact with the other account. This feature is good not only for privacy, but also for security, as it considerably mitigates the danger of Cross-Site Scripting attacks. Conversely, if a third-party website uses Facebook as an identity provider, Firefox can detect this automatically, and handle the log-in.
Privacy doesn’t stop there. Firefox has a database of Terms of Service for most websites. Whenever Firefox detects that Sacha is entering his e-mail address, or his phone number, or his physical address, Firefox can tell Sacha if he’s signing up for spam or telemarketing – and take measures to avoid it. If Sacha is signing up for spam, Firefox can automatically create an e-mail alias specific to this website, valid either for a few days, or forever. If Sacha has a provider of phone aliases, Firefox can similarly create a phone alias specific to the website, valid either for a few days, or forever. Similarly, if Sacha’s bank offers temporary credit card numbers, Firefox can automatically create a single-transaction credit card number.
Firefox offers an Identity Panel (if we release this feature, it will, of course, be called Persona) that lets Sacha find out exactly which site is linked to which identity, and grant or revoke authorizations to log-in automatically when visiting such sites, as well as log in or out from a single place. In effect, this behaves as a Internet-wide Single Sign On across identities. With a little help, Firefox can even be taught about lesser known identity providers, such as Sacha’s company’s Single Sign On, and handle them from the same panel. That Identity Panel also keeps track of e-mail aliases, and can be used to revoke spam- and telemarketing-inducing aliases in just two clicks.
Also, security has improved a lot. Firefox can automatically generate strong passwords – it even has a database of sites which accept passphrases, or are restricted to 8 characters, etc. Firefox can also detect when Sacha uses the same password on two unrelated sites, and explain to him why this is a bad idea. Since Firefox can safely and securely share passwords with other devices and back them up into the cloud, or to encrypted QR Codes that Sacha can safely keep in his wallet, Sacha doesn’t even need to see passwords. Since Firefox handles the passwords, it can download every day a list of websites that are known to have been hacked, and use it to change passwords semi-automatically if necessary.
Security doesn’t stop there. The Identity Panel knows not only about passwords and identity providers, but also about the kind of information that Sacha has provided to each website. This includes Sacha’s e-mail address and physical address, Sacha’s phone number, and also Sacha’s credit card number. So when Firefox finds out that a website to which Sacha subscribes has been hacked, Sacha is informed immediately of the risks. This extends to less material information, such as Sacha’s personal blog of vacation pictures, which Sacha needs to check immediately to find out whether they have been defaced.
I would like to browse with this Firefox. Would you?
https://dutherenverseauborddelatable.wordpress.com/2015/06/05/re-dreaming-firefox-3-identities/
|
Hannah Kane: Medium-term roadmap |
Earlier this week, I wrote about the short-term roadmap for teach.mozilla.org. Now I’d like to share a few details about what we envision a little farther out (Q3 and into Q4).
Badges
The bulk of the work here will be improving the user experience for both badge applicants and badge reviewers. We’ll also be rolling out some new badges that are aligned with our programmatic plans, and will recognize the key volunteer roles we’ve identified (i.e. Regional Coordinators and Club Captains).
Directory
I’m really excited about this project because it will transform the site from simply being a place to find resources to a community, and because we’ll be able to offer more customized experiences for users once we know more about them. The Mozilla Learning Network Directory will include rich mentor profiles and group pages (where “groups” include Clubs, Hives, and organizations), as well as the ability to search and browse. The initial build will also include a full integration of Discourse. (We’re drawing heavily on the Hive Directory for inspiration.)
Curriculum functionality
It’s been a long time coming, but soon we’ll begin designing a more permanent solution for making our curriculum content dynamic. This will include adding basic user interactions (“Likes,” ratings, comments), as well as dynamically facilitating the creation and display of remixes and translations. We’ll likely also have a tool for users to create and share their own playlists, and to submit curriculum for consideration.
Ongoing iteration on the engagement flow
We’ll continue to learn what works in terms of connecting people quickly to what they need, and we’ll likely continue to make changes as a result of those learnings. Our engagement strategy will get some serious power behind it as we move forward with the email tooling project that’s happening in parallel.
Thimble!
Finally, our team at CDOT are actively working on making improvements to Thimble, our open source code editor for teachers and learners. We wrote about those improvements a few weeks ago.
|
Air Mozilla: German speaking community bi-weekly meeting |
Zweiwöchentliches Meeting der deutschsprachigen Community. ==== German speaking community bi-weekly meeting.
https://air.mozilla.org/german-speaking-community-bi-weekly-meeting-20150604/
|
Ben Hearsum: Buildbot <-> Taskcluster Bridge Now in Production |
A few weeks ago I gave a brief overview of the Buildbot <-> Taskcluster Bridge that we've been developing, and Selena provided some additional details about it yesterday. Today I'm happy to announce that it is ready to take on production work. As more and more jobs from our CI infrastructure move to Taskcluster, the Bridge will coordinate between them and jobs that must remain in Buildbot for the time being.
The Bridge itself is feature complete until our requirements change (though there's a couple of minor bugs that would be nice to fix), but most of the Buildbot Schedulers still need to be replaced with Task Graphs. Some of this work will be done at the same time as porting specific build or test jobs to run natively in Taskcluster, but it doesn't have to be. I made a proof of concept on how to integrate selected Buildbot builds into the existing "taskcluster-graph" command and disable the Buildbot schedulers that it replaces. With a bit more work this could be extended to schedule all of the Buildbot builds for a branch, which would make porting specific jobs simpler. If you'd like to help out with this, let me know!
http://hearsum.ca/blog/buildbot-taskcluster-bridge-now-in-production.html
|
Gregory Szorc: Changeset Metadata on hg.mozilla.org |
Just a few minutes ago, I deployed some updates to hg.mozilla.org to display more metadata on changeset pages. See 4b69a62d1905, dc4023d54436, and b617a57d6bf1 for examples of what's shown.
We currently display:
If you notice any issues or have requests for new features, please file a bug.
This work is built on top of a feature I added to Mercurial 3.4 to make it easier to inject extra data into Mercurial's web templates. We just deployed Mercurial 3.4.1 to hg.mozilla.org yesterday. It's worth noting that this deployment happened in the middle of the day with no user-perceived downtime. This is a far cry from where we were a year ago, when any server change required a maintenance window. We've invested a lot of work into a test suite for this service so we can continuously deploy without fear of breaking things. Moving fast feels so good.
http://gregoryszorc.com/blog/2015/06/04/changeset-metadata-on-hg.mozilla.org
|
David Rajchenbach Teller: Re-dreaming Firefox (2): Beyond Bookmarks |
Gerv’s recent post on the Jeeves Test got me thinking of the Firefox of my dreams. So I decided to write down a few ideas on how I would like to experience the web. Today: Beyond Bookmarks. Let me emphasize that the features described in this blog post do not exist.
« Look, here is an interesting website. I want to read that content (or watch that video, or play that game), just not immediately. » So, what am I going to do to remember that I wish to read it later:
Wow, that’s 9 ways of fulfilling the same task. Having so many ways of doing the same thing is not a very good sign, so let’s see if we can find a way to unify a few of these abstractions into something more generic and powerful.
What are the differences between Bookmarking and Saving?
Merging these two features is actually quite easy. Let’s introduce a new button, the Awesome Bookmarks which will serve as a replacement for both the Bookmark button and Save As.
By switching to Awesome Bookmarks, we have merged Saving, Bookmarking and the Read it Later list of Pocket. Actually, since Firefox already offers Sync and Social Sharing, we have just merged all the features of Pocket.
So we have collapsed several items from our list into one.
What are the differences between Bookmarks and History?
Let’s keep doing almost that, but without segregating the views. Let us introduce a new view, the Awesome Pages, which will serve as a replacement for both Bookmarks Menu and the History Menu.
This view shows a grid of thumbnails of visited pages, iOS/Android/Firefox OS style.
By default, clicking on an Awesome Bookmark (or history entry, or suggested page, etc.) for a page that is already opened switches to that page. Non-bookmarked pages can be turned into Awesome Bookmarks trivially, by starring them or putting them into folders.
An Awesome Bar at the top of this Awesome Pages lets users quickly search for pages and folders. This is the same Awesome Bar that is already at the top of tabs in today’s Firefox, just with the full-screen Awesome Pages replacing the current drop-down menu.
Oh, and by the way, this Awesome Pages is actually our new New Tab page.
By switching to the Awesome Pages, we have merged:
What are the differences between Bookmarks and Tabs?
That’s not much of a difference, is it?
So let’s make a few more changes to our UX:
So, with a little imagination (and, I’ll admit, a little hand-waving), we have merged tabs and bookmarks. Interestingly, we have done that by moving to an Apps-like model, in which whether an application is loaded or not is for the OS to decide, rather than the user.
By the way, what are the differences between Tabs and Open Web Apps?
Well, if we decide that Apps are just Bookmarks, since Bookmarks have been made visible to the OS in section 1., and since Bookmarks have just been merged with Tabs which have just been made killable by the browser, we have our Apps model.
We have just removed three more items from our list.
We are down to one higher-level abstraction (the Awesome Bookmark) and one view of it (the Awesome Page). Of course, if this is eventually released, we are certainly going to call both Persona.
This new Firefox is quite different from today’s Firefox. Actually, it looks much more like Firefox OS, which may be a good thing. While I realize that many of the details are handwavy (e.g. how do you open the same page twice simultaneously?), I believe that someone smarter than me can do great things with this preliminary exploration.
I would like to try that Firefox. Would you?
https://dutherenverseauborddelatable.wordpress.com/2015/06/03/re-dreaming-firefox/
|
Mark Surman: The essence of web literacy |
Read. Write. Participate. These words are at the heart of our emerging vision for Mozilla Learning (aka Academy). Whether you’re a first time smartphone user, a budding tech educator or an experienced programmer, the degree to which you can read, write and participate in the digital world shapes what you can imagine — and what you can do. These three capabilities are the essence of Mozilla’s definition of web literacy.
As we began thinking more about Mozilla Learning over the past month, we started to conclude that this read | write | participate combination should be the first principle behind our work. If a project doesn’t tie back to these capabilities, it should not be part of our plan. Or, put positively, everything we do should get people sharing know-how and building things on the web in a way that helps them hone their read | write | participate mojo.
Many existing Mozilla projects already fit this criteria. Our SmartOn series helps people get basic knowledge on topics like privacy. Mozilla Clubs brings together people who want to teach and learn core web literacy skills. And projects like OpenNews bring together skill developers who are honing their skills in open source and collaboration while building the next wave of news on the web. These projects may seem disparate at first, but they all help people learn, hone and wield the ability to read, write and participate on the web.
If we want to insert this minimalist version of web literacy into the heart of our work, we’ll need to define our terms and pressure test our thinking. My working definition of these concepts is:
On the idea of pressure testing our framework: the main question we’ve asked so far is ‘are these concepts helpful if we’re talking about people across a wide variety of skill levels?’ Does a first time smartphone user really need to know how to read, write and participate? Does a master coder still have skills to hone in these areas? And skills to share? Also, how does our existing basic web literacy grid hold up to these questions?
Laura de Reynal and I have been running different versions of this pressure test with people we work with over the last month or so. Laura has been talking to young people and first time smartphone users. I’ve been talking to people like Shuttleworth Fellows and participants at eLearning Africa who are emerging leaders in various flavours of ‘open + tech’. Roughly, we asked each of them to list a thing they know how to do or want to know how to do in each of the read | write | participate areas. In most cases, people understood our question with little explanation and got excited about what they knew and what they could learn. Many also expressed a pride and willingness to share what they know. By this quick and dirty measure, read | write | participate passed the test of being applicable to people with a wide variety of skills and sophistication.
One notable result from the groups I talked to: they all encouraged Mozilla to be incredibly opinionated about ‘what kind of reading, writing and participating’ matters most. In particular, a number of them stressed that we could do a lot of good in the world by helping people learn and hone the sort of ‘working in the open’ participation skills that we practice every day. Backing this up, evaluation research we’ve done recently shows that the educators in the Hive and fellows in Open News really value this aspect of being part of the Mozilla community. It could be that we want to formalize our work on this and make it a focal point within our Mozilla Learning strategy.
Building on our work from the last few years, there is a lot more to dig into on web literacy and how it fits into our plans. However, I wanted to get this post up to put a stake in the ground early to establish read | write | participate as the north star to which all Mozilla Learning efforts must align. Being clear about that makes it easier to have discussions about what we should and shouldn’t be doing going forward.
As a next step to dig deeper, Chris Lawrence has agreed to form a web literacy working group. This group will go back into the deeper work we’ve done on the web literacy map, tying that map into read | write | participate and also looking at other frameworks for things like 21st century skills. It should form in the next couple of weeks. Once it has, you’ll be able to track it and participate from the Mozilla Learning planning wiki.
https://commonspace.wordpress.com/2015/06/03/the-essence-of-web-literacy/
|
Benjamin Kerensa: Don’t Celebrate USA Freedom Act Passage |
Mozilla recently announced its support for the USA Freedom Act alongside allies like the EFF, but the EFF also ended up withdrawing its support because of deficiencies in the legislation and a recent opinion from an appeals court.
I think Mozilla should have withdrawn its support for this still-flawed bill because, while it did push forward some important reforms, it also extended flawed sections of the law that infringe on individuals’ civil liberties, such as the Section 206 “roving wiretap” authority. This program essentially allows the FBI access to any phone line, mobile communications or even internet connections a suspect may be using, without ever having to provide a name to anyone. This is clearly not good legislation because it allows overreach and lacks a requirement that the communications or accounts being tapped are tied to the subject. While this is just one example, there are many other provisions that allow intelligence and law enforcement agencies to continue their spying, just not as broadly as before.
What we need is smarter legislation that allows law enforcement and intelligence agencies to do their work without infringing on the privacy or civil liberties of everyday Americans, you know, like back when domestic spying was entirely illegal.
Wikipedia does a great job of documenting some good information about the USA Freedom Act and I would encourage folks to check out the article and research this piece of legislation more. Remember this bill only passed because it had concessions for Pro-Intelligence legislators, the same folks who created the Patriot Act and opened up spying on Americans in the first place.
I think Mozilla could have done better by withdrawing its support, and it is good to see that while the EFF is celebrating some parts of the USA Freedom Act, it is also mourning some of the concessions.
|
Armen Zambrano: mozci 0.7.2 - Support b2g jobs that still run on Buildbot |
|
Armen Zambrano: mozci 0.7.1 - regression fix - do not look for files for running jobs |
|
Air Mozilla: Product Coordination Meeting |
Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...
https://air.mozilla.org/product-coordination-meeting-20150603/
|
Air Mozilla: The Joy of Coding (mconley livehacks on Firefox) - Episode 17 |
Watch mconley livehack on Firefox Desktop bugs!
https://air.mozilla.org/the-joy-of-coding-mconley-livehacks-on-firefox-episode-17/
|
Selena Deckelmann: TaskCluster migration: about the Buildbot Bridge |
Back on May 7, Ben Hearsum gave a short talk about an important piece of technology supporting our transition to TaskCluster, the Buildbot Bridge. A recording is available.
I took some detailed notes to spread the word about how this work is enabling a great deal of important Q3 work like the Release Promotion project. Basically, the bridge allows us to separate out work that Buildbot currently runs in a somewhat monolithic way into TaskGraphs and Tasks that can be scheduled separately and independently. This decoupling is a powerful enabler for future work.
Of course, you might argue that we could perform this decoupling in Buildbot.
However, moving to TaskCluster means adopting a modern, distributed, queue-based approach to managing incoming jobs. We will be freed of the performance tradeoffs and careful attention required when using relational databases for queue management (Buildbot uses MySQL for its queues; TaskCluster uses RabbitMQ and Azure). We will also be moving “decision tasks” in-tree, meaning that they will be closer to developer environments, which should make it easier to keep developer and build-system environments in sync.
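To make the queue-based model concrete, here is a minimal Python sketch of consuming jobs from an AMQP broker with the pika client (Pulse is built on AMQP). The broker URL, queue name, and handler are invented for illustration; this is not Pulse or BBB code, just the general shape of the approach, where the broker pushes messages instead of workers polling a database table:

    import json
    import pika

    def handle(channel, method, properties, body):
        # The broker delivers each message to us; no SQL polling loop needed.
        event = json.loads(body)
        print("got event:", event)  # real code would schedule a task/build here
        channel.basic_ack(delivery_tag=method.delivery_tag)

    connection = pika.BlockingConnection(
        pika.URLParameters("amqp://guest:guest@localhost:5672/"))  # made-up broker
    channel = connection.channel()
    channel.queue_declare(queue="demo-jobs")  # hypothetical queue name
    channel.basic_consume(queue="demo-jobs", on_message_callback=handle)
    channel.start_consuming()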
Here are my notes:
BBB acts as a TC worker and provisioner and delegates the actual work to Buildbot. As far as TC is concerned, BBB is doing all this work, not Buildbot itself; TC knows nothing about Buildbot.
There are three services: the TCListener, the BBListener, and the Reflector.
BBB has a small database that associates build requests with TC taskids and runids.
BBB is designed to be multihomed. It is currently deployed, though not yet running, on three Buildbot masters, so we can lose an AWS region and the bridge will still function. It consumes from Pulse. The system depends on Pulse, SchedulerDB, and Self-serve (in addition to a Buildbot master and TaskCluster).
The TCListener reacts to events coming from the TC Pulse exchanges. It creates build requests in response to tasks becoming “pending”: when someone pushes to mozilla-central, BBB inserts BuildRequests into the BB SchedulerDB, and the pending jobs appear in BB. BBB cancels BuildRequests as well; this can happen from timeouts or from someone explicitly cancelling in TC.
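A rough Python sketch of that flow, with entirely hypothetical names (scheduler_db, bridge_db, and the event fields are my inventions, not the real BBB code):

    # Hypothetical handler for a TaskCluster "task-pending" event.
    def on_task_pending(event, scheduler_db, bridge_db):
        task_id, run_id = event["taskId"], event["runId"]
        if bridge_db.lookup(task_id) is not None:
            # A rerun of a task we already track: record the new runid only,
            # so we don't create a duplicate BuildRequest (see the retry
            # scenario in the notes below).
            bridge_db.update_run(task_id, run_id)
            return
        # First time we see this task: create a pending BuildRequest in
        # Buildbot's SchedulerDB and remember the association.
        request_id = scheduler_db.insert_build_request(event["buildername"])
        bridge_db.associate(request_id, task_id, run_id)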
The BBListener responds to events coming from the BB Pulse exchanges. It claims a Task when the corresponding build starts, attaches Buildbot properties (the buildslave name and other metadata) to Tasks as artifacts, and resolves those Tasks when the builds finish. Buildbot and TC don’t have a 1:1 mapping between BB statuses and TC resolutions, and the mapping also needs to coordinate with Treeherder colors; a short discussion happened about implementing these colors in an artifact rather than inferring them from return codes or statuses inherent to BB or TC.
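To illustrate that mismatch, here is one possible mapping sketched in Python. The result codes are Buildbot’s standard ones, but the mapping itself is illustrative, not necessarily what BBB actually does:

    from enum import IntEnum

    class BuildbotResult(IntEnum):
        # Buildbot's standard build result codes
        SUCCESS = 0
        WARNINGS = 1
        FAILURE = 2
        SKIPPED = 3
        EXCEPTION = 4
        RETRY = 5

    def tc_resolution(result):
        """Pick a TaskCluster resolution for a finished Buildbot build."""
        if result in (BuildbotResult.SUCCESS, BuildbotResult.WARNINGS):
            return "completed"
        if result in (BuildbotResult.EXCEPTION, BuildbotResult.RETRY):
            # For RETRY, the bridge also reruns the task
            # (see the scenarios below).
            return "exception"
        return "failed"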
A successful build: a Task is created. The Task in TC is pending; nothing exists yet in BB. The TCListener picks up the event and creates a BuildRequest (pending).
BB creates a Build. The BBListener receives the buildstarted event and claims the Task.
The Reflector reclaims the Task while the Build is running.
The Build completes successfully. The BBListener receives the log-uploaded event (build finished) and reports success in TaskCluster.
A retried build (a 500 from hg is a common reason to retry) starts the same way, through the Reflector step, and then:
BB fails and the build is marked as RETRY. The BBListener receives the log-uploaded event, reports an exception to TaskCluster, and calls rerun on the Task.
BB has already started a new Build. The TCListener receives the task-pending event and updates the runid; it does not create a new BuildRequest.
The Build completes successfully. The BBListener receives the log-uploaded event and reports success to TaskCluster.
A cancelled task: a Task is created. The TCListener receives the task-pending event and creates a BuildRequest. Then nothing happens; the Task goes past its deadline and TaskCluster cancels it. The TCListener receives the task-exception event and cancels the BuildRequest through Self-serve.
QUESTIONS:
On TH, if someone requests a rebuild twice, what happens? There is no retry/rerun; we duplicate the subgraph, so wherever we retrigger, you get everything below it and end up with duplicates. Retries and rebuilds are separate: rebuilds are triggered by humans, while retries are internal to BB. TC doesn’t have a concept of retries.
How do we avoid duplicate reporting? TC will be considered the source of truth in the future; it is unclear what happens in the interim. Maybe TH can ignore duplicates, since the builder names will be the same.
Replacing the scheduler: what does that mean, exactly?
Next steps might be release scheduling tasks, rather than merging into central. Someone else might be able to work on other CI tasks in parallel.
http://www.chesnok.com/daily/2015/06/03/taskcluster-migration-about-the-buildbot-bridge/
|
Daniel Glazman: In praise of Rick Boykin and his Bulldozer editor |
Twenty years ago this year, Rick Boykin started a side project while working at NASA. That project, presented a few months later as a poster session at the 4th International Web Conference in Boston (look in section II. Infrastructure), was Bulldozer, one of the first WYSIWYG editors natively made for the Web. I still remember his poster session as the most surprising and amazing short demo of the conference. His work on Bulldozer was a masterpiece, and I sincerely regretted that he stopped working on it, or so it seemed, when he left NASA the next year.
I thanked you twenty years ago, Rick, and let me thank you again today. Happy 20th birthday, Bulldozer. You paved the way and I remember you.
|
Daniel Stenberg: daniel weekly |
My series of weekly videos, called daniel weekly for lack of a better name, reached episode 35 today. I’m celebrating this fact by also adding an RSS feed for those of you who prefer to listen to me in an audio-only version.
As an avid podcast listener myself, I can certainly see how this will be a better fit to some. Most of these videos are just me talking anyway so losing the visual shouldn’t be much of a problem.
I talk about what I work on in my open source projects, which means a lot of curl stuff and occasional bits from my work on Firefox for Mozilla. I also tend to mention events I attend and HTTP/networking developments that grab my attention. Lots of HTTP/2 talk, for example. I only ever express my own personal opinions.
It is generally an extremely geeky and technical video series.
Every week I mention a (curl) “bug of the week” that allows me to joke or rant about the bug in question or just mention what it is about. In episode 31 I started my “command line options of the week” series, in which I explain one or a few curl command line options in some detail. There are over 170 options, so the series is bound to continue for a while. I’ve explained ten options so far.
I’ve set a limit for myself and make an effort to keep the episodes shorter than 20 minutes. I haven’t succeeded every time.
The 35 episodes have been viewed over 17,000 times in total. Episode two is the most watched individual one with almost 1,500 views.
Right now, my channel has 190 subscribers.
The top-3 countries that watch my videos: USA, Sweden and UK.
Share of viewers that are female: 3.7%
|
Joel Maher: re-triggering for a [root] cause- version 2.0 – a single bug! |
Today the Orange Factor email came out: the top 10 bugs with stars on them. :) Last week we had no bugs that we could take action on, and the week before we had a few.
This week I looked at each bug again and annotated them with some notes on what I did or why I didn’t do anything. Here are the bugs:
It is nice to find a bug that we can take action on. What is interesting is that the bug has been around for a while, but around May 21 we noticed that the rate of failures went up from a couple a day to >5/day. Details:
I might try a full experiment soon, picking bugs blindly instead of using Orange Factor.
https://elvis314.wordpress.com/2015/06/03/re-triggering-for-a-root-cause-version-2-0-a-single-bug/
|
Mozilla Reps Community: Reps Weekly Call – May 28th 2015 |
Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.
Shoutouts to Dorothee Danedjo Fouba, Mrz, Yofie, Ioana and all the Balkans communities for being awesome!
Greg joined the call to talk about an important project called Shape of the Web, an online platform to share and assess the attributes we believe are necessary for the open web. It represents Mozilla’s view of the world and aims to help people understand what’s at stake.
The project’s main goals are:
Feel free to try the project at http://shapeoftheweb.mozilla.org/ and use it in your upcoming Maker Party.
Ram reported on the first-ever regional Marketplace community building event, which was organized in Hyderabad. It was an overnight event with around 25 participants and volunteers who worked on 15 bugs, submitted 11 pull requests, and on-boarded 15 new long-time contributors.
You can read more in Mozilla India’s blog post and in gurumukhi’s blog post.
Boris is looking for help for porting CyanogenMod, TWRP and ClockworkMod to the Flame reference device, please contact him if you are interested.
Don’t forget to comment about this call on Discourse and we hope to see you next week!
https://blog.mozilla.org/mozillareps/2015/06/03/reps-weekly-call-may-28th-2015/
|