Gervase Markham: Not That Secret, Actually… |
(Try searching Google Maps for “Secret Location”… there’s one in Norway, one in Toronto, and two in Vancouver!)
http://feedproxy.google.com/~r/HackingForChrist/~3/daAil5n6vec/
|
Karl Dubost: Fix Your Flexbox Web site |
Web compatibility issues take many forms. Some are really hard to solve, and there are sound business reasons behind them. On the other hand, some Web compatibility issues are really easy to fix, with the benefit of giving the Web site a bigger potential market share. CSS Flexbox is one of those. I have written about it in the past. Let's make another practical demonstration of how to fix some of the flexbox issues.
Spoiler alert: This is the final result before and after fixing the CSS.
How did we do it? Someone had reported that the layout was broken on hao123.com in Firefox OS (mobile). Two things are happening here. First of all, because Hao123 was not sending the mobile version to Firefox OS, we relied on user-agent overriding. By faking the Firefox Android user agent, we got access to the mobile version. Unfortunately, this version is partly tailored for -webkit- CSS properties.
Inspecting the stylesheets with the developer tools, we can easily discover the culprit.
-> grep -i "display:-webkit" hao123-old.css
display:-webkit-box;
display:-webkit-box;
display:-webkit-box
display:-webkit-box;
display:-webkit-box;
display:-webkit-box
display:-webkit-box;
-> grep -i "flex" hao123-old.css
-webkit-box-flex:1;
So I decided to fix it by adding display:flex; next to each display:-webkit-box;, and flex-grow: 1; next to each -webkit-box-flex:1;.
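As a sketch of what that kind of fix looks like in a stylesheet (the selectors here are invented for illustration; the standard property goes after the prefixed one, so older WebKit browsers keep using the prefix while everyone else gets standard flexbox):

```css
/* Before: only the old prefixed WebKit flexbox syntax */
.nav-row {
    display: -webkit-box;
}
.nav-item {
    -webkit-box-flex: 1;
}

/* After: standard flexbox added alongside, so Gecko (and any
   engine without -webkit- support) lays the page out correctly */
.nav-row {
    display: -webkit-box;
    display: flex;
}
.nav-item {
    -webkit-box-flex: 1;
    flex-grow: 1;
}
```

Because later declarations win, browsers that understand both syntaxes end up on the standard one.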
The amazing thing about this kind of fix is that the site dynamically fixes itself in your viewport as you go. You are literally sculpting the page. And if the company asks why they should bother? Because for something that takes around 10 minutes to fix, they suddenly get a much bigger coverage of devices… which means users… which means market share.
I started a repository to help people fix the most common Web compatibility issues on their own Web sites. Contributions to the project are more than welcome.
Otsukare.
|
Hannah Kane: How to Mofo |
OpenMatt and I have been talking about the various ways of working at Mofo, and we compiled this list of what we think works best. What do y’all Mofos think?
When starting a new project:
Communication:
Doing the do:
Update: To hack on the next version of this, please visit http://workopen.org/mofo (thanks to Doug for the suggestion!)
|
Roberto A. Vitillo: Clustering Firefox hangs |
Jim Chen recently implemented a system to collect stacktraces of threads running some code for more than 500ms. A summary of the aggregated data is displayed in a nice dashboard in which the top N aggregated stacks are shown according to different filters.
I have looked at a different way to group the frames that would help us identify the culprits of main-thread hangs, aka jank. The problem with aggregating stack frames and looking at the top N is that there is a very long tail of stacks that are not considered; important patterns could very well be lurking in that tail.
So I tried different clustering techniques until I settled on the very simple solution of aggregating the traces by their last frame. Why the last frame? When I used k-means to cluster the traces, I noticed that for many of the more interesting clusters the algorithm found, most stacks had the last frame in common, e.g.:
Aggregating by the last frame yields clusters that are big enough to be considered interesting in terms of number of stacktraces and are likely to explain the most common issues our users experience.
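A rough sketch of this last-frame aggregation, with invented frame names and data shapes (the real analysis runs over Telemetry stack traces, not this toy list):

```python
from collections import defaultdict

# Hypothetical hang reports: each is a stack trace ordered from the
# outermost caller down to the frame that was executing when the hang
# was sampled. The frame names are made up for illustration.
hangs = [
    ["main", "EventLoop", "PluginInit"],
    ["main", "RefreshTick", "PluginInit"],
    ["main", "EventLoop", "CycleCollect"],
]

def cluster_by_last_frame(stacks):
    """Group stack traces by their innermost (last) frame."""
    clusters = defaultdict(list)
    for stack in stacks:
        clusters[stack[-1]].append(stack)
    return clusters

clusters = cluster_by_last_frame(hangs)

# Rank clusters by size, largest first, analogous to the post's
# "top N offending main-thread frames" list.
ranking = sorted(clusters.items(), key=lambda kv: len(kv[1]), reverse=True)
for frame, traces in ranking:
    print(frame, len(traces))
```

The point of the grouping is that every trace lands in exactly one cluster, so the long tail of rare full stacks still contributes to the counts of its last frame.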
Currently on Aurora, the top 10 meaningful offending main-thread frames are in order of importance:
Even without showing sample stacks for each cluster, there is some useful information here. The elephants in the room are clearly plugins; or should I say Flash? But just how much do “plugins” hurt our responsiveness? In total, plugin related traces account for about 15% of all hangs. It also seems that the median duration of a plugin hang is not different from a non-plugin one, i.e. between 1 and 2 seconds.
But just how often does a hang occur during a session? Let’s have a look:
The median number of hangs for a session amounts to 3; the mean is not that interesting as there are big outliers that skew the data. Also note that the median duration of a session is about 13 minutes.
As one would expect, the median number of hangs increases as the duration of a session does:
The analysis was run on a week's worth of data from Aurora (over 50M stack frames), and I got similar results when re-running it on previous weeks, so those numbers seem to be pretty stable.
There is some work in progress to improve the status quo. Aaron Klotz's formidable async plugin initialization is going to eliminate trace 4, and he might tackle frame 8 in the future. Furthermore, a recent improvement in cycle collection is hopefully going to reduce the impact of frame 2.
http://robertovitillo.com/2014/11/25/clustering-firefox-hangs/
|
Mozilla Fundraising: Official Mozilla Gear Is Now Open for Business |
https://fundraising.mozilla.org/official-mozilla-gear-is-now-open-for-business/
|
Mozilla Thunderbird: Thunderbird Reorganizes at 2014 Toronto Summit |
In October 2014, 22 active contributors to Thunderbird gathered at the Mozilla office in Toronto to discuss the status of Thunderbird, and plan for the future.
As background, Mitchell Baker, Chair of the Mozilla Foundation, posted in July 2012 that Mozilla would significantly reduce paid staff dedicated to Thunderbird, and asked community volunteers to move Thunderbird forward. Mozilla at that time committed several paid staff to maintain Thunderbird, each working part-time on Thunderbird but with a main commitment to other Mozilla projects. The staff commitment in total was approximately one full-time equivalent.
Over the last two years, those individuals had slowly reduced their commitment to Thunderbird, yet the formal leadership of Thunderbird remained with these staff. By 2014 Thunderbird had reached the point where nobody was effectively in charge, and it was difficult to make important decisions. By gathering the key active contributors in one place, we were able to make real decisions, plan our future governance, and move to complete the transition from being staff-led to community-led.
At the Summit, we made a number of key decisions:
There is a lot of new energy in Thunderbird since the Summit, a number of people are stepping forward to take on some critical roles, and we are looking forward to a great next release. More help is always welcome though!
https://blog.mozilla.org/thunderbird/2014/11/thunderbird-reorganizes-at-2014-toronto-summit/
|
Brian R. Bondy: Automated end to end testing at Khan Academy using Gecko |
Developers at Khan Academy are responsible for shipping new stuff they create to khanacademy.org as it's ready.
As a whole, the site is deployed several times per day. Testing deploys of khanacademy.org can take up a lot of time.
We have tons of JavaScript and Python unit tests, but they do not catch various errors that can only happen on the live site, such as Content Security Policy (CSP) errors.
We recently deployed a new testing environment for end to end testing which will result in safer deploys. End to end testing is not meant to replace manual testing at deploy time completely, but over time, it will reduce the amount of time taken for manual testing.
The end to end tests catch things like missing resources on pages, JavaScript errors, and CSP errors. They do not replace unit tests, and unit tests should be favoured when it's possible.
We chose to implement the end to end testing with CasperJS powered by the SlimerJS engine. We actually have one more abstraction on top of that, so that tests are very simple and clean to write.
SlimerJS is similar to, and mostly compatible with, the better-known PhantomJS, but it is based on Firefox's Gecko rendering engine instead of WebKit. At the time of this writing, it's based on Gecko 33. CasperJS is a set of higher-level APIs and can be configured to use PhantomJS or SlimerJS.
The current version of PhantomJS is based on WebKit and is too far behind to be useful for end to end tests of our site yet. There's a newer version of PhantomJS coming, but it's not ready yet. We also considered using Selenium to automate browsers for the testing, but it didn't meet our objectives for various reasons.
The tests run against the actual live site. They can load a list of pages, run scripts on the pages, and detect errors. The scripts emulate a user of the site who fills out forms, logs in, clicks things, waits for things, etc.
We also have scripts for creating and saving programs in our CS learning environment, doing challenges, and we'll even have some for playing videos.
Here's an example end-to-end test script that logs in, and tests a couple pages. It will return an error if there are any JavaScript errors, CSP errors, network errors, or missing resources:
EndToEnd.test("Basic logged in page load tests", function(casper, test) {
    Auth.thenLogin(casper);

    [
        ["Home page", "/"],
        ["Mission dashboard", "/mission/cc-sixth-grade-math"]
    ].map(function(testPage) {
        thenEcho(casper, "Loading page: " + testPage[0]);
        KAPageNav.thenOpen(casper, testPage[1]);
    });

    Auth.thenLogout(casper);
});
Developers are currently prompted to run the tests when they do a deploy, but we'll be moving this to run automatically from Jenkins during the deploy process. Tests are run both on the staged website version before it is set as the default, and after it is set as the default version.
The output of tests looks like this:
|
Henrik Skupin: Firefox Automation report – week 39/40 2014 |
In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 39 and 40.
One of our goals for last quarter was to get locale testing enabled in Mozmill-CI for every supported locale of Firefox beta and release builds. So Cosmin investigated the timing and other possible side effects that can happen when you test about 90 locales across all platforms! The biggest change we had to make was to the retention policy for logs of executed builds, due to disk-space issues. We now delete the logs not only after a maximum number of builds, but also after 3 full days. That gives us enough time to investigate test failures. Once that was done, we were able to enable the remaining 60 locales. For details of all the changes necessary, have a look at the mozmill-ci pushlog.
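A sketch of such a two-condition retention policy (the file names, the build cap, and the function are all illustrative, not the actual mozmill-ci code):

```python
import time

MAX_BUILDS = 100                      # illustrative cap on retained logs
MAX_AGE_SECONDS = 3 * 24 * 60 * 60   # three full days

def logs_to_delete(logs, now=None):
    """Given (path, mtime) pairs, return the paths that fall outside
    the retention policy: keep at most the newest MAX_BUILDS logs,
    and drop anything older than three days even under the cap."""
    now = time.time() if now is None else now
    # Newest first, by modification time.
    by_age = sorted(logs, key=lambda entry: entry[1], reverse=True)
    doomed = {path for path, _ in by_age[MAX_BUILDS:]}
    doomed.update(path for path, mtime in logs
                  if now - mtime > MAX_AGE_SECONDS)
    return sorted(doomed)

# Example: two fresh logs and one four days old.
now = 1_000_000_000
logs = [("build-1.log", now - 4 * 24 * 3600),
        ("build-2.log", now - 3600),
        ("build-3.log", now - 60)]
print(logs_to_delete(logs, now=now))  # ['build-1.log']
```

The two conditions are independent: the age limit prunes even when the build count is low, and the count cap prunes even when everything is recent.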
During those two weeks Henrik spent his time on finalizing the Mozmill update tests to support the new signed builds on OS X. Once that was done he also released the new mozmill-automation 2.0.8.1 package.
For more granular updates of each individual team member please visit our weekly team etherpad for week 39 and week 40.
If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 39 and week 40.
http://www.hskupin.info/2014/11/25/firefox-automation-report-week-39-40-2014/
|
Byron Jones: happy bmo push day! |
the following changes have been pushed to bugzilla.mozilla.org:
discuss these changes on mozilla.tools.bmo.
https://globau.wordpress.com/2014/11/25/happy-bmo-push-day-120/
|
Nicholas Nethercote: Two suggestions for the Portland work week |
Mozilla is having a company-wide work week in Portland next week. It’s extremely rare to have this many Mozilla employees in the same place at the same time, and I have two suggestions.
That’s it. Have a great week!
https://blog.mozilla.org/nnethercote/2014/11/25/two-suggestions-for-the-portland-work-week/
|
David Boswell: Radical Participation Idea: Slow Down |
The Portland Coincidental Work Week is next week, and we'll be working on our plans for 2015. One of the things we want to include in our planning is Mitchell's question about what radical participation looks like for Mozilla today.
Everyone who is interested in this question is welcome to join us next Thursday and Friday for the Participation work week. Please come with ideas you have about this question. Here is one idea I’m thinking about that feels like an important part of a radical participation plan.
Slow Down
I’ve worked at small software start-ups and I’ve worked at large volunteer-based organizations. There are many differences between the two. The speed that information reaches everyone is a major difference.
For example, I worked at a small start-up called Alphanumerica. There were a dozen of us all working together in the same small space. Here’s a picture of me in my corner (to give you an idea of how old this photo is it was taken on a digital camera that stored photos on a floppy disk.)
To make sure everyone knew about changes, you could get everyone’s attention and tell them. People could then go back to work and everyone would be on the same page. In this setting, moving fast and breaking things works.
Information doesn’t spread this quickly in a globally distributed group of tens of thousands of staff and volunteers. In this setting, if things are moving too fast then no one is on the same page and coordinating becomes very difficult.
Mozilla is not a small start-up where everyone is physically together in the same space. We need to move fast though, so how can we iterate and respond quickly and keep everyone on the same page?
Slow Down To Go Fast Later
It might seem odd, but there is truth to the idea that you can slow down now in order to go faster later. There is even research that backs this up. There’s a Harvard Business Review article on this topic worth reading—this paragraph covers the main take-aways:
In our study, higher-performing companies with strategic speed made alignment a priority. They became more open to ideas and discussion. They encouraged innovative thinking. And they allowed time to reflect and learn. By contrast, performance suffered at firms that moved fast all the time, focused too much on maximizing efficiency, stuck to tested methods, didn’t foster employee collaboration, and weren’t overly concerned about alignment
For Mozilla, would radical participation look like setting goals around alignment and open discussions? Would it be radical to look at other large volunteer-based organizations and see what they optimize for instead of using start-ups as a model?
I’m very interested to hear what people think about the value of slowing down at Mozilla as well as hearing other ideas about what radical participation looks like. Feel free to comment here, post your own blog and join us in Portland.
https://davidwboswell.wordpress.com/2014/11/24/radical-participation-idea-slow-down/
|
Armen Zambrano: Pinning mozharness from in-tree (aka mozharness.json) |
"revision": "production"
|
Tristan Nitot: En vrac du lundi |
|
Christian Heilmann: Diversifight – a talk at the diversity hackathon at Spotify Sweden |
Yesterday afternoon I presented at the “Diversify” hackathon in the offices of Spotify in Stockholm, Sweden. The event was aimed at increasing diversity in IT by inviting a group of students that represented a good mix of gender and ethnic background to work together on hacks with music and music data. There was no strict competitive aspect to this hackathon and no prizes or winners – it was all about working together and seeing how a mixed group can achieve better results.
Photos by Sofie Lindblom and Ejay Janis
When I was asked to speak at an event about diversity in IT, I was flattered but also confused. Being very white and male, I don't really have the standing to speak from the viewpoint of a group that brings diversity to the mix. But I do have a lot of experience, and I have looked into the matter quite a bit. Hence I put together a talk that covers a few things I see going wrong, a few ideas and tools we have to make things better by bettering ourselves, and a reminder that the world of web development used to be much more diverse and we lost these opportunities. In essence, the break-neck speed of our market, the hype of the press and of events living on overselling the amazing world of startups, and the work environments we put together all seem to actively discourage diversity. And that is what I wanted the students to consider and fight once they go out and start working in various companies.
Diversity is not something we can install – it is something we need to fight for. And it makes no sense if only those belonging to disadvantaged groups do that.
This talk is pretty raw and unedited, and it is just a screencast. I would love to give a more polished version of it soon.
You can watch the screencast on YouTube.
The slides are available on Slideshare.
Resources I covered in the talk:
The feedback was amazing, students really liked it and I am happy I managed to inspire a few people to think deeper about a very important topic.
A big thank you to the Spotify Street Team and especially Caroline Arkenson for having me over (and all the hedgehog photos in the emails).
|
Yunier José Sosa Vázquez: Firefox's New "Forget" Button |
Protect your privacy with the new Forget button, available only in the latest version of Firefox. In just a few clicks, you can erase your most recent history and personal information (from the last five minutes up to 24 hours) without touching the rest. The Forget button is very useful if you use a public computer and want to clean up your information, or if you land on a dubious web site and need to get out of there quickly.
If you can't find this button in the Firefox toolbar, open the Menu and choose Customize, then drag the Forget button wherever you want. There you can also configure your browser however you please, removing buttons from the Menu or the toolbar and adding others to them.
You can get the latest version of Firefox from our Download Zone for Windows, Mac, Linux, and Android.
http://firefoxmania.uci.cu/el-nuevo-boton-olvidar-de-firefox/
|
Staś Malolepszy: Meet.js talk on Clientside localization in Firefox OS |
Firefox OS required a fast and lean localization method that could scale up to 70 languages, cater to the needs of hundreds of developers worldwide all speaking different languages and support a wide spectrum of devices with challenging hardware specs.
At the end of September, I went to Poznań to speak about localization technology in Firefox OS at Meet.js Summit. In my talk I discussed how we had been able to create a localization framework which embraces new Web technologies like Web components and mutation observers, how we'd come up with new developer tools to make localization work easier, and what exciting challenges lay ahead of us.
|
Botond Ballo: Trip Report: C++ Standards Meeting in Urbana-Champaign, November 2014 |
Project | Status |
C++14 | Finalized and approved, will be published any day now |
C++17 | Some minor features so far. Many ambitious features are being explored. for (e : range) was taken out. |
Networking TS | Sockets library based on Boost.ASIO moving forward |
Filesystems TS | On track to be published early 2015 |
Library Fundamentals TS | Contains optional, any, string_view and more. No major changes since last meeting. Expected 2015. |
Library Fundamentals TS II | Follow-up to Library Fundamentals TS; will contain array_view and more. In early stage, with many features planned. |
Array Extensions TS | Continues to be completely stalled. A new proposal was looked at but failed to gain consensus. |
Parallelism TS | Progressing well. Expected 2015. |
Concurrency TS | Progressing well. Expected 2015. Will have a follow-up, Concurrency TS II. |
Transactional Memory TS | Progressing well. Expected 2015. |
Concepts (“Lite”) TS | Progressing well. Expected 2015. |
Reflection | Looking at two different proposals. Too early to say anything definitive. |
Graphics | 2D Graphics TS based on cairo moving forward |
Modules | Microsoft and Clang have implementations at various stages of completeness. They are iterating on it and trying to converge on a design. |
Coroutines | Proposals for both stackless and stackful variants will be developed, in a single TS. |
Last week I attended another meeting of the ISO C++ Standards Committee at the University of Illinois at Urbana-Champaign. This was the third and last Committee meeting in 2014; you can find my reports on the previous meetings here (February 2014, Issaquah) and here (June 2014, Rapperswil). These reports, particularly the Rapperswil one, provide useful context for this post.
The focus of this meeting was moving forward with the various Technical Specifications (TS) that are in progress, and looking ahead to C++17.
C++14 was formally approved as an International Standard in August when it passed its final ballot (the "DIS", or Draft International Standard, ballot; see my Issaquah report for a description of the procedure for publishing a new language standard).
It will take another few weeks for ISO to publish the approved standard; it’s expected to happen before the end of the year.
With C++14 being approved, the Committee is turning its attention towards what its strategic goals are for the next revision of the language standard, C++17.
As I explained in my Rapperswil report, most major new features are targeted for standardization in two steps: first, as a Technical Specification (TS), an experimental publication vehicle with no backwards-compatibility requirements, to gain implementation and use experience; and then, by incorporation into an International Standard (IS), such as C++17.
Therefore, a significant amount of the content of C++17 is expected to consist of features being published as Technical Specifications in the near future. It's not immediately clear which TS's will be ready for inclusion in C++17; it depends on when the TS itself is published, and whether any concerns about it come up as it's being implemented and used. Hopefully, at least the ones being published over the next year or so, such as Filesystems, Concepts, Parallelism, Library Fundamentals I, and Transactional Memory, will be considered for inclusion in C++17.
In addition, there are some major features that do not yet have a Technical Specification in progress which many hope will be in C++17: namely, Modules and Reflection. Due to the size and scope of these features, it is increasingly likely that the committee will deem it safer to standardize these as TS's first as well, rather than targeting them directly at C++17. In this case, there may not be time for the additional step of gaining experience with the TS and merging it into the IS in time for C++17; however, it's too early to know with any confidence at this point.
That said, C++17 will certainly contain some language and library features, and some smaller ones have already made it in. I mentioned a few in my Rapperswil report, but some new ones came out of this meeting:
- Fold expressions: if Args is a non-type parameter pack of booleans, then Args && ... is a new expression which is the 'and' of all the booleans in the pack. All binary operators support this; for operators that have a logical identity element (e.g. 0 for addition), an empty pack is allowed and evaluates to that identity.
- The terse range-based for loop, for (elem : range) (which would have meant for (auto&& elem : range)), was removed. (Technically, it was never added, because the C++ working draft was locked for additions in Rapperswil while the C++14 DIS ballot was in progress. However, there was consensus in the Evolution and Core Working Groups in Rapperswil to add it, and there was wording ready to be merged to the working draft as soon as the ballot concluded and it was unlocked for C++17 additions. That consensus disappeared when the feature was put up for a vote in front of the full committee in Urbana.) The reason for the removal was that in for (elem : range) there is no clear indication that elem is a new variable being declared; if there already is a variable named elem in scope, one can easily get confused and think the existing variable is being used in the loop. Proponents of the feature pointed out that there is precedent for introducing a new name without explicit syntax for declaring it (such as a type) in generalized lambda captures ([name = init](){ ... } declares a new variable named name), but this argument was not found convincing enough to garner consensus for keeping the feature.
- std::uncaught_exceptions(), a function that allows you to determine accurately whether a destructor is being called due to stack unwinding or not. There is an existing function, std::uncaught_exception() (note the singular), that was intended for the same purpose but was inaccurate by design in some cases, as explained in the proposal. This is considered a language feature even though it's exposed as a library function, because implementing this function requires compiler support.
- u8 character literals.
- Removal of deprecated library facilities: auto_ptr, random_shuffle(), ptr_fun, mem_fun, bind1st, and bind2nd.
- Changes concerning vector, string, valarray, and array.
- A change to unique_ptr.
- Making std::reference_wrapper trivially copyable.
- noexcept in containers.
- The void_t alias template.
- The invoke function template.
- Non-member size(), empty(), and data() functions.

As usual, I spent most of my time in the Evolution Working Group (EWG), which concerns itself with the long-term evolution of the core language. In spite of there being a record number of proposals addressed to EWG in the pre-Urbana mailing, EWG managed to get through all of them.
Incoming proposals were categorized into three rough categories:
Accepted proposals (note: I’m not including here the ones which also passed CWG the same meeting and were voted into the standard – see above for those):
- A replacement for the __FILE__, __LINE__, and __FUNCTION__ macros that doesn't involve the preprocessor. I think this proposal constitutes a major advance because it removes one of the main remaining uses of the preprocessor.
- An aliasing annotation (like restrict in C, but better). Some design feedback was given, but generally the proposal was considered baked enough that the next revision can go directly to CWG.
- Removing constexpr constraints, which were one of the kinds of constraints allowed in requires-expressions. The reason for the removal is that they are tricky to specify and implement, and have no major motivating uses.
- A compile-time string type based on a constexpr character array. This was one of two competing compile-time string proposals, the other being a variadic char... template class which encodes the string contents in the template arguments themselves. The two proposals present a tradeoff between expressiveness and compile-time efficiency: on the one hand, encoding the string contents in the template arguments allows processing the string via template metaprogramming, while in the other proposal the string can only be processed with constexpr functions; on the other hand, the variadic approach involves creating lots of template instantiations for string processing, which can slow down compile times significantly. EWG's view was that compile-time efficiency was the more important consideration, especially as constexpr functions are getting more and more powerful. Therefore, the constexpr-array-based proposal was selected to move forward. As the proposal has both core language and library components, it will go to LEWG for design review of the library components before being sent to CWG and LWG.

Proposals for which further work is encouraged:
- Default comparison operators. Several competing proposals were discussed: one which would generate comparison operators automatically unless a class =delete-d them or defined its own; one which would allow opting in to compiler-defined comparison operators via =default; and one which would synthesize comparison operators using reflection. As suggested by the variety of the proposals, this is a feature that everyone wants but no one can agree exactly how it should work. Design considerations that came up included opt-in vs. opt-out, special handling for certain types of fields (such as mutable fields and pointers), special handling for classes with a single member, compile-time performance, and different strengths of ordering (such as weak vs. total orders). After discussing the proposal for half a day, we ran out of time, and decided to pick up at the next meeting in Lenexa, possibly armed with revised proposals. There was one poll taken which provided fairly clear guidance on a single aspect of the proposal: there was much stronger consensus for opt-in behaviour than for opt-out.
- A [[noreturn]] attribute for main(), designed for programs that are never meant to finish, such as some software running on embedded systems. This would allow the optimizer to remove the code for running some cleanup, such as the destructors of global objects. EWG liked the proposal, and sent it to CWG with one change, naming the attribute [[noexit]] instead. CWG, however, pointed out that global destructors are potentially generated by all translation units, not just the one that defines main(), and therefore the proposal is not implementable without link-time optimization. EWG discussed the proposal further, but didn't reach any consensus, and decided to put it off until Lenexa.
- Overloading the dot operator (operator .), similarly to how operator -> can be overloaded. This would enable writing "smart reference" classes, much like how overloading operator -> enables writing smart pointer classes. This would be a significant new feature, and many design considerations remain to be explored; however, there was general interest in the idea.
- Indexing into parameter packs: if Ts is a parameter pack, and N is a compile-time integral constant, Ts.[N] is the parameter at index N in Ts (or a SFINAE-eligible error if the index N is out of range). The dot is necessary for disambiguation (if the syntax were simply Ts[N], then consider Ts[Ns]..., where Ns is a parameter pack of size equal to Ts; is this a pack of array types T_1[N_1], T_2[N_2], ..., or is it T_(N_1), T_(N_2), ...?). While people weren't ecstatic about this syntax (the dot seemed arbitrary), there weren't any better suggestions raised, and people preferred to have the feature with this syntax than to not have it at all. The second part of the proposal was less baked, and concerned "subsetting" a parameter pack with a pack of indices to yield a new pack; EWG encouraged further thought about this part, and suggested exploring two aspects separately: pack literals (for example 0 ...< 5 might be hypothetical syntax for a pack literal which expands to 0, 1, 2, 3, 4) and pack transformations, which are operations that take a parameter pack as input and transform it into another parameter pack.
- A proposal concerned with expressing a constraint on a function either by removing it from overload resolution (via enable_if or similar), or by giving a custom diagnostic for it (expressing the constraint via a static_assert). The specific suggestion was to allow annotating a = delete-d function with a custom error message that would be shown if it were chosen as the best match in overload resolution. EWG felt that this was a problem worth solving, but preferred a more general solution, and encouraged the author to come back with one.
- A proposal that would allow the compiler to optimize std::vector and similar into stack allocations.

Given this state of affairs, the future of classes with runtime size (and of arrays of runtime bound, which people want to tie to classes with runtime size) continues to be uncertain.

- A proposal addressing the same use cases as the operator auto proposal that was discussed (and encouraged for further work) in Rapperswil. EWG felt that the two use cases (scope guards and expression templates) weren't sufficiently similar to necessitate fixing them the same way, and that the design questions raised during the operator auto discussion weren't adequately addressed in this proposal; encouragement was given to continue exploring the problem space, being open to different approaches for the two use cases.
- A proposal to annotate a function parameter with export to indicate that the function's return value refers to this parameter. EWG didn't like this, feeling that these annotations would be "exception specifications all over again", i.e. components of a function declaration that are not quite part of its type, for which we need ad-hoc rules to determine their behaviour with respect to redeclarations, function pointers, overrides in derived classes, being passed as non-type template arguments, and so on. The conclusion was that the problem this proposal addresses is a problem we want solved, but that this approach was not in the right direction for solving the problem.

Rejected proposals:
A proposal to make return {expr} explicit, in the sense that it would allow invoking constructors of the function’s return type even if they were explicit. This proposal had support in Rapperswil, but after several new papers argued against it, EWG decided to shelve it.

A proposed shorthand a ?: b, which would have been equivalent to a ? a : b. EWG felt the utility wasn’t sufficiently compelling to warrant a language change.

A proposal to make if (T x : expr) { S } equivalent to if (auto p = expr) { T x = *p; S } (and similarly for while loops and the test-expressions of for loops). EWG felt this shorthand wasn’t sufficiently compelling, and could cause confusion due to the similarity of the syntax to the range-based for loop.

EWG held a special evening session on the topic of contracts, as there was a lot of interest in them at this meeting. Several papers on the topic were presented; a couple of others were not, due to lack of time or the lack of a presenter.
The only proposal that was specifically considered was a proposal to turn the assert macro into a compiler-recognized operator with one of a specified set of semantics based on the value of the NDEBUG macro; it was rejected, mostly on the basis that it was infeasible to muck with assert and NDEBUG for backwards-compatibility reasons.
Other than that, the discussion was more about high-level design aspects for contract programming rather than about specific proposals. Some issues that came up were:
The discussion closed with some polls to query the consensus of the room:
These views will likely guide future proposals on this topic.
Coroutines was another topic with a lot of interest at this meeting. There were three proposals on the table: “resumable functions”, “resumable lambdas”, and a library interface based on Boost.Coroutine. These proposals started out under the purview of SG 1 (Concurrency), but then they started growing into a language feature with applications unrelated to concurrency as well, so the proposals were presented in an evening session to give EWG folks a chance to chime in too.
The coroutines proposals fall into two categories: stackful and stackless, with the “resumable functions” and “resumable lambdas” proposals being variations on a stackless approach, and the Boost.Coroutine proposal being a stackful approach.
The two approaches have an expressiveness/performance tradeoff. Stackful coroutines have more overhead, because a stack needs to be reserved for them; the size of the stack is configurable, but making it too small risks undefined behaviour (via a stack overflow), while making it too large wastes space. Stackless coroutines, on the other hand, use only as much space as they need by allocating space for each function call on the heap (these are called activation frames; in some cases, the heap allocation can be optimized into stack allocation). The price they pay in expressiveness is that any function that calls a resumable function (i.e. a stackless coroutine) must itself be resumable, so the compiler knows to allocate activation frames on the heap when calling it, too. By contrast, with the stackful approach, any old function can call into a stackful coroutine, because execution just switches to using the coroutine’s side stack for the duration of the call.
Within the “stackless” camp, the difference between the “resumable functions” and “resumable lambdas” approaches is relatively small. The main difference is that the “resumable lambdas” approach allows coroutines to be passed around as first-class objects (since lambdas are objects).
The authors of the “resumable functions” and Boost.Coroutine proposals have attempted to come up with a unified proposal that combines the power of “stackful” with the expressiveness of “stackless”, but haven’t succeeded, and in fact have come to believe that the tradeoff is inherent. In light of this, and since both approaches have compelling use cases, the committee was of the view that both approaches should be pursued independently, both targeting a single Coroutines Technical Specification, with the authors co-operating to try to capture any commonalities between their approaches (if nothing else, a common, consistent set of terminology) even if a unified proposal isn’t possible. For the stackless approach, participants were polled for a preference between the “resumable functions” and “resumable lambdas” approaches; there was stronger support for the “resumable functions” approach, though I think this was at least in part due to the “resumable lambdas” approach being newer and less well understood.
I had a chance to speak to Chris Kohlhoff, the author of the “resumable lambdas” proposal, subsequent to this session. He had an idea for combining the “stackless” and “stackful” approaches under a single syntax that I found very interesting, which he plans to prototype. If it pans out, it might end up as the basis of a compelling unified proposal after all.
I’m quite excited about the expressivity coroutines would add to the language, and I await developments on this topic eagerly, particularly on Chris’s unified approach.
The topic of forming a Study Group to explore ways to make C++ more suitable for embedded systems came up again. In addition to the two papers presented on the topic, some further ideas in this space were containers that can be stored in ROM (via constexpr
), and having exceptions without RTTI. It was pointed out that overhead reductions of this sort might be of interest to other communities, such as gaming, graphics, real-time programming, low-latency programming, and resource-constrained systems. EWG encouraged discussion across communities before forming a Study Group.
I mentioned the library features that are targeted for C++17 in the “C++17” section above. Here I’ll talk about progress on the Library Fundamentals Technical Specifications, and future work.
The first Library Fundamentals TS has already gone through its first formal ballot, the PDTS (Preliminary Draft Technical Specification) ballot. LWG addressed comments sent in by national standards bodies in response to the ballot; the resulting changes were very minor, the most notable being the removal of the network byte-order conversion functions (htonl()
and friends) over concerns that they clash with similarly-named macros. LWG will continue addressing the comments during a teleconference in December, and then they plan to send out the specification for its DTS (Draft Technical Specification) ballot, which, if successful, will be its last before publication.
The second Library Fundamentals TS is in the active development stage. Coming into the meeting, it contained a single proposal, for a generalized callable negator. During this meeting, several new features were added to it:
observer_ptr, the world’s dumbest smart pointer.

There will very likely be more features added at the next meeting, in May 2015; the TS is tentatively scheduled to be sent out for its PDTS ballot at the end of that meeting.
In addition to the proposals which have already been added into C++17 or one of the TS’s, there are a lot of other library proposals in various stages of consideration.
Proposals approved by LEWG and under review by LWG:

- pair and tuple
- index, and array_view
- make_shared
- make_array (targeting Fundamentals II)

Proposals approved by LEWG for which LWG review is yet to start:

- a const-propagating wrapper (targeting Fundamentals II)

Proposals for which LEWG is encouraging further work:

- std::function
- std::bind (LEWG only liked _all)
- ostream buffers

Proposals rejected by LEWG:

- vector
There will be a special library-only meeting in Cologne, Germany in February to allow LWG and LEWG to catch up a bit on all these proposals.
SG 1’s main projects are the Concurrency TS and the Parallelism TS. As with the Library Fundamentals TS, both are likely to be the start of a series of TS’s (so e.g. the Parallelism TS will be followed by a Parallelism TS II).
Besides coroutines, which I talked about above, I haven’t had a chance to follow SG 1’s work in any amount of detail, but I will mention the high-level status:
The Parallelism TS already had its PDTS ballot; comments were addressed this week, resulting in minor changes, including the addition of a transform-reduce algorithm. SG 1 will continue addressing comments during a teleconference in December, and then plans to send the spec out for its DTS ballot. As mentioned above, there are plans for a Parallelism TS II, but no proposals have been approved for it yet.
The Concurrency TS has not yet been sent out for its PDTS ballot; that is now planned for Lenexa.
Some library proposals that have been approved by LEWG for the Concurrency TS:
Task regions are still being considered by LEWG, and would likely target Concurrency TS II.
A major feature being looked at by SG 1 is executors and schedulers, with two competing proposals. The two approaches were discussed, and SG 1 felt that at this stage there’s still design work to be done and it’s too early to make a choice. This feature is targeting the second Concurrency TS as it’s unlikely to be ready in time for Lenexa, and SG 1 doesn’t want to hold up the first Concurrency TS beyond Lenexa.
Coroutines are also a concurrency feature, but as mentioned above, they are now targeting a separate TS.
EWG spent an afternoon discussing modules. At this point, Microsoft and Clang both have modules implementations, at various levels of completion. The Microsoft effort is spearheaded by Gabriel Dos Reis, who summarized the current state of affairs in a presentation.
The goals of modules are:
The aspects of a modules design that people generally agree on at this point are:
Design points that still need further thought are:
#includes in a module

EWG was generally pleased with the progress being made, and encouraged implementors to continue collaborating to get their designs to converge, and report back in Lenexa.
The Clang folks also reported promising performance numbers from their implementation, but detailed/comprehensive benchmarks remain to be performed.
SG 3 did not meet in Urbana. The Filesystems TS is waiting for its DTS ballot to close; assuming it’s successful (which is the general expectation), it will be published early next year.
Proposals targeting a follow-up Filesystems TS II are welcome; none have been received so far.
Organizationally, the work of SG 4 has been conducted directly by LEWG over the past few meetings. This arrangement has been formalized at this meeting, with SG 4’s chair, Kyle Kloepper, retiring, and the SG becoming “dormant” until LEWG decides to reactivate it.
In Rapperswil, LEWG had favourably reviewed a proposal for a C++ networking library based on Boost.ASIO, and asked the author (Chris Kohlhoff, whom I’ve talked about earlier in the context of coroutines) to update the proposal to leverage C++14 language features. Chris has done so, and presented an updated proposal to LEWG in Urbana; this update was also received favourably, and was voted to become the initial working draft of the Networking TS, which now joins the roster of Technical Specifications being worked on by the committee. In other words, we’re one step closer to having a standard sockets library!
I haven’t been following the work of SG 5 very closely, but I know the Transactional Memory TS is progressing well. Its working draft has been created based on two papers, and it’s going to be sent out for its PDTS ballot shortly (after a review conducted via teleconference), with the intention being that the ballot closes in time to look at the comments in Lenexa.
Topics of discussion in SG 6 included:

- a replacement for std::rand which combines the security of the C++11 &lt;random&gt; facilities with the simple interface of std::rand
- fixed-width aliases, analogous to int16_t, for floating-point types

A Numerics TS containing proposals for some of the above may be started in the near future.
There is an existing TR (Technical Report, an older name for a Technical Specification) for decimal floating-point arithmetic. There is a proposal to integrate this into C++17, but there hasn’t been any new progress on that in Urbana.
SG 7 looked at two reflection proposals: an updated version of a proposal for a set of type traits for reflecting the members of classes, unions, and enumerations, and a significantly reworked version of a comprehensive proposal for static reflection.
The reflection type trait proposal was already favourably reviewed in Rapperswil. At this meeting, additional feedback was given on two design points:
- The suggestion was made to provide two parallel sets of traits (e.g. one namespace called std::reflect which provides traits for reflecting accessible members only, and another called std::reflect_invasively which provides traits for reflecting all members including inaccessible ones). The rationale is that for some use cases, reflecting only over accessible members is appropriate, while for others, reflecting over all members is appropriate, and we want to be able to spot uses of an inappropriate mechanism easily. Some people also expressed a desire to opt out of invasive reflection on a per-class basis.
- On naming: currently, the trait for reflecting, for example, the name of a member of a class C is std::class_member::name. A preference was expressed a) for an additional level of grouping of reflection-related traits into a namespace or class reflect, e.g. std::reflect::class_member::name, and b) for not delaying the provision of all inputs until the last component of the trait, e.g. std::reflect::class_member<1>::name. (This last form has the disadvantage that it would actually need to be std::reflect::template class_member<1>::name; some suggestions were thrown around for avoiding this by making the syntax use some compiler magic, as the traits can’t be implemented purely as a library anyway.)

It was also reiterated that this proposal has some limitations (notably, member templates cannot be reflected, nor can members of reference or bitfield type), but SG 7 remains confident that the proposal can be extended to fill these gaps in due course (in some cases with accompanying core language changes).
The comprehensive static reflection proposal didn’t have a presenter, so it was only looked at briefly. Here are some key points from the discussion:
There is also a third proposal for reflection, “C++ type reflection via variadic template expansion”, which sort of fell off SG 7’s radar because it was in the post-Issaquah mailing and had no presenter in Rapperswil or Urbana; SG 7 didn’t look at it in Urbana, but plans to in Lenexa.
The Core Working Group continued reviewing the Concepts TS (formerly called “Concepts Lite”) in Urbana. The fundamental design has not changed over the course of this review, but many details have. A few changes were run by EWG for approval (I mentioned these in the EWG section above: the removal of constexpr
constraints, and the addition of folding expressions). The hope was to be ready to send out the Concepts TS for its PDTS ballot at the end of the meeting, but it didn’t quite make it. Instead, CWG will continue the review via teleconferences, and possibly a face-to-face meeting, for Concepts only, in January. If all goes well, the PDTS ballot might still be sent out in time for the comments to arrive by Lenexa.
As far as SG 9 is concerned, this has been the most exciting meeting yet. Eric Niebler presented a detailed and well fleshed-out proposal for integrating ranges into the standard library.
Eric’s ranges are built on top of iterators, thus fitting on top of today’s iterator-based algorithms almost seamlessly, with one significant change: the begin and end iterators of a range are not required to be of the same type. As the proposal explains, this small change allows a variety of ranges to be represented efficiently that could not be under the existing same-type model, including sentinel- and predicate-based ranges.
The main parts of the proposal are a set of range-related concepts, a set of range algorithms, and a set of range views. The foundational concept is Iterable
, which corresponds roughly to what we conversationally call (and also what the Boost.Range library calls) a “range”. An Iterable
represents a range of elements delimited by an Iterator
at the beginning and a Sentinel
at the end. Two important refinements of the Iterable
concept are Container
, which is an Iterable
that owns its elements, and Range
, which is a lightweight Iterable
that doesn’t own its elements. The range algorithms are basically updated versions of the standard library algorithms that take ranges as Iterable
s; there are also versions that take (Iterator
, Sentinel
) pairs, for backwards-compatibility with today’s callers. Finally, the range views are ways of transforming ranges into new ranges; they correspond to what the Boost.Range library calls range adaptors. There is also a suggestion to enhance algorithms with “projections”; I personally see this as unnecessary, since I think range views serve their use cases better.
Eric has fully implemented this proposal, thus convincingly demonstrating its viability.
Importantly, this proposal depends on the Concepts TS to describe the concepts associated with ranges and define algorithms and views in terms of these concepts. (Eric’s implementation emulates the features of the Concepts TS with a C++11 concepts emulation layer.)
The proposal was overall very well received; there was clear consensus that Eric should pursue the high-level design he presented and come back with a detailed proposed specification.
An important practical point that needed to be addressed is that this proposal is not 100% backwards-compatible with the current STL. This wasn’t viewed as a problem, as previous experience trying to introduce C++0x concepts to the STL while not breaking anything has demonstrated that this wasn’t possible without a lot of contortions, and people have largely accepted that a clean break from the old STL is needed to build a tidy, concepts-enabled “STL 2.0”. Eric’s proposal covers large parts of what such an STL 2.0 would look like, so there is good convergence here. The consensus was that Eric should collaborate with Andrew Sutton (primary author and editor of the Concepts TS) on a proposal for a Technical Specification for a concepts-enabled ranges library; the exact scope (i.e. whether it will be just a ranges library, or a complete STL overhaul) is yet to be determined.
The Feature Test Standing Document (the not-quite-a-standard document used by the committee to specify feature test macros) has been updated with C++14 features.
The feature test macros are enjoying adoption by multiple implementors, including GCC, Clang, EDG, and others.
SG 12 looked at:

- uses of memcpy() that are currently technically undefined, but that people expect to work.

SG 13 has been working on a proposal for a 2D Graphics TS based on cairo’s API. In Urbana, an updated version of this proposal, which included some proposed wording, was presented to LEWG. LEWG encouraged the authors to complete the wording, and gave a couple of pieces of design advice:
The next full meeting of the Committee will be in Lenexa, Kansas, the week of May 4th, 2015.
There will also be a library-only meeting in Cologne, Germany the week of February 23rd, and a Concepts-specific meeting in Skillman, New Jersey from January 26-28.
This was probably the most action-packed meeting I’ve been to yet! My personal highlights:
|
Pomax: New Entry |
|
Pomax: RSS description testing |
My RSS generator wasn't adding article bodies to the RSS, which caused some problems for certain RSS readers. Let's see if this fixes it.
|
Soledad Penades: “Invest in the future, build for the web!”, take 2, at OSOM |
I am right now in Cluj-Napoca, in Romania, for OSOM.ro, a small, totally non-profit, volunteer-organised conference. I gave an updated, shorter, revised version of the talk I gave in Amsterdam last June. As usual, here are the slides and the source for the slides.
It is more or less the same, but better, and I also omitted some sections and spoke a bit about Firefox Developer Edition.
Also I was wearing this Fox-themed sweater which was imbuing me with special powers for sure:
(I found it at H&amp;M last Saturday; there are more animals if foxes aren’t your thing.)
There were some good discussions about open source per se, community building and growing. And no, talks were not recorded.
I feel a sort of strange emptiness now, as this has been my last talk for the year, but it won’t be long until other commitments fill that vacuum. Like MozLandia—by this time next week I’ll be travelling to, or already in, Portland, for our work week. And when I’m back I plan to gradually slide into a downward spiral into idleness. At least until 2015.
Looking forward to meeting some mozillians I haven’t met yet, and also to visiting Ground Kontrol again and exploring new coffee shops when we have a break in Portland.
http://soledadpenades.com/2014/11/22/invest-in-the-future-build-for-the-web-take-2-at-osom/
|