Nick Cameron: How fast can I build Rust? |
I've been collecting some data on the fastest way to build the Rust compiler. This is primarily for Rust developers to optimise their workflow, but it might also be of general interest.
TL;DR: the fastest way to build Rust (on a computer with lots of cores) is with `-j6`, `RUSTFLAGS=-Ccodegen-units=10`.
I tested using a commit from the 24th November 2016. I was using the make build system (though I would expect the same results using Rustbuild). The test machine is a dedicated build machine - it has 12 physical cores, lots of RAM, and an SSD. It wasn't used for anything else during the benchmarking, and doesn't run a windowing system. It was running Ubuntu 16.10 (Linux). I only did one run per set of variables. That is not ideal, but where I repeated runs, they were fairly consistent - usually within a second or two and never more than 10 seconds. I've rounded all results to the nearest 10 seconds, and I believe that level of precision is about right for the experiment.
I varied the number of jobs (`-jn`) and the number of codegen units (`RUSTFLAGS=-Ccodegen-units=n`). The command line looked something like `RUSTFLAGS=-Ccodegen-units=10 make -j6`. I measured the time to do a normal build of the whole compiler and libraries (`make`), to build the stage1 compiler (`make rustc-stage1`, which is the minimal amount of work required to get a compiler for testing), and to build and bootstrap the compiler and run all tests (`make && make check`; I didn't run a simple `make check` because adding `-jn` to that causes many tests to be skipped, and setting codegen-units > 1 causes some tests to fail).
The jobs number is the number of tasks `make` can run in parallel. Each of these jobs is a self-contained instance of the compiler, i.e., this is parallelism outside the compiler. The amount of parallelism is limited by the dependencies between crates in the compiler. Since the crates in the compiler are rather large and there are a lot of dependencies, the benefits of using a large number of jobs are much weaker than in a typical C or C++ program (e.g., LLVM). Note, however, that there is no real drawback to using a larger number of jobs; there just won't be any additional benefit.
Codegen units introduce parallelism within the compiler. First, some background. Compilation can be roughly split into two: first, code is analysed (parsing, type checking, etc.), then object code is generated from the products of analysis. The Rust compiler uses LLVM for the code generation part. Roughly half the time running an optimised build is spent in each of analysis and code generation. Nearly all optimisation is performed in the code generation part.
The compilation unit in Rust is a crate; that is, the Rust compiler analyses and compiles a single crate at a time. By default, code generation also works at the same level of granularity. However, by specifying the number of codegen units, we tell the compiler that once analysis is complete, it should break the crate into smaller units and run LLVM code generation on each unit in parallel. That means we get parallelism inside the compiler, albeit only for about half of the work. There is a disadvantage, however: using multiple codegen units means the program will not be optimised as well as if a single unit were used. This is analogous to turning off LTO in a C program. For this reason, you should not use multiple codegen units when building production software.
So when building the compiler, if we use many codegen units we might expect the compilation to go faster, but when we run the new compiler, it will be slower. Since we use the new compiler to build at least the libraries and sometimes another compiler, this could be an important factor in the total time.
If you're interested in this kind of thing, we keep track of compiler performance at perf.r-l.o (although only single-threaded builds). Nicholas Nethercote has recently written a couple of blog posts on running and optimising the compiler.
make
This experiment ran a simple `make` build. It builds two versions of the compiler: one using the last beta release, and a second using the compiler built in that first step.
jobs | cg1 | cg2 | cg4 | cg6 | cg8 | cg10 | cg12 |
---|---|---|---|---|---|---|---|
-j1 | 48m50s | 39m40s | 31m30s | 29m50s | 29m20s | ||
-j2 | 34m10s | 27m40s | 21m40s | 20m30s | 20m10s | 19m30s | 19m20s |
-j4 | 28m10s | 23m00s | 17m50s | 16m50s | 16m40s | 16m00s | 16m00s |
-j6 | 27m40s | 22m40s | 17m20s | 16m20s | 16m10s | 15m40s | 15m50s |
-j8 | 27m40s | 22m30s | 17m20s | 16m30s | 16m30s | 15m40s | 15m40s |
-j10 | 27m40s | ||||||
-j12 | 27m40s | ||||||
-j14 | 27m50s | ||||||
-j16 | 27m50s |
In general, we get better results using more jobs and more codegen units. Looking at the number of jobs, there is no improvement after 6. For codegen units, the improvements quickly diminish, but there is some improvement right up to using 10 (for all jobs > 2, 12 codegen units gave the same result as 10). It is possible that 9 or 11 codegen units may be more optimal (I only tested even numbers), but probably not by enough to be significant, given the precision of the experiment.
make rustc-stage1
This experiment ran `make rustc-stage1`. That builds a single compiler and the libraries necessary to use that compiler. It is the minimal amount of work necessary to test modifications to the compiler, and it is significantly quicker than `make`.
jobs | cg1 | cg2 | cg4 | cg6 | cg8 | cg10 | cg12 |
---|---|---|---|---|---|---|---|
-j1 | 15m10s | 12m10s | 9m40s | 9m10s | 9m10s | 8m50s | 8m50s |
-j2 | 11m00s | 8m50s | 6m50s | 6m20s | 6m20s | 6m00s | 6m00s |
-j4 | 9m00s | 7m30s | 5m40s | 5m20s | 5m20s | 5m10s | 5m00s |
-j6 | 9m00s | 7m10s | 5m30s | 5m10s | 5m00s | 5m00s | 5m00s |
I only tested jobs up to 6, since it seems there is no way for more jobs to be profitable here if they weren't in the previous experiment. It turned out that 6 jobs was only marginally better than 4 in this case, I assume because dependency bottlenecks matter relatively more than in a full `make`.
I expected more codegen units to be even more effective here (since we use the resulting compiler for less work), but I was wrong. This may just be due to the precision of the test (and the relatively shorter total time), but for all numbers of jobs, 6 codegen units were as good as more. So, for this kind of build, six jobs and six codegen units is optimal; however, using ten codegen units (as for `make`) is not harmful.
make -jn && make check
This experiment is the way to build all the compilers and libraries and run all tests. I measured the two parts separately. As you might expect, the first part corresponded exactly with the results of the `make` experiment. The second part (`make check`) took a fairly consistent amount of time - it is independent of the number of jobs since the test infrastructure does its own parallelisation. I would expect compilation of tests to be slower with a compiler compiled with a larger number of codegen units. For one or two codegen units, `make check` took 12m40s; for four to ten, it took 12m50s, a marginal difference. That means that the optimal build used six jobs and ten codegen units (as for `make`), giving a total time of 28m30s (c.f., 61m40s for one job and one codegen unit).
|
Mozilla Open Innovation Team: Announcing Panel of Judges for Mozilla’s Equal Rating Innovation Challenge |
Mozilla is delighted to announce the esteemed judges for the Equal Rating Innovation Challenge.
These four leaders will join Mitchell Baker (USA), Executive Chairwoman of Mozilla, on the judging panel for the Equal Rating Innovation Challenge. The judges will be bringing their wealth of industry experience and long-standing expertise from various positions in policy, entrepreneurship, and consulting in the private and public sector to assess the challenge submissions.
Mozilla seeks to find novel solutions to connect all people to the open Internet so they can realize the full potential of this globally shared resource. We’re both thrilled and proud to have gathered such a great roster of judges for the Innovation Challenge — it’s a testament to the global scope of the initiative. Each one of these leaders has already contributed in many ways to tackle the broader challenge of connecting the unconnected and it is an honour to have these global heavyweights in our panel.
The Equal Rating Innovation Challenge will support promising solutions through expert mentorship and funding of US$250,000 in prize monies split into three categories: Best Overall (with a key focus on scalability), Best Overall Runner-up, and Most Novel Solution (based on experimentation with a potential high reward).
The judges will score submissions according to the degree to which they meet the following attributes:
The deadline for submission is 6 January 2017. On 17 January, the judges will announce five semifinalists. Those semifinalists will be provided advice and mentorship from Mozilla experts in topics such as policy, business, engineering, and design to hone their solution. The semifinalists will take part in a Demo Day on 9 March 2017 in New York City to pitch their solutions to the judges. The public will then be invited to vote for their favorite solution online during a community voting period from 10–16 March, and the challenge winners will be announced on 29 March 2017.
Announcing Panel of Judges for Mozilla’s Equal Rating Innovation Challenge was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.
|
Bogomil Shopov: I’ve launched a Mozilla Donation Campaign for #CyberMonday craziness. |
I have started a small campaign today and I am so happy to see it working – 138 engagements so far and a few donations. There is no way to see the donations, but I can see more “I have donated” tweets in the target languages.
Please retweet and take action :)
Feeling like spending some money on #CyberMonday? Why don't you support the Open Web? Donate $10 to @mozilla now: https://t.co/xNCtSD6XhU
— Bogo(mil) Shopov (@bogomep) November 28, 2016
Update: Actually, there is a way to check the effect of the campaign. I used a web tool to count the "I have donated" tweets that users can send after donating.
I can see the trend here:
The post I’ve launched a Mozilla Donation Campaign for #CyberMonday craziness. appeared first on Bogomil Shopov.
|
QMO: Firefox 51 Beta 3 Testday Results |
Hi everyone!
Last Friday, November 25th, we held Firefox 51 Beta 3 Testday. It was a successful event (please see the results section below) so a big Thank You goes to everyone involved.
First of all, many thanks to our active contributors: Krithika MAP, Moin Shaikh, M A Prasanna, Steven Le Flohic, P Avinash Sharma, Iryna Thompson.
Bangladesh team: Nazir Ahmed Sabbir, Sajedul Islam, Maruf Rahman, Majedul islam Rifat, Ahmed Safa, Md Rakibul Islam, M. Almas Hossain, Foysal Ahmed, Nadim Mahmud, Amir Hossain Rhidoy, Mohammad Abidur Rahman Chowdhury, Mahfujur Rahman Mehedi, Md Omar Faruk sobuj, Sajal Ahmed, Rezwana Islam Ria, Talha Zubaer, maruf hasan, Farhadur Raja Fahim, Saima sharleen, Azmina AKterPapeya, Syed Nayeem Roman.
India team: Vibhanshu Chaudhary, Surentharan.R.A, Subhrajyoti Sen, Govindarajan Sivaraj, Kavya Kumaravel, Bhuvana Meenakshi.K, Paarttipaabhalaji, P Avinash Sharma, Nagaraj V, Pavithra R, Roshan Dawande, Baranitharan, SriSailesh, Kesavan S, Rajesh. D, Sankararaman, Dinesh Kumar M, Krithikasowbarnika.
Secondly, a big thank you to all our active moderators.
Results:
We hope to see you all in our next events, all the details will be posted on QMO!
https://quality.mozilla.org/2016/11/firefox-51-beta-3-testday-results/
|
Alessio Placitelli: Measuring tab and window usage in Firefox |
https://www.a2p.it/wordpress/tech-stuff/mozilla/measuring-tab-and-window-usage-in-firefox/
|
Myk Melez: Embedding Use Cases |
A couple weeks ago, I blogged about Why Embedding Matters. A rendering engine can be put to a wide variety of uses. Here are a few of them. Which would you prioritize?
A headless browser is an app that renders a web page (and executes its script) without displaying the page to a user. Headless browsers themselves have multiple uses, including automated testing of websites, web crawling/scraping, and rendering engine comparisons.
Longstanding Mozilla bug 446591 tracks the implementation of headless rendering in Gecko, and SlimerJS is a prime example of a headless browser that would benefit from it. It's a "scriptable browser for Web developers" that integrates with CasperJS and is compatible with the WebKit-based PhantomJS headless browser. It currently uses Firefox to "embed" Gecko, which means it doesn't run headlessly (SlimerJS issue #80 requests embedding Gecko as a headless browser).
A Hybrid Desktop App is a desktop app that is implemented primarily with web technologies but packaged, distributed, and installed as a native app. It enables developers to leverage web development skills to write an app that runs on multiple desktop platforms (typically Windows, Mac, Linux) with minimal platform-specific development.
Generally, such apps are implemented using an application framework, and Electron is the one with momentum and mindshare; but there are others available. While frameworks can support deep integration with the native platform, the apps themselves are often shallower, limiting themselves to a small subset of platform APIs (window management, menus, etc.). Some are little more than a local web app loaded in a native window.
A specialization of the Hybrid Desktop App, the Hybrid Desktop Web Browser is notable not only because Mozilla’s core product offering is a web browser but also because the category is seeing a wave of innovation, both within and outside of Mozilla.
Besides Mozilla’s Tofino and Browser.html projects, there are open source startups like Brave; open-source hobbyist projects like Min, Alloy, electron-browser, miserve, and elector; and proprietary browsers like Blisk and Vivaldi. Those products aren’t all Hybrid Apps, but many of them are (and they all need to embed a rendering engine, one way or another).
A Hybrid Mobile App is like a Hybrid Desktop App, but for mobile platforms (primarily iOS and Android). As with their desktop counterparts, they’re usually implemented using an application framework (like Cordova). And some use the system’s web rendering component (WebView), while others ship their own via frameworks (like Crosswalk).
Basecamp notably implemented a hybrid mobile app, which they described in Hybrid sweet spot: Native navigation, web content.
(There’s also a category of apps that are implemented with some web technologies but “compile to native,” such that they render their interface using native components rather than a WebView. React Native is the most notable such framework, and James Long has some observations about it in Radical Statements about the Mobile Web and First Impressions using React Native.)
A Mobile App With WebView is a native app that incorporates web content using a WebView. In some cases, a significant portion of the app’s interface displays web content. But these apps are distinct from Hybrid Mobile Apps not only in degree but in kind, as the choice to develop a native app with web content (as opposed to packaging a web app in a native format using a hybrid app framework) entrains different skillsets and toolchains.
Facebook (which famously abandoned hybrid app development in 2012) is an example of such an app.
A Site-Specific Browser (SSB) is a native desktop app (or simulation thereof) that loads a single web app in a discrete native window. SSBs typically install launcher icons in OS app launchers, remove or minimize browser chrome in app windows, and may include native menus and other features typical of desktop apps.
Chrome's --app mode allows it to simulate an SSB, and recent Mozilla bug 1283670 requests a similar feature for Firefox.
SSBs differ from hybrid desktop apps because they wrap regular web apps (i.e. apps that are hosted on a web server and also available via a standard web browser). They’re also typically created by users using utilities, browser features, or browser extensions rather than by developers. Examples of such tools include Prism, Standalone, and Fluid. However, hybrid app frameworks like Electron can also be used (by both users and developers) to create SSBs.
A variety of embedded devices include a graphical user interface (GUI), including human-machine interface (HMI) devices and Point of Interest (POI) kiosks. Embedded devices with such interfaces often implement them using web technologies, for which they need to integrate a rendering engine.
The embedded device space is complex, with multiple solutions at every layer of the technology stack, from hardware chipset through OS (and OS distribution) to application framework. But Linux is a popular choice at the operating system layer, and projects like OpenEmbedded/Yocto Project and Buildroot specialize in embedded Linux distributions.
Embedded devices with GUIs also come in all shapes and sizes. However, it’s possible to identify a few broad categories. The ones for which an embedded rendering engine seems most useful include industrial and home automation (which use HMI screens to control machines), POI/POS kiosks, and smart TVs. There may also be some IoT devices with GUIs.
|
Hub Figui`ere: libopenraw 0.1.0 |
I just released libopenraw 0.1.0. It is to be treated as a snapshot, as it hasn't reached the level of functionality I was hoping for, and it has been 5 years since the last release.
Head on to the download page to get a tarball.
There are several new APIs, and some API and ABI breakage. The .pc files are now parallel-installable.
https://www.figuiere.net/hub/blog/?2016/11/27/865-libopenraw-010
|
Mozilla Open Design Blog: Heading into the home stretch |
Over the past few weeks, we’ve been exploring different iterations of our brand identity system. We know we need a solution that represents both who Mozilla is today and where we’re going in the future, and appeals both to people who know Mozilla well and new audiences who may not know or understand Mozilla yet. If you’re new to this project, you can read all about our journey on our blog, and the most recent post about the two different design directions that are informing this current round of work.
[TL;DR: Our “Protocol” design direction delivers well on our mission, legacy and vision to build an Internet as a global public resource that is healthy, open and accessible to all. Based on quantitative surveys, Mozillians and developers believe this direction does the best job supporting an experience that’s innovative, opinionated and inclusive, the attributes we want to be known for. In similar surveys, our target consumers evaluated our “Burst” design direction as the better option in terms of delivering on those attributes, and we also received feedback that this direction did a good job communicating interconnectedness and liveliness. Based on all of this feedback, our decision was to lead with the “Protocol” design direction, and explore ways to infuse it with some of the strengths of the “Burst” direction.]
Here’s an update on what we’ve been up to:
Getting to the heart of the matter
Earlier in our open design project, we conducted quantitative research to get statistically significant insights from our different key audiences (Mozillians, developers, consumers), and used these data points to inform our strategic decision about which design directions to continue to refine.
At this point of our open design project, we used qualitative research to understand better what parts of the refined identity system were doing a good job creating that overall experience, and what was either confusing or contradictory. We want Mozilla to be known and experienced as a non-profit organization that is innovative, opinionated and inclusive, and our logo and other elements in our brand identity system – like color, language and imagery – need to reinforce those attributes.
So we recruited participants in the US, Brazil, Germany and India between the ages of 18 – 40 years, who represent our consumer target audience: people who make decisions about the companies they support based on their personal values and ideals, which are driven by bettering their communities and themselves. 157 people participated (average about 39 from each country), with a split between 49% men and 51% women. 69% were between 18 – 34 years, and 90% had some existing awareness of Mozilla.
For 2 days, they interacted with an online moderator and had the opportunity to see and respond to others’ opinions in real time.
Learnings from this qualitative research are not intended to provide statistical analysis on which identity system was “the winner.” Instead respondents talk about what they’re seeing, while the moderator uncovers trends within these comments, and dives deeper into areas that are either highly favorable or unfavorable by asking “why?” This type of research is particularly valuable at our stage of an identity design process – where we’ve identified the strategic idea, and are figuring out the best way to bring it to life. Consumers not intimately familiar with Mozilla view the brand identity system with fresh eyes, helping illuminate any blind spots and provide insights into what helps new audiences better understand us.
Tapping into internal experts
Another extremely important set of stakeholders who have provided insights throughout the entire project, and especially at this stage, is our brand advisory group, composed of technologists, designers, strategists and community representatives from throughout Mozilla. This team was responsible not only for representing their “functional” area, but also accountable for representing the community of Mozillians across the world. We met every two weeks, sharing work-in-progress and openly and honestly discussing the merits and misses of each design iteration.
In addition to regular working sessions, we also asked our brand advisory group members to represent the work with their own networks, field questions, and surface concerns. At one point, one of our technology representatives called out that several developers and engineers did not understand the strategic intent and approach to the project, and needed a better framework by which to evaluate the work in progress. So we convened this group for a frank and freewheeling conversation, and everyone — the design team included — walked away with a much deeper appreciation for the opportunities and challenges.
That exchange inspired us to host a series of “brown bag” conversations, open to all staff and volunteer Mozillians. During one week in October, we hosted five 60-minute sessions, as early as 7am PT and as late as 8pm PT to accommodate global time zones and participants. We also videotaped one session and made that available on AirMozilla, our video network, for those unable to attend in person. Throughout those 5 days, over 150 people participated in those critique sessions, which proved to be rich with constructive ideas.
The important thing to note is that these “brown bag” sessions were not driving toward consensus, but instead invited critical examination and discussion of the work based on a very explicit set of criteria. Similar to the qualitative research conducted with our target consumer audience, these discussions allowed us to ask “why” and “why not” and truly understand emotional reactions in a way that quantitative surveys aren’t able to do.
The participation and contribution of our brand advisory group has been invaluable. They’ve been tough critics, wise counsel, patient sounding boards, trusted eyes and ears and ultimately, strategic partners in guiding the brand identity work. They’re helping us deliver a solution that has global appeal, is technically beautiful, breaks through the clutter and noise, scales across all of our products, technologies, programs, and communities, and is fit for the future. Most importantly, they have been an important barometer in designing a system that is both true to who we are and pushes us to where we want to go.
Closing in on a recommendation
The feedback from the qualitative consumer research indicates that the new brand identity reinforces the majority of the key attributes we want Mozilla to represent. Along with insights from our brand advisory group and leadership, this feedback helps direct our work as we move to a final recommendation and find the right balance between bold and welcoming. Our goal is to share an update at our All Hands meeting in early December, almost exactly six months from the date we first shared options for strategic narratives to kick off the work. Following that, we’ll post it here as well.
https://blog.mozilla.org/opendesign/heading-into-the-home-stretch/
|
Daniel Stenberg: HTTPS proxy with curl |
Starting in version 7.52.0 (due to ship December 21, 2016), curl will support HTTPS proxies when doing network transfers, and by doing this it joins the small exclusive club of HTTP user-agents consisting of Firefox, Chrome and not too many others.
Yes you read this correctly. This is different than the good old HTTP proxy.
HTTPS proxy means that the client establishes a TLS connection to the proxy and then communicates over that, which is different to the normal and traditional HTTP proxy approach where the clients speak plain HTTP to the proxy.
Talking HTTPS to your proxy is a privacy improvement, as it prevents people from snooping on your proxy communication. Even when using HTTPS over a standard HTTP proxy, there's typically a setup phase first that leaks information about where the connection is being made, user credentials and more. Not to mention that an HTTPS proxy makes HTTP traffic "safe" to and from the proxy. HTTPS to the proxy also enables clients to speak HTTP/2 more easily with proxies. (Even though HTTP/2 to the proxy is not yet supported in curl.)
In the case where a client wants to talk HTTPS to a remote server, when using a HTTPS proxy, it sends HTTPS through HTTPS.
Illustrating this concept with images. When using a traditional HTTP proxy, we connect initially to the proxy with HTTP in the clear, and then from then on the HTTPS makes it safe:
to compare with the HTTPS proxy case where the connection is safe already in the first step:
The access to the proxy is made over network A. That network has traditionally been a corporate network or within a LAN or something but we’re seeing more and more use cases where the proxy is somewhere on the Internet and then “Network A” is really huge. That includes use cases where the proxy for example compresses images or otherwise reduces bandwidth requirements.
Actual HTTPS connections from clients to servers are still done end-to-end encrypted, even in the HTTP proxy case. Plain HTTP traffic between the user and the web site, however, will also be HTTPS-protected up to the proxy when an HTTPS proxy is used.
This awesome work was provided by Dmitry Kurochkin, Vasy Okhin, and Alex Rousskov. It was merged into master on November 24 in this commit.
Doing this sort of major change in the TLS area of the curl code is a massive undertaking, not least because curl supports being built with one out of 11 or 12 different TLS libraries. Several of those are also system-specific, so hardly any single developer can even build all of these backends on his or her own machines.
In addition to the TLS backend maze, curl and libcurl also offer a huge number of different options to control the TLS connection and handling. You can switch features on and off, provide certificates, CA bundles and more. Adding another layer of TLS pretty much doubles the number of options, since now you can tweak everything both in the TLS connection to the proxy as well as in the one to the remote peer.
This new feature is supported with the OpenSSL, GnuTLS and NSS backends to start with.
By all means, go ahead and use it, torture the code and file issues for everything bad you see, but I think we do ourselves a service by considering this new feature set to be a bit experimental in this release.
There's a whole forest of new command line and libcurl options to control all the various aspects of the new TLS connection this introduces. Since it is a totally separate connection, it gets a whole set of options that are basically identical to the server connection's, but with a --proxy prefix instead. Here's a list:
--proxy-cacert --proxy-capath --proxy-cert --proxy-cert-type --proxy-ciphers --proxy-crlfile --proxy-insecure --proxy-key --proxy-key-type --proxy-pass --proxy-ssl-allow-beast --proxy-sslv2 --proxy-sslv3 --proxy-tlsv1 --proxy-tlsuser --proxy-tlspassword --proxy-tlsauthtype
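If you use libcurl rather than the command line tool, the same behaviour is controlled through the corresponding proxy options. Here is a minimal sketch (the proxy URL and CA path are made-up placeholders; it assumes libcurl 7.52.0 or later built with one of the supported TLS backends):

```cpp
// Sketch: fetch a URL through an HTTPS proxy with libcurl (C API, usable
// from C++). The host names and file paths below are placeholders.
#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    if (CURL* curl = curl_easy_init()) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        // The https:// scheme makes curl speak TLS to the proxy itself.
        curl_easy_setopt(curl, CURLOPT_PROXY, "https://proxy.example.net:443");
        // CA bundle used to verify the proxy's certificate (cf. --proxy-cacert).
        curl_easy_setopt(curl, CURLOPT_PROXY_CAINFO, "/path/to/proxy-ca.pem");
        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK) {
            // handle the error; curl_easy_strerror(res) gives a readable message
        }
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}
```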
https://daniel.haxx.se/blog/2016/11/26/https-proxy-with-curl/
|
Tantek Celik: My 2017-01-01 #IndieWeb Commitment: Own All My RSVPs To Public Events |
My 2017-01-01 #indieweb commitment is to own 100% of my RSVPs to public events, by posting them on my site first, and other sites second, if at all.
RSVPs will be my third public kind of post to fully own on my own site:
For a while now I’ve been posting RSVPs to indie events, such as Homebrew Website Club meetups. Those RSVPs are nearly always multi-RSVPs that are also automatically RSVPing to the Facebook copies of such indie events.
Recently I started to post some (most?) of my RSVPs to public events (regardless of where they were hosted) on my own site first, and then syndicate (POSSE) them to other sites, often automatically.
My previous post is one such example RSVP. I posted it on my site, and my server used the Bridgy service to automatically perform the equivalent RSVP on the public Facebook event, without me having to directly interact with Facebook’s UI at all.
For events on Eventbrite, Lanyrd, and other event sites I still have to manually POSSE, that is, manually cross-post an RSVP there that I originally posted on my own site.
My commitment for 2017 is to always, 100% of the time, post RSVPs to public events on my own site first, and only secondarily (manually if I must) RSVP to silo (social media) event URLs.
What’s your 2017-01-01 #indieweb commitment?
http://tantek.com/2016/330/b1/2017-01-01-indieweb-commitment-own-my-rsvps
|
Air Mozilla: Foundation Demos November 25 2016 |
Foundation Demos November 25 2016
|
Botond Ballo: Trip Report: C++ Standards Meeting in Issaquah, November 2016 |
Project | What's in it? | Status |
---|---|---|
C++17 | See below | Committee Draft published; final publication on track for 2017 |
Filesystems TS | Standard filesystem interface | Published! Part of C++17 |
Library Fundamentals TS v1 | `optional`, `any`, `string_view` and more | Published! Part of C++17 |
Library Fundamentals TS v2 | source code information capture and various utilities | Voted for publication! |
Concepts ("Lite") TS | Constrained templates | Published! Not part of C++17 |
Parallelism TS v1 | Parallel versions of STL algorithms | Published! Part of C++17 |
Parallelism TS v2 | Task blocks, library vector types and algorithms, context tokens (maybe), and more | Under active development |
Transactional Memory TS | Transaction support | Published! Not part of C++17 |
Concurrency TS v1 | `future.then()`, latches and barriers, atomic smart pointers | Published! Not part of C++17 |
Concurrency TS v2 | TBD. Exploring synchronic types, atomic views, concurrent data structures, synchronized output streams. Executors to go into a separate TS. | Under active development |
Networking TS | Sockets library based on Boost.ASIO | Voted for balloting by national standards bodies |
Ranges TS | Range-based algorithms and views | Voted for balloting by national standards bodies |
Numerics TS | Various numerical facilities | Under active development |
Array Extensions TS | Stack arrays whose size is not known at compile time | Withdrawn; any future proposals will target a different vehicle |
Modules TS | A component system to supersede the textual header file inclusion model | Initial TS wording reflects Microsoft's design; changes proposed by Clang implementers expected. Not part of C++17. |
Graphics TS | 2D drawing API | In design review stage. No new progress since last meeting. |
Coroutines TS | Resumable functions | First revision will reflect Microsoft's `await` design. Other approaches may be pursued in subsequent iterations. Not part of C++17. |
Reflection | Code introspection and (later) reification mechanisms | Introspection proposal undergoing design review; likely to target a future TS |
Contracts | Preconditions, postconditions, and assertions | In design review stage. No new progress since last meeting. |
Note: At the time of publication, a few of the links in this blog post resolve to a password-protected page. They will start resolving to public pages once the post-meeting mailing is published, which should happen within a few days. Thanks for your patience!
Last week I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Issaquah, Washington (near Seattle). This was the third and final committee meeting in 2016; you can find my reports on previous meetings here (February 2016, Jacksonville) and here (June 2016, Oulu), and earlier ones linked from those. These reports, particularly the Oulu one, provide useful context for this post.
This meeting was heavily focused on C++17, with a secondary focus on in-progress Technical Specifications, and looking forward to C++20.
At the end of the last meeting, the C++17 Committee Draft (CD) – a first feature-complete draft of the C++17 spec – was sent out for comment from national standards bodies. The comment period concluded prior to this meeting, and as such, the main order of business at this meeting was to go through the comments and address them.
Note that, while the committee is obligated to respond to each comment, it is not obligated to accept the proposed resolution of the comment (if it has one); “there was no consensus for a change” is an acceptable response. (Technically, if a national standards body is unhappy with the response to their comment, it can vote “no” on the final standard, but this practically never happens; the prevailing practice is to respect the consensus of the committee.)
Addressing the CD comments is a process that typically takes two meetings. Indeed, the committee did not get through all of them at this meeting; resolution of the comments will continue at the next meeting, at which point a revised draft, now labelled Draft International Standard (DIS), will be published and sent out for a second round of comments.
Since the C++17 CD is supposed to be feature-complete, no new features were voted into C++17 at this meeting. See my Oulu report (and things linked from there) for features that were voted into C++17 at previous meetings.
However, some changes to C++17 (that don’t qualify as new features, but rather tweaks or bugfixes to existing features) were voted into C++17 at this meeting, mostly in response to national body comments.
- `constexpr` for `std::char_traits` and `basic_string_view`
- `constexpr` for `std::chrono`
- splicing of maps and sets (`node_handle`s)
- changes to `std::shared_ptr`
- improvements to `std::any`, `std::optional`, and `std::variant` (including comparing `std::variant`s, and empty variants)
- deprecating `shared_ptr::use_count()` and `unique()`
- a SFINAE-friendly `std::hash`. This allows us to express semantics such as "you can hash `std::optional<T>` if you can hash `T`".

It's worth observing that some of these library changes involve taking language changes previously accepted into the CD, such as structured bindings, and making use of them in the library (for example, adding structured bindings for `node_handle`). Since these types of library changes naturally "lag behind" the corresponding language changes, there have been requests to close the door to new language features earlier than for library features, to give the library time to catch up. No formal decision along these lines has been made, but authors of language features have been asked to give thorough consideration to library impact for future proposals.
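To make the splicing item above concrete, here is a minimal C++17 sketch of moving an element between two maps via a node handle (the container contents are made up for the example):

```cpp
#include <map>
#include <string>
#include <utility>

int main() {
    std::map<int, std::string> src{{1, "one"}, {2, "two"}};
    std::map<int, std::string> dst;

    // Detach the node that holds key 1 from src (no copy, no reallocation),
    // then splice it into dst by transferring ownership of the node.
    auto nh = src.extract(1);
    if (!nh.empty()) {
        dst.insert(std::move(nh));
    }
    return 0;
}
```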
One notable change to C++17 that was proposed at this meeting was rejected: the introduction of a `byte` type whose intended use was unambiguously "a unit of storage" rather than "a character" or "an 8-bit integer" (which is not true of existing byte-like types such as `char` or `unsigned char`). The proposal involved the definition of a library type named `std::byte` (defined as an `enum class`), and core wording changes to grant the type special treatment, such as allowing the construction of an arbitrary object into a suitably-sized array of `byte`, as is currently allowed for `char` (this treatment is relied upon by the implementations of containers such as `std::vector` that allocate a block of memory, some but not all of which contains constructed objects). The long-term hope is for `std::byte` to replace `char` as the type that gets this special treatment, and to eventually deprecate (and later remove) that treatment for `char`.
The proposal to add this type to C++17 failed to gain consensus. This was partly because we’re late into the process (this would have had to be another exception to the “no new features beyond the CD” rule), but also because there were objections to naming the type “byte”, on the basis that there is a lot of existing code out there that uses this name, some of it for purposes other than the one intended in this proposal (for example, as an 8-bit integer type).
It seemed like there is still a chance for this proposal to gain consensus with the type's name changed from "byte" to something else, but this is unlikely to happen for C++17.
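For context, the proposed type is roughly of the following shape (a sketch based on the description above, not the proposal's actual wording):

```cpp
// Sketch of the proposed byte type's shape: a scoped enumeration over
// unsigned char, i.e. a distinct type that is neither a character type nor
// an arithmetic type, usable only as "a unit of storage".
namespace sketch {
    enum class byte : unsigned char {};
}

int main() {
    sketch::byte b{0x2a}; // C++17 allows list-initializing a scoped enum
                          // with a fixed underlying type from an integer
    (void)b;
    return 0;
}
```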
In spite of the heavy focus on addressing comments on the C++17 CD, notable progress was also made on Technical Specifications (TS) at this meeting.
The procedure for publishing a Technical Specification is as follows:
At this meeting, two TS’s were approved for being sent out for a PDTS ballot, and one was approved for final publication.
The Library Fundamentals TS v2 had previously been sent out for its PDTS ballot, and the library working groups had been hard at work addressing the comments over the past few meetings.
At this meeting, the process was finally completed. With a few final changes being voted into the TS (a tweak to the searchers interface and miscellaneous other fixes), the TS was approved for final publication!
The Ranges TS has passed its initial wording review. It picked up a couple of changes, and was approved to be sent out for its PDTS ballot!
The Ranges TS is unique in that it is, so far, the only TS to have a dependency on another one – the Concepts TS. There is nothing wrong with this; the only caveat is that, obviously, the Ranges TS can only be merged into the C++ standard itself after (or together with) the Concepts TS. Over time, I expect other TS’s in the future to be in a similar situation.
The Ranges TS is also notable for containing the beginnings of what could end up being a “standard library v2” – a refresh of the C++ standard library that uses Concepts and makes some (but not a gratuitous amount) of breaking changes compared to the current standard library. I’m excited to see it move forward!
The Networking TS has also passed its initial wording review. It, too, picked up a couple of changes, and was approved to be sent out for its PDTS ballot!
I’m also quite excited to see the Networking TS move forward. In addition to providing a socket library for C++, it defines foundational abstractions for asynchronous programming in C++. Its design has influenced, and will continue to influence, C++ proposals in other areas such as concurrency and (perhaps at some point in the future) input handling.
The Coroutines TS contains the `co_await` proposal, based on Microsoft's original design.
As mentioned previously, there are efforts underway to standardize a proposal for a different, stackful flavour of coroutines, as well as an exploratory effort to unify the two flavours under a common syntax. These proposals, however, are not currently slated to target the Coroutines TS. They may instead target a different TS (and if a unified syntax emerges, it could be that syntax, rather than the one in the Coroutines TS, that’s ultimately merged into the C++ standard).
In any case, the Coroutines TS is currently moving forward as-is. A proposal to send it out for its PDTS ballot came up for a vote at this meeting, but failed to gain consensus, mostly on the basis that there has not been time for a sufficiently thorough review of the core language wording parts. Such review is expected to be completed by the next meeting, at which time – if all goes well – it could be sent out for its PDTS ballot.
To recap the status of the Concepts TS: it was published last year, not merged into C++17, and now has a Working Draft that’s available to accept changes. Whether that Working Draft will eventually be published as a Concepts TS v2, or merged directly into C++20, remains to be decided.
One change was voted into the Concepts Working Draft at this meeting: a proposal to allow a requires-expression to appear in any expression context, not just in a concept definition. This is useful for e.g. using a requires-expression in a `static_assert`.
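A minimal sketch of the kind of code this change enables, written against Concepts TS syntax (the `Widget` type and its `draw()` member are hypothetical):

```cpp
// With this change, a requires-expression can be used directly as a boolean
// expression, e.g. inside a static_assert, rather than only inside a
// concept definition.
template <typename T>
void render(const T& value) {
    static_assert(requires(const T& x) { x.draw(); },
                  "T must provide a const draw() member");
    value.draw();
}

struct Widget {
    void draw() const {}
};

int main() {
    render(Widget{});
    return 0;
}
```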
There is some uncertainty regarding the longer-term future direction of Concepts. The following areas seem to be somewhat contentious:
The Modules TS has a working paper, which largely reflects Microsoft’s design. It is under active wording review.
The implementers of Modules in clang have a proposal for some design changes based on their implementation experience. It is unclear at this point whether these changes, if approved, would go into the initial Modules TS; some of them, particularly the more controversial ones such as the support for exporting macros, may end up targeting a second revision of the Modules TS instead.
The Parallelism TS v2 is making good progress. It’s expected to contain task blocks, library vector types and algorithms, and perhaps context tokens. The TS should be ready for a PDTS ballot once wording review of the vector proposal is completed.
The Concurrency TS v2 (which doesn't have a working draft yet) is also making progress. It's expected to contain a synchronized output stream facility, `atomic_view`, a synchronized value abstraction, queues, and counters, possibly among other things.
Executors, which had originally been slated for Concurrency TS v1 but got bogged down in lengthy design discussions over the years, have made notable progress at this meeting: for the first time, the various parties agreed on a unified proposal! This may end up targeting a separate Executors TS, so it doesn't hold back the Concurrency TS v2.
The chair of SG 1 (the Study Group concerned with parallelism and concurrency) remarked that splitting proposals into “parallelism” and “concurrency” buckets seems to be less useful as time goes on and some topics, such as executors, concern both. It’s possible that these two series of TS’s will be folded together into a single series in the future, released more frequently.
In addition to the above already-inflight Technical Specifications, a number of new ones are planned for the future.
The Reflection Study Group (SG 7) has been reviewing a proposal for static introspection (design, specification). At this meeting, the proposal was approved by SG 7, and sent onward for review by the Evolution and Library Evolution working groups starting at the next meeting. The ship vehicle will be determined by these working groups, but it’s expected to be a TS (the other possibility is C++20).
I write about SG 7’s meeting this week in more detail below.
The Numerics Study Group (SG 6) is planning a Numerics TS that will contain a variety of facilities, as outlined here. The initial draft of the TS will likely just be a concatenation of the various individual proposals (which are being developed by various authors; see proposals with “Numerics” as the audience in the list here); the SG 6 chair hopes such an initial draft will be produced at the next meeting.
There was also a proposal to have a separate Technical Specification concerning random number generation, which received no objections.
The Graphics TS, which proposes to standardize a set of 2D graphics primitives inspired by cairo, is still under design review. No time was spent reviewing the proposal at this meeting, so there is no new progress to report compared to last meeting.
As usual, I spent my time in the Evolution Working Group (EWG), which spends its time evaluating and reviewing the design of proposed language features.
EWG did two things this week: first, it looked at national body comments on the C++17 CD which touched on the design of the language; and second, it looked at new language proposals, all of which were targeting C++20 or beyond (or a Technical Specification). I’ll talk about each in turn.
The national body comments can be found in two documents: the official ones, and the late ones which therefore aren’t considered official, but were still looked at. I can’t link to individual comments directly, so I refer to them below by country and number, e.g. “US 1”.
A couple of procedural notes on comment processing:
With that said, here are the comments that were looked at, grouped by the feature they concern:
std::pair p(42, "waldo"s); // deduce std::pair
T&&
parameter in a constructor of the primary template. Parameters of the form T&&
have the unique property that they behave differently depending on whether or not T
is deduced. If T
is not deduced, they are always rvalue references (unless T
is explicitly specified to be an lvalue reference type); if T
is deduced, they are forwarding references (formerly called universal references), which may end up being lvalue or rvalue references. Now, if T
is a template parameter of a class, and a constructor of that class has a T&&
parameter, normally T is not deduced; but when the constructor is used as an implicit deduction guide, it is! As a result, the semantics of the parameter silently change. To avoid this, a tweak to the rules was made to ensure that the parameter retains its non-forwarding semantics in the implicit guide context.
std::pair p(42, 43)
, std::pair p{42, 43}
, and std::pair(42, 43)
, it does not allow std::pair{42, 43}
. EWG agreed that this inconsistency should be fixed, although it did ask for a paper to be written, as specifying this was deemed sufficiently non-trivial.
a @= b
(where b
is evaluated before a
, for consistency with assignment) and a.operator=(b)
(where a
is evaluated before b
, for consistency with member function calls). (Note that @
here is meant to stand in for any operator that can be combined with =
, such as +=
, -=
, etc.) This issue was discussed previously, and the comments didn’t really bring any new information to the table; there was no consensus for a change.
[]
vs. {}
(ES 2, FI 23, US 23, US 71, Late 9, Late 12). The original proposal for decomposition declarations used the syntax auto {a, b, c}
; that was changed at the last meeting to auto [a, b, c]
. This change was fairly controversial, and several comments asked to change it back to {}
(while others encouraged keeping the []
). There are technical arguments on both sides (the []
syntax can conflict with attributes once you start allowing nested decompositions; the {}
syntax can conflict with uniform initialization if you throw Concepts into the mix and allow using a concept-name instead of auto
), so in the end it’s largely a matter of taste. The clang implementers did report that they tried both, and found the ambiguities to be easier to work around with []
. In the end, there was no consensus for a change, so the status quo ([]
syntax) remains.static
, thread_local
, constexpr
, extern
, and inline
decomposition declarations (GB 16, GB 17, US 95). There was general agreement that allowing these modifiers is desirable, but the exact semantics need to be nailed down (for example, for constexpr
is only the unnamed compound object constexpr
, or the individual bindings too?), and EWG felt it was too late to do that for C++17. The comment authors were encouraged to come back with proposals for C++20.auto [a : T, b : U, c : V] = expr
, where the decomposition is ill-formed if the types of a
, b
, and c
do not match T
, U
, and V
, respectively). EWG was open to an extension along these lines, but not in the C++17 timeframe.auto [a, b, c](expr)
in addition to auto [a, b, c] = expr
and auto [a, b, c]{expr}
; that was approved.get<>()
functions called in decomposition declarations? (Late 3). This comment concerned the case where type being decomposed is neither an aggregate strucure nor an array, but a tuple-like user-defined type where the individual bindings are extracted using get<>()
. The specification wasn’t clear as to whether these get<>()
calls occur “eagerly” (i.e. at the point of decomposition) or “lazily” (at the point of use of the individual bindings, which can potentially mean calling get<>()
multiple times for a given binding). EWG decided in favour of “lazy”, because it enabled certain use cases (for example, decomposing a type such as a status/value pair where it’s only legal to access the value if the status has a certain value), while types that do nontrivial work in get<>()
(thus making multiple calls expensive) seem rare.operator!=
from operator==
, and operator>
, operator<=
, and operator>=
from operator<
; operator==
and operator<
themselves still have to be manually defined (although automatic generation of those would be a compatible extension). The proposal also contained a new mechanism for “generating” these derived operators: rather than having the compiler generate what behave like new overloaded functions, expressions of the form a != b
are reinterpreted/rewritten to be of the form !(a == b)
if a suitable operator!=
cannot be found (and similarly for the operators derived from operator<
). It was pointed out that this mechanism, like the mechanisms underlying some of the previous proposals, suffers from the slicing problem: if a base class defines both operator==
and operator!=
(as a class in existing code might) while a derived class defines only operator==
(as a class written with this proposal in mind might), then an expression of the form a != b
, where a
and b
are objects of the derived class, will call the base’s operator!=
because there is a matching operator!=
(and as a result, any data members in the derived class will not participate in the comparison).operator==
(but not operator<
), based on the following simple rule: if operator=
is auto-generated, so is operator==
, the idea being that the semantics of assignment and equality are fundamentally entwined (if you make a copy of something, you expect the result to behave the same / be equal). The idea had support, but some people were still concerned about having even ==
be auto-generated without an explicit opt-in.= default
. This has been previously proposed, and people’s concerns largely mirrored those at the previous discussion. None of these proposals achieved consensus for C++17. The main reason had to do with another proposal concerning comparison, which wasn’t slated for C++17. This proposal tried to bring some mathematical rigour to comparison in C++, by observing that some types were totally ordered, while others were only weakly or partially ordered. The proposal suggested an API, based on functions rather than operators, to perform tasks such as a three-way comparison of two objects of a totally ordered type. This got people thinking that perhaps down the line, we could build ==
and <
on top of a three-way comparison primitive (for totally ordered types); this would, for example, enable us to generate an operator<=
that’s more efficient than what’s possible today. People liked this future direction, and in light of it, each of the above proposals seemed like a half-baked attempt to rush something into C++17, so the group settled on getting this right for C++20 instead.
constexpr
static members of enclosing class type (US 24). This comment concerned the fact that a class cannot contain a constexpr
static data member whose type is that class type itself. The comment proposed making this work by deferring the processing of such a data member’s initializer (which is what requires the type to be complete) until the end of the class definition. Doing this in general (i.e. for all constexpr
static data members) would be a breaking change, since declarations that occur later inside the class definition can use the data member. However, doing it for a subset of cases, that are currently ill-formed, would work, and some ideas were bounced around about what such a subset could be. (A simple one is “when the data member’s type matches the class type exactly”, but we might be able to do better and cover some cases where the data member’s type isn’t the class type exactly, but still requires the class type to be complete.) EWG encouraged people to work out the details and come back with a paper.__has_include
(US 104) to have a “less ugly” name, such as has__include
. This was rejected because there is existing practice using the name __has_include
Here are the post-C++17 features that EWG looked at, categorized into the usual “accepted”, “further work encouraged”, and “rejected” categories:
Accepted proposals:
template
keyword in unqualified-ids. This concerns a parsing quirk of C++. When the compiler encounters a name followed by a left angle bracket, such as foo<
, it needs to decide whether the <
starts a template argument list, or is a less-than operator. It decides this by looking up the name foo
: if it names a template (in standardese, if it’s a template-name), the <
is assumed to start a template argument list, otherwise it’s assumed to be a less-than operator. This heuristic often works, but sometimes falls down: if you have code like foo(arg)
, where foo
is a template function that’s not visible in the current scope, but rather found by argument-dependent lookup (by being in one of the associated namespaces of arg
), then at the time the compiler looks up foo
, no template-name is found, so the construct is parsed wrongly. This proposal allows the programmer to rectify that by writing template f(arg)
; think of the template
as meaning “I promise the name that follows will resolve to a template”. This mirrors the existing use of the template
keyword when naming a template nested inside a dependent type, such as Base::template nested
. Some people had reservations about this proposal resulting in programmers sprinkling template
about rather liberally, but the proposal passed nonetheless.decltype()
. This came up previously, and the feedback was “we like it, but please make sure appropriate restrictions are in place such that the bodies of lambdas never need to be mangled into a signature”. The author revised the proposal to put in place appropriate restrictions. EWG asked for one more tweak – to exclude the body of the lambda expression from the “immediate context” of an instantiation for SFINAE purposes – and with that tweak passed the proposal.[](T arg)
in addition to [](auto arg)
. The advantage of the first form is that it can express things the second form cannot, such as [](T a, T b)
. The two forms can be combined as in [](T a, auto b)
; in such a case, the invented template parameters (for the auto
s) are appended to the explicitly declared ones.[=, this]
. The semantics are the same as [=]
, but this form emphasizes that the we are just capturing the this
pointer, and not the pointed-to object by value (syntax for the latter, [this]
, was previously added to C++17). (The question of allowing this change into C++17 came up, but there wasn’t a strong enough consensus for it.)touch()
, which pretends to write to all bits of an object (to prevent the compiler from using information it might have had about the previous value of the object for optimizations), and keep()
, which pretends to read all bits of an object (to prevent the compiler from optimizing the object away on account on no one looking at it). They would be library functions, in the namespace std::benchmark
, but the compiler would impart special meaning to them. The choice of names was disputed, but that was left for LEWG to bikeshed. (There was a suggestion to put the functions in a namespace other than std::benchmark
on the basis that they could be useful for other applications, such as secure zeroing. However, it was pointed out that these functions are not sufficient for secure zeroing, as the hardware can also play shenanigans such as eliding writes.)Proposals for which further work is encouraged:
Proposals for which further work is encouraged:
C++ currently has an alignas specifier to use on a type or a variable, and an alignof operator to query the alignment of a type; missing is an alignof operator to query the alignment of a variable. In practice implementations already provide this, and this proposal thus standardizes existing practice. The discussion centred around whether alignof x (without parentheses) should be allowed, like sizeof x (the consensus was “no”), and whether alignof((x)) (with double parentheses) should return the alignment of the type of x rather than the alignment of the variable x (mirroring the behaviour of decltype, where decltype(x) returns the declared type of the entity x, whereas decltype((x)) returns the type of the expression x; the consensus here was also “no”).
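For illustration, alignof on a type versus the proposed alignof on a variable (the latter is shown as a comment, since it is only an extension today):

    struct alignas(32) Vec8f { float v[8]; };

    static_assert(alignof(Vec8f) == 32, "alignof on a type: already standard");

    // Proposed (and an existing extension in practice): query the alignment of a
    // particular variable, which may be stricter than its type's alignment.
    // alignas(64) Vec8f buffer;
    // static_assert(alignof(buffer) == 64, "alignment of the variable itself");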
A proposal for a compile-time pattern-matching construct that would complement if constexpr in a similar way to how switch (or the run-time pattern matching) complements if. The proposed construct would be usable in a statement, expression, or type context, and would have “best match” semantics (where, if multiple patterns match, the best match is chosen). EWG encouraged further work on the proposal, including exploring the possibility of matching on non-type values and multiple types (you can accomplish either with the current proposal using std::integral_constant and std::tuple, respectively, but first-class support might be nicer).
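For context, the kind of compile-time dispatch such a construct would tidy up currently looks like a chain of if constexpr branches (valid C++17):

    #include <string>
    #include <type_traits>

    template <typename T>
    std::string describe(const T&) {
        if constexpr (std::is_integral<T>::value)
            return "integral";
        else if constexpr (std::is_floating_point<T>::value)
            return "floating point";
        else
            return "something else";
    }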
A proposal to allow default member initializers for bit-fields. An earlier version used the natural syntax int x : width = init;, by adding some disambiguation rules. At the time, EWG thought the rules would be hard to teach, and asked for a new syntax instead. The proposal was revised, suggesting the syntax int x : width : = init; (notice the extra colon). EWG didn’t like this syntax, nor any of the proposed alternatives (another was int x:[width] = init;), and instead settled on allowing the naive syntax after all, but with different disambiguation rules that avoid any code breakage, at the expense of forcing people who add initializers to their bitfields to sprinkle parentheses in more cases.
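A sketch of the syntax under discussion; the initializer form was not valid at the time, so it is shown as a comment next to today's equivalent:

    // Proposed "naive" syntax (sketch):
    // struct Flags {
    //     unsigned dirty   : 1 = 0;
    //     unsigned visible : 1 = 1;
    // };

    // What you have to write today:
    struct Flags {
        unsigned dirty   : 1;
        unsigned visible : 1;
        Flags() : dirty(0), visible(1) {}
    };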
An alternative approach to the operator. (“operator dot”) problem of writing wrapper classes: instead of the wrapper class defining an operator. that returns the wrapped class, the wrapper class declares the wrapped class as a delegate base, using the syntax class Wrapper : using Wrapped, and provides a conversion operator to access the wrapped object (which is stored as a member, or obtained from somewhere else; it’s not implicitly allocated as part of the wrapper object as it would be with real inheritance). The beauty behind this proposal is that the desired behaviour (“when you access a member of the wrapper object, it resolves to a member of the wrapped object if there is no such member in the wrapper object”) falls naturally out of the existing language rules for inheritance (lookup proceeds from the derived class scope to the base class scope), without having to invent new language rules. As a result, the proposal enjoyed a positive reception. Both proposals will be discussed further at the next meeting.
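A sketch of the delegate-base idea described above; the syntax is purely proposed, so everything is shown as comments (Widget and Ref are illustrative names):

    // class Ref : using Widget {              // Widget is a delegate base, not a real base
    //     Widget* target;                     // the wrapped object is stored explicitly
    // public:
    //     explicit Ref(Widget* w) : target(w) {}
    //     operator Widget&() { return *target; }   // how the delegate is reached
    // };
    //
    // Ref r(&some_widget);
    // r.draw();   // no 'draw' in Ref, so lookup proceeds to the delegate base Widget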
A proposal to standardize branch hints along the lines of compilers’ __builtin_expect(condition, true) and __builtin_expect(condition, false) constructs, spelt [[likely]] condition and [[unlikely]] condition, and restricted to appearing in the condition of an if-statement. EWG liked the idea, but expressed a preference for putting the attribute on statements, such as if (condition) [[likely]] { ... }; this would then also generalize to other control flow constructs like while and switch.
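For comparison, today's compiler-specific spelling and the statement-attribute direction EWG preferred (the attribute form is a sketch and was not standard at the time):

    bool process(int err) {
        if (__builtin_expect(err != 0, false)) {   // GCC/Clang builtin: branch is unlikely
            return false;
        }
        // Preferred direction (sketch): if (err != 0) [[unlikely]] { return false; }
        return true;
    }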
Currently, A<T>::type, where T is a template parameter, is a “non-deduced context”, meaning that if a function parameter’s type has this form, T cannot be deduced from the corresponding argument type. (Template argument deduction also comes up in the process of matching class template partial specializations, so this restriction means you also can’t specialize a template for, say, “vector<T>::iterator for all T”.) The main reason for this restriction is that in A<T>::type, type can name either a nested class, or a typedef. If it names a typedef, the mapping from A<T>::type to T may not be invertible (the typedef may resolve to the same type for different values of T), and even if it’s invertible, the inverse may not be computable (think complex metaprograms). However, if type names a nested class (and we make the case where an explicit or partial specialization makes it a typedef ill-formed), there is no problem, and this proposal suggests allowing that. EWG’s feedback was mainly positive, but a significant concern was raised: this proposal would make whether type names a nested class or a typedef part of A’s API (since one is deducible and the other isn’t), something that wasn’t the case before. In particular, in a future world where this is allowed, the common refactoring of moving a nested class out to an outer level and replacing it with a typedef (this is commonly done when e.g. the enclosing class gains a template parameter, but the nested class doesn’t depend on it; in such cases, leaving the class nested creates unnecessary code bloat) becomes a breaking change. The author may have a way to address this using alias-templates; details will be forthcoming in a future revision of the proposal.
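A small example of the restriction, and of the nested-class case the proposal would carve out (Box is an illustrative name):

    template <typename T>
    struct Box {
        struct Inner {};      // nested class: Box<T>::Inner identifies T uniquely
        using Alias = int;    // typedef: resolves to the same type for every T,
                              // so the mapping back to T is not invertible
    };

    template <typename T>
    void take_inner(typename Box<T>::Inner) {}   // Box<T>::Inner is a non-deduced context

    void demo() {
        Box<int>::Inner i;
        // take_inner(i);     // error today: T cannot be deduced
        take_inner<int>(i);   // must be spelled out; the proposal would allow deduction
                              // here when 'Inner' is known to be a nested class
    }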
A proposal for an unreflexpr operator which does the reverse of reflexpr: that is, given a meta-object representing an entity in a program (as obtained by reflexpr, or by navigating the meta-object graph starting from another meta-object obtained via reflexpr), unreflexpr gives you back that entity. This would avoid the need to form a pointer to a reference data member: given a meta-object representing the reference data member, you’d just call unreflexpr on it and use the result in places where you’d dereference the pointer.
A proposal to have the language perform moves implicitly in more situations (currently, unless you use std::move, std::forward, or similar, a copy is made, because a named rvalue reference is still an lvalue). This would be a breaking change, but the breakage is likely to be limited to situations where the code is already buggy, or using move semantics improperly. EWG encouraged exploration of the scope of breakage in real-world code.
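A typical example of the kind of code affected; under the rules at the time, the plain return makes a copy:

    #include <string>
    #include <utility>

    std::string take(std::string&& s) {
        return s;                 // s is a named rvalue reference, hence an lvalue:
                                  // this copies
        // return std::move(s);   // what you currently have to write to move
    }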
"string"op
, where op
would get the characters of the string as arguments to a template
. This was originally rejected over compile-time performance concerns (processing many template arguments is expensive), and it was suggested that the proposal be revised with additional machinery for compile-time string processing (which would presumably address the performance concerns). This reincarnation of the proposal argued that literals of this form have a variety of uses unrelated to string processing, and as such requiring accompanying string processing machinery is overkill; however, the underlying performance concerns are still there, and as a result the proposal was still rejected in its current form. It was pointed out that a more promising approach might be to allow arrays (of built-in types such as characters) as non-type template parameters; as such, the string would be represented as a single template argument, alleviating many of the performance issues.char8_t
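The character-pack form already exists for numeric literals; the rejected proposal would have extended it to string literals (the string form is sketched in a comment):

    #include <cstddef>

    template <char... Cs>
    constexpr std::size_t operator"" _len() { return sizeof...(Cs); }

    static_assert(123_len == 3, "each digit arrives as a template argument");

    // Rejected extension (sketch):
    // template <typename CharT, CharT... Cs>
    // constexpr std::size_t operator"" _len();   // "abc"_len would see Cs = 'a', 'b', 'c'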
A proposal for a char8_t type for UTF-8 character data, analogous to the existing char16_t and char32_t. This was previously proposed, albeit in less detail. The feedback was much the same as for the previous proposal: consult more with the Library Evolution Working Group, to see what approach for handling UTF-8 data is likely to actually gain traction.
A proposal to allow a trailing parameter after a function parameter pack, as in template <typename... Args, typename Last> void foo(Args... args, Last last) – basically, pattern-matching on a list of variadic arguments to separate and deal with the last one first. This is currently ill-formed, mostly because it’s unclear what the semantics should be in the presence of default arguments. EWG was disinclined to allow this, preferring instead to address the use case of “get the last element of a parameter pack” with a more general pack indexing facility (one such facility is reportedly in the works; no paper yet).
Having sat in EWG all week, I don’t have a lot of details to report about the proceedings of the other groups (such as Library Evolution or Concurrency) besides what I’ve already said above about the progress of Technical Specifications and other major features.
I do have a few brief updates on some Study Groups, though:
SG 7 had an evening session this meeting, during which it reviewed and gave its final blessing to the static introspection proposal, also known as reflexpr
after the operator it introduces. This proposal will now make its way through the Evolution and Library Evolution groups starting next meeting.
Being a first pass at static introspection, the proposal has a well-defined scope, providing the ability to reflect over the data members and nested types of classes and the enumerators of enumerations, but not some other things such as members of namespaces. Reflecting over uninstantiated templates also isn’t supported, in part due to difficulty-of-implementation concerns (some implementations reportedly represent uninstantiated templates as “token soup”, only parsing them into declarations and such at instantiation time).
Notably, the proposal has a reference implementation based on clang.
The proposal currently represents the result of reflecting over an entity as a meta-object which, despite its name, is a type (keep in mind, this is compile-time reflection). A suggestion was made to use constexpr
objects instead of types, in the same vein as the Boost Hana library. SG 7 was hesitant to commit to this idea right away, but encouraged a paper exploring it.
The facilities in this proposal are fairly low-level, and there is room for an expressive reflection library to be layered on top. Such a reflection library could in turn build on a set of standard metaprogramming facilities. SG 7 welcomes proposals along both of these lines as follow-ups to this proposal.
SG 13’s main work item, the Graphics TS, is working its way through the Library Evolution Working Group.
The group did meet for an evening session to discuss a revision of another proposal concerning input devices. I wasn’t able to attend, but I understand the main piece of feedback was to continue pursuing harmony with the asynchronous programming model of the Networking TS.
SG 14 didn’t meet this week, but has been holding regular out-of-band meetings, such as at GDC.
|
Matjaz Horvat: Combining multiple filters |
Pontoon now allows you to apply multiple string filters simultaneously, which gives you more power in selecting strings listed in the sidebar. To apply multiple filters, click their status icons in the filter menu; they turn into checkboxes on hover.
The new functionality allows you to select all translations by Alice and Bob, or all strings with suggestions submitted in the last week. The untranslated filter has been changed to behave as a shortcut to applying missing, fuzzy and suggested filters.
Big thanks to our fellow localizer Victor Bychek, who not only developed the new functionality, but also refactored a good chunk of the filters codebase, particularly on frontend.
And thanks to Jarek, who contributed optimizations and fixed a bug reported by Michal!
|
Henrik Mitsch: (Fun) Your Daily ‘We Are The World’ Reminder |
Mozilla is a distributed place. About a third of its workforce are remote employees or remoties. So we speak to each other a lot on video chats. A lot.
Some paid contributors still have hobbies aside from working on the Mozilla project. For example, there’s the enterprise architect who is a music aficionado. There’s a number of people building Satellite Ground Stations. And I am sure we have many, many more pockets of awesomeness around.
And of course there are people who record their own music. So if you own a professional microphone, why not use it to treat your colleagues to a perfectly echo-canceled, smooth and noiseless version of your voice? Yay!
This is the point where I am continuously reminded of the song We Are The World from the 80ies. For example, check out Michael Jackson’s (2:41 min) or Bruce Springsteen’s (5:35 min) performances. This makes my day. Every single time.
PS: This article was published as part of the Participation Systems Turing Day. It aims to help people on our team who were born well past the 80ies to understand why I am frequently smiling in our video chats.
PPS: Oh yes, I confused “Heal the World” with “We Are The World” in the session proposal. Sorry for this glitch.
PPPS: Thank you to you-know-who-you-are for the inspiration.
https://wiltw.io/2016/11/25/fun-your-daily-we-are-the-world-reminder/
|
Support.Mozilla.Org: What’s Up with SUMO – 24th November |
Greetings, SUMO Nation!
Great to be read by you once more :-) Have you had a good November so far? One week left! And then… one month left until 2017 – time flies!
If you just joined us, don’t hesitate – come over and say “hi” in the forums!
We salute all of you!
That’s it for today, dear SUMO People :-) We hope to see you soon around the site and online in general… Keep rocking the helpful web and don’t forget there’s a lot of greatness out there. Go open & go wild!
https://blog.mozilla.org/sumo/2016/11/24/whats-up-with-sumo-24th-november/
|
Air Mozilla: Dr. James Iveniuk - Social Networks, Diffusion, and Contagion |
James Iveniuk: Social Networks, Diffusion, and Contagion
https://air.mozilla.org/social-networks-diffusion-and-contagion/
|
Air Mozilla: Reps Weekly Meeting Nov. 24, 2016 |
This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
|
Hannes Verschore: Spidermonkey JIT improvements in FF52 |
Last week we signed off our hard work on FF52 and we will start working on FF53. The expected release date of this version is the 6th of March. In the meantime we still have time to stabilize the code and fix issues that we encounter.
In the JavaScript JIT engine a lot of important work was done.
WebAssembly has made great advances. We have fully implemented the draft specification and are requesting final feedback as part of a cross-browser Browser Preview in the W3C WebAssembly Community Group. Assuming the Browser Preview concludes without major changes before 52 is released, we’ll enable WebAssembly by default in 52.
Step by step our CacheIR infrastructure is improving. In this release primitive value getprop was ported to the CacheIR infrastructure.
GC sometimes needs to discard the compilations happening in the helper threads. Previously we waited for those compilations to stop one after another, so it could take a while until all of them were discarded. Now we signal all threads at the same time to stop compiling and only afterwards wait for all of them to finish. This was a major win in our effort to make sure GC doesn’t interrupt execution for too long.
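This isn't SpiderMonkey code, just a minimal sketch of the “signal everyone first, then wait” scheme, assuming each off-thread compilation periodically polls a cancellation flag:

    #include <atomic>
    #include <cstddef>
    #include <thread>

    struct HelperThread {
        std::atomic<bool> cancel{false};
        std::thread worker;
    };

    void discardCompilations(HelperThread* helpers, std::size_t n) {
        // First pass: ask every helper to stop, without waiting on any of them.
        for (std::size_t i = 0; i < n; ++i)
            helpers[i].cancel.store(true, std::memory_order_relaxed);

        // Second pass: wait for them all; the total wait is now roughly the longest
        // single compilation instead of the sum of all of them.
        for (std::size_t i = 0; i < n; ++i)
            if (helpers[i].worker.joinable())
                helpers[i].worker.join();
    }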
The register allocator also received a nice improvement. Sometimes we were adding spills (moves between registers and the stack) when they were not needed. A fix was added to combat this.
As in every release, a long list of differential bugs and crashes has been fixed as well.
This release also includes code from contributors:
I want to thank every one of them for their help! They did a tremendous job! If you are interested in helping out, we have a list of mentored bugs at bugsahoy or you can contact me (h4writer) online at irc.mozilla.org #jsapi.
|